#StackBounty: #estimation #taylor-series #iteration-methods Iterated estimation of Taylor series

Bounty: 50

Say your data generating process is given by the function $y=f(x\mid\theta)$, where $y$ and $x$ represent variables (data) and $\theta$ represents the parameter(s). For convergence reasons (e.g. $f(\cdot)$ is highly non-linear in the parameters and a GMM estimator does not converge), you decide to estimate a Taylor series expansion of $f(\cdot)$ around $\theta=\theta_0$. Let’s denote this approximated function as $y \approx g(x\mid\theta)_{\theta_0}$.

Say you estimate $\theta$ in $g(\cdot)$ based on a random sample of $\{y,x\}$, and you get $\hat\theta_1$. Then, you recompute the Taylor series approximation around this point estimate (keeping the Taylor series order constant), and produce $y \approx g(x\mid\theta)_{\hat\theta_1}$. Then, you estimate again, yielding $\hat\theta_2$. You iterate until

$$ (\hat\theta_n - \hat\theta_{n+1})^2 < \epsilon $$

for an arbitrary threshold $\epsilon > 0$.
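In code, the loop is just the minimal sketch below; `expand` and `fit` are hypothetical caller-supplied callables (not from any library), standing in for "build the fixed-order expansion $g(\cdot)$ around a point" and "estimate $\theta$ on the sample".

```python
# A minimal sketch of the iteration described above. `expand` and `fit` are
# hypothetical callables supplied by the caller: expand(theta0) returns the
# fixed-order Taylor approximation g built around theta0, and fit(g) returns
# a new estimate of theta from the (fixed) random sample.
def iterated_taylor_fit(expand, fit, theta0, eps=1e-8, max_iter=100):
    theta_hat = theta0
    for _ in range(max_iter):
        g = expand(theta_hat)                   # re-expand around the current estimate
        theta_new = fit(g)                      # re-estimate theta on the sample
        if (theta_new - theta_hat) ** 2 < eps:  # the stopping rule above
            return theta_new
        theta_hat = theta_new
    raise RuntimeError("did not converge within max_iter iterations")
```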

Convergence (in terms of the optimisation criterion above) is of course of paramount importance. Notice that for an arbitrarily large $\epsilon$ there is always a solution, as long as $\hat\theta$ can be computed at all, which itself depends on the properties of $g(\cdot)$, e.g. on the order of the Taylor expansion; a linear model is always estimable, beyond trivial issues like multicollinearity.

My question is, is the method above a thing? I’ve searched for “iterated estimation of Taylor series” on Google, on this forum and on Math.SE, and cannot find anything about it. Maybe the method is just plain wrong, e.g. convergence is not guaranteed by any known theorem.


## More details on the method

For instance, consider a CES production function:

$$ Y = \left(\alpha K^\theta + (1-\alpha)L^\theta\right)^{1/\theta} $$

where $Y$, $L$ and $K$ are variables, and $\alpha$ and $\theta$ are parameters. Say you produce a second-order Taylor series expansion of the above around $\theta = 0$. The resulting formula (called the translog production function) is:

$$ \ln(Y) \approx \alpha \ln(K) + (1-\alpha)\ln(L) + 0.5\,\theta\,\alpha(1-\alpha)\left(\ln(K) - \ln(L)\right)^2 $$

So, you estimate the above equation with a random sample of $\{Y,L,K\}$, using e.g. non-linear least squares, from which you obtain an estimate of $\theta$, $\hat\theta_1$. The idea is then to produce another Taylor series expansion of $\ln(Y)$, but this time around $\hat\theta_1$. Then, estimate the new equation. Iterate until some convergence criterion is fulfilled, as in the sketch below.
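As an illustration, here is a minimal runnable sketch of this loop for the CES example, assuming simulated data; the function names (`log_ces`, `taylor2_log_ces`), the finite-difference derivatives and the "true" parameter values are illustrative choices, not part of the question. Note that the exact $\ln Y$ is singular at $\theta = 0$ (the Cobb-Douglas limit), so the sketch starts the expansion at a small non-zero $\theta_0$ instead of from the translog special case above.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def log_ces(theta, alpha, K, L):
    """Exact log-CES output: ln Y = ln(alpha*K**theta + (1-alpha)*L**theta) / theta."""
    return np.log(alpha * K**theta + (1 - alpha) * L**theta) / theta

def taylor2_log_ces(theta, alpha, K, L, theta0, h=1e-3):
    """Second-order Taylor expansion of log_ces in theta around theta0,
    with the derivatives in theta taken by central finite differences."""
    f0 = log_ces(theta0, alpha, K, L)
    f1 = (log_ces(theta0 + h, alpha, K, L) - log_ces(theta0 - h, alpha, K, L)) / (2 * h)
    f2 = (log_ces(theta0 + h, alpha, K, L) - 2 * f0 + log_ces(theta0 - h, alpha, K, L)) / h**2
    d = theta - theta0
    return f0 + f1 * d + 0.5 * f2 * d**2

# Simulated sample; alpha = 0.3, theta = 0.5 play the role of the true values.
n = 500
K = rng.lognormal(size=n)
L = rng.lognormal(size=n)
logY = log_ces(0.5, 0.3, K, L) + 0.05 * rng.standard_normal(n)

theta0 = 0.1                      # start the expansion away from the singular theta = 0
params = np.array([0.3, 0.1])     # initial guesses for (alpha, theta)
for it in range(50):
    def resid(p):
        alpha, theta = p
        return taylor2_log_ces(theta, alpha, K, L, theta0) - logY
    fit = least_squares(resid, params, bounds=([1e-3, -5.0], [1 - 1e-3, 5.0]))
    params = fit.x
    if (params[1] - theta0) ** 2 < 1e-10:  # the question's stopping rule
        break
    theta0 = params[1]                     # re-expand around the new estimate

print(f"after {it + 1} iterations: alpha = {params[0]:.3f}, theta = {params[1]:.3f}")
```

Each pass refits a model that is only quadratic in $\theta$, so the inner NLS problem stays well behaved; all the non-linearity of the CES form is pushed into the re-expansion step.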

