#StackBounty: #maximum-likelihood #least-squares #covariance #uncertainty #hessian Parameter uncertainty in least squares optimization…

Bounty: 50

Given a least squares optimization problem of the form:

$$ C(\lambda) = \sum_i \|y_i - f(x_i, \lambda)\|^2 $$

I have found in multiple questions/answers (e.g. here) that an estimate for the covariance of the parameters can be computed from the inverse rescaled Hessian at the minimum point:

$$ \mathrm{cov}(\hat\lambda) = \hat H^{-1} \hat\sigma_r^2 = \hat H^{-1} \frac{\sum_i \|y_i - f(x_i, \hat\lambda)\|^2}{N_{DOF}} $$

While I understand why the covariance is related to the inverse Hessian (Fisher information), I have not found a derivation or explanation of the $\hat\sigma_r^2$ term anywhere, although it seems reasonable on intuitive grounds.
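For concreteness, here is a minimal numerical sketch of the estimate in question (my own illustration, not from the original post). It fits a linear model $f(x, \lambda) = a x + b$, whose Jacobian equals the design matrix, and uses the Gauss–Newton approximation $\hat H \propto J^\top J$, so the covariance estimate becomes $(J^\top J)^{-1} \hat\sigma_r^2$:

```python
# Sketch: parameter covariance from the (Gauss-Newton) Hessian rescaled by
# the residual variance. Model, data, and variable names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
true = np.array([2.0, -1.0])                      # [a, b]
y = true[0] * x + true[1] + rng.normal(0.0, 0.5, size=x.size)

# Jacobian of f(x, lambda) = a*x + b: columns are df/da and df/db
J = np.column_stack([x, np.ones_like(x)])

# Least-squares estimate of the parameters
lam_hat, *_ = np.linalg.lstsq(J, y, rcond=None)

# Residual variance: sum of squared residuals over degrees of freedom N - p
res = y - J @ lam_hat
s2 = res @ res / (x.size - lam_hat.size)

# Covariance estimate: inverse Gauss-Newton Hessian rescaled by s2
cov = np.linalg.inv(J.T @ J) * s2
stderr = np.sqrt(np.diag(cov))
print(lam_hat, stderr)
```

Without the $\hat\sigma_r^2$ rescaling, `np.linalg.inv(J.T @ J)` alone would only give the covariance for unit-variance noise; multiplying by the residual variance plugs in an estimate of the actual noise level.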

Could anybody explain the need for the rescaling by the residual variance and/or provide a reference?

