#StackBounty: #maximum-likelihood #image-processing Likelihood – Difference between deducing original signal and deducing parameters of…

Bounty: 50

I am confused between (1) using Maximum Likelihood to recover (approximately) the original signal from the observed data and (2) using Maximum Likelihood to estimate the parameters of the PSF.

1) First task: find (up to some point) the original signal.

I start from this general definition (in discretized form): $$y = H\,x + w \quad (1)$$ (with $w$ a white noise).

Question 1.1) How can I demonstrate this relation? It seems that we should start from a discrete convolution product, doesn't it? Then the correct expression would rather be $y = H \ast x + w$, with $\ast$ the convolution product.
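
A minimal sketch of what such a demonstration looks like numerically, assuming the Image Processing Toolbox: `convmtx2` builds the convolution matrix of a kernel, and applying it to the vectorized image reproduces `conv2`, which is exactly the content of relation $(1)$ before noise is added. The variable names and the Gaussian test PSF are only illustrative.

```matlab
% Minimal sketch (assumes the Image Processing Toolbox for convmtx2/fspecial).
% Relation (1) is a discrete 2-D convolution written in matrix form:
% H is the convolution matrix of the PSF, x(:) the vectorized image.
x   = rand(16, 16);                   % toy "original" image
psf = fspecial('gaussian', 5, 1.2);   % illustrative PSF (any kernel works)

H = convmtx2(psf, size(x, 1), size(x, 2));   % convolution matrix of the PSF

y_conv = conv2(x, psf);                      % full 2-D convolution
y_mat  = reshape(H * x(:), size(y_conv));    % the same operation in matrix form

disp(max(abs(y_conv(:) - y_mat(:))))         % ~1e-16: both agree
% Adding white noise w to H*x(:) then gives y = H*x + w, i.e. relation (1).
```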

For the estimation, I have to maximize the likelihood, which amounts to minimizing the criterion:

$$\phi(x) = (H x - y)^{T} \, W \, (H x - y)$$

with $H$ the PSF, $W$ the inverse of the covariance matrix of the data $y$, and $y$ the observed data; the goal is to find the original signal $x$.

So the estimator is given by: $$x^{\text{(ML)}} = (H^{T} W H)^{-1} H^{T} W \, y$$

Question 1.2) Is the vector $x$ really the original image (I mean the true image that we want to determine)?

Question 1.3) For this task, how do I compute this estimator $x^{\text{(ML)}}$ in practice?
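
One possible way to do it for a small image, continuing the toy variables `H`, `x` and `y_mat` from the sketch above (this is only a sketch, not the author's script): solve the weighted normal equations with the backslash operator instead of forming the matrix inverse explicitly.

```matlab
% Sketch: evaluate x_ML by solving a linear system rather than inverting
% (H'*W*H). Continues the toy variables H, x, y_mat defined above.
sigma = 0.01;
yv = y_mat(:) + sigma * randn(numel(y_mat), 1);   % noisy, vectorized data y

W    = (1 / sigma^2) * speye(numel(yv));          % inverse noise covariance
x_ml = (H' * W * H) \ (H' * W * yv);              % solve the normal equations

x_hat = reshape(x_ml, size(x));                   % back to a 2-D image
% For white noise W is proportional to the identity and cancels, so x_ml = H \ yv
% gives the same estimate. For realistic image sizes H is far too large to build
% as a dense matrix; one then works in the Fourier domain or with an iterative
% solver instead of the explicit normal equations.
```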

2) Second task: next, I have to find the parameters of the PSF, with parameter vector $\theta = [a, b]$; this gives the best-fitting PSF as a function of the data $y$.

Question 2.1) Is this step well formulated?

I am working with the following PSF:

[image: the Moffat PSF expression]

And for this second task, I have to find the parameters $a$ and $b$ of:

[image: the expression containing the parameters $a$ and $b$]

knowing $(r_0, c_0)$.
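
Since the two formula images are not reproduced above, the following is only a reference sketch of the Moffat profile as it is commonly written, with the centre $(r_0, c_0)$ treated as known; whether it matches the exact parameterization in the missing image, and what role $a$ and $b$ play in it, is an assumption taken up again after Question 2.4.

```matlab
% Reference sketch only: the Moffat profile as commonly written (assumption:
% this matches the parameterization shown in the missing image).
% The centre (r0, c0) is known; alpha and beta control the width and the wings.
moffat = @(r, c, r0, c0, alpha, beta) ...
    (1 + ((r - r0).^2 + (c - c0).^2) / alpha^2) .^ (-beta);

[cc, rr] = meshgrid(1:64, 1:64);          % pixel coordinate grids
M = moffat(rr, cc, 32.5, 32.5, 4, 2.5);   % noise-free PSF image (example values)
```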

In practice, I have used a Matlab function to perform a least-squares fit between the observed data (the PSF with noise) and the raw data (the PSF without noise).

In this way, I can find the two affine parameters $a$ and $b$ (actually, I think this is called a linear regression).

I saw that we could take the vector of parameters $\theta = [a, b]$ and use the following relation in matrix form:

$$y = H\theta + w \quad (2)$$

Question 2.2) What is the link between $H$ and the PSF used above (the Moffat PSF)?

($\theta$ is the vector of parameters to estimate and $w$ the white noise.)

Question 2.3) How can I demonstrate this important relation, and what is the difference between $(1)$ and $(2)$?

Question 2.4) I saw that I have to write $(2)$ in matrix form:

$$y = H\theta + w \quad (2)$$

But how do I produce this matrix $H$ from $\text{PSF}(r, c)$?
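
A hedged sketch of one way to build $H$, under the assumption (suggested by the "affine parameters" and linear-regression remarks above) that the model is $y(r,c) = a\,M(r,c) + b + w(r,c)$, with $M$ the noise-free Moffat profile from the earlier sketch. Each pixel then contributes one row of $H$: the value $M(r,c)$ multiplying $a$, and a constant $1$ multiplying $b$.

```matlab
% Sketch, assuming y(r,c) = a*M(r,c) + b + w(r,c), with M the noise-free Moffat
% profile from the sketch above and y the observed noisy image.
a_true = 3; b_true = 0.2;
y = a_true * M + b_true + 0.05 * randn(size(M));   % simulated observation

H2    = [M(:), ones(numel(M), 1)];   % the H of relation (2): one row per pixel
theta = H2 \ y(:);                   % least-squares solution, theta = [a; b]

a_hat = theta(1);
b_hat = theta(2);
% This is y = H*theta + w solved as an ordinary linear regression. If a and b
% were instead the Moffat shape parameters (alpha, beta), the model would no
% longer be linear in theta and a nonlinear fit (lsqcurvefit, fminsearch)
% would be needed.
```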

Finally, I need remarks or help to understand the differences between the two tasks and which method is the right one to apply to each of them.

Sorry if there are multiple questions in this post, but I need to grasp all the subtleties of this kind of problem.

UPDATE 1:

You can find here the Matlab script that generates a typical output image (with a Moffat PSF and white noise): Matlab script

Here is this typical output image:

[image: image with Moffat PSF and white noise]

Here is the image estimated with the Maximum Likelihood method (the Moffat parameters are fixed and I want to estimate the original image):

[image: estimated image]

As you can see, the reconstructed image is very bad.

For this inversion problem, I have used $y$ as a matrix (a 2-D array) in the formula:

$$x^{\text{(ML)}} = (H^{T} W H)^{-1} H^{T} W \, y$$

I don’t know if it is correct to do this.
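
A sketch of the likely fix, assuming $y$ is the observed 2-D image and $H$, $W$ are the matrices of part 1 (the names `nrows` and `ncols` for the size of the unknown image are introduced here): the formula expects a vectorized $y$, and the estimate has to be reshaped back to 2-D afterwards.

```matlab
% Sketch: apply the estimator to the vectorized image y(:), not to the 2-D
% array itself, then reshape the result back to image form.
yv    = y(:);                           % stack the observed image column-wise
x_ml  = (H' * W * H) \ (H' * W * yv);   % solve, rather than invert explicitly
x_hat = reshape(x_ml, nrows, ncols);    % nrows-by-ncols: size of the unknown x

% If the reconstruction still looks very noisy, the usual cause is that H'*W*H
% is badly conditioned for a smooth PSF, so the plain ML estimate amplifies the
% noise; some regularization (e.g. Tikhonov) or a Wiener-type filter helps.
```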

It would be great if someone could help me. Regards.

