# EM for a form of Student distribution


I consider $n$ replications from the following sampling density for $y_i \in \mathbb{R}^p$:

$$f_p(y_i \mid \beta, \Sigma, \nu) = \frac{\Gamma\left[(\nu+p)/2\right]}{\Gamma(\nu/2)\,\nu^{p/2}\pi^{p/2}\left|\Sigma\right|^{1/2}}\left[1+\frac{1}{\nu}(y_i-\beta x_i)^{T}\Sigma^{-1}(y_i-\beta x_i)\right]^{-(\nu+p)/2}$$

where $\beta$ is unknown and the matrix $\Sigma$ is parametrized as $\sigma^2 Q$.

I would like to perform an EM algorithm.

If I am not mistaken, the complete-data log-likelihood (given the latent weights $z_j$) is
$$\log L(\Psi)=-\frac{np}{2}\log(2\pi)-\frac{n}{2}\log\vert\Sigma\vert-\frac{1}{2\sigma^2}\sum_{j=1}^n z_j(y_j-\beta x_j)^T Q^{-1}(y_j-\beta x_j)$$
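As a sanity check, the complete-data log-likelihood above can be evaluated numerically. The sketch below (function and argument names are mine, not from any reference implementation) assumes $\Sigma = \sigma^2 Q$ and drops the $\nu$-only terms, exactly as in the formula:

```python
import numpy as np

def complete_loglik(y, x, beta, sigma2, Q, z):
    """Complete-data log-likelihood under Sigma = sigma^2 * Q,
    omitting the terms that depend only on the (known) nu.
    y: (n, p) responses, x: (n, q) covariates, beta: (p, q),
    z: (n,) latent scale weights."""
    n, p = y.shape
    resid = y - x @ beta.T                      # rows are y_j - beta x_j
    Qinv = np.linalg.inv(Q)
    # quadratic forms (y_j - beta x_j)^T Q^{-1} (y_j - beta x_j), one per row
    quad = np.einsum('ij,jk,ik->i', resid, Qinv, resid)
    _, logdet = np.linalg.slogdet(sigma2 * Q)   # log|Sigma|
    return (-0.5 * n * p * np.log(2 * np.pi)
            - 0.5 * n * logdet
            - 0.5 / sigma2 * np.sum(z * quad))
```

With $\sigma^2=1$, $Q=I$ and $z_j\equiv 1$ this reduces to the Gaussian log-likelihood, which gives a quick check of the implementation.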

where I dropped all terms involving $\nu$, since it is assumed known.

Now, for a general $t_p(\mu,\Sigma,\nu)$ distribution we have

$$Z\mid Y=y\sim \mathrm{Gamma}(m_1,m_2)$$ with $m_1=\frac{1}{2}(\nu+p)$ and $m_2=\frac{1}{2}\left(\nu+(y-\mu)^T\Sigma^{-1}(y-\mu)\right).$

Mimicking this with the parametrization $\Sigma=\sigma^2 Q$ and mean $\mu_i=\beta x_i$ gives

$$Z\mid Y=y\sim \mathrm{Gamma}(m_1,m_2)$$ with $m_1=\frac{1}{2}(\nu+p)$ and $m_2=\frac{1}{2}\left(\nu+\frac{1}{\sigma^2}(y-\mu_i)^T Q^{-1}(y-\mu_i)\right)$ (only the quadratic form picks up the $1/\sigma^2$ factor, not $\nu$).

In the general case the E-step therefore yields
$$E_{\Psi^{(k)}}(Z_j\mid y_j)=\frac{\nu+p}{\nu+\delta(y_j;\mu^{(k)},\Sigma^{(k)})},$$
where $\delta(y_j;\mu,\Sigma)=(y_j-\mu)^T\Sigma^{-1}(y_j-\mu)$ is the Mahalanobis distance at the current parameter estimates.
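The E-step above can be sketched in a few lines. This is a minimal illustration (names and shapes are my assumptions), computing the weights $E_{\Psi^{(k)}}(Z_j \mid y_j)$ for all observations at once:

```python
import numpy as np

def e_step_weights(y, mu, Sigma, nu):
    """E-step for the t-model: E[Z_j | y_j] = (nu + p) / (nu + delta_j),
    where delta_j = (y_j - mu_j)^T Sigma^{-1} (y_j - mu_j) is the
    Mahalanobis distance at the current estimates.
    y, mu: (n, p) arrays; Sigma: (p, p); nu: known degrees of freedom."""
    n, p = y.shape
    resid = y - mu
    # Mahalanobis distances, one per observation
    delta = np.einsum('ij,jk,ik->i', resid, np.linalg.inv(Sigma), resid)
    return (nu + p) / (nu + delta)
```

Observations far from the current mean get down-weighted (small $E[Z_j\mid y_j]$), which is exactly the robustness mechanism of the t-model.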

I can apply the same substitution here as well, but I am fairly sure I am missing something; otherwise the problem is trivial.

What am I missing?
