An adversarial autoencoder lets us impose a prior distribution $p(z)$ on the distribution of the encoded inputs, $q(z)$.
By contrast, an ordinary autoencoder (trained like an ordinary neural network, comparing output to input using mean squared error) gives us no control over the encoded distribution.
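To make the contrast concrete, here is a minimal sketch (toy data, linear encoder/decoder, all names hypothetical) of the "ordinary" autoencoder training described above: only the reconstruction error is minimized, so nothing constrains the code distribution $q(z)$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                 # toy inputs x
W_enc = rng.normal(scale=0.1, size=(8, 2))    # encoder weights: x -> z
W_dec = rng.normal(scale=0.1, size=(2, 8))    # decoder weights: z -> x_hat

lr = 0.05
losses = []
for _ in range(500):
    Z = X @ W_enc                             # codes z; their spread IS q(z)
    X_hat = Z @ W_dec                         # reconstruction of x
    err = X_hat - X
    losses.append(np.mean(err ** 2))          # plain MSE, nothing else
    # gradients of the MSE w.r.t. both weight matrices
    grad_dec = Z.T @ err * (2 / err.size)
    grad_enc = X.T @ (err @ W_dec.T) * (2 / err.size)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Reconstruction error drops, but q(z) ends up wherever training happens
# to put it; an AAE would add a discriminator loss on Z pushing q(z)
# toward samples from the chosen prior p(z).
```

The point of the sketch is the last comment: the loop above never looks at the codes `Z` except to decode them, which is exactly the lack of control over $q(z)$ that the adversarial part of an AAE is meant to fix.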
How does imposing a prior distribution improve the accuracy of the GAN? I understand that the two entities (discriminator and generator) play a minimax game, boosting each other's capability, but how does improving the discriminator and generator improve the encoder and decoder weights so that the output image correctly matches the input image $x$?
(image taken from Makhzani et al.)