Bounty: 50
Introduction:
I am trying to get a CDCGAN (Conditional Deep Convolutional Generative Adversarial Network) working on the MNIST dataset, which should be fairly easy considering that the library I am using (PyTorch) has a tutorial on its website.
But I can't seem to get it working: it just produces garbage, or the model collapses, or both.
What I tried:
- Making the model conditional (semi-supervised learning)
- Using batch norm
- Using dropout on every layer except the input/output layers of the generator and discriminator
- Label smoothing to combat discriminator overconfidence
- Adding noise to the images (I guess you call this instance noise) to get a better data distribution
- Using leaky ReLU to avoid vanishing gradients
- Using a replay buffer to combat forgetting of learned stuff and overfitting
- Playing with hyperparameters
- Comparing it to the model from the PyTorch tutorial
- Basically everything the tutorial does, apart from a few things like the embedding layer, etc.
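For reference, two of the tricks from the list above (one-sided label smoothing and instance noise) can be sketched in a single discriminator step like this. This is a minimal, hypothetical sketch, not the asker's actual code: `disc`, `smooth`, and `sigma` are illustrative names, and the toy linear discriminator stands in for the real conv net.

```python
import torch
import torch.nn as nn

# Toy discriminator standing in for the real conv model (illustrative only).
disc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))
criterion = nn.BCEWithLogitsLoss()

def discriminator_step(real, fake, smooth=0.9, sigma=0.1):
    """One discriminator loss computation with one-sided label smoothing
    and instance noise added to both the real and the fake batch."""
    # Instance noise: perturb the inputs so the real and fake
    # distributions overlap and gradients stay informative.
    real_noisy = real + sigma * torch.randn_like(real)
    fake_noisy = fake + sigma * torch.randn_like(fake)

    # One-sided label smoothing: real targets are 0.9 instead of 1.0,
    # fake targets stay at 0.0.
    real_targets = torch.full((real.size(0), 1), smooth)
    fake_targets = torch.zeros(fake.size(0), 1)

    loss_real = criterion(disc(real_noisy), real_targets)
    loss_fake = criterion(disc(fake_noisy.detach()), fake_targets)
    return loss_real + loss_fake

real = torch.randn(50, 1, 28, 28)  # stand-in for an MNIST batch
fake = torch.randn(50, 1, 28, 28)  # stand-in for generator output
loss = discriminator_step(real, fake)
print(loss.item())
```

The `detach()` on the fake batch matters: it keeps the generator out of the discriminator's backward pass.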
Images my Model generated:
Hyperparameters:
batch_size=50, learning_rate_discriminator=0.0001, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, dropout=0.5
batch_size=50, learning_rate_discriminator=0.0003, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, dropout=0
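For what it's worth, here is how the first hyperparameter set above (the unbalanced learning rates) would typically be wired up. The choice of Adam with betas=(0.5, 0.999) is an assumption on my part, taken from the DCGAN convention; the post doesn't say which optimizer was used, and the linear nets are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder networks; the real ones are conv nets (illustrative only).
netG = nn.Linear(100, 28 * 28)
netD = nn.Linear(28 * 28, 1)

# Separate optimizers so the discriminator (lr=0.0001) learns more
# slowly than the generator (lr=0.0003), as in the first config above.
# Adam with betas=(0.5, 0.999) is assumed, following DCGAN convention.
optD = torch.optim.Adam(netD.parameters(), lr=0.0001, betas=(0.5, 0.999))
optG = torch.optim.Adam(netG.parameters(), lr=0.0003, betas=(0.5, 0.999))
```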
Images the PyTorch tutorial model generated:
Code for the PyTorch tutorial DCGAN model
For comparison, here are the images from the DCGAN from the PyTorch tutorial:
My Code:
Placeholder code. I couldn't get the code formatting to work with my code; I kept getting the complaint:
"Your post appears to contain code that is not properly formatted as code".
First link to my Code (Pastebin)
Second link to my Code (0bin)
Conclusion:
Since I implemented all these things (e.g. label smoothing) that are considered beneficial to a GAN/DCGAN, and my model still performs worse than the tutorial DCGAN from PyTorch, I think I might have a bug in my code, but I can't seem to find it.
Reproducibility:
You should be able to just copy the code and run it, provided you have the imported libraries installed, and see for yourself whether you can find anything.
I appreciate any feedback.