# StackBounty: #neural-networks #loss-functions #gan: Training a conditional GAN for image translation

Bounty: 50

I am training a generative adversarial network to perform style transfer between two image domains (source S and target T). Since class labels are available, I use an extra Q network (in addition to G and D): a LeNet classifier that measures how well the generated target-domain images match their labels. Watching the system converge, I have noticed that the D loss always starts around 8 and drops only slightly, to about 4.5, while the G loss starts around 1 and quickly drops to 0.2. Is this behavior an example of mode collapse? What exactly is the relationship between the errors of D and G? The loss functions of D and G I am using can be found here: https://github.com/r0nn13/conditional-dcgan-keras; the loss function of the Q network is categorical cross-entropy. The loss curves are:

[Plot of the D, G and Q training loss curves]

Is this behavior of D and G normal? Why is the D loss always so high?
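For context on how the D and G errors relate: conditional DCGANs like the one in the linked repository typically train both networks with binary cross-entropy, where D's loss falls as it separates real from fake and G's loss falls as it fools D, so the two losses pull against each other. A minimal NumPy sketch (the discriminator outputs below are hypothetical, not taken from the question's training run):

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy, averaged over the batch."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1 - y_true) * np.log(1 - y_pred))))

# Hypothetical discriminator outputs on one batch.
d_real = np.array([0.9, 0.8, 0.95, 0.85])   # D(x): ideally near 1
d_fake = np.array([0.1, 0.2, 0.05, 0.15])   # D(G(z)): D wants these near 0

# Discriminator loss: classify real images as 1 and generated images as 0.
d_loss = bce(np.ones_like(d_real), d_real) + bce(np.zeros_like(d_fake), d_fake)

# Non-saturating generator loss: push D(G(z)) towards 1.
g_loss = bce(np.ones_like(d_fake), d_fake)

print(round(d_loss, 3))  # 0.271 -- low because D separates real from fake well
print(round(g_loss, 3))  # 2.201 -- high because G currently fools D poorly

# At the theoretical equilibrium D outputs 0.5 everywhere,
# giving a D loss of 2*ln(2) ~= 1.386 (neither side is "winning").
d_eq = np.full(4, 0.5)
print(round(bce(np.ones(4), d_eq) + bce(np.zeros(4), d_eq), 3))  # 1.386
```

Under this formulation a very low G loss paired with a persistently high D loss means D is failing to distinguish real from generated samples, which is the opposite of the adversarial balance; note this diagnostic assumes the repository's binary cross-entropy losses, not a Wasserstein or hinge variant.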

