*Bounty: 50*

I am training a generative adversarial network to perform style transfer between two image domains (source `S` and target `T`). Since class information is available, I use an extra `Q` network (in addition to `G` and `D`) that measures the classification results of the generated images for the target domain against their labels (a LeNet network). Watching the system converge, I have noticed that the `D` loss always starts at about 8 and only slowly drops to about 4.5, while the `G` loss starts at about 1 and quickly drops to 0.2. Is that behavior an example of mode collapse? What exactly is the relationship between the errors of `D` and `G`? The loss functions of `D` and `G` I am using can be found here: https://github.com/r0nn13/conditional-dcgan-keras, while the loss function of the `Q` network is categorical cross-entropy. The loss function errors are:

Is this behavior of `D` and `G` normal? Why is the `D` loss always so high?
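For context on how the two errors relate, here is a minimal NumPy sketch of the standard GAN binary cross-entropy losses (the non-saturating formulation from the original GAN setup). The scores below are illustrative placeholders, not values from the linked repository:

```python
import numpy as np

def bce(preds, labels, eps=1e-7):
    """Binary cross-entropy averaged over a batch."""
    preds = np.clip(preds, eps, 1 - eps)
    return -np.mean(labels * np.log(preds) + (1 - labels) * np.log(1 - preds))

# Hypothetical discriminator outputs: D(x) on real images, D(G(z)) on fakes.
d_real = np.array([0.9, 0.8, 0.95])   # D's scores on real samples
d_fake = np.array([0.1, 0.2, 0.05])   # D's scores on generated samples

# D is trained to output 1 on real and 0 on fake:
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# G (non-saturating form) is trained so that D outputs 1 on its fakes:
g_loss = bce(d_fake, np.ones_like(d_fake))

# With the scores above, D is winning: d_loss is small and g_loss is large.
# The two losses pull against each other rather than toward a shared minimum,
# so neither one dropping on its own indicates overall convergence.
```

This is why the two curves cannot be read independently: a low `G` loss together with a `D` loss that stays high can mean `G` is fooling `D`, but it can also accompany mode collapse if `G` fools `D` with only a few output modes.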