#StackBounty: #neural-network #deep-learning #lstm #natural-language-process #gan How is the Gaussian noise given to this BLSTM based G…

Bounty: 50

In a conditional GAN, we give random noise along with a label to the generator as input. In this paper, I don't understand why in one section they say the random noise is given as input, while in another section they say it is concatenated to the output.

[Screenshot: page 2]

[Screenshot: page 2 footnote]

[Screenshot: page 3, model setup section]

A little overview of the paper: code switching is a phenomenon in spoken language where speakers switch between two different languages. Mixed-language models improve the accuracy of automatic speech recognition considerably, but the problem is the limited availability of mixed-language written sentences. Thus, as a data augmentation technique, a conditional GAN is trained to synthesize English-Mandarin mixed sentences from pure Mandarin sentences. The trained generator acts as an agent that indicates which words in the Mandarin sentence should be translated: it outputs a binary array whose length equals that of the input Mandarin sentence. Both the generator and the discriminator are BLSTM networks.
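For concreteness, here is a minimal PyTorch sketch (my own reading, not code from the paper) contrasting the two placements the paper seems to describe: feeding the Gaussian noise z as part of the BLSTM input at every time step versus concatenating z to the BLSTM output before the final projection. Dimensions such as `noise_dim`, `embed_dim`, and `hidden_dim` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """BLSTM generator that tags each Mandarin word as translate/keep.
    `noise_at_input` switches between the two readings of the paper:
      True  -> z is concatenated to every word embedding (noise as input)
      False -> z is concatenated to the BLSTM output before the final layer
    All dimensions are illustrative assumptions, not values from the paper."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                 noise_dim=100, noise_at_input=True):
        super().__init__()
        self.noise_at_input = noise_at_input
        self.embed = nn.Embedding(vocab_size, embed_dim)
        lstm_in = embed_dim + (noise_dim if noise_at_input else 0)
        self.blstm = nn.LSTM(lstm_in, hidden_dim, batch_first=True,
                             bidirectional=True)
        out_in = 2 * hidden_dim + (0 if noise_at_input else noise_dim)
        self.out = nn.Linear(out_in, 1)   # per-word translate/keep score

    def forward(self, tokens, z):
        # tokens: (batch, seq_len) Mandarin word ids
        # z:      (batch, noise_dim) Gaussian noise, one vector per sentence
        batch, seq_len = tokens.shape
        x = self.embed(tokens)                          # (B, T, E)
        z_rep = z.unsqueeze(1).expand(-1, seq_len, -1)  # repeat z per time step
        if self.noise_at_input:
            x = torch.cat([x, z_rep], dim=-1)           # noise given as input
        h, _ = self.blstm(x)                            # (B, T, 2H)
        if not self.noise_at_input:
            h = torch.cat([h, z_rep], dim=-1)           # noise joined to output
        return torch.sigmoid(self.out(h)).squeeze(-1)   # (B, T) in [0, 1]


# Usage: two 5-word Mandarin sentences, one noise vector per sentence.
g = Generator(vocab_size=5000, noise_at_input=False)
tokens = torch.randint(0, 5000, (2, 5))
z = torch.randn(2, 100)
mask = g(tokens, z)          # per-word probabilities; threshold at 0.5
print(mask.shape)            # torch.Size([2, 5])
```

Either way the noise is a single vector broadcast across time steps; the difference is only whether it influences the BLSTM's hidden states or only the final per-word decision layer.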

