#StackBounty: #natural-language #lstm #seq2seq Do we need to truncate test dataset for seq2seq LSTM?

Bounty: 50

I am running a summarization model that uses a seq2seq biLSTM with an attention mechanism. It is standard practice to truncate the inputs in the training dataset to 400–500 tokens. My question is: during generation on the test dataset (or validation dataset), do I need to truncate those inputs as well?
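For context, the truncation step in question is a simple preprocessing pass. A minimal sketch of applying the same cap to any split (train, validation, or test) is below; the 400-token limit and the function name `truncate_tokens` are illustrative assumptions, not from any particular codebase:

```python
MAX_SRC_TOKENS = 400  # assumed cap, matching the one used during training


def truncate_tokens(tokens, max_len=MAX_SRC_TOKENS):
    """Keep at most max_len tokens, dropping the tail of long inputs."""
    return tokens[:max_len]


# Stand-in for a tokenized article that exceeds the cap.
article = ["tok%d" % i for i in range(1000)]
print(len(truncate_tokens(article)))  # 400
```

If the same preprocessing is applied at inference, the encoder sees inputs from the same length distribution it was trained on; if not, it must handle sequences longer than anything seen during training.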

