#StackBounty: #lstm #tf.keras #keras-layer Keras: Passing an array through a shared LSTM layer?

Bounty: 100

I want to construct a network for time-series data, and I'm scaling up a previous problem instance.

# input = { sequence: list of int, time: int, score: float }
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

embed = Embedding(output_dim=100, input_dim=self.sequence_range + 1, mask_zero=True, name='sequence_embedding')
# Embedding expects integer indices of shape (MAX_SEQ_LEN,); with (MAX_SEQ_LEN, 1)
# its output would be 4-D and the LSTM below would reject it.
sq_inpt = Input(shape=(self.MAX_SEQ_LEN,), name='sq_inpt')
sq_embed = embed(sq_inpt)
lstm_embed = LSTM(200, go_backwards=False)(sq_embed)

time_inpt = Input(shape=(1,), name='time_inpt')
score_inpt = Input(shape=(1,), name='score_inpt')

state_embed = Concatenate()([lstm_embed, time_inpt, score_inpt])
state_embed = Dense(300, activation='elu', name='state_embed_1')(state_embed)
state_embed = Dense(300, activation='elu', name='state_embed_2')(state_embed)
output = Dense(1, name='output')(state_embed)
model = Model(inputs=[sq_inpt, time_inpt, score_inpt], outputs=output)

My previous network had an embedding layer on the input, which was fed into an LSTM. The output of the LSTM, along with two other numeric inputs, was then fed through two Dense layers before a single-unit output.

In the new version, I want to pass multiple sequences as input. Each should independently go through the same LSTM layer, and the LSTM outputs should then be concatenated (together with the other inputs) to form state_embed.

# input = { sequences: list of (list of int), time: int, score: float }
embed = Embedding(output_dim=100, input_dim=self.sequence_range + 1, mask_zero=True, name='sequence_embedding')
sq_inpt = Input(shape=(self.MAX_SEQ_LEN,), name='sq_inpt')
sq_embed = embed(sq_inpt)

## CHANGE NEEDED HERE #######
# A single LSTM instance, so that every sequence is processed with the same weights.
lstm_layer = LSTM(200, go_backwards=False)

lstm_embed = []
for sequence in sq_embed:  # pseudocode: iterating over a symbolic tensor like this is not valid
    lstm_embed.append(lstm_layer(sequence))

###########################
...
state_embed = Concatenate()(lstm_embed + [time_inpt, score_inpt])  # flat list of tensors, not a nested one
...
model = Model(inputs=[sq_inpt, time_inpt, score_inpt], outputs=output)

I know the above code does not work, but it is the clearest way I could think of to represent what I want. I don't want a separate LSTM layer for each sequence, because I want the weights to be shared across all of them.

How can I implement this in Keras?
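
To make the intent fully concrete, here is a minimal sketch of the kind of solution I imagine, based on my understanding that calling the same Keras layer instance on several tensors reuses its weights. NUM_SEQS, MAX_SEQ_LEN and SEQ_RANGE are stand-in constants I am making up for this sketch (in my real code the latter two are attributes of self), and I am assuming the number of sequences is known and fixed:

from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

NUM_SEQS = 3       # assumed fixed number of input sequences
MAX_SEQ_LEN = 50   # stand-in for self.MAX_SEQ_LEN
SEQ_RANGE = 1000   # stand-in for self.sequence_range

# Shared layers: created once, applied once per sequence, so all calls share weights.
embed = Embedding(output_dim=100, input_dim=SEQ_RANGE + 1, mask_zero=True, name='sequence_embedding')
lstm_layer = LSTM(200, go_backwards=False, name='shared_lstm')

sq_inputs, lstm_embeds = [], []
for i in range(NUM_SEQS):
    inp = Input(shape=(MAX_SEQ_LEN,), name='sq_inpt_%d' % i)
    sq_inputs.append(inp)
    lstm_embeds.append(lstm_layer(embed(inp)))  # same layer objects -> same weights

time_inpt = Input(shape=(1,), name='time_inpt')
score_inpt = Input(shape=(1,), name='score_inpt')

state_embed = Concatenate()(lstm_embeds + [time_inpt, score_inpt])
state_embed = Dense(300, activation='elu', name='state_embed_1')(state_embed)
state_embed = Dense(300, activation='elu', name='state_embed_2')(state_embed)
output = Dense(1, name='output')(state_embed)
model = Model(inputs=sq_inputs + [time_inpt, score_inpt], outputs=output)

Is this the right way to do it, or is there something more idiomatic, e.g. a single input of shape (NUM_SEQS, MAX_SEQ_LEN) with TimeDistributed wrapping the shared layers?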

