## #StackBounty: #python #pandas #tensorflow #keras #neural-network Tensorflow 1.13.1 tf.data map multiple images with a single row together

### Bounty: 50

I’m building my tf dataset with multiple inputs (images and numerical/categorical data). The problem I am having is that multiple images correspond to the same row in the pd.DataFrame I have. I am doing regression.

So how (even when shuffling all the inputs) do I ensure that each image gets mapped to the correct row?

Again, say I have 10 rows and 100 images, with 10 images corresponding to each row. Now we shuffle the dataset, and we want to make sure that the shuffled images all still correspond to their respective rows.

I am using `tf.data.Dataset` to do this. I also have a directory structure in which the folder name corresponds to an element in the DataFrame, which is what I was thinking of using if I knew how to do the mapping.

i.e. `folder1` would be a row in the df, with cols like `dir_name, feature1, feature2, ...`. Naturally, `dir_name` should not be passed as data for the model to fit on.

``````
# images
path_ds = tf.data.Dataset.from_tensor_slices(paths)

# numerical & categorical features. First remove the dirs
x_train_input = X_train[X_train.columns.difference(['dir_name'])]
x_train_input = np.expand_dims(x_train_input, axis=1)
text_ds = tf.data.Dataset.from_tensor_slices(x_train_input)

# labels; y_train's cols are: 'label' and 'dir_name'
label_ds = tf.data.Dataset.from_tensor_slices(
    tf.cast(y_train['label'], tf.float32))

# test creation of dataset without prior shuffling.
# zip pairs elements positionally, so the construction order is preserved
xtrain_ = tf.data.Dataset.zip((path_ds, text_ds))
model_ds = tf.data.Dataset.zip((xtrain_, label_ds))

# Shuffling
BATCH_SIZE = 64

# Setting a shuffle buffer size as large as the dataset ensures that
# the data is completely shuffled
ds = model_ds.shuffle(buffer_size=len(paths))
ds = ds.repeat()
ds = ds.batch(BATCH_SIZE)
# prefetch lets the dataset fetch batches in the background while the
# model is training
# ds = ds.prefetch(buffer_size=AUTOTUNE)
ds = ds.prefetch(buffer_size=BATCH_SIZE)
``````
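One way to build that alignment up front (a sketch, assuming, as the question suggests, that each image path's parent folder matches a `dir_name` value; `align_rows_to_images`, `df`, and `paths` are illustrative names, not from the original code):

```python
import os
import pandas as pd

def align_rows_to_images(paths, df):
    """Return one feature row per image, ordered like `paths`,
    by matching each image's parent folder to df['dir_name']."""
    folders = [os.path.basename(os.path.dirname(p)) for p in paths]
    return df.set_index('dir_name').loc[folders].reset_index()

# toy example: 2 rows in the DataFrame, 4 images (2 per folder)
df = pd.DataFrame({'dir_name': ['folder1', 'folder2'],
                   'feature1': [0.1, 0.2]})
paths = ['data/folder2/a.png', 'data/folder1/b.png',
         'data/folder1/c.png', 'data/folder2/d.png']
aligned = align_rows_to_images(paths, df)
print(aligned['feature1'].tolist())  # [0.2, 0.1, 0.1, 0.2]
```

Because `tf.data.Dataset.zip` pairs elements positionally, zipping before `shuffle` (as in the snippet above) keeps each (image, features, label) tuple together; the alignment only has to be correct once, at construction time.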

Get this bounty!!!

## #StackBounty: #python #tensorflow #keras #neural-network #deep-learning Keras: CNN model is not learning

### Bounty: 50

I want to train a model to predict one’s emotion from physical signals. I have one physical signal that I am using as the input feature:

ecg (Electrocardiography)

In my dataset, there are 312 records in total belonging to the participants, and there are 18000 rows of data in each record. So when I combine them into a single data frame, there are 5616000 rows in total.

Here is my `train_x` dataframe;

``````
            ecg
0        0.1912
1        0.3597
2        0.3597
3        0.3597
4        0.3597
5        0.3597
6        0.2739
7        0.1641
8        0.0776
9        0.0005
10      -0.0375
11      -0.0676
12      -0.1071
13      -0.1197
..      .......
..      .......
..      .......
5616000 0.0226
``````

And I have 6 classes which are corresponding to emotions. I have encoded these labels with numbers;

anger = 0, calmness = 1, disgust = 2, fear = 3, happiness = 4, sadness = 5

Here is my train_y;

``````
         emotion
0              0
1              0
2              0
3              0
4              0
.              .
.              .
.              .
18001          1
18002          1
18003          1
.              .
.              .
.              .
360001         2
360002         2
360003         2
.              .
.              .
.              .
.              .
5616000        5
``````

To feed my CNN, I am reshaping train_x and one-hot encoding the train_y data.

``````
train_x = train_x.values.reshape(312,18000,1)
train_y = train_y.values.reshape(312,18000)
train_y = train_y[:,:1]  # truncate train_y to a single label per complete signal
train_y = pd.DataFrame(train_y)
train_y = pd.get_dummies(train_y[0])  # one-hot encoded labels
``````
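On toy shapes, the reshape/truncation logic above can be sketched with plain NumPy (4 records of 5 samples stand in for 312 records of 18000, and `np.eye` stands in for `pd.get_dummies`):

```python
import numpy as np

n_records, n_samples = 4, 5                   # stand-ins for 312 and 18000
x = np.arange(n_records * n_samples, dtype=float)
labels = np.repeat([0, 1, 2, 1], n_samples)   # one label repeated per sample row

train_x = x.reshape(n_records, n_samples, 1)  # (records, timesteps, channels)
train_y = labels.reshape(n_records, n_samples)[:, 0]  # one label per record

one_hot = np.eye(3)[train_y]                  # one-hot encode the labels
print(train_x.shape, train_y.tolist())        # (4, 5, 1) [0, 1, 2, 1]
```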

After these processes, here is what they look like;
train_x after the reshape;

``````
[[[0.60399908]
[0.79763273]
[0.79763273]
...
[0.09779361]
[0.09779361]
[0.14732245]]

[[0.70386905]
[0.95101687]
[0.95101687]
...
[0.41530258]
[0.41728671]
[0.42261905]]

[[0.75008021]
[1.        ]
[1.        ]
...
[0.46412148]
[0.46412148]
[0.46412148]]

...

[[0.60977509]
[0.7756791 ]
[0.7756791 ]
...
[0.12725148]
[0.02755331]
[0.02755331]]

[[0.59939494]
[0.75514785]
[0.75514785]
...
[0.0391334 ]
[0.0391334 ]
[0.0578706 ]]

[[0.5786066 ]
[0.71539303]
[0.71539303]
...
[0.41355098]
[0.41355098]
[0.4112712 ]]]
``````

train_y after one hot encoding;

``````
    0  1  2  3  4  5
0    1  0  0  0  0  0
1    1  0  0  0  0  0
2    0  1  0  0  0  0
3    0  1  0  0  0  0
4    0  0  0  0  0  1
5    0  0  0  0  0  1
6    0  0  1  0  0  0
7    0  0  1  0  0  0
8    0  0  0  1  0  0
9    0  0  0  1  0  0
10   0  0  0  0  1  0
11   0  0  0  0  1  0
12   0  0  0  1  0  0
13   0  0  0  1  0  0
14   0  1  0  0  0  0
15   0  1  0  0  0  0
16   1  0  0  0  0  0
17   1  0  0  0  0  0
18   0  0  1  0  0  0
19   0  0  1  0  0  0
20   0  0  0  0  1  0
21   0  0  0  0  1  0
22   0  0  0  0  0  1
23   0  0  0  0  0  1
24   0  0  0  0  0  1
25   0  0  0  0  0  1
26   0  0  1  0  0  0
27   0  0  1  0  0  0
28   0  1  0  0  0  0
29   0  1  0  0  0  0
..  .. .. .. .. .. ..
282  0  0  0  1  0  0
283  0  0  0  1  0  0
284  1  0  0  0  0  0
285  1  0  0  0  0  0
286  0  0  0  0  1  0
287  0  0  0  0  1  0
288  1  0  0  0  0  0
289  1  0  0  0  0  0
290  0  1  0  0  0  0
291  0  1  0  0  0  0
292  0  0  0  1  0  0
293  0  0  0  1  0  0
294  0  0  1  0  0  0
295  0  0  1  0  0  0
296  0  0  0  0  0  1
297  0  0  0  0  0  1
298  0  0  0  0  1  0
299  0  0  0  0  1  0
300  0  0  0  1  0  0
301  0  0  0  1  0  0
302  0  0  1  0  0  0
303  0  0  1  0  0  0
304  0  0  0  0  0  1
305  0  0  0  0  0  1
306  0  1  0  0  0  0
307  0  1  0  0  0  0
308  0  0  0  0  1  0
309  0  0  0  0  1  0
310  1  0  0  0  0  0
311  1  0  0  0  0  0

[312 rows x 6 columns]
``````

After reshaping, I have created my CNN model;

``````
model = Sequential()
# kernel_size is 700 because 18000 rows = 60 seconds, so 700 rows ~= 2.33 seconds,
# and there are roughly two heartbeat peaks every 2 seconds in an ecg signal.
model.add(Conv1D(100, 700, activation='relu', input_shape=(18000, 1)))
# pool and map to the 6 classes so the output matches the one-hot targets
model.add(GlobalAveragePooling1D())
model.add(Dense(6, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.fit(train_x, train_y, epochs=50, batch_size=32, validation_split=0.33, shuffle=False)
``````
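One shape detail worth checking: with "valid" padding (the Keras Conv1D default), a single `Conv1D(100, 700)` layer turns the `(18000, 1)` input into a `(17301, 100)` feature map, so some pooling or flattening plus a 6-way softmax head is needed before `categorical_crossentropy` can match the length-6 one-hot targets. The length arithmetic, as a quick sketch:

```python
def conv1d_out_len(n, kernel, stride=1):
    # output length for "valid" padding, the Keras Conv1D default
    return (n - kernel) // stride + 1

print(conv1d_out_len(18000, 700))  # 17301
```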

The problem is that the accuracy never goes above 0.2, and it fluctuates up and down. It looks like the model is not learning anything. I have tried adding layers, playing with the learning rate, changing the loss function, changing the optimizer, scaling the data, and normalizing the data, but nothing has solved the problem. I have also tried simpler Dense models and LSTM models, but I can’t find an approach that works.

How can I solve this problem? Thanks in advance.

I wanted to add the training results; below are the first 50 epochs of an 80-epoch run;

``````
Epoch 1/80
249/249 [==============================] - 24s 96ms/step - loss: 2.3118 - acc: 0.1406 - val_loss: 1.7989 - val_acc: 0.1587
Epoch 2/80
249/249 [==============================] - 19s 76ms/step - loss: 2.0468 - acc: 0.1647 - val_loss: 1.8605 - val_acc: 0.2222
Epoch 3/80
249/249 [==============================] - 19s 76ms/step - loss: 1.9562 - acc: 0.1767 - val_loss: 1.8203 - val_acc: 0.2063
Epoch 4/80
249/249 [==============================] - 19s 75ms/step - loss: 1.9361 - acc: 0.2169 - val_loss: 1.8033 - val_acc: 0.1905
Epoch 5/80
249/249 [==============================] - 19s 74ms/step - loss: 1.8834 - acc: 0.1847 - val_loss: 1.8198 - val_acc: 0.2222
Epoch 6/80
249/249 [==============================] - 19s 75ms/step - loss: 1.8278 - acc: 0.2410 - val_loss: 1.7961 - val_acc: 0.1905
Epoch 7/80
249/249 [==============================] - 19s 75ms/step - loss: 1.8022 - acc: 0.2450 - val_loss: 1.8092 - val_acc: 0.2063
Epoch 8/80
249/249 [==============================] - 19s 75ms/step - loss: 1.7959 - acc: 0.2369 - val_loss: 1.8005 - val_acc: 0.2222
Epoch 9/80
249/249 [==============================] - 19s 75ms/step - loss: 1.7234 - acc: 0.2610 - val_loss: 1.7871 - val_acc: 0.2381
Epoch 10/80
249/249 [==============================] - 19s 75ms/step - loss: 1.6861 - acc: 0.2972 - val_loss: 1.8017 - val_acc: 0.1905
Epoch 11/80
249/249 [==============================] - 19s 75ms/step - loss: 1.6696 - acc: 0.3173 - val_loss: 1.7878 - val_acc: 0.1905
Epoch 12/80
249/249 [==============================] - 19s 75ms/step - loss: 1.5868 - acc: 0.3655 - val_loss: 1.7771 - val_acc: 0.1270
Epoch 13/80
249/249 [==============================] - 19s 75ms/step - loss: 1.5751 - acc: 0.3936 - val_loss: 1.7818 - val_acc: 0.1270
Epoch 14/80
249/249 [==============================] - 19s 75ms/step - loss: 1.5647 - acc: 0.3735 - val_loss: 1.7733 - val_acc: 0.1429
Epoch 15/80
249/249 [==============================] - 19s 75ms/step - loss: 1.4621 - acc: 0.4177 - val_loss: 1.7759 - val_acc: 0.1270
Epoch 16/80
249/249 [==============================] - 19s 75ms/step - loss: 1.4519 - acc: 0.4498 - val_loss: 1.8005 - val_acc: 0.1746
Epoch 17/80
249/249 [==============================] - 19s 75ms/step - loss: 1.4489 - acc: 0.4378 - val_loss: 1.8020 - val_acc: 0.1270
Epoch 18/80
249/249 [==============================] - 19s 75ms/step - loss: 1.4449 - acc: 0.4297 - val_loss: 1.7852 - val_acc: 0.1587
Epoch 19/80
249/249 [==============================] - 19s 75ms/step - loss: 1.3600 - acc: 0.5301 - val_loss: 1.7922 - val_acc: 0.1429
Epoch 20/80
249/249 [==============================] - 19s 75ms/step - loss: 1.3349 - acc: 0.5422 - val_loss: 1.8061 - val_acc: 0.2222
Epoch 21/80
249/249 [==============================] - 19s 75ms/step - loss: 1.2885 - acc: 0.5622 - val_loss: 1.8235 - val_acc: 0.1746
Epoch 22/80
249/249 [==============================] - 19s 75ms/step - loss: 1.2291 - acc: 0.5823 - val_loss: 1.8173 - val_acc: 0.1905
Epoch 23/80
249/249 [==============================] - 19s 75ms/step - loss: 1.1890 - acc: 0.6506 - val_loss: 1.8293 - val_acc: 0.1905
Epoch 24/80
249/249 [==============================] - 19s 75ms/step - loss: 1.1473 - acc: 0.6627 - val_loss: 1.8274 - val_acc: 0.1746
Epoch 25/80
249/249 [==============================] - 19s 75ms/step - loss: 1.1060 - acc: 0.6747 - val_loss: 1.8142 - val_acc: 0.1587
Epoch 26/80
249/249 [==============================] - 19s 75ms/step - loss: 1.0210 - acc: 0.7510 - val_loss: 1.8126 - val_acc: 0.1905
Epoch 27/80
249/249 [==============================] - 19s 75ms/step - loss: 0.9699 - acc: 0.7631 - val_loss: 1.8094 - val_acc: 0.1746
Epoch 28/80
249/249 [==============================] - 19s 75ms/step - loss: 0.9127 - acc: 0.8193 - val_loss: 1.8012 - val_acc: 0.1746
Epoch 29/80
249/249 [==============================] - 19s 75ms/step - loss: 0.9176 - acc: 0.7871 - val_loss: 1.8371 - val_acc: 0.1746
Epoch 30/80
249/249 [==============================] - 19s 75ms/step - loss: 0.8725 - acc: 0.8233 - val_loss: 1.8215 - val_acc: 0.1587
Epoch 31/80
249/249 [==============================] - 19s 75ms/step - loss: 0.8316 - acc: 0.8514 - val_loss: 1.8010 - val_acc: 0.1429
Epoch 32/80
249/249 [==============================] - 19s 75ms/step - loss: 0.7958 - acc: 0.8474 - val_loss: 1.8594 - val_acc: 0.1270
Epoch 33/80
249/249 [==============================] - 19s 75ms/step - loss: 0.7452 - acc: 0.8795 - val_loss: 1.8260 - val_acc: 0.1587
Epoch 34/80
249/249 [==============================] - 19s 75ms/step - loss: 0.7395 - acc: 0.8916 - val_loss: 1.8191 - val_acc: 0.1587
Epoch 35/80
249/249 [==============================] - 19s 75ms/step - loss: 0.6794 - acc: 0.9357 - val_loss: 1.8344 - val_acc: 0.1429
Epoch 36/80
249/249 [==============================] - 19s 75ms/step - loss: 0.6106 - acc: 0.9357 - val_loss: 1.7903 - val_acc: 0.1111
Epoch 37/80
249/249 [==============================] - 19s 75ms/step - loss: 0.5609 - acc: 0.9598 - val_loss: 1.7882 - val_acc: 0.1429
Epoch 38/80
249/249 [==============================] - 19s 75ms/step - loss: 0.5788 - acc: 0.9478 - val_loss: 1.8036 - val_acc: 0.1905
Epoch 39/80
249/249 [==============================] - 19s 75ms/step - loss: 0.5693 - acc: 0.9398 - val_loss: 1.7712 - val_acc: 0.1746
Epoch 40/80
249/249 [==============================] - 19s 75ms/step - loss: 0.4911 - acc: 0.9598 - val_loss: 1.8497 - val_acc: 0.1429
Epoch 41/80
249/249 [==============================] - 19s 75ms/step - loss: 0.4824 - acc: 0.9518 - val_loss: 1.8105 - val_acc: 0.1429
Epoch 42/80
249/249 [==============================] - 19s 75ms/step - loss: 0.4198 - acc: 0.9759 - val_loss: 1.8332 - val_acc: 0.1111
Epoch 43/80
249/249 [==============================] - 19s 75ms/step - loss: 0.3890 - acc: 0.9880 - val_loss: 1.9316 - val_acc: 0.1111
Epoch 44/80
249/249 [==============================] - 19s 75ms/step - loss: 0.3762 - acc: 0.9920 - val_loss: 1.8333 - val_acc: 0.1746
Epoch 45/80
249/249 [==============================] - 19s 75ms/step - loss: 0.3510 - acc: 0.9880 - val_loss: 1.8090 - val_acc: 0.1587
Epoch 46/80
249/249 [==============================] - 19s 75ms/step - loss: 0.3306 - acc: 0.9880 - val_loss: 1.8230 - val_acc: 0.1587
Epoch 47/80
249/249 [==============================] - 19s 75ms/step - loss: 0.2814 - acc: 1.0000 - val_loss: 1.7843 - val_acc: 0.2222
Epoch 48/80
249/249 [==============================] - 19s 75ms/step - loss: 0.2794 - acc: 1.0000 - val_loss: 1.8147 - val_acc: 0.2063
Epoch 49/80
249/249 [==============================] - 19s 75ms/step - loss: 0.2430 - acc: 1.0000 - val_loss: 1.8488 - val_acc: 0.1587
Epoch 50/80
249/249 [==============================] - 19s 75ms/step - loss: 0.2216 - acc: 1.0000 - val_loss: 1.8215 - val_acc: 0.1587
``````


## #StackBounty: #neural-network #regression #lstm #rnn #word-embeddings Understanding output of LSTM for regression

### Bounty: 50

I am working with embeddings and wanted to see how feasible it is to predict some scores attached to some sequences of words. The details of the scores are not important.

``````
Input (tokenized sentence): ('the', 'dog', 'ate', 'the', 'apple')
Output (float): 0.25
``````

I have been following this tutorial, which tries to predict the part-of-speech tags of such input. In that case, the output of the system is a distribution over all possible tags for each token in the sequence, e.g. for three possible POS classes `{'DET': 0, 'NN': 1, 'V': 2}`, the output for `('the', 'dog', 'ate', 'the', 'apple')` could be

``````
tensor([[-0.0858, -2.9355, -3.5374],
        [-5.2313, -0.0234, -4.0314],
        [-3.9098, -4.1279, -0.0368],
        [-0.0187, -4.7809, -4.5960],
        [-5.8170, -0.0183, -4.1879]])
``````

Each row is a token; the index of the highest value in a row is the best predicted POS tag for that token.
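Decoding that output tensor back into tags is just a per-row argmax (a small sketch on the numbers above, using plain NumPy instead of PyTorch):

```python
import numpy as np

logits = np.array([[-0.0858, -2.9355, -3.5374],
                   [-5.2313, -0.0234, -4.0314],
                   [-3.9098, -4.1279, -0.0368],
                   [-0.0187, -4.7809, -4.5960],
                   [-5.8170, -0.0183, -4.1879]])
ix_to_tag = {0: 'DET', 1: 'NN', 2: 'V'}

# one predicted tag per token: the column with the highest value in each row
tags = [ix_to_tag[i] for i in logits.argmax(axis=1)]
print(tags)  # ['DET', 'NN', 'V', 'DET', 'NN'] -> the dog ate the apple
```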

I understand this example relatively well, so I wanted to adapt it to a regression problem. The full code is below, but I am trying to make sense of the output.

``````
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)

class LSTMRegressor(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size):
        super(LSTMRegressor, self).__init__()
        self.hidden_dim = hidden_dim

        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)

        # The LSTM takes word embeddings as inputs, and outputs hidden states
        # with dimensionality hidden_dim.
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)

        # The linear layer that maps from hidden state space to a single output
        self.linear = nn.Linear(hidden_dim, 1)
        self.hidden = self.init_hidden()

    def init_hidden(self):
        # Before we've done anything, we don't have any hidden state.
        # Refer to the PyTorch documentation to see exactly
        # why they have this dimensionality.
        # The axes semantics are (num_layers, minibatch_size, hidden_dim)
        return (torch.zeros(1, 1, self.hidden_dim),
                torch.zeros(1, 1, self.hidden_dim))

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)

        lstm_out, self.hidden = self.lstm(embeds.view(len(sentence), 1, -1), self.hidden)
        regression = F.relu(self.linear(lstm_out.view(len(sentence), -1)))

        return regression

def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)

# ================================================

training_data = [
    ("the dog ate the apple".split(), 0.25),
]

word_to_ix = {}
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)

tag_to_ix = {"DET": 0, "NN": 1, "V": 2}  # leftover from the POS tutorial; unused here

# ================================================

EMBEDDING_DIM = 6
HIDDEN_DIM = 6

model = LSTMRegressor(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix))
loss_function = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# See what the results are before training
inputs = prepare_sequence(training_data[0][0], word_to_ix)
regr = model(inputs)

print(regr)

for epoch in range(100):  # again, normally you would NOT do 100 epochs, it is toy data
    for sentence, target in training_data:
        # Step 1. Remember that PyTorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Also, we need to clear out the hidden state of the LSTM,
        # detaching it from its history on the last instance.
        model.hidden = model.init_hidden()

        # Step 2. Get our inputs ready for the network, that is, turn them into
        # Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        target = torch.tensor(target, dtype=torch.float)

        # Step 3. Run our forward pass.
        score = model(sentence_in)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss = loss_function(score, target)
        loss.backward()
        optimizer.step()

# See what the results are after training
inputs = prepare_sequence(training_data[0][0], word_to_ix)
regr = model(inputs)

print(regr)
``````

The output is:

``````
# Before training
tensor([[0.0000],
        [0.0752],
        [0.1033],
        [0.0088],
        [0.1178]])
# After training
tensor([[0.6181],
        [0.4987],
        [0.3784],
        [0.4052],
        [0.4311]])
``````

But I don’t understand why. I was expecting a single output. The size of the tensor is the same as the number of tokens in the input. I would then guess that the hidden state is returned for each step of the input. Is that correct? Does that mean that the last item in the tensor (`tensor[-1]`, or is it the first, `tensor[0]`?) is the final prediction? Why are all the outputs returned? Or does my misunderstanding lie earlier, in the forward pass? Perhaps I should feed only the last item of the LSTM layer's output to the linear layer?
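The relationship between the per-step outputs and the final hidden state can be inspected directly (a small PyTorch sketch with made-up dimensions matching the toy setup above; it illustrates the shapes, not a full answer to the bounty question):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, emb_dim, hid_dim = 5, 6, 6   # made-up sizes matching the toy example
lstm = nn.LSTM(emb_dim, hid_dim)
linear = nn.Linear(hid_dim, 1)

x = torch.randn(seq_len, 1, emb_dim)  # (seq_len, batch=1, embedding_dim)
out, (h_n, c_n) = lstm(x)             # out: one hidden state per timestep, (seq_len, 1, hid_dim)

# For a sentence-level score, map only the last timestep's hidden state
# to a scalar instead of mapping all of them:
score = linear(out[-1])               # shape (1, 1): a single prediction
print(score.shape)

# For a single-layer, unidirectional LSTM, out[-1] equals h_n[0]:
print(torch.allclose(out[-1], h_n[0]))  # True
```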

I am also interested to know how this extrapolates to bidirectional LSTMs and multilayer LSTMs, and even how this would work with GRUs (bidirectional or not).

The bounty will be given to the person who can explain why we would use the last output or the last hidden state, and what the difference means from a goal-directed perspective. In addition, some information about multilayer architectures and bidirectional RNNs is welcome. For instance, is it common practice to sum or concatenate the outputs and hidden states of a bidirectional LSTM/GRU to get your data into a sensible shape? If so, how do you do it?


## #StackBounty: #neural-network #statistics #recurrent-neural-net #forecast #forecasting Is an Arma model equivalent to a 1-layer Recurre…

### Bounty: 50

Given a time series $$f(t)$$ to forecast, let us consider an ARMA model of the form:
$$f(t) = c + \sum_{i=1}^p a_i f(t-i) + e(t) + \sum_{j=1}^q b_j e(t-j)$$

where $$e(t)$$ are the forecast errors.

On the train set, if $$f(t)$$ is the ground truth, then we define its estimate obtained with this model as $$\widetilde{f}(t) = f(t) + e(t)$$.

Let $$m = \min(p, q)$$; we can rewrite the first equation as:
$$\widetilde{f}(t) = c + \sum_{i=1}^m (a_i + b_i) f(t-i) + \sum_{i=m+1}^p a_i f(t-i) - \sum_{j=1}^q b_j \widetilde{f}(t-j)$$
which, after reparametrization, can be rewritten as:
$$\widetilde{f}(t) = c + \sum_{i=1}^k c_i f(t-i) - \sum_{j=1}^q b_j \widetilde{f}(t-j)$$
This is the equation of a 1-layer recurrent neural network (RNN) without an activation function.

So, are ARMA models a subset of RNNs, or is there a flaw in this reasoning?
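The algebra can be sanity-checked numerically (a toy sketch with invented coefficients, p = q = 1, using the sign convention e(t) = f(t) - f~(t) implied by the rewritten equation):

```python
import numpy as np

c, a1, b1 = 0.5, 0.7, 0.3             # made-up ARMA(1,1) coefficients
rng = np.random.default_rng(0)
f = rng.normal(size=20)               # stand-in "ground truth" series

ft_arma = np.zeros_like(f)            # ARMA form, with an explicit error term
ft_rnn = np.zeros_like(f)             # reparametrized "linear RNN" form
for t in range(1, len(f)):
    e_prev = f[t-1] - ft_arma[t-1]    # previous forecast error
    ft_arma[t] = c + a1 * f[t-1] + b1 * e_prev
    ft_rnn[t] = c + (a1 + b1) * f[t-1] - b1 * ft_rnn[t-1]

print(np.allclose(ft_arma, ft_rnn))   # True: the two recursions coincide
```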
