#StackBounty: #deep-learning #classification #keras #convolutional-neural-network #ai Convolutional Neural Network for Signal Modulatio…

Bounty: 100

I recently posted another question and this question is the evolution of that one.

In any case, I will summarize the whole problem below, as if the previous question never existed.

Problem description

I’m doing Signal Modulation Classification using a Convolutional Neural Network and I want to improve performance.

Data

The dataset is composed of 220,000 rows like the ones shown below. The data is perfectly balanced: I have 20,000 datapoints for each label.

| Dataset column | Type | Range | Form | Notes |
| --- | --- | --- | --- | --- |
| Signal | i = real, q = real | — | [i_0, i_1, …, i_n], [q_0, q_1, …, q_n] | n = 127 |
| SNR | s = integer | [-18, 20] | s | — |
| Label | l = string | — | l | There are 11 labels |

The lower the SNR value, the noisier the signal: classifying low-SNR signals is not easy.

Neural Network

The network is a Convolutional Neural Network, coded as below:

from tensorflow import keras
from tensorflow.keras.layers import (BatchNormalization, Convolution2D, Dense,
                                     Dropout, Flatten, MaxPooling2D, Reshape)

DROPOUT_RATE = 0.5

# in_shp is the per-sample input shape as a list, e.g. list(X_train.shape[1:])
iq_in = keras.Input(shape=in_shp, name="IQ")
reshape = Reshape(in_shp + [1])(iq_in)  # append a channel axis for Conv2D
batch_normalization = BatchNormalization()(reshape)

conv_1 = Convolution2D(16, 4, padding="same", activation="relu")(batch_normalization)
max_pool = MaxPooling2D(padding='same')(conv_1)
batch_normalization_2 = BatchNormalization()(max_pool)
# note: Dense applied to a 4-D tensor acts position-wise along the last axis
fc1 = Dense(256, activation="relu")(batch_normalization_2)
conv_2 = Convolution2D(32, 2, padding="same", activation="relu")(fc1)
batch_normalization_3 = BatchNormalization()(conv_2)
max_pool_2 = MaxPooling2D(padding='same')(batch_normalization_3)

out_flatten = Flatten()(max_pool_2)
dr = Dropout(DROPOUT_RATE)(out_flatten)
fc2 = Dense(256, activation="relu")(dr)
batch_normalization_4 = BatchNormalization()(fc2)
fc3 = Dense(128, activation="relu")(batch_normalization_4)
output = Dense(11, name="output", activation="softmax")(fc3)

model = keras.Model(inputs=[iq_in], outputs=[output])
model.compile(loss='categorical_crossentropy', optimizer='adam')

model.summary()

[image: model summary]

Training

Training is done by splitting the data into a 75% training set and a 25% test set.

NB_EPOCH = 100     # number of epochs to train on
BATCH_SIZE = 1024  # training batch size

filepath = NEURAL_NETWORK_FILENAME

history = model.fit(
    X_train,
    Y_train,
    batch_size=BATCH_SIZE,
    epochs=NB_EPOCH,
    validation_data=(X_test, Y_test),
    callbacks = [
        keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, mode='auto'),
        keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=0, mode='auto')
    ])

# we re-load the best weights once training is finished
model.load_weights(filepath)

Results

[image: confusion matrix]

My evaluation measures how accurately the Neural Network classifies signals at different SNRs.

[image: accuracy vs. SNR]
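For reference, a per-SNR breakdown like the plot above can be computed with a short sketch along these lines; here snr_test is a hypothetical array (not shown in the code above) holding the SNR of each test row:

    import numpy as np

    # snr_test: hypothetical 1-D array with the SNR of each test row
    y_pred = model.predict(X_test).argmax(axis=1)
    y_true = Y_test.argmax(axis=1)

    for snr in np.unique(snr_test):
        mask = snr_test == snr
        acc = (y_pred[mask] == y_true[mask]).mean()
        print(f"SNR {snr:+d} dB: accuracy {acc:.3f}")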

What did I try?

This is a list of things I tried that I am sure make performance worse:

  • Reducing batch size (only increases training time without improving test accuracy)
  • Training without too noisy signals (lowers accuracy)
  • Moving the Dropout layer before the Flatten layer

Questions

Do you have any suggestions for getting better performance?

Thanks in advance!



#StackBounty: #machine-learning #deep-learning #keras #r #convolutional-neural-network Calculate importance of input data bands for CNN…

Bounty: 50

I constructed and trained a convolutional neural network using Keras in R with the TensorFlow backend. I feed the network multispectral images for a simple image classification task.

Is there some way to calculate which of the input bands were most important for the classification task? Ideally, I would like to have a plot with some measure of importance, grouped by bands and image classes.

How can I obtain this information? Would it be necessary / possible to calculate saliency maps for every band and picture, and take the mean or sum of these images per class and band?

Are there other ways to find out which band was most important for the classification of an image?

Edit: By saliency maps I mean these or these visualizations. They provide information on which parts of the image led the CNN to its classification decision. However, I have only ever seen one saliency map for the whole image. Is it possible to make one for each input band of an image, for example one per color channel for RGB input? (One possible approach is sketched below.)

(This is inspired by a visualization in this paper. I saw it, but I don't know whether it is valid to do and, if so, how to do it.)
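If gradient-based saliency is acceptable, a per-band importance score can be sketched as follows. This is a minimal sketch in Python for brevity (the Keras R interface exposes the same machinery); model, image, and class_index are hypothetical placeholders:

    import numpy as np
    import tensorflow as tf

    def per_band_saliency(model, image, class_index):
        # image: array of shape (height, width, n_bands)
        x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)
            score = model(x)[0, class_index]  # score of the class of interest
        grads = tape.gradient(score, x)[0]    # gradient w.r.t. the input pixels
        # aggregate |gradient| over the spatial axes -> one scalar per band
        return tf.reduce_mean(tf.abs(grads), axis=[0, 1]).numpy()

Averaging these per-band scores over all images of a class would then give the grouped importance plot described above.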



#StackBounty: #deep-learning #neural-network #convolutional-neural-network #autoencoder Autoencoder not learning walk forward image tra…

Bounty: 50

I have a series of 15 frames, each 60 rows × 50 columns. Over the course of those 15 frames, the moon moves from the top left to the bottom right.

Data = https://github.com/aiqc/AIQC/tree/main/remote_datum/image/liberty_moon

[images: example frames]

As my input data I have a 60×50 image. As my evaluation label I have the 60×50 image from 2 frames later. All values are divided by 255.

I am attempting an autoencoder.

    # Imports added for completeness; hp is the author's hyperparameter dict
    # (assume hp = {'multiplier': 1} for a single, untuned run).
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.models.Sequential()
    # Conv1D runs over the 60 rows; the 50 columns act as channels.
    model.add(layers.Conv1D(64*hp['multiplier'], 3, activation='relu', padding='same'))
    model.add(layers.MaxPool1D(2, padding='same'))   # 60 -> 30 rows
    model.add(layers.Conv1D(32*hp['multiplier'], 3, activation='relu', padding='same'))
    model.add(layers.MaxPool1D(2, padding='same'))   # 30 -> 15 rows
    model.add(layers.Conv1D(16*hp['multiplier'], 3, activation='relu', padding='same'))
    model.add(layers.MaxPool1D(2, padding='same'))   # 15 -> 8 rows

    model.add(layers.Conv1D(16*hp['multiplier'], 3, activation='relu', padding='same'))
    model.add(layers.UpSampling1D(2))                # 8 -> 16 rows
    model.add(layers.Conv1D(32*hp['multiplier'], 3, activation='relu', padding='same'))
    model.add(layers.UpSampling1D(2))                # 16 -> 32 rows
    # no padding here: 32 -> 30 rows, so the final upsampling restores 60
    model.add(layers.Conv1D(64*hp['multiplier'], 3, activation='relu'))
    model.add(layers.UpSampling1D(2))                # 30 -> 60 rows

    model.add(layers.Conv1D(50, 3, activation='sigmoid', padding='same'))
    # last layer tried sigmoid with BCE loss.
    # last layer tried relu with MAE.

Tutorials say to use a final layer of sigmoid and BCE loss, but the values I'm producing must not be between 0 and 1, because the loss goes strongly negative.

[image: loss curve going negative]

If I use a final layer of relu with MAE loss, it appears to learn something.

[image: loss curves with relu + MAE]

But the predicted image is not great:

[image: predicted frame]



#StackBounty: #dataset #data-cleaning #convolutional-neural-network Do I need to manually trim 300 videos?

Bounty: 50

I wish to train a model that detects the breed of a dog from video input. I have a dataset containing 10 classes with 30 videos in each class. The problem is that the dog is not present throughout each video. The following are examples of two videos from the dataset:

Video 1: Video of backyard (first 5 seconds) –> Dog appears (15 seconds) –> Video of surrounding buildings (3 seconds)

Video 2: Video of grass (first 8 seconds) –> Dog appears (3 seconds) –> Video of nearby people (4 seconds)

I presume that my CNN would pick up redundant features and hence give incorrect outputs if I trained the model on the videos as-is. So, do I need to manually trim each of the 300 videos down to the part where the dog appears, or is there an easier way to approach this problem?



#StackBounty: #python #neural-network #convolutional-neural-network #overfitting Is it possible to use a Neural Network to interpolate …

Bounty: 50

I am completely new to artificial intelligence and neural networks. I am currently working on a plasma physics simulation project which requires a very high resolution data set. We currently have the results of two simulations of the same problem run at different resolutions, one higher than the other. However, we need an even higher resolution to use this data effectively. Unfortunately, it is not possible for us to run a higher-resolution simulation because of computational limitations. So instead, we are trying to interpolate the data we have to get a reasonable estimate of what the simulation result might be if we were to run it at a higher resolution. I tried to interpolate the data using conventional interpolation techniques and functions in SciPy, but the interpolated result is sometimes off by 20 to 30 percent at certain points.

Problem Statement and my Idea

So I was wondering if it is possible to use a neural network to generate an output that, when fed into the interpolator (code I have written using SciPy), would yield better results than using the interpolator alone. Currently, our data, when plotted, looks like this:

[image: low-res data plotted at time t]

This is the data plotted at a certain time t. However, we have similar data for about 30 different time steps, so we have 30 data sets that look similar to this but are slightly altered. And as I said before, we also have the high-resolution and low-resolution data sets for each of the 30 time steps.

My idea for the ANN is as follows: the low-resolution data (a 512 × 256 2-D array) can be fed into the network to output a slightly modified 512 × 256 2-D array. We can then input this modified data set into our interpolator and see if it matches the high-resolution data set (1024 × 512). The error function for the network would be a function of the difference between the high-res data set and the interpolated data set (maybe something like the sum of the squares of the element-wise differences). This can then be done for all 30 data sets to minimize the difference between the high-res and interpolated data sets.

If this works as planned, I would then apply this trained ANN to the high-resolution data set (1024 × 512) and feed its output into the interpolator.
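For what it's worth, a minimal sketch of this setup might look like the following. Everything here is an assumption for illustration: the network shape, the hypothetical arrays low_res (30, 512, 256, 1) and high_res (30, 1024, 512, 1), and the use of bilinear tf.image.resize as a differentiable stand-in for the SciPy interpolator (gradients cannot flow through SciPy code):

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_corrector():
        # Fully convolutional, so the same weights can later be applied
        # to inputs of any size (e.g. 1024 x 512).
        inp = layers.Input(shape=(None, None, 1))
        x = layers.Conv2D(32, 3, padding='same', activation='relu')(inp)
        x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
        out = layers.Conv2D(1, 3, padding='same')(x)  # the "modified" field
        return models.Model(inp, out)

    def interp_loss(high_res, corrected):
        # Differentiable stand-in for the SciPy interpolator.
        upsampled = tf.image.resize(corrected, (1024, 512), method='bilinear')
        return tf.reduce_mean(tf.square(high_res - upsampled))

    corrector = build_corrector()
    corrector.compile(optimizer='adam', loss=interp_loss)
    # corrector.fit(low_res, high_res, batch_size=2, epochs=100)

Because the network is fully convolutional, this sketch also bears on the second question below: the trained weights can be applied to the 1024 × 512 data directly, although there is no guarantee they transfer well across resolutions.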

Questions

  • Is it possible to create a neural network that can do this, and if so, what type of network does it?
  • Even if the network can be trained, how do we adapt it to work on the high-res data set (1024 × 512) when it was initially trained with the low-res data set (512 × 256)?
  • Is this a trustworthy method of predicting simulation results? (All 30 data sets look almost exactly like the image above, including the high-res results.)
  • If this is possible, please link a few resources so I can read about it further.



#StackBounty: #deep-learning #image-classification #convolutional-neural-network #distributed #inference Distributed inference for imag…

Bounty: 50

I would like to take the output of an intermediate layer of a CNN (layer G) and feed it to an intermediate layer of a wider CNN (layer H) to complete the inference.

Challenge: the two layers G, H have different dimensions, so this can't be done directly.
Solution: use a third CNN (call it r) that takes the output of layer G as input and produces a valid input for layer H.
Then the weights of both layer G and r will be tuned using the loss function:

$$L(W_G, W_r) = \mathrm{MSE}\big(\text{output of layer } H,\ \text{output of } r\big)$$
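A minimal sketch of this setup, under assumed shapes (layer G emitting 8 × 8 × 64 feature maps, layer H expecting 8 × 8 × 128) and with hypothetical sub-models small_up_to_G and wide_up_to_H:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # r: maps G's activations to valid inputs for H (shapes are assumptions)
    r = models.Sequential([
        layers.Input(shape=(8, 8, 64)),
        layers.Conv2D(128, 1, padding='same'),  # 1x1 conv to match channels
    ])

    mse = tf.keras.losses.MeanSquaredError()
    optimizer = tf.keras.optimizers.Adam()

    @tf.function
    def train_step(x, small_up_to_G, wide_up_to_H):
        # small_up_to_G: sub-model of the small CNN ending at layer G (trainable)
        # wide_up_to_H:  sub-model of the wide CNN ending at layer H (frozen)
        with tf.GradientTape() as tape:
            g_out = small_up_to_G(x, training=True)
            target = wide_up_to_H(x, training=False)     # activations at layer H
            loss = mse(target, r(g_out, training=True))  # L(W_G, W_r)
        train_vars = small_up_to_G.trainable_variables + r.trainable_variables
        optimizer.apply_gradients(zip(tape.gradient(loss, train_vars), train_vars))
        return loss

In this construction, only the variables passed to the optimizer (those of G's sub-model and of r) are updated.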

My question: will this method change only layer G's weights along with r's weights? Does the whole system require fine-tuning afterwards to update the weights of the other layers?



#StackBounty: #deep-learning #neural-network #convolutional-neural-network Non Linearity used in LeNet 5

Bounty: 50

I was looking at the original implementation of LeNet-5 and noticed a disparity between sources. Wikipedia suggests that the nonlinearity used is the same sigmoid in each layer; some blog posts use a combination of tanh and sigmoid; and Andrew Ng said it used some crude nonlinearity which no one uses today, without naming it. I looked at the original paper, but it is about 50 pages long and the diagram does not explicitly mention the activation functions used. Searching the text, the sigmoid is mentioned in the context of activations, while tanh is described as a squashing function; I am not sure whether these refer to the same thing or to different things, since other terms are used when referring to the sigmoid ones. Does anyone know what is going on here?
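For reference, the squashing function that the paper (LeCun et al., 1998) defines for its layers is, to the best of my knowledge, a scaled hyperbolic tangent:

$$f(a) = A \tanh(S a), \qquad A = 1.7159,\quad S = \tfrac{2}{3}$$

so the "crude nonlinearity" and the "squashing function" may both refer to this scaled tanh rather than to the logistic sigmoid.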

