#StackBounty: #deep-learning #classification #keras #convolutional-neural-network #ai Convolutional Neural Network for Signal Modulatio…

Bounty: 100

I recently posted another question, and this question is an evolution of that one. In any case, I will summarize the whole problem below, as if the previous question never existed.

Problem description

I’m doing Signal Modulation Classification using a Convolutional Neural Network and I want to improve performance.

Data

The dataset is composed of 220,000 rows like the ones below. The data is perfectly balanced: I have 20,000 datapoints for each label.

Dataset column | Type           | Range     | Form                                   | Notes
Signal         | i=real, q=real |           | [i_0, i_1, …, i_n], [q_0, q_1, …, q_n] | n=127
SNR            | s=integer      | [-18, 20] | s                                      |
Label          | l=string       |           | l                                      | There are 11 labels

The lower the SNR value, the noisier the signal: classifying low-SNR signals is not easy.
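To make the data layout concrete, here is a minimal sketch of how such a dataset could be assembled as NumPy arrays. The names (`X`, `snrs`, `Y`) and the synthetic random values are assumptions for illustration, not the actual dataset loader:

```python
import numpy as np

N_SAMPLES = 6    # stand-in for the 220,000 rows
SIG_LEN = 128    # n = 127 -> sample indices 0..127
N_CLASSES = 11   # the 11 modulation labels

rng = np.random.default_rng(0)
# Each datapoint is a 2 x 128 array: one row of I samples, one row of Q samples
X = rng.normal(size=(N_SAMPLES, 2, SIG_LEN)).astype("float32")
# SNR values fall in [-18, 20]
snrs = rng.choice(np.arange(-18, 22, 2), size=N_SAMPLES)
# Integer class labels, then one-hot encoded for categorical_crossentropy
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)
Y = np.eye(N_CLASSES, dtype="float32")[y]
```

With this layout, each `X[k]` is one signal, `snrs[k]` its noise level, and `Y[k]` its one-hot label.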

Neural Network

The neural network is a Convolutional Neural Network, coded as follows:

from tensorflow import keras
from tensorflow.keras.layers import (BatchNormalization, Convolution2D, Dense,
                                     Dropout, Flatten, MaxPooling2D, Reshape)

DROPOUT_RATE = 0.5

# in_shp is the I/Q input shape (presumably [2, 128], given n = 127)
iq_in = keras.Input(shape=in_shp, name="IQ")
reshape = Reshape(in_shp + [1])(iq_in)
batch_normalization = BatchNormalization()(reshape)

conv_1 = Convolution2D(16, 4, padding="same", activation="relu")(batch_normalization)
max_pool = MaxPooling2D(padding='same')(conv_1)
batch_normalization_2 = BatchNormalization()(max_pool)
fc1 = Dense(256, activation="relu")(batch_normalization_2)
conv_2 = Convolution2D(32, 2, padding="same", activation="relu")(fc1)
batch_normalization_3 = BatchNormalization()(conv_2)
max_pool_2 = MaxPooling2D(padding='same')(batch_normalization_3)

out_flatten = Flatten()(max_pool_2)
dr = Dropout(DROPOUT_RATE)(out_flatten)
fc2 = Dense(256, activation="relu")(dr)
batch_normalization_4 = BatchNormalization()(fc2)
fc3 = Dense(128, activation="relu")(batch_normalization_4)
output = Dense(11, name="output", activation="softmax")(fc3)

model = keras.Model(inputs=[iq_in], outputs=[output])
model.compile(loss='categorical_crossentropy', optimizer='adam')

model.summary()

[Image: model.summary() output]

Training

Training is done by splitting the data into 75% training set and 25% test set.
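The 75/25 split can be sketched as a random index permutation. This is an assumed implementation (the variable names `X`, `Y`, `train_idx`, `test_idx` are placeholders); `sklearn.model_selection.train_test_split` with `stratify` would be an alternative that keeps the 11 classes balanced in both sets:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8  # stand-in for the full dataset size
X = rng.normal(size=(n, 2, 128))
Y = np.eye(11)[rng.integers(0, 11, size=n)]

# Shuffle indices, then take the first 75% for training, the rest for test
idx = rng.permutation(n)
split = int(0.75 * n)
train_idx, test_idx = idx[:split], idx[split:]
X_train, X_test = X[train_idx], X[test_idx]
Y_train, Y_test = Y[train_idx], Y[test_idx]
```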

NB_EPOCH = 100     # number of epochs to train on
BATCH_SIZE = 1024  # training batch size

filepath = NEURAL_NETWORK_FILENAME

history = model.fit(
    X_train,
    Y_train,
    batch_size=BATCH_SIZE,
    epochs=NB_EPOCH,
    validation_data=(X_test, Y_test),
    callbacks = [
        keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, mode='auto'),
        keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=0, mode='auto')
    ])

# we re-load the best weights once training is finished
model.load_weights(filepath)

Results

[Image: confusion matrix]

My evaluation system measures how accurately my neural network classifies signals at different SNRs.

[Image: accuracy vs. SNR]
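A per-SNR evaluation like this can be sketched by grouping test predictions by SNR and computing accuracy within each group. The function name and the arrays `y_true`, `y_pred`, `snrs` below are assumptions for illustration, not the author's actual evaluation code:

```python
import numpy as np

def accuracy_per_snr(y_true, y_pred, snrs):
    """Return a {snr: accuracy} dict for each distinct SNR value."""
    result = {}
    for s in np.unique(snrs):
        mask = snrs == s                      # select signals at this SNR
        result[int(s)] = float(np.mean(y_true[mask] == y_pred[mask]))
    return result

# Tiny worked example: two SNR groups of three signals each
y_true = np.array([0, 1, 2, 0, 1, 2])
y_pred = np.array([0, 1, 1, 0, 0, 2])
snrs   = np.array([-18, -18, -18, 20, 20, 20])
acc = accuracy_per_snr(y_true, y_pred, snrs)  # 2/3 correct in each group
```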

What did I try?

Here is a list of things I tried that I'm sure degrade performance (or, at best, don't help):

  • Reducing batch size (only increases training time without improving test accuracy)
  • Training without too noisy signals (lowers accuracy)
  • Moving the Dropout layer before the Flatten layer

Questions

Do you have any suggestions for getting better performance?

Thanks in advance!

