I’m wondering how to interpret a recurrent architecture in an EEG context. Specifically, I’m thinking of this as a Recurrent CNN (as opposed to architectures like LSTM), but maybe it applies to other types of recurrent networks as well.
When I read about R-CNNs, they’re usually explained in image classification contexts. They’re typically described as “learning over time” or “including the effect of time t−1 on the current input”.
This interpretation/explanation gets really confusing when working with EEG data. An example of an R-CNN being used on EEG data can be found here.
Imagine I have training examples, each consisting of a 1×512 array. This array captures the voltage readings of 1 electrode at 512 consecutive time points. If I use this as input to a Recurrent CNN (using 1D convolutions), the recurrent part of the model isn’t actually capturing “time”, right? (as the descriptions/explanations above would imply). In this context, time is already captured by the second dimension of the array.
So with a setup like this, what does the recurrent part of the network actually allow us to model that a regular CNN can’t (if not time)?
It seems to me that “recurrent” here just means doing a convolution, adding the result to the original input, and convolving again, repeated for some fixed number of recurrent steps. What advantage does this process actually give?
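To make the process I’m describing concrete, here is a minimal numpy sketch of my understanding of one recurrent convolutional layer on a 1×512 input. The kernel values, the number of recurrent steps, and the exact update rule (convolve the sum of the running state and the original input, with the same kernel at every step) are my assumptions, not taken from any particular paper:

```python
import numpy as np

def conv1d_same(x, w):
    # "Same"-padded 1D cross-correlation of signal x with kernel w,
    # so the output has the same length as the input.
    k = len(w)
    pad = k // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + k], w) for i in range(len(x))])

def recurrent_conv(x, w, steps):
    # My reading of the recurrent layer: convolve, add the original
    # input back in, and convolve again, for a fixed number of steps.
    # The same kernel w is shared across all recurrent iterations.
    h = conv1d_same(x, w)
    for _ in range(steps - 1):
        h = conv1d_same(h + x, w)
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal(512)          # 1 electrode, 512 time points
w = np.array([0.25, 0.5, 0.25])       # hypothetical 3-tap kernel

out = recurrent_conv(x, w, steps=3)
print(out.shape)  # (512,)
```

Note that with steps=1 this reduces to an ordinary convolution; each extra step re-applies the same small kernel, so the output at one time point ends up depending on a wider window of the input. If that widening receptive field (with shared weights) is the whole story, that’s what I’m asking about above.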