#StackBounty: #cognitive-psychology #attention #multi-tasking What is the name of inattention/autopilot phenomenon that results in '…

Bounty: 50

Suppose I am doing the laundry while simultaneously taking out the trash, so I am carrying a bag of dirty clothes in one hand and a bag of rubbish in the other. On autopilot, I might lift the lid of the garbage bin, throw my laundry into it, and turn to walk away before realising what I have done: thrown the wrong bag into the bin, i.e. “right idea, wrong object”.

I know that it’s some sort of inattention effect that is studied in cognitive psychology, but can somebody tell me what the correct name for this phenomenon is?

Edit: Dual-task interference is closer to the concept I’m looking for.


Get this bounty!!!

#StackBounty: #natural-language #attention What exactly are keys, queries, and values in attention mechanisms?

Bounty: 50

How should one understand the keys, queries, and values that are often mentioned in attention mechanisms?

I’ve tried searching online, but all the resources I find only speak of them as if the reader already knows what they are.

Judging by Bahdanau’s paper (Neural Machine Translation by Jointly Learning to Align and Translate), it seems the values are the annotation vectors $h_j$, but it is not clear what is meant by “query” and “key.”

The paper that I mentioned states that attention is calculated by

$$c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j$$

with

$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}$$

$$e_{ij} = a(s_{i-1}, h_j)$$
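
For concreteness, here is a minimal NumPy sketch of those three equations for a single decoder step (variable names and dimensions are mine, not the paper’s). On one common reading, $s_{i-1}$ plays the role of the query, the annotations $h_j$ act as the keys inside the alignment model $a$, and the same $h_j$ act as the values in the weighted sum:

```python
import numpy as np

def bahdanau_attention(s_prev, H, W1, W2, v):
    """One step of additive (Bahdanau) attention.

    s_prev : (dec_dim,)      previous decoder state s_{i-1} (the "query")
    H      : (T_x, enc_dim)  annotations h_1..h_{T_x} ("keys", then "values")
    W1, W2, v : parameters of the alignment model a(., .)
    """
    # e_ij = a(s_{i-1}, h_j) = v^T tanh(W1 s_{i-1} + W2 h_j)
    e = np.tanh(s_prev @ W1 + H @ W2) @ v        # shape (T_x,)
    # alpha_ij: softmax over source positions j (numerically stabilised)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()
    # c_i = sum_j alpha_ij h_j  -- the annotations now act as "values"
    return alpha @ H                             # shape (enc_dim,)
```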

Where are people getting the key, query, and value from these equations?

Thank you.


Get this bounty!!!

#StackBounty: #neural-networks #natural-language #rnn #attention Why do attention models need to choose a maximum sentence length?

Bounty: 50

I was going through the seq2seq-translation tutorial on pytorch and found the following sentence:

Because there are sentences of all sizes in the training data, to actually create and train this layer we have to choose a maximum sentence length (input length, for encoder outputs) that it can apply to. Sentences of the maximum length will use all the attention weights, while shorter sentences will only use the first few.

which didn’t really make sense to me. My understanding (from the Pointer Network paper) is that attention at time step $t$ is computed as:

$$u^{\langle t,j \rangle} = v^\top \tanh( W_1 e_j + W_2 d_t ) = \mathrm{NN}_u(e_j, d_t)$$
$$\alpha^{\langle t,j \rangle} = \mathrm{softmax}( u^{\langle t,j \rangle} ) = \frac{\exp(u^{\langle t,j \rangle})}{Z^{\langle t \rangle}} = \frac{\exp(u^{\langle t,j \rangle})}{\sum_{k=1}^{T_x} \exp( u^{\langle t,k \rangle} )}$$
$$d'_{\langle t+1 \rangle} = \sum_{j=1}^{T_x} \alpha^{\langle t,j \rangle} e_j$$

which means that the attention weights do not depend on a fixed encoder length: $T_x$ can vary from sentence to sentence and the equations above still apply.
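
Indeed, the equations themselves impose no length limit. Here is a minimal PyTorch sketch (names and dimensions are mine) that runs unchanged for any $T_x$:

```python
import torch

def pointer_attention(d_t, E, W1, W2, v):
    """E has shape (T_x, enc_dim); T_x may differ for every sentence."""
    u = torch.tanh(E @ W1 + d_t @ W2) @ v   # u^{<t,j>},     shape (T_x,)
    alpha = torch.softmax(u, dim=0)         # alpha^{<t,j>}, shape (T_x,)
    return alpha @ E                        # d'_{t+1},      shape (enc_dim,)

enc_dim, dec_dim, attn_dim = 8, 8, 16
W1 = torch.randn(enc_dim, attn_dim)
W2 = torch.randn(dec_dim, attn_dim)
v = torch.randn(attn_dim)
for T_x in (3, 7, 50):                      # no maximum length anywhere
    pointer_attention(torch.randn(dec_dim), torch.randn(T_x, enc_dim), W1, W2, v)
```

The limit in the tutorial seems to come from its particular decoder, which, if I read it correctly, produces the attention weights with a fixed-size `nn.Linear` layer whose output dimension must be chosen up front, hence the `max_length`.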

If that is true, why does the tutorial insist on a maximum sentence length?

They also say:

There are other forms of attention that work around the length limitation by using a relative position approach. Read about “local attention” in Effective Approaches to Attention-based Neural Machine Translation.

which also confused me. Any clarification?


Perhaps related:

https://discuss.pytorch.org/t/attentiondecoderrnn-without-max-length/13473


Crossposted:

https://discuss.pytorch.org/t/why-do-attention-models-need-to-choose-a-maximum-sentence-length/47201

https://www.reddit.com/r/deeplearning/comments/bxbypj/why_do_attention_models_need_to_choose_a_maximum/?


Get this bounty!!!

#StackBounty: #neural-networks #online #attention How to use/treat a hidden layer as the new target to predict in a neural network?

Bounty: 50

Let’s suppose I have a neural network with one hidden layer. During training, for a given (input, target) pair, I want to perform several iterations: in the first iteration the network tries to predict the true target, and in the second iteration it somehow uses my first prediction (or other information learned during the first iteration) as the new target.

My initial idea was to go through a full epoch using the true targets, and then in a second epoch use the stored predictions as the new targets. However, it seems this could all be integrated into one network, end to end.

Is there a way to do this without information leakage?
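
As a starting point, here is my sketch of a two-pass training step along these lines; `two_pass_step` is a hypothetical helper, not an established recipe:

```python
import torch
import torch.nn as nn

def two_pass_step(model, x, y, optimizer, loss_fn=nn.MSELoss()):
    # Iteration 1: ordinary supervised update against the true target y.
    optimizer.zero_grad()
    pred1 = model(x)
    loss_fn(pred1, y).backward()
    optimizer.step()

    # Iteration 2: reuse the first prediction as the new target.
    # detach() treats it as a constant, so no gradient flows back
    # through the pseudo-target -- one way to limit leakage.
    pseudo_target = pred1.detach()
    optimizer.zero_grad()
    loss_fn(model(x), pseudo_target).backward()
    optimizer.step()
```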

For anyone interested in something like this, I found the following similar work: https://wayve.ai/blog/dreaming-about-driving-imagination-rl


Get this bounty!!!

#StackBounty: #attention #working-memory What is a validated online tool to administer Stroop and N-back test?

Bounty: 50

I’m trying to conduct a study in the field of cognitive experimental psychiatry. I need an online, PC-based tool to administer Stroop, N-back, and choice reaction time tests. I found PsyToolkit, an open-source, programmable tool that includes Stroop and N-back tasks, but I’m not sure whether it is validated. By validation, I mean whether the N-back test, as run on PsyToolkit, actually measures working memory (as it is supposed to).
If anyone knows of validated online cognitive tests, I would appreciate pointers. Thanks.


Get this bounty!!!