#StackBounty: #neural-networks #conv-neural-network #transfer-learning How should I standardize input when fine-tuning a CNN?

Bounty: 50

I want to use the VGG16 model pre-trained on ImageNet and fine-tune some of its layers on my dataset. The VGG16 paper explains the authors' preprocessing, which I understand is important to replicate when fine-tuning someone else's network:

> The only preprocessing we do is subtracting the mean RGB value, computed on the training set, from each pixel.
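For reference, here is a minimal NumPy sketch of that step. The per-channel mean values below are the ImageNet means commonly used with VGG16 implementations; treat them as an illustrative assumption, not values quoted from the paper, and `subtract_mean_rgb` is just a hypothetical helper name.

```python
import numpy as np

# Commonly cited ImageNet per-channel means in RGB order (0-255 scale);
# an assumption for illustration, not values quoted from the VGG16 paper.
IMAGENET_MEAN_RGB = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def subtract_mean_rgb(images, mean_rgb=IMAGENET_MEAN_RGB):
    """Zero-center a batch of images by subtracting a per-channel mean.

    images: array of shape (N, H, W, 3) with pixel values in [0, 255].
    Note: no division by the standard deviation, matching the quote above.
    """
    return images.astype(np.float32) - mean_rgb

# Example with a random batch standing in for real data.
batch = np.random.randint(0, 256, size=(4, 224, 224, 3)).astype(np.float32)
centered = subtract_mean_rgb(batch)
```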

  1. Why didn't they also divide by the standard deviation? I thought this kind of standardization (i.e. zero mean and unit variance) was considered good practice.

More importantly, since I am fine-tuning the network on my own dataset, I wonder whether I should:

  2. Standardize the input using statistics computed over both ImageNet and my dataset, only over ImageNet, or only over my dataset? (A sketch of these options follows below.)
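To make the options concrete, here is a hedged NumPy sketch of the two extremes, again using the commonly cited ImageNet means as a stand-in for the paper's exact values. The function names `dataset_channel_stats`, `preprocess_with_imagenet_mean`, and `preprocess_with_own_stats` are hypothetical, chosen only for illustration.

```python
import numpy as np

# Commonly cited ImageNet per-channel means (RGB order, 0-255 scale);
# an assumption for illustration, not values taken from the VGG16 paper.
IMAGENET_MEAN_RGB = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def dataset_channel_stats(images):
    """Per-channel mean and std computed over my own training images.

    images: array of shape (N, H, W, 3) with pixel values in [0, 255].
    """
    images = images.astype(np.float32)
    return images.mean(axis=(0, 1, 2)), images.std(axis=(0, 1, 2))

def preprocess_with_imagenet_mean(images):
    # Option A: reuse the statistics the pre-trained weights were fit to.
    return images.astype(np.float32) - IMAGENET_MEAN_RGB

def preprocess_with_own_stats(images, mean, std=None):
    # Option B: center (and optionally scale) with my dataset's own statistics.
    out = images.astype(np.float32) - mean
    return out / std if std is not None else out
```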

