Can I use batch normalization layer right after input layer and not normalize my data? May I expect to get similar effect/performance?
In keras functional it would be something like this:
inputs = Input(...)
x = BatchNormalization()(inputs)
...
You can do it. But the nice thing about batchnorm, in addition to stabilizing the activation distributions, is that the mean and standard deviation are likely to migrate as the network learns.
Effectively, placing batchnorm right after the input layer is a fancy data pre-processing step. It helps, sometimes a lot (e.g. in linear regression). But it's easier and more efficient to compute the mean and variance of the whole training sample once than to learn them per batch. Note that batchnorm isn't free in terms of performance, and you shouldn't abuse it.
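The trade-off described above can be sketched in plain NumPy (synthetic data; this ignores batchnorm's learnable scale/shift parameters and its epsilon term, and just compares the two ways of estimating the statistics):

```python
import numpy as np

# Hypothetical training data: 1000 samples, 3 features,
# deliberately off-center and off-scale.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))

# Option A: dataset-level preprocessing, statistics computed once.
global_mean = data.mean(axis=0)
global_std = data.std(axis=0)
preprocessed = (data - global_mean) / global_std

# Option B: what a batchnorm layer does at train time,
# re-estimating mean/std from each mini-batch.
batch = data[:32]
batch_mean = batch.mean(axis=0)
batch_std = batch.std(axis=0)
batch_normed = (batch - batch_mean) / batch_std

# Per-batch statistics are noisy estimates of the global ones,
# which is why batchnorm also tracks running averages for inference.
print(np.abs(preprocessed.mean(axis=0)).max())  # essentially 0
print(np.abs(batch_mean - global_mean).max())   # small but non-zero noise
```

Option A pays the cost once before training; Option B pays it on every forward pass and adds sampling noise that shrinks as the batch size grows.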
Yes, this is possible, and I have used it very successfully for vision models. There are pros and cons to this approach, though; I've written about them in detail here: Replace Manual Normalization with Batch Normalization in Vision AI Models. https://towardsdatascience.com/replace-manual-normalization-with-batch-normalization-in-vision-ai-models-e7782e82193c