I was wondering: in a multi-layer feed-forward neural network, should the input layer include a bias neuron, or is this only useful in hidden layers? If so, why?
Should an input layer include a bias neuron?
No, an input layer doesn't need a connection to the bias neuron, since any activation it receives from the bias neuron would be completely overridden by the actual input.
For example, imagine a network that's trying to solve the classic XOR problem, using this architecture (where the neuron just marked 1 is the bias):
To run this network on input (1,0), you simply clamp the activations of neurons X1 and X2 to 1 and 0 respectively. Now, if X1 or X2 had also received input from the bias, that input would be overridden anyway, making such a connection pointless.
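For concreteness, here is a minimal Python sketch of such a network, with hand-picked weights of my own (not the ones from the figure): the bias contributes to the hidden and output layers, while X1 and X2 are simply clamped to the input values.

```python
import numpy as np

def step(z):
    # Heaviside step activation: 1 if z > 0, else 0
    return (z > 0).astype(float)

# Hand-picked weights for a 2-2-1 XOR network (illustrative only).
W_hidden = np.array([[1.0, 1.0],    # h1 ~ OR(x1, x2)
                     [1.0, 1.0]])   # h2 ~ AND(x1, x2)
b_hidden = np.array([-0.5, -1.5])   # bias feeds the hidden layer...
W_out = np.array([1.0, -2.0])       # out ~ h1 AND NOT h2
b_out = -0.5                        # ...and the output layer

def forward(x):
    # The inputs are clamped directly; no bias term touches them.
    # A bias connection into x would be overridden by the clamping.
    h = step(W_hidden @ x + b_hidden)
    return step(W_out @ h + b_out)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", forward(np.array(x, dtype=float)))
# prints 0.0, 1.0, 1.0, 0.0 -- the XOR truth table
```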
Excuse me, in your drawing the bias neuron is part of the input layer, since its output is forwarded to the hidden layer; am I wrong? –
Chemisette
Eh, kinda. The bias neuron is generally depicted as being in its own little layer. I thought your question was about whether or not to CONNECT the bias unit to the units in the input layer. In either case the answer is a firm no; you only ever need a single bias unit that has a constant activation and sits in its own layer. Generally it connects to all non-input layers (see the sketch below). –
Requirement
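To make the "single bias unit with constant activation" picture concrete, here is a small NumPy check (names, shapes, and values are my own, chosen for illustration) showing that such a unit is equivalent to an ordinary per-layer bias vector:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # some layer's input activations
W = rng.normal(size=(4, 3))     # weights into a 4-unit layer
b = rng.normal(size=4)          # one bias weight per unit

# Option A: explicit bias vector.
z_a = W @ x + b

# Option B: a single "bias unit" with constant activation 1,
# connected to every unit in the layer; its connection weights
# play the role of b.
W_aug = np.hstack([W, b[:, None]])   # append bias weights as a column
x_aug = np.append(x, 1.0)            # append the constant-1 activation
z_b = W_aug @ x_aug

print(np.allclose(z_a, z_b))  # True: the two views are equivalent
```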
For further explanation, see this question: https://mcmap.net/q/646028/-why-the-bias-is-necessary-in-ann-should-we-have-separate-bias-for-each-layer/821806. Also, it's not my figure, but rather one that I obtained from here: home.agh.edu.pl/~vlsi/AI/xor_t/en/main.htm –
Requirement
The inputs X1, X2 in the above picture are not really neurons, they are ...erm, inputs. Every real neuron (something that takes inputs, has weights, has an activation function, does computation) should have a bias. Mathematically, every neuron computes φ(b + x1·w1 + x2·w2 + ... + xn·wn), where b is the bias (see the sketch below). But the xi's are just numbers; you can't connect them to anything. –
Hall
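As a worked instance of that formula, here is a one-neuron sketch in Python (the sigmoid is my arbitrary choice for φ; any activation would do):

```python
import math

def neuron(x, w, b):
    # Computes phi(b + x1*w1 + ... + xn*wn), with a sigmoid as phi.
    z = b + sum(xi * wi for xi, wi in zip(x, w))
    return 1.0 / (1.0 + math.exp(-z))

# The x's are plain numbers fed in; only the neuron owns a bias.
print(neuron(x=[1.0, 0.0], w=[0.7, -0.3], b=0.1))
```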