I am creating a customized activation function, an RBF activation function in particular:
from keras import backend as K

l2_norm = lambda a, b: K.sqrt(K.sum(K.pow(a - b, 2), axis=0, keepdims=True))

def rbf2(x):
    X = ...  # here I need the inputs that I receive from the previous layer
    Y = ...  # here I need the weights that I should apply for this layer
    l2 = l2_norm(X, Y)
    res = K.exp(-1 * gamma * K.pow(l2, 2))  # gamma: a constant defined elsewhere
    return res
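As a side note, the axis=0 in l2_norm sums over the batch (first) dimension. A quick NumPy analogue makes the resulting shape visible (the shapes and dummy values here are my assumptions, chosen to match the 84-unit layer below):

```python
import numpy as np

a = np.ones((4, 84))   # pretend batch of layer1 outputs
b = np.zeros((4, 84))  # same-shaped tensor to compare against

# NumPy analogue of: K.sqrt(K.sum(K.pow(a - b, 2), axis=0, keepdims=True))
l2 = np.sqrt(((a - b) ** 2).sum(axis=0, keepdims=True))
print(l2.shape)  # (1, 84): one value per feature, batch collapsed
```

So with axis=0 the batch dimension is collapsed rather than the feature dimension, which matters for how the distance per neuron is meant to come out.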
The function rbf2 receives the output of the previous layer as input:
#some Keras layers
model.add(Dense(84, activation='tanh'))  # layer1
model.add(Dense(10, activation=rbf2))    # layer2
What should I do to get the inputs from layer1 and the weights of layer2 inside this customized activation function?
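Since a Keras activation function only receives the incoming tensor and has no access to the layer's weights, one common workaround is a custom layer that owns its own weight (centre) matrix and does the distance computation itself. A minimal sketch, assuming tf.keras and made-up choices for the class name, gamma, and initializer:

```python
import tensorflow as tf

class RBFLayer(tf.keras.layers.Layer):
    """Output layer computing exp(-gamma * ||x - w_j||^2) for each unit j."""

    def __init__(self, units, gamma=0.5, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.gamma = gamma  # assumed RBF width parameter

    def build(self, input_shape):
        # One centre vector per output neuron: shape (n_in, units).
        self.w = self.add_weight(
            name="centers",
            shape=(int(input_shape[-1]), self.units),
            initializer="glorot_uniform",
            trainable=True,
        )

    def call(self, inputs):
        # (batch, n_in, 1) - (n_in, units) broadcasts to (batch, n_in, units)
        diff = tf.expand_dims(inputs, -1) - self.w
        sq_dist = tf.reduce_sum(tf.square(diff), axis=1)  # (batch, units)
        return tf.exp(-self.gamma * sq_dist)
```

Such a layer would replace the Dense(10, activation=rbf2) line entirely, e.g. model.add(RBFLayer(10)), rather than being passed in as an activation.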
What I am actually trying to do is implement the output layer of the LeNet-5 neural network. The output layer of LeNet-5 is a bit special: instead of computing the dot product of the inputs and the weight vector, each neuron outputs the square of the Euclidean distance between its input vector and its weight vector.
For example, layer1 has 84 neurons and layer2 has 10 neurons. In the general case, to calculate the output of each of the 10 neurons of layer2, we take the dot product of the 84 outputs of layer1 with the 84 weights between layer1 and layer2, and then apply the softmax activation function over it.

But here, instead of doing the dot product, each neuron of layer2 outputs the square of the Euclidean distance between its input vector and its weight vector (I want to use this as my activation function).
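The squared distance described above can also be rewritten in terms of the familiar dot product, via the identity ||x - w||^2 = ||x||^2 - 2 x·w + ||w||^2. A NumPy check of that identity, with shapes taken from the question (batch size and random data are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 84))   # batch of layer1 outputs
W = rng.standard_normal((84, 10))  # layer2 weights, one column per neuron

# Direct computation via broadcasting: (batch, 84, 10) summed to (batch, 10)
direct = ((X[:, :, None] - W[None, :, :]) ** 2).sum(axis=1)

# Same values via ||x - w||^2 = ||x||^2 - 2*x.w + ||w||^2,
# which reuses the ordinary dot product X @ W.
expanded = ((X ** 2).sum(axis=1, keepdims=True)
            - 2.0 * X @ W
            + (W ** 2).sum(axis=0, keepdims=True))
```

The expanded form avoids materialising the (batch, 84, 10) broadcast tensor, which can matter for larger layers.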
Any help on creating an RBF activation function (calculating the Euclidean distance between the inputs a layer receives and its weights) and using it in the layer would also be helpful.
Comments:

"… layer1 and layer2 and pass it to your rbf function? If that's the case then are you sure it would work with the current definition of your activation function, since they have different shapes?" – Prolocutor

"… layer1 and weights of each neuron of layer2, and I want to calculate the Euclidean distance between them." – Oneman