Share weights between two dense layers in Keras

I have the following code. What I want to do is to share the same weights between the two dense layers.

The equations for the op1 and op2 layers would be:

op1 = w1y1 + w2y2 + w3y3 + w4y4 + w5y5 + b1

op2 = w1z1 + w2z2 + w3z3 + w4z4 + w5z5 + b1

Here the weights w1 to w5 are shared between the op1 and op2 layers, whose inputs are (y1 to y5) and (z1 to z5) respectively.

from keras.layers import Input, Dense, concatenate
from keras.models import Model

ip_shape1 = Input(shape=(5,))
ip_shape2 = Input(shape=(5,))

# Two separate Dense layers: each call creates its own kernel and bias,
# so the weights are not shared here.
op1 = Dense(1, activation = "sigmoid", kernel_initializer = "ones")(ip_shape1)
op2 = Dense(1, activation = "sigmoid", kernel_initializer = "ones")(ip_shape2)

merge_layer = concatenate([op1, op2])
predictions = Dense(1, activation='sigmoid')(merge_layer)

model = Model(inputs=[ip_shape1, ip_shape2], outputs=predictions)

Thanks in advance.

Asked by Aponte on 17/4, 2018 at 10:00. Comments (2):
Do you mean the biases must be separate? – Preparatory
@DanielMöller I think if I share the weights then the biases will also be the same. Question updated. – Aponte

This uses the same layer for both sides (weights and bias are shared):

from keras.layers import Input, Dense, Concatenate
from keras.models import Model

ip_shape1 = Input(shape=(5,))
ip_shape2 = Input(shape=(5,))

# One Dense layer instance, created once...
dense = Dense(1, activation = "sigmoid", kernel_initializer = "ones")

# ...and called on both inputs, so both branches use the same kernel and bias.
op1 = dense(ip_shape1)
op2 = dense(ip_shape2)

merge_layer = Concatenate()([op1, op2])
predictions = Dense(1, activation='sigmoid')(merge_layer)

model = Model(inputs=[ip_shape1, ip_shape2], outputs=predictions)
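A quick way to confirm the sharing actually happened (a minimal sketch, assuming the model above has been built): the shared layer contributes a single kernel/bias pair no matter how many inputs it is called on.

# Check the shared weights: Dense.get_weights() returns [kernel, bias]
shared_kernel, shared_bias = dense.get_weights()
print(shared_kernel.shape)            # (5, 1): one kernel used by both inputs
print(len(model.trainable_weights))   # 4: shared kernel + bias, plus the final Dense kernel + bias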
Answered by Sporangium on 17/4, 2018 at 13:15. Comments (3):
It's also described here: keras.io/getting-started/functional-api-guide/#shared-layers – Unaccustomed
What if you want the weights to be trainable in one network but not in the other, but you want the 2nd one to be updated whenever the 1st one is trained? – Cabanatuan
Add a Lambda(lambda x: K.stop_gradient(x)) at the end of side 2. – Preparatory
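A minimal sketch of that last suggestion (assuming the same shared dense layer as in the answer, with the Keras backend imported as K): side 2's output is wrapped in a Lambda that applies K.stop_gradient, so gradient updates reach the shared weights only through side 1, while side 2 still uses whatever weights side 1 learns.

from keras.layers import Input, Dense, Concatenate, Lambda
from keras.models import Model
from keras import backend as K

ip_shape1 = Input(shape=(5,))
ip_shape2 = Input(shape=(5,))

dense = Dense(1, activation="sigmoid", kernel_initializer="ones")

op1 = dense(ip_shape1)                                         # trainable path
op2 = Lambda(lambda x: K.stop_gradient(x))(dense(ip_shape2))   # no gradients flow back through side 2

merge_layer = Concatenate()([op1, op2])
predictions = Dense(1, activation='sigmoid')(merge_layer)

model = Model(inputs=[ip_shape1, ip_shape2], outputs=predictions)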
