Keras: How to use the weights of a layer in a loss function?

I am implementing a custom loss function in Keras. The model is an autoencoder. The first layer is an Embedding layer, which embeds an input of size (batch_size, sentence_length) into (batch_size, sentence_length, embedding_dimension). The model then compresses the embedding into a vector of a certain dimension, and finally must reconstruct the embedding (batch_size, sentence_length, embedding_dimension).
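
Schematically, the model looks something like this (just a sketch with placeholder sizes and dense layers for the compression/reconstruction part; the exact encoder/decoder is not important for the question):

from keras.layers import Input, Embedding, Flatten, Dense, Reshape

sentence_length, embedding_dimension, vocab_size, latent_dim = 2, 10, 50, 16  # toy sizes

input_sequence = Input(shape=(sentence_length,), dtype='int32')
# (batch_size, sentence_length) -> (batch_size, sentence_length, embedding_dimension)
X = Embedding(vocab_size, embedding_dimension)(input_sequence)
# compress to a fixed-size vector
code = Dense(latent_dim, activation='relu')(Flatten()(X))
# reconstruct the embeddings: (batch_size, sentence_length, embedding_dimension)
X_hat = Reshape((sentence_length, embedding_dimension))(
    Dense(sentence_length * embedding_dimension)(code))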

But the embedding layer is trainable, and the loss must use the weights of the embedding layer (I have to sum over all word embeddings of my vocabulary).

For example, suppose I want to train on the toy example "the cat", with sentence_length 2, embedding_dimension 10, and a vocabulary size of 50, so the embedding matrix has shape (50, 10). The Embedding layer's output X has shape (1, 2, 10). It then passes through the model, and the output X_hat also has shape (1, 2, 10). The model must be trained to maximize the probability that the vector X_hat[0] representing 'the' is the most similar to the vector X[0] representing 'the' in the Embedding layer, and the same for 'cat'. But the loss is such that I have to compute the cosine similarity between X and X_hat, normalized by the sum of the cosine similarities between X_hat and every embedding in the embedding matrix (50 of them, since the vocabulary size is 50), i.e. the rows of the Embedding layer's weight matrix.
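
To make that concrete, here is a rough NumPy sketch of the quantity I mean, for a single token, where W is the (50, 10) embedding matrix and x_hat_t, x_t are the corresponding rows of X_hat[0] and X[0] (the names and the exact form are only illustrative):

import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def normalized_similarity(x_hat_t, x_t, W):
    # cosine similarity between the reconstructed vector and the true embedding...
    num = cosine(x_hat_t, x_t)
    # ...normalized by the sum of similarities to every embedding in the vocabulary (rows of W)
    den = sum(cosine(x_hat_t, w) for w in W)
    return num / den  # the loss would be something like the negative log of this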

But how can I access the weights of the embedding layer at each iteration of the training process?

Thank you!

Geopolitics answered 16/11, 2017 at 18:26 Comment(3)
It's possible to hack the model so that the output of the embedding layer goes into the loss function, but taking a layer's weights seems to require more complex work. (Extraversion)
Are you sure you want to do it this way? The normalization sum might explode. Do you have a paper where your method is described? Maybe you have a softmax output there. (Lyra)
I am trying to implement this paper, accepted at NIPS 2017: arxiv.org/pdf/1708.04729.pdf . Maybe I don't understand the paper well, but see Equation 1: the denominator takes the cosine similarity over all words of the vocabulary embedded in We, which is the embedding matrix. (Geopolitics)

It seems a bit crazy, but it works: instead of creating a custom loss function that I would pass to model.compile, the network computes the loss (Eq. 1 from arxiv.org/pdf/1708.04729.pdf) itself, in a function that I call with a Lambda layer:

loss = Lambda(lambda x: similarity(x[0], x[1], x[2]))([X_hat, X, embedding_matrix])    
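
The similarity function computes the normalized cosine similarity of Eq. 1 with Keras backend ops. A simplified sketch of it (assuming X_hat and X have shape (batch, sentence_length, embedding_dimension), embedding_matrix has shape (vocab_size, embedding_dimension), and ignoring details such as the exponentiation/temperature in the paper):

from keras import backend as K

def similarity(x_hat, x, w):
    # L2-normalize everything so that dot products become cosine similarities
    x_hat = K.l2_normalize(x_hat, axis=-1)
    x = K.l2_normalize(x, axis=-1)
    w = K.l2_normalize(w, axis=-1)
    # similarity between each reconstructed vector and its true embedding: (batch, sentence_length)
    num = K.sum(x_hat * x, axis=-1)
    # similarity between each reconstructed vector and every vocabulary embedding, summed: (batch, sentence_length)
    den = K.sum(K.dot(x_hat, K.transpose(w)), axis=-1)
    # per-token value that the dummy MSE below drives towards 0
    return -K.log(num / den)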

And the network has two outputs, X_hat and loss, but I give X_hat a weight of 0 and loss all the weight:

model = Model(input_sequence, [X_hat, loss])
model.compile(loss=mean_squared_error,
              optimizer=optimizer,
              loss_weights=[0., 1.])

When I train the model:

for i in range(epochs):
    for j in range(num_data):
        # embeddings of the input words, shape (1, sentence_length, embedding_dimension)
        input_embedding = model.layers[1].get_weights()[0][data[j:j+1]]
        # the first target is ignored (loss weight 0.); the second drives the loss output towards 0
        y = [input_embedding, 0]
        model.fit(data[j:j+1], y, batch_size=1, ...)

That way, the model is trained to drive the loss output towards 0, and when I want to use the trained model's predictions I take the first output, which is the reconstruction X_hat.
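
For example, to reconstruct new sentences (new_data being an array of word indices shaped like data):

X_hat_pred, loss_value = model.predict(new_data)  # only X_hat_pred is used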

Geopolitics answered 18/11, 2017 at 17:35 Comment(1)
Do you think this will work? Because in the computation graph the weights of model.layers[1] will not appear when taking the auto-differentiation. Please correct me if I am wrong. (Kyleekylen)
