I am implementing a custom loss function in Keras. The model is an autoencoder. The first layer is an Embedding layer, which embeds an input of size (batch_size, sentence_length) into (batch_size, sentence_length, embedding_dimension). The model then compresses the embedding into a vector of a certain dimension, and finally must reconstruct the embedding (batch_size, sentence_length, embedding_dimension).
The catch is that the embedding layer is trainable, and the loss must use the weights of the embedding layer (I have to sum over all the word embeddings in my vocabulary).
For example, suppose I want to train on the toy example "the cat". Then sentence_length is 2, embedding_dimension is 10 and the vocabulary size is 50, so the embedding matrix has shape (50, 10). The Embedding layer's output X has shape (1, 2, 10). It then passes through the model and the output X_hat also has shape (1, 2, 10).
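
To make this concrete, here is a minimal sketch of the toy setup with the numbers above (latent_dim and the Dense layers are just placeholders, my real encoder/decoder is more complicated):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Toy numbers from the example above
vocab_size = 50
sentence_length = 2
embedding_dimension = 10
latent_dim = 5  # placeholder for the compressed dimension

inputs = tf.keras.Input(shape=(sentence_length,), dtype="int32")
embedding_layer = layers.Embedding(vocab_size, embedding_dimension)    # trainable by default
x = embedding_layer(inputs)                                            # X: (batch, 2, 10)
z = layers.Dense(latent_dim, activation="relu")(layers.Flatten()(x))   # compressed vector
x_hat = layers.Dense(sentence_length * embedding_dimension)(z)
x_hat = layers.Reshape((sentence_length, embedding_dimension))(x_hat)  # X_hat: (batch, 2, 10)
model = Model(inputs, x_hat)
```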
The model must be trained to maximize the probability that the vector X_hat[0] representing 'the' is the most similar to the vector X[0] representing 'the' in the Embedding layer, and likewise for 'cat'. The loss requires me to compute the cosine similarity between X and X_hat, normalized by the sum of the cosine similarities between X_hat and every embedding (all 50, since the vocabulary size is 50) in the embedding matrix, i.e. the rows of the embedding layer's weight matrix.
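
Here is a minimal sketch of the loss I am trying to write; the closure make_reconstruction_loss and the embedding_layer.embeddings access are my guesses about how this could look, not something I know to be correct:

```python
import tensorflow as tf

def make_reconstruction_loss(embedding_layer):
    # I *assume* the trainable matrix can be read as embedding_layer.embeddings
    # (shape (vocab_size, embedding_dimension)); whether this gives the current
    # weights at every training iteration is exactly what I am asking.
    def loss(x_true, x_hat):
        emb = embedding_layer.embeddings                       # (50, 10), assumption
        x_true_n = tf.math.l2_normalize(x_true, axis=-1)       # (batch, 2, 10)
        x_hat_n = tf.math.l2_normalize(x_hat, axis=-1)
        emb_n = tf.math.l2_normalize(emb, axis=-1)

        # cosine similarity between each reconstructed vector and its target
        num = tf.reduce_sum(x_hat_n * x_true_n, axis=-1)       # (batch, 2)
        # cosine similarity between each reconstructed vector and every vocabulary embedding
        sims = tf.einsum("bse,ve->bsv", x_hat_n, emb_n)        # (batch, 2, 50)
        den = tf.reduce_sum(sims, axis=-1)                     # (batch, 2)

        # negative because Keras minimizes the loss
        return -tf.reduce_mean(num / den)
    return loss
```

I would then compile with something like model.compile(optimizer="adam", loss=make_reconstruction_loss(embedding_layer)), but I don't know whether the matrix read inside the loss is really the up-to-date one at each training step.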
How can I access the weights of the embedding layer at each iteration of the training process?
Thank you!