In my model, I use pre-trained GloVe embeddings. I wish to keep them non-trainable in order to decrease the number of model parameters and avoid overfitting. However, I have one special symbol whose embedding I do want to train.
Using the provided Embedding layer, I can only use the trainable parameter to set the trainability of all embeddings at once, like this:
from keras.layers import Embedding

embedding_layer = Embedding(voc_size,
                            emb_dim,
                            weights=[embedding_matrix],
                            input_length=MAX_LEN,
                            trainable=False)
Is there a Keras-level solution to training only a subset of embeddings?
Please note:
- There is not enough data to generate new embeddings for all words.
- The answers I have found so far only relate to native TensorFlow, not Keras.
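For concreteness, here is a sketch of the kind of workaround I could imagine if there is no direct Keras option: two Embedding layers, one frozen with the GloVe weights and one trainable, blended by a mask so that only the special symbol's vector comes from the trainable table. SPECIAL_IDX and the sizes below are placeholders I made up for the example; I have not verified that this is the idiomatic approach.

import numpy as np
from keras.layers import Input, Embedding, Lambda
from keras.models import Model
from keras import backend as K

SPECIAL_IDX = 0                                       # hypothetical index of the special symbol
voc_size, emb_dim, MAX_LEN = 10000, 100, 50           # example sizes
embedding_matrix = np.random.rand(voc_size, emb_dim)  # stand-in for the GloVe matrix

tokens = Input(shape=(MAX_LEN,), dtype='int32')

# Frozen lookup holding the pre-trained GloVe vectors.
frozen = Embedding(voc_size, emb_dim,
                   weights=[embedding_matrix],
                   input_length=MAX_LEN,
                   trainable=False)(tokens)

# Trainable lookup; only the special symbol's row will ever be used.
learned = Embedding(voc_size, emb_dim,
                    input_length=MAX_LEN,
                    trainable=True)(tokens)

# Mask of shape (batch, MAX_LEN, 1): 1.0 where the token is the
# special symbol, 0.0 elsewhere.
is_special = Lambda(lambda t: K.expand_dims(
    K.cast(K.equal(t, SPECIAL_IDX), 'float32'), -1))(tokens)

# Blend: frozen vectors for ordinary words, the learned vector for the symbol.
embedded = Lambda(lambda a: a[0] * (1.0 - a[2]) + a[1] * a[2])(
    [frozen, learned, is_special])

model = Model(tokens, embedded)

Because the mask zeroes the trainable lookup everywhere except at the special symbol, only that one row should receive nonzero gradients, while the frozen layer keeps the GloVe vectors untouched. Is there a cleaner, Keras-native way to get the same effect?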