Initializing Out of Vocabulary (OOV) tokens
I am building a TensorFlow model for an NLP task, and I am using the pretrained GloVe 300d word-embedding dataset.

Obviously, some tokens can't be resolved to embeddings because they were not included in the training corpus of the word-embedding model, e.g. rare names.

I could replace those tokens with all-zero vectors, but rather than dropping this information on the floor, I would prefer to encode it somehow and include it in my training data.
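For concreteness, here is a minimal sketch of that zero-vector fallback (assuming the GloVe vectors are loaded into a Python dict named `glove`; the names here are illustrative, not from any particular library):

```python
import numpy as np

embedding_dim = 300  # GloVe 300d

def embed(token, glove):
    # Current approach: fall back to an all-zeros vector for OOV tokens,
    # which makes every OOV word indistinguishable from every other.
    return glove.get(token, np.zeros(embedding_dim, dtype=np.float32))
```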

Say I have the word 'raijin', which can't be resolved to an embedding vector. What would be the best way to encode it consistently with the GloVe embeddings? What is the best approach to convert it to a 300d vector?

Thank you.

Unbounded answered 3/8, 2017 at 21:58

Instead of assigning all the Out of Vocabulary tokens a common UNK vector (zeros), it is better to assign each of them a unique random vector. At least this way, when you compute the similarity between them and any other word, each of them is distinct, and the model can learn something from it. In the UNK case they are all identical, so all the UNK words are treated as having the same context.
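Here is a minimal sketch of that idea (the `glove` dict and `vocab` list are illustrative assumptions, not part of any particular library):

```python
import numpy as np

embedding_dim = 300
rng = np.random.default_rng(42)  # fix a seed so each OOV vector is reproducible

# Build the embedding matrix: pretrained vectors where available,
# a unique random vector per OOV word otherwise.
embedding_matrix = np.zeros((len(vocab), embedding_dim), dtype=np.float32)
for i, word in enumerate(vocab):
    if word in glove:
        embedding_matrix[i] = glove[word]
    else:
        # Uniform in [-0.25, 0.25] is a common heuristic that roughly
        # matches the scale of pretrained GloVe components.
        embedding_matrix[i] = rng.uniform(-0.25, 0.25, embedding_dim)
```

The resulting `embedding_matrix` can then be used to initialize an embedding layer (e.g. via `tf.keras.layers.Embedding` with a constant initializer), kept frozen or fine-tuned as you prefer. Because each OOV word is assigned its vector once and reused, the model sees it consistently across the whole dataset.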

I tried this approach and got a 3% accuracy improvement on the Quora Duplicate question pair detection dataset using an LSTM model.

Nothing answered 4/8, 2017 at 1:20
Good idea. If possible, can you give some keywords or a link regarding how you implemented this, @vijai? I have a fixed `embeddings_matrix` and a PyTorch pipeline, and it is a bit cumbersome to implement this. – Nyasaland

It's worth having a look at this EMNLP paper on handling OOV tokens by generating embeddings for them:

Mimicking Word Embeddings using Subword RNNs
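The core idea is to train a small character-level model to regress from a word's spelling to its pretrained embedding, then use that model to synthesize vectors for OOV words. A rough Keras sketch of the idea follows; this is a simplification for illustration, not the paper's exact architecture:

```python
import tensorflow as tf

char_vocab_size = 128  # assumption: byte-level character IDs, 0 = padding
embedding_dim = 300    # must match the pretrained GloVe dimensionality

# Character-level LSTM that maps a word's spelling to a 300d vector.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(char_vocab_size, 32, mask_zero=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(embedding_dim),
])
model.compile(optimizer="adam", loss="mse")

# Train on (character IDs of in-vocabulary word, its GloVe vector) pairs,
# then model.predict(character IDs of 'raijin') yields a 300d vector that
# is at least consistent with the word's subword structure.
```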

Surfacetosurface answered 5/1, 2018 at 15:40
