I would like to know how I can add a normal-randomized 300-dimensional vector (element type tf.float32) whenever a word unknown to the pre-trained vocabulary is encountered. I am using pre-trained GloVe word embeddings, but in some cases I encounter unknown words, and I want to create a normal-randomized word vector for each newly encountered unknown word.
The problem is that with my current setup, I use tf.contrib.lookup.index_table_from_tensor to convert from words to integers based on the known vocabulary. This function can create new tokens and hash them for some predefined number of out-of-vocabulary (OOV) words, but my embed matrix will not contain embeddings for these new hash values. I am uncertain whether I can simply append randomized embeddings to the end of the embed list; a sketch of what I have in mind follows.
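To make the idea concrete, here is a rough sketch of what I am imagining: let index_table_from_tensor hash OOV words into num_oov_buckets extra indices (dropping default_value, since OOV words would no longer fall through to index 0), and concatenate that many normal-randomized rows onto the pre-trained matrix. The bucket count and the concatenation approach are my guesses, not something I know to be correct:

import numpy as np
import tensorflow as tf

NUM_OOV_BUCKETS = 500  # my guess at how many hashed OOV slots to reserve

def embed_tensor_oov(string_tensor, vocab, embed, trainable=True):
    # known words map to [0, len(vocab)); OOV words hash into
    # [len(vocab), len(vocab) + NUM_OOV_BUCKETS)
    vocab_lookup = tf.contrib.lookup.index_table_from_tensor(
        mapping=tf.constant(vocab),
        num_oov_buckets=NUM_OOV_BUCKETS)
    ids = vocab_lookup.lookup(string_tensor)

    # keep the pre-trained rows and append one normal-randomized
    # 300d row per OOV bucket
    pretrained = tf.constant(np.asarray(embed), dtype=tf.float32)
    oov_rows = tf.random_normal([NUM_OOV_BUCKETS, 300], dtype=tf.float32)
    embedding = tf.Variable(
        tf.concat([pretrained, oov_rows], axis=0),
        trainable=trainable,
        name="embed_oov")

    return tf.nn.embedding_lookup(embedding, ids)

One thing I am unsure about with this approach: distinct unknown words can collide into the same bucket, so it is not quite one fresh vector per unknown word.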
I also would like to do this efficiently, so a pre-built TensorFlow function or a method involving TensorFlow functions would probably be best. I define pre-known special tokens, such as an end-of-sentence token and a default unknown as the empty string ("", at index 0), but a single catch-all unknown is limited in its power to learn representations for the various different unknown words. I currently use tf.nn.embedding_lookup() as the final embedding step.
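For reference, the special tokens sit at the front of my vocab and embed lists, roughly like this (the layout is illustrative, with dummy data standing in for what load_pretrained_glove() returns; "<eos>" is just a placeholder name):

import numpy as np

# dummy stand-ins for the pre-trained GloVe data
glove_words = ["the", "cat"]
glove_vectors = [np.random.normal(size=300), np.random.normal(size=300)]

# "" (the default unknown) at index 0, end-of-sentence at index 1,
# then the pre-trained GloVe words from index 2 onward
vocab = ["", "<eos>"] + glove_words
embed = [np.zeros(300), np.zeros(300)] + glove_vectors  # special-token vectors are placeholders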
I would like to be able to add a new random 300d vector for each unknown word in the training data, and I would also like to add pre-made random word vectors for any unknown tokens not seen in training that may be encountered during testing. What is the most efficient way of doing this? My current code is below.
import numpy as np
import tensorflow as tf

def embed_tensor(string_tensor, trainable=True):
    """
    Convert a list of strings into a list of indices, then into 300d vectors.
    """
    # ordered lists of vocab and the corresponding (by index) 300d vectors
    vocab, embed = load_pretrained_glove()

    # set up a TensorFlow lookup from string word to unique integer
    vocab_lookup = tf.contrib.lookup.index_table_from_tensor(
        mapping=tf.constant(vocab),
        default_value=0)
    string_tensor = vocab_lookup.lookup(string_tensor)

    # define the word embedding matrix, initialized from the GloVe vectors
    embedding_init = tf.Variable(
        tf.constant(np.asarray(embed), dtype=tf.float32),
        trainable=trainable,
        name="embed_init")

    # return the word-embedded version of the sentence (one 300d vector per word)
    return tf.nn.embedding_lookup(embedding_init, string_tensor)
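For completeness, this is roughly how I call embed_tensor (the sample sentence is made up); note that index_table_from_tensor requires running tf.tables_initializer() before the lookup can be used:

sentences = tf.constant([["the", "quick", "brown", "unkword"]])
embedded = embed_tensor(sentences)

with tf.Session() as sess:
    sess.run(tf.tables_initializer())
    sess.run(tf.global_variables_initializer())
    print(sess.run(embedded).shape)  # (1, 4, 300): one 300d vector per word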