Word Embedding, LookupTable, Word Embedding Visualizations

I need to ask a few questions regarding word embeddings... they could be basic.

  1. When we convert a one-hot vector of a word, for instance king [0 0 0 1 0], into an embedded vector E = [0.2, 0.4, 0.2, 0.2], is there any significance to each index of the resulting word vector? For instance E[1], which is 0.2: what specifically does E[1] define (I know it is basically a transformation into another space)? Or does the word vector define context only collectively, not through its individual values?
  2. How does the dimension (reduced or increased) of a word vector matter compared to the original one-hot vector?
  3. How can we define a lookup table in terms of an embedding layer?
  4. Is the lookup table a kind of randomly generated table, or has it already been trained separately on the instances in the data, so that we just use it later in the neural network operations?
  5. Is there any method to visualize an embedded vector at a hidden layer (as we have in image-based neural network processing)?

Thanks in advance

Languish answered 3/7, 2017 at 9:24 Comment(0)

1: Each element (or group of elements) in an embedding vector has some meaning, but it is mostly not interpretable by humans. Depending on which algorithm you use, a word embedding vector may carry different meaning, but it is usually useful. For example, with GloVe, similar words such as 'frog' and 'toad' stay near each other in the vector space, and king - man results in a vector similar to queen.
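
As a rough illustration, here is a minimal sketch of that analogy arithmetic with made-up 3-dimensional vectors (real GloVe or word2vec vectors have hundreds of dimensions and are learned from data):

    # Toy sketch of "king - man + woman ~ queen"; the numbers are invented
    # for illustration, not real GloVe/word2vec values.
    import numpy as np

    vectors = {
        "king":  np.array([0.8, 0.7, 0.1]),
        "man":   np.array([0.6, 0.1, 0.1]),
        "woman": np.array([0.6, 0.1, 0.8]),
        "queen": np.array([0.8, 0.7, 0.8]),
        "frog":  np.array([0.1, 0.9, 0.4]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    target = vectors["king"] - vectors["man"] + vectors["woman"]
    best = max((w for w in vectors if w != "king"), key=lambda w: cosine(target, vectors[w]))
    print(best)  # "queen" with these toy numbers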

3: Turn the vocabulary into indices. For example, you have the vocabulary list [dog, cat, mouse, feed, play, with]. Then the sentence "Dog play with cat" becomes 0, 4, 5, 1. Meanwhile, you have an embedding matrix as follows:

    [0.1, 0.1, 0]      # embedding vector of dog
    [0.2, 0.5, 0.1]    # embedding vector of cat
    [...]
    [...]
    [...]
    [...]

where the first row is the embedding vector of dog, the second row is cat, and so on. Then the indices (0, 4, 5, 1), after lookup, become the matrix [[0.1, 0.1, 0], [...], [...], [0.2, 0.5, 0.1]].
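
For illustration, a minimal NumPy sketch of that lookup, using the same toy vocabulary and made-up embedding values:

    # Index-based lookup into an embedding matrix (values are made up).
    import numpy as np

    vocab = ["dog", "cat", "mouse", "feed", "play", "with"]
    word_to_index = {w: i for i, w in enumerate(vocab)}

    embedding_matrix = np.array([
        [0.1, 0.1, 0.0],   # row 0: dog
        [0.2, 0.5, 0.1],   # row 1: cat
        [0.3, 0.2, 0.4],   # row 2: mouse
        [0.7, 0.1, 0.9],   # row 3: feed
        [0.5, 0.6, 0.2],   # row 4: play
        [0.4, 0.3, 0.3],   # row 5: with
    ])

    sentence = "dog play with cat".split()
    indices = [word_to_index[w] for w in sentence]   # [0, 4, 5, 1]
    looked_up = embedding_matrix[indices]            # shape (4, 3): one row per word
    print(indices)
    print(looked_up)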

4: Either or both:
  • You can randomly initialize the embedding vectors and train them with gradient descent.
  • You can take pretrained word vectors and keep them fixed (i.e. read-only, no change). You can train word vectors in one model and use them in another model, or you can download pretrained word vectors online, for example Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip from the GloVe project.
  • You can initialize with pretrained word vectors and then fine-tune them with your model by gradient descent.
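
A sketch of these three options using PyTorch's nn.Embedding (assuming PyTorch; other frameworks have equivalent layers, and the random tensor below just stands in for real pretrained vectors):

    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 6, 3

    # Option 1: random initialization, trained end to end by gradient descent.
    embedding = nn.Embedding(vocab_size, embed_dim)

    # Option 2: load pretrained vectors (e.g. GloVe rows) and keep them frozen.
    pretrained = torch.randn(vocab_size, embed_dim)   # stand-in for real GloVe vectors
    frozen = nn.Embedding.from_pretrained(pretrained, freeze=True)

    # Option 3: initialize from pretrained vectors and keep fine-tuning them.
    finetuned = nn.Embedding.from_pretrained(pretrained, freeze=False)

    indices = torch.tensor([0, 4, 5, 1])   # "dog play with cat" as indices
    print(embedding(indices).shape)        # torch.Size([4, 3])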

Update: A one-hot vector does not contain any information by itself. You can think of a one-hot vector as just the index of that word in the vocabulary. For example, dog => [1, 0, 0, 0, 0, 0] and cat => [0, 1, 0, 0, 0, 0]. There are some differences between one-hot and index:

  • If you input a list of indices, [0, 4, 5, 1], to your multi-layer perceptron, it cannot learn anything (I tried...). But if you input a matrix of one-hot vectors, [[...1][1...][...][...]], it learns something. However, that is costly in terms of RAM and CPU.

  • One-hot costs a lot of memory to store zeros. Thus, I suggest randomly initializing the embedding matrix if you don't have one, storing the dataset as indices, and using the indices to look up embedding vectors.
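
To make the trade-off concrete, a small NumPy sketch showing that one-hot rows and indices select the same embedding rows, while the one-hot matrix stores mostly zeros:

    import numpy as np

    vocab_size = 6
    indices = np.array([0, 4, 5, 1])                 # "dog play with cat"

    # One-hot encoding of the same sentence: shape (4, 6), mostly zeros.
    one_hot = np.zeros((len(indices), vocab_size))
    one_hot[np.arange(len(indices)), indices] = 1.0

    # Multiplying by the embedding matrix selects the same rows as direct
    # indexing, but indexing avoids storing and multiplying all those zeros.
    embedding_matrix = np.random.rand(vocab_size, 3)
    assert np.allclose(one_hot @ embedding_matrix, embedding_matrix[indices])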

"its mean that lookup table is just a matrix of embedded vectors (already been trained seperately via word2vec or...) for each word in the vocabulary. and while in the process of neural network either we can use an Embedding Layer or we can just refer to embedded vector in lookup table for that particular embedded vector against particular one-hot vector."

Use the "INDEX" to look-up in lookup table. Turn dog into 0, cat into 1. One-hot vector and index contain same information, but one-hot cost more memory to store. Moreover, a lot of deeplearning framework accept index as input to embedding layer (which, output is a vector represent for a word in that index.)

". How we get this embedding vector..."

=> Read the papers; see the word2vec and GloVe papers. Ask your lecturers for more detail, they are willing to help you.

Crustacean answered 3/7, 2017 at 15:24 Comment(3)
Thanks for this detailed explanation... Regarding your answer, I still need to clarify something. 1 - So we shouldn't care much about the individual values in the dense or embedded vector (as you mentioned, the vector for dog is [0.1, 0.1, 0]; here "0.1" alone doesn't mean anything, only collectively with the values at all the other indices). How do we get this embedding vector for each one-hot vector as a by-product of word2vec or others (the trained weights on the hidden layer? because we cannot fine-tune the values of a one-hot vector) or ???Languish
2 - It means that the lookup table is just a matrix of embedded vectors (already trained separately via word2vec or ...) for each word in the vocabulary, and in the process of the neural network we can either use an embedding layer or just refer to the embedded vector in the lookup table for the particular one-hot vector.Languish
For king - man, the model tries to find something that is as close as possible to king and as far away as possible from man. That might be queen.Felipe
