CBOW vs. skip-gram: why invert context and target words?

On this page, it is said that:

[...] skip-gram inverts contexts and targets, and tries to predict each context word from its target word [...]

However, looking at the training dataset it produces, the contents of the X and Y pairs seem to be interchangeable, as with these two (X, Y) pairs:

(quick, brown), (brown, quick)

So why distinguish so much between context and target if it is the same thing in the end?

Also, while doing Udacity's Deep Learning course exercise on word2vec, I wonder why they draw such a strong distinction between the two approaches in this problem:

An alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset.

Would this not yield the same results?
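To make my confusion concrete, here is a minimal sketch (plain Python, a window size of 1, and my own assumption of how the examples are built) of what each variant would produce from the same sentence; the pairs look interchangeable to me, and only the grouping seems to differ:

    # My assumption of how the training examples are built (window = 1): both variants
    # see the same (word, neighbour) co-occurrences, but skip-gram keeps every pair as
    # a separate example, while CBOW groups all context words of a position into a
    # single input for one prediction.
    sentence = ["the", "quick", "brown", "fox"]
    window = 1

    skipgram_examples = []  # (input = target word, label = one context word)
    cbow_examples = []      # (input = all context words, label = target word)

    for i, target in enumerate(sentence):
        context = [sentence[j]
                   for j in range(max(0, i - window), min(len(sentence), i + window + 1))
                   if j != i]
        for c in context:
            skipgram_examples.append((target, c))
        cbow_examples.append((context, target))

    print(skipgram_examples)
    # [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'), ('brown', 'quick'),
    #  ('brown', 'fox'), ('fox', 'brown')]
    print(cbow_examples)
    # [(['quick'], 'the'), (['the', 'brown'], 'quick'), (['quick', 'fox'], 'brown'),
    #  (['brown'], 'fox')]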

Halfhour answered 10/7, 2016 at 1:21 Comment(0)

Here is my oversimplified and rather naive understanding of the difference:

As we know, CBOW learns to predict a word from its context, or equivalently to maximize the probability of the target word given the context. This happens to be a problem for rare words. For example, given the context yesterday was a really [...] day, the CBOW model will tell you that the word is most probably beautiful or nice. A word like delightful gets much less attention from the model, because the model is designed to predict the most probable word; the rare word is smoothed over by the many examples containing more frequent words.

On the other hand, the skip-gram model is designed to predict the context. Given the word delightful, it must understand it and tell us that there is a huge probability that the context is yesterday was a really [...] day, or some other relevant context. With skip-gram, the word delightful does not have to compete with the word beautiful; instead, delightful+context pairs are treated as new observations.
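If you want to play with both variants side by side, here is a minimal sketch using gensim (assuming gensim 4.x, where sg=1 selects skip-gram, sg=0 selects CBOW, and the embedding size parameter is called vector_size); the toy corpus is made up purely for illustration:

    # Minimal sketch, assuming gensim 4.x: sg=0 trains CBOW, sg=1 trains skip-gram.
    from gensim.models import Word2Vec

    corpus = [
        ["yesterday", "was", "a", "really", "nice", "day"],
        ["yesterday", "was", "a", "really", "beautiful", "day"],
        ["yesterday", "was", "a", "really", "delightful", "day"],
    ]

    cbow = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=0)  # CBOW
    skip = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)  # skip-gram

    # Both models expose the same interface afterwards:
    print(cbow.wv.most_similar("delightful", topn=2))
    print(skip.wv.most_similar("delightful", topn=2))

On such a tiny corpus the vectors are of course meaningless; the point is only that the two objectives are a one-flag switch over the same data, not a relabelling of it.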

UPDATE

Thanks to @0xF for sharing this article

According to Mikolov

Skip-gram: works well with small amount of the training data, represents well even rare words or phrases.

CBOW: several times faster to train than the skip-gram, slightly better accuracy for the frequent words

One more addition to the subject is found here:

In the "skip-gram" mode alternative to "CBOW", rather than averaging the context words, each is used as a pairwise training example. That is, in place of one CBOW example such as [predict 'ate' from average('The', 'cat', 'the', 'mouse')], the network is presented with four skip-gram examples [predict 'ate' from 'The'], [predict 'ate' from 'cat'], [predict 'ate' from 'the'], [predict 'ate' from 'mouse']. (The same random window-reduction occurs, so half the time that would just be two examples, of the nearest words.)

Glower answered 12/2, 2017 at 11:38 Comment(2)
This Quora post [quora.com/… ] says skip-gram needs less data to train than CBOW, just the opposite of your answer. Can you justify your answer with the help of a published paper or similar? – Foolhardy
Thanks for pointing that out! The explanation provided in that article makes sense, so I have updated my answer. – Glower

It has to do with what exactly you're calculating at any given point. The difference will become clearer if you start to look at models that incorporate a larger context for each probability calculation.

In skip-gram, you're calculating the context word(s) from the word at the current position in the sentence; you're "skipping" the current word (and potentially a bit of the context) in your calculation. The result can be more than one word (but not if your context window is just one word long).

In CBOW, you're calculating the current word from the context word(s), so you will only ever have one word as a result.
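In terms of what is being calculated at each position t with window size c, the two objectives (this is the standard formulation from the word2vec papers, as I understand it, not something specific to either implementation) are:

    % CBOW objective: one prediction per position, from the combined context
    \max \sum_{t} \log p\big(w_t \mid w_{t-c}, \dots, w_{t-1}, w_{t+1}, \dots, w_{t+c}\big)

    % Skip-gram objective: up to 2c predictions per position, one per context word
    \max \sum_{t} \sum_{\substack{-c \le j \le c \\ j \ne 0}} \log p\big(w_{t+j} \mid w_t\big)

So CBOW makes one prediction per position from a combined representation of the context, while skip-gram makes up to 2c separate predictions per position, one for each context word.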

Helotism answered 11/7, 2016 at 13:37 Comment(2)
The difference is still unclear to me; the only thing that seems to change is the handling of words near the beginning and end of sentences: in one model there will be more total words on the input side or the output side, in terms of how frequently the same words are shown. On an infinitely long sentence, the two models would not have that unequal padding concept I am introducing. – Halfhour
As an example, how would one change the model's configuration in the Udacity link I sent? It seems to me that only exchanging the labels with the input examples would do the trick, but that can't be true; the difference would be too trivial... – Halfhour

In the Deep Learning specialization on Coursera (https://www.coursera.org/learn/nlp-sequence-models?specialization=deep-learning), you can see that Andrew Ng does not switch the context-target concepts. That means the target word is ALWAYS treated as the word to be predicted, no matter whether it is CBOW or skip-gram.

Outfoot answered 20/1, 2022 at 14:12 Comment(0)
