Proper way to add new vectors for OOV words
I'm working with some domain-specific language which has a lot of OOV words as well as some typos. I've noticed spaCy just assigns an all-zero vector to these OOV words, so I'm wondering what the proper way to handle this is. I'd appreciate clarification on all of these points if possible:
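To illustrate what I mean, here's a minimal sketch (I'm assuming the en_core_web_md model; the typo token is made up):

```python
import spacy

# Any model that ships with static vectors, e.g. en_core_web_md
nlp = spacy.load("en_core_web_md")

# "hyperglycaemiaa" is a deliberate typo, so it should be OOV
doc = nlp("The patient shows signs of hyperglycaemiaa")

for token in doc:
    print(token.text, token.is_oov, token.vector_norm)
# The typo token prints is_oov == True and vector_norm == 0.0,
# i.e. it was assigned the all-zero vector
```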

  1. What exactly does the pretrain command do? Honestly, I can't quite parse the explanation from the website:

Pre-train the “token to vector” (tok2vec) layer of pipeline components, using an approximate language-modeling objective. Specifically, we load pretrained vectors, and train a component like a CNN, BiLSTM, etc to predict vectors which match the pretrained ones

Isn't the tok2vec the part that generates the vectors? So shouldn't this command change the produced vectors? What does it mean to load pretrained vectors and then train a component to predict these vectors? What's the purpose of doing this?

What does the --use-vectors flag do? What does the --init-tok2vec flag do? Is it included in the documentation by mistake?

  2. It seems pretrain is not what I'm looking for, since it doesn't change the vectors for a given word. What would be the easiest way to generate a new set of vectors that includes my OOV words but still contains the general knowledge of the language?

  3. As far as I can see, spaCy's pretrained models use fastText vectors. The fastText website mentions:

A nice feature is that you can also query for words that did not appear in your data! Indeed words are represented by the sum of its substrings. As long as the unknown word is made of known substrings, there is a representation of it!

But it seems spaCy does not use this feature. Is there a way to still make use of it for OOV words?
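For comparison, the standalone fasttext package does produce a subword-based vector for unseen words. A sketch, assuming the official cc.en.300.bin model has been downloaded from fasttext.cc:

```python
import fasttext

# Official English model from fasttext.cc (assumed to be downloaded locally)
ft = fasttext.load_model("cc.en.300.bin")

# A typo that is almost certainly not in the training vocabulary
vec = ft.get_word_vector("hyperglycaemiaa")
print(vec.shape)  # (300,) - a real, non-zero vector built from character n-grams
```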

Thanks a lot

Biocatalyst asked 28/7, 2020 at 23:28 Comment(2)
Maybe this answer can help you: #57659388 – Crabbing
Thanks Anakin, it helps clarify some aspects indeed, but most of the questions still remain unanswered for me. – Biocatalyst

I think there is some confusion about the different components - I'll try to clarify:

  1. The tokenizer does not produce vectors. It's just a component that segments text into tokens. In spaCy it's rule-based, not trainable, and has nothing to do with vectors: it looks at whitespace and punctuation to split a sentence into tokens.
  2. An nlp model in spaCy can have predefined (static) word vectors that are accessible at the Token level. Every token with the same Lexeme gets the same vector. Some tokens/lexemes may indeed be OOV, like misspellings. If you want to redefine/extend all vectors used in a model, you can use something like init-model (init vectors in spaCy v3).
  3. The tok2vec layer is a machine learning component that learns how to produce suitable (dynamic) vectors for tokens. It does this by looking at lexical attributes of the token, but may also include the static vectors of the token (cf. item 2). This component is generally not used by itself, but is part of another component, such as an NER. It will be the first layer of the NER model, and it can be trained as part of training the NER, to produce vectors that are suitable for your NER task (see the sketch after this list for the difference between these dynamic vectors and the static ones from item 2).
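To make the difference between items 2 and 3 concrete, here's a minimal sketch (assuming spaCy v2 and the en_core_web_md model; in v2 the tok2vec output of the statistical pipeline is stored on doc.tensor):

```python
import spacy

nlp = spacy.load("en_core_web_md")
doc = nlp("Thanks for the clarification")

# Item 2: the static vector, looked up per lexeme. Identical for every
# occurrence of the same word, and all zeros for OOV words.
print(doc[0].vector.shape)  # e.g. (300,)

# Item 3: the dynamic tok2vec output, one row per token, computed in
# context by the pipeline. Its width may differ from the static vectors.
print(doc.tensor.shape)     # e.g. (4, 96)
```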

In spaCy v2, you can first train a tok2vec component with pretrain, and then use this component for a subsequent train command. Note that all settings need to be the same across both commands, for the layers to be compatible.

To answer your questions:

Isn't the tok2vec the part that generates the vectors?

If you mean the static vectors, then no. The tok2vec component produces new vectors (possibly with a different dimension) on top of the static vectors, but it won't change the static ones.

What does it mean loading pretrained vectors and then train a component to predict these vectors? What's the purpose of doing this?

The purpose is to get a tok2vec component that is already pretrained from external vector data. The external vector data already embeds some "meaning" or "similarity" of the tokens, and this is, so to say, transferred into the tok2vec component, which learns to produce the same similarities. The point is that this new tok2vec component can then be used and further fine-tuned in the subsequent train command (cf. item 3).

Is there a way to still make use of this for OOV words?

It really depends on what your "use" is. As https://mcmap.net/q/1781494/-how-to-specify-word-vector-for-oov-terms-in-spacy mentions, you can set the vectors yourself, or you can implement a user hook that decides how to define token.vector.
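For instance, here's a minimal sketch of the user-hook option, combined with your fastText idea from question 3 (spaCy v2 API; the fasttext package and the cc.en.300.bin model file are assumptions on my side, not something spaCy ships with):

```python
import fasttext
import spacy

nlp = spacy.load("en_core_web_md")
ft = fasttext.load_model("cc.en.300.bin")  # assumed to be downloaded locally

def fasttext_vector_hook(doc):
    def get_vector(token):
        if not token.has_vector:
            # Build a vector from character n-grams for unknown words
            return ft.get_word_vector(token.text)
        # Otherwise keep the model's static vector
        return nlp.vocab.get_vector(token.orth)
    doc.user_token_hooks["vector"] = get_vector
    return doc

nlp.add_pipe(fasttext_vector_hook, first=True)  # spaCy v2 pipeline API

doc = nlp("hyperglycaemiaa")   # typo, so OOV for the static vectors
print(doc[0].vector[:5])       # no longer all zeros
```

Alternatively, nlp.vocab.set_vector("someword", some_vector) writes a static vector straight into the vocab, which is the "set the vectors yourself" option.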

I hope this helps. I can't really recommend the best approach for you to follow without understanding why you want the OOV vectors / what your use case is. Happy to discuss further in the comments!

Absorbance answered 21/8, 2020 at 9:32 Comment(0)
