I'm working with domain-specific text that contains a lot of OOV words as well as some typos. I've noticed spaCy just assigns an all-zero vector to these OOV words, so I'm wondering what the proper way to handle this is. I'd appreciate clarification on all of these points if possible:
- What exactly does the pretrain command do? Honestly, I can't quite parse the explanation on the website:
> Pre-train the "token to vector" (tok2vec) layer of pipeline components, using an approximate language-modeling objective. Specifically, we load pretrained vectors, and train a component like a CNN, BiLSTM, etc to predict vectors which match the pretrained ones
Isn't the tok2vec layer the part that generates the vectors? So shouldn't this command change the produced vectors? What does it mean to load pretrained vectors and then train a component to predict those same vectors? What's the purpose of doing this?
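To show where my confusion is, here is my best guess at what that objective looks like, as a toy numpy sketch. Everything here is an assumption on my part: a plain linear map stands in for the CNN/BiLSTM, and the features and "pretrained vectors" are random placeholders.

```python
import numpy as np

# My guess: a trainable layer maps token features to a vector, and the
# loss is the distance to the *static* pretrained vector of that token,
# so the weights end up encoding distributional knowledge.
# (Linear map + random data are stand-ins, not spaCy's actual setup.)
rng = np.random.default_rng(0)
n_tokens, dim_in, dim_out = 10, 6, 4
feats = rng.normal(size=(n_tokens, dim_in))     # stand-in token features
targets = rng.normal(size=(n_tokens, dim_out))  # stand-in pretrained vectors

W = np.zeros((dim_in, dim_out))                 # the "tok2vec" weights being trained
loss0 = float(((feats @ W - targets) ** 2).mean())
for _ in range(200):                            # plain gradient descent on MSE
    grad = feats.T @ (feats @ W - targets) / n_tokens
    W -= 0.1 * grad
loss = float(((feats @ W - targets) ** 2).mean())
print(loss0, "->", loss)  # loss drops as the layer learns to approximate the targets
```

If that's roughly right, I still don't see what it buys me, since the pretrained vectors themselves stay fixed.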
What does the --use-vectors flag do? What does the --init-tok2vec flag do? Is this included by mistake in the documentation?
It seems pretrain is not what I'm looking for, since it doesn't change the vectors for a given word. What would be the easiest way to generate a new set of vectors that includes my OOV words but still retains the general knowledge of the language?
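For concreteness, here is the behaviour I'm seeing and the only workaround I know of. This uses a blank pipeline with a tiny hand-built vectors table standing in for a real pretrained model, and "fooburase" is a made-up domain term; the vector values are placeholders.

```python
import numpy as np
import spacy

# Blank pipeline with a tiny hand-built vectors table, standing in for a
# real pretrained model; "fooburase" is a made-up domain term.
nlp = spacy.blank("en")
nlp.vocab.set_vector("enzyme", np.array([0.1, 0.2, 0.3], dtype="float32"))

doc = nlp("enzyme fooburase")
print(doc[0].has_vector)            # True
print(doc[1].has_vector)            # False: OOV, so .vector comes back all zeros
oov_vector = doc[1].vector.copy()
print(oov_vector)

# The only workaround I know is patching vectors in by hand, but a sensible
# vector has to come from somewhere -- which is exactly my question:
nlp.vocab.set_vector("fooburase", np.array([0.3, 0.1, 0.2], dtype="float32"))
print(nlp("fooburase")[0].has_vector)
```

Patching by hand obviously doesn't scale, and it doesn't tell me how to get vectors that carry general language knowledge for the new words.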
As far as I can see, spaCy's pretrained models use fastText vectors. The fastText website mentions:
> A nice feature is that you can also query for words that did not appear in your data! Indeed words are represented by the sum of its substrings. As long as the unknown word is made of known substrings, there is a representation of it!
But it seems spaCy does not use this feature. Is there a way to still make use of it for OOV words?
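To illustrate what I mean, here is my understanding of the subword trick as a toy numpy sketch. The n-gram vectors are random stand-ins, and real fastText uses hashing/bucketing that I'm omitting; the point is just that an unseen word gets a non-zero vector from its known character n-grams.

```python
import numpy as np

# Toy version of the fastText idea: a word vector is built from the
# vectors of its character n-grams, so an unseen word still gets a
# vector as long as some of its n-grams were seen during training.
rng = np.random.default_rng(0)
DIM = 8

def ngrams(word, n=3):
    padded = f"<{word}>"  # fastText pads words with boundary markers
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# Pretend these n-gram vectors came from training on known words:
ngram_vecs = {g: rng.normal(size=DIM)
              for g in ngrams("transcription") + ngrams("factor")}

def subword_vector(word):
    hits = [ngram_vecs[g] for g in ngrams(word) if g in ngram_vecs]
    return np.mean(hits, axis=0) if hits else np.zeros(DIM)

# "transcriptor" never appeared, but shares n-grams with known words,
# so it gets a non-zero vector instead of spaCy's all-zero lookup:
vec = subword_vector("transcriptor")
print(vec.any())
```

If something like this is already possible with the vectors shipped in spaCy's models, I'd love to know how.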
Thanks a lot