How to use spaCy's lemmatizer to get a word into its basic form

I am new to spaCy and I want to use its lemmatizer, but I don't know how to use it. I want to pass in a string of words and get back the same string with every word in its basic form.

Examples:

  • 'words'=> 'word'
  • 'did' => 'do'

Thank you.

Neisa answered 4/8, 2016 at 9:4 Comment(3)
textminingonline.com/getting-started-with-spacy – Ossify
spacy.io/docs – Ossify
Thank you, I have seen that site before, but it doesn't explain the details. OK, I will try the code from there. Thank you again. – Neisa

The previous answer is convoluted and can't be edited, so here's a more conventional one.

# make sure you downloaded the English model with "python -m spacy download en"

import spacy
nlp = spacy.load('en')

doc = nlp(u"Apples and oranges are similar. Boots and hippos aren't.")

for token in doc:
    print(token, token.lemma, token.lemma_)

Output:

Apples 6617 apples
and 512 and
oranges 7024 orange
are 536 be
similar 1447 similar
. 453 .
Boots 4622 boot
and 512 and
hippos 98365 hippo
are 536 be
n't 538 not
. 453 .

From the official Lightning tour
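
If you want the whole sentence back in base form (which is what the question asked for), you can join the string lemmas; a minimal sketch reusing the doc from above:

# join token.lemma_ for every token into one lemmatized string
lemmatized = " ".join(token.lemma_ for token in doc)
print(lemmatized)
# based on the output above: "apples and orange be similar . boot and hippo be not ."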

Durware answered 24/3, 2017 at 15:48 Comment(6)
Do you need to signify the text is unicode before passing it to nlp? See here – Washin
@PhilipO'Brien maybe with Python 2, but I'm using Python 3 here – Durware
Ah OK, with Python 2 I have to explicitly state it's unicode. Thanks! (I really should switch to 3!) – Washin
One problem with this is that any pronouns are lemmatized to '-PRON-', which is confusing. Why wouldn't it just keep the pronoun itself? – Fastening
Search for '-PRON-' here to see a solution, but I think this should not be the default behavior. It seems confusing. – Fastening
Why does apples remain apples but other plurals get changed to the singular? – Bilbo
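
Regarding the '-PRON-' comments above: a small workaround sketch, assuming spaCy 2.x (where every personal pronoun is lemmatized to the placeholder '-PRON-') and reusing the nlp pipeline loaded above, is to fall back to the lowercased token text whenever the lemma is that placeholder:

# keep the pronoun itself instead of the '-PRON-' placeholder
doc = nlp(u"They are similar.")
lemmas = [t.lower_ if t.lemma_ == "-PRON-" else t.lemma_ for t in doc]
print(lemmas)   # expected: ['they', 'be', 'similar', '.']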

If you want to use just the Lemmatizer, you can do that in the following way:

from spacy.lemmatizer import Lemmatizer
from spacy.lang.en import LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES

lemmatizer = Lemmatizer(LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES)
lemmas = lemmatizer(u'ducks', u'NOUN')
print(lemmas)

Output

['duck']

Update

Since spaCy version 2.2, LEMMA_INDEX, LEMMA_EXC, and LEMMA_RULES have been bundled into a Lookups object:

import spacy
nlp = spacy.load('en')

nlp.vocab.lookups
>>> <spacy.lookups.Lookups object at 0x7f89a59ea810>
nlp.vocab.lookups.tables
>>> ['lemma_lookup', 'lemma_rules', 'lemma_index', 'lemma_exc']
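
A sketch of reading one of those tables directly (this assumes the Lookups.get_table API that shipped with spaCy 2.2, and that 'ducks' is present in the bundled English lookup table, which maps a word form straight to its lemma):

lookup = nlp.vocab.lookups.get_table("lemma_lookup")
lookup["ducks"]
>>> 'duck'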

You can still use the lemmatizer directly with a word and a POS (part of speech) tag:

from spacy.lemmatizer import Lemmatizer, ADJ, NOUN, VERB

lemmatizer = nlp.vocab.morphology.lemmatizer
lemmatizer('ducks', NOUN)
>>> ['duck']

You can pass the POS tag as the imported constant like above or as string:

lemmatizer('ducks', 'NOUN')
>>> ['duck']

Inhaul answered 23/2, 2018 at 13:8 Comment(1)
I tried your code, but I'm getting an error: cannot import name 'LEMMA_INDEX' from 'spacy.lang.en' – Dormant
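
If you hit that import error on a newer spaCy (the LEMMA_* constants left spacy.lang.en in 2.2, and the spacy.lemmatizer module is gone in 3.x), a minimal sketch of the equivalent, assuming the en_core_web_sm model is installed, is simply to run the word through the pipeline and read token.lemma_:

import spacy

nlp = spacy.load("en_core_web_sm")   # the pipeline includes a lemmatizer component
doc = nlp("ducks")
print(doc[0].lemma_)                 # expected: duck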

Code :

import os
# note: this uses the old pre-1.0 "spacy.en" API (see the comments below)
from spacy.en import English, LOCAL_DATA_DIR

data_dir = os.environ.get('SPACY_DATA', LOCAL_DATA_DIR)

nlp = English(data_dir=data_dir)

doc3 = nlp(u"this is spacy lemmatize testing. programming books are more better than others")

for token in doc3:
    print token, token.lemma, token.lemma_

Output :

this 496 this
is 488 be
spacy 173779 spacy
lemmatize 1510965 lemmatize
testing 2900 testing
. 419 .
programming 3408 programming
books 1011 book
are 488 be
more 529 more
better 615 better
than 555 than
others 871 others

Example Ref: here

Ossify answered 4/8, 2016 at 14:46 Comment(4)
nlp = English(data_dir=data_dir): data_dir=data_dir, what does this mean? They look the same. – Neisa
Passing a variable. The English() constructor takes a data_dir argument, so you pass data_dir=local_variable_name. It could also be: d_dir = os.environ.get('SPACY_DATA', LOCAL_DATA_DIR); nlp = English(data_dir=d_dir). It's just basic Python. – Ossify
OK, I will try these. – Neisa
This gives ModuleNotFoundError: No module named 'spacy.en' in the current version (2.2). – Interlace

I use spaCy version 2.x

import spacy
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
doc = nlp('did displaying words')
print(" ".join([token.lemma_ for token in doc]))

and the output :

do display word
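
For a lot of texts, the same idea works in batch form with nlp.pipe (a sketch, reusing the nlp object loaded above with parser and ner disabled):

texts = ["did displaying words", "the quick brown foxes"]
for doc in nlp.pipe(texts):
    print(" ".join(token.lemma_ for token in doc))
# the first line prints "do display word", as in the single-string example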

Hope it helps :)

Phosphorus answered 19/4, 2020 at 4:56 Comment(1)
Using only the tagger component is a good idea! It can sometimes make a big difference and improve loading speed. – Headrest

To get a mapping between words and their lemmas, use this:

import spacy
# instantiate pipeline with any model of your choosing
nlp = spacy.load("en_core_web_lg")

words = "Those quickest and brownest foxes jumped over the laziest ones."

# only enable the needed pipeline components to speed up processing
with nlp.select_pipes(enable=['tok2vec', 'tagger', 'attribute_ruler', 'lemmatizer']):
    doc = nlp(words)

lemma_mapping = {token.text: token.lemma_
                 for token in doc if not token.is_punct}

print(lemma_mapping)

Output

{'Those': 'those',
 'quickest': 'quick',
 'and': 'and',
 'brownest': 'brown',
 'foxes': 'fox',
 'jumped': 'jump',
 'over': 'over',
 'the': 'the',
 'laziest': 'lazy',
 'ones': 'one'}
Nose answered 24/12, 2022 at 11:44 Comment(0)

I used:

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("did displaying words")
print(" ".join([token.lemma_ for token in doc]))
>>> do display word

But it returned

OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.

I used:

pip3 install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz

to get rid of the error.
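
On newer spaCy versions the same E050 error can usually be fixed by letting spaCy download the model itself, instead of pinning the 2.2.0 wheel:

python -m spacy download en_core_web_sm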

Zolazoldi answered 30/8, 2020 at 9:48 Comment(0)
