Noun phrases with spaCy
How can I extract noun phrases from text using spaCy?
I am not referring to part-of-speech tags. In the documentation I cannot find anything about noun phrases or regular parse trees.

Peristome answered 22/10, 2015 at 20:12 Comment(0)
If you want base NPs, i.e. NPs without coordination, prepositional phrases or relative clauses, you can use the noun_chunks iterator on the Doc and Span objects:

>>> from spacy.en import English
>>> nlp = English()
>>> doc = nlp(u'The cat and the dog sleep in the basket near the door.')
>>> for np in doc.noun_chunks:
...     np.text
u'The cat'
u'the dog'
u'the basket'
u'the door'

If you need something else, the best way is to iterate over the words of the sentence and consider the syntactic context to determine whether the word governs the phrase type you want. If it does, yield its subtree:

from spacy.symbols import nsubj, nsubjpass, dobj, iobj, pobj

np_labels = {nsubj, nsubjpass, dobj, iobj, pobj}  # probably others too

def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.subtree
Gaptoothed answered 4/11, 2015 at 1:26 Comment(8)
Dear syllogism, can you tell me what the "probably other" tags are that one can add to make the code complete? I would also like to extract things like "the baby and his toys".Pinette
@Pinette check out dir(spacy.symbols)Smacker
Just gives me <generator object iter_nps at 0x0000018B62E11B10>Nesline
@Superdooperhero: I also got that generator object. For anyone who's interested see my answer below (which should at least clarify things).Tangled
@Superdooperhero, that's because the iter_nps function defined in the answer is a generator function. If you're not familiar with the generator pattern, it's worth reading up on (wiki.python.org/moin/Generators), but essentially generators offer lazy execution, yielding the next item each time they are advanced rather than constructing the whole list at once and keeping it in memory. You can access the generated items with the built-in next() function or in a loop, e.g.: for np_label in iter_nps(doc): print(np_label)Coraleecoralie
Just in case anyone finds it helpful: from spacy.en import English did not work for me, so instead I had to use from spacy.lang.en import EnglishJumbuck
Is it normal if different noun chunks are printed every time I run your topmost code? Not like entirely different, but say if 4 chunks were printed at the first go, 3 of them might be printed next.Domenech
This didn't work for me. Victoria Stewart's answer below did.Klecka
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp('Bananas are an excellent source of potassium.')
for np in doc.noun_chunks:
    print(np.text)
'''
  Bananas
  an excellent source
  potassium
'''

for word in doc:
    print('word.dep:', word.dep, ' | ', 'word.dep_:', word.dep_)
'''
  word.dep: 429  |  word.dep_: nsubj
  word.dep: 8206900633647566924  |  word.dep_: ROOT
  word.dep: 415  |  word.dep_: det
  word.dep: 402  |  word.dep_: amod
  word.dep: 404  |  word.dep_: attr
  word.dep: 443  |  word.dep_: prep
  word.dep: 439  |  word.dep_: pobj
  word.dep: 445  |  word.dep_: punct
'''

from spacy.symbols import *
np_labels = set([nsubj, nsubjpass, dobj, iobj, pobj])
print('np_labels:', np_labels)
'''
  np_labels: {416, 422, 429, 430, 439}
'''

For background on the yield keyword, see: https://www.geeksforgeeks.org/use-yield-keyword-instead-return-keyword-python/

def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.dep_

iter_nps(doc)
'''
  <generator object iter_nps at 0x7fd7b08b5bd0>
'''

## Modified method:
def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            print(word.text, word.dep_)

iter_nps(doc)
'''
  Bananas nsubj
  potassium pobj
'''

doc = nlp('BRCA1 is a tumor suppressor protein that functions to maintain genomic stability.')
for np in doc.noun_chunks:
    print(np.text)
'''
  BRCA1
  a tumor suppressor protein
  genomic stability
'''

iter_nps(doc)
'''
  BRCA1 nsubj
  that nsubj
  stability dobj
'''
Tangled answered 21/12, 2019 at 0:52 Comment(0)
You can also get nouns from a sentence like this:

    import spacy
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("When Sebastian Thrun started working on self-driving cars at "
              "Google in 2007, few people outside of the company took him "
              "seriously. “I can tell you very senior CEOs of major American "
              "car companies would shake my hand and turn away because I wasn’t "
              "worth talking to,” said Thrun, in an interview with Recode earlier "
              "this week.")
    # doc text is from the spaCy website
    for x in doc:
        if x.pos_ in ("NOUN", "PROPN", "PRON"):
            print(x)
    # this prints nouns, proper nouns and pronouns
Serpens answered 13/3, 2021 at 12:0 Comment(0)
If you want to specify more exactly which kind of noun phrase you want to extract, you can use textacy's matches function. You can pass any combination of POS tags. For example,

textacy.extract.matches(doc, "POS:ADP POS:DET:? POS:ADJ:? POS:NOUN:+")

will return any spans consisting of a preposition, optionally a determiner and/or adjective, and one or more nouns.

textacy is built on top of spaCy, so the two work well together.

Archibold answered 26/2, 2020 at 9:46 Comment(1)
It would be great if you could update the links; I'm getting a 404 error.Jumbuck
from spacy.en import English may give you the error:

No module named 'spacy.en'

All language data was moved to the spacy.lang submodule in spaCy 2.0+.

Use from spacy.lang.en import English instead.

Then follow the remaining steps from @syllogism_'s answer.

Serpens answered 12/3, 2021 at 8:21 Comment(0)
