Python (NLTK) - more efficient way to extract noun phrases?

I've got a machine learning task involving a large amount of text data. I want to identify and extract noun phrases from the training text so I can use them for feature construction later in the pipeline. I've extracted the kind of noun phrases I wanted, but I'm fairly new to NLTK, so I approached the problem by breaking each step down into the separate list comprehensions you can see below.

But my real question is, am I reinventing the wheel here? Is there a faster way to do this that I'm not seeing?

import nltk
import pandas as pd

myData = pd.read_excel(r"\User\train_.xlsx")  # raw string so the backslashes aren't treated as escapes
texts = myData['message']

# Defining a grammar & Parser
NP = r"NP: {(<V\w+>|<NN\w?>)+.*<NN\w?>}"  # raw string avoids invalid-escape warnings
chunkr = nltk.RegexpParser(NP)

tokens = [nltk.word_tokenize(i) for i in texts]

tag_list = [nltk.pos_tag(w) for w in tokens]

phrases = [chunkr.parse(sublist) for sublist in tag_list]

leaves = [[subtree.leaves() for subtree in tree.subtrees(filter=lambda t: t.label() == 'NP')] for tree in phrases]

Flatten the list of lists of lists of tuples that we've ended up with into just a list of lists of tuples:

leaves = [tupls for sublists in leaves for tupls in sublists]

Join the extracted terms into one bigram:

nounphrases = [unigram[0][0] + ' ' + unigram[1][0] for unigram in leaves]
Hindward answered 29/3, 2018 at 20:4 Comment(0)

Take a look at Why is my NLTK function slow when processing the DataFrame? There's no need to iterate over all the rows multiple times if you don't need the intermediate steps.
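For instance, your four list comprehensions can be fused into one function that is applied once per row (a minimal sketch reusing the question's own grammar and its texts Series; a fuller version follows below):

import nltk

# The question's grammar, as a raw string to avoid escape warnings
NP = r"NP: {(<V\w+>|<NN\w?>)+.*<NN\w?>}"
chunker = nltk.RegexpParser(NP)

def extract_noun_phrases(text):
    # Tokenize, tag, chunk, and join the NP leaves in a single pass
    tree = chunker.parse(nltk.pos_tag(nltk.word_tokenize(text)))
    return [" ".join(word for word, tag in subtree.leaves())
            for subtree in tree.subtrees(lambda t: t.label() == 'NP')]

# One pass over the rows instead of four intermediate lists
nounphrases = texts.apply(extract_noun_phrases)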

Using ne_chunk and the solution from the linked answer:

[code]:

from nltk import word_tokenize, pos_tag, ne_chunk
from nltk import RegexpParser
from nltk import Tree
import pandas as pd

def get_continuous_chunks(text, chunk_func=ne_chunk):
    chunked = chunk_func(pos_tag(word_tokenize(text)))
    continuous_chunk = []
    current_chunk = []

    for subtree in chunked:
        if isinstance(subtree, Tree):
            # Inside a chunk: collect its tokens
            current_chunk.append(" ".join(token for token, pos in subtree.leaves()))
        elif current_chunk:
            # A non-chunk token ends the current chunk, so flush it
            named_entity = " ".join(current_chunk)
            if named_entity not in continuous_chunk:
                continuous_chunk.append(named_entity)
            current_chunk = []  # reset even if the chunk was a duplicate

    # Flush a chunk that runs to the very end of the sentence
    if current_chunk:
        named_entity = " ".join(current_chunk)
        if named_entity not in continuous_chunk:
            continuous_chunk.append(named_entity)

    return continuous_chunk

df = pd.DataFrame({'text':['This is a foo, bar sentence with New York city.', 
                           'Another bar foo Washington DC thingy with Bruce Wayne.']})

df['text'].apply(lambda sent: get_continuous_chunks(sent))

[out]:

0                   [New York]
1    [Washington, Bruce Wayne]
Name: text, dtype: object
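
If the models behind word_tokenize, pos_tag, and ne_chunk are not installed yet, the code above raises a LookupError; a one-time setup sketch (resource names as of NLTK 3.x):

import nltk

nltk.download('punkt')                       # tokenizer used by word_tokenize
nltk.download('averaged_perceptron_tagger')  # tagger used by pos_tag
nltk.download('maxent_ne_chunker')           # model used by ne_chunk
nltk.download('words')                       # word list ne_chunk depends on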

To use the custom RegexpParser:

from nltk import word_tokenize, pos_tag, ne_chunk
from nltk import RegexpParser
from nltk import Tree
import pandas as pd

# Defining a grammar & Parser
NP = r"NP: {(<V\w+>|<NN\w?>)+.*<NN\w?>}"  # raw string avoids invalid-escape warnings
chunker = RegexpParser(NP)

# Same helper as above, repeated so this block runs on its own
def get_continuous_chunks(text, chunk_func=ne_chunk):
    chunked = chunk_func(pos_tag(word_tokenize(text)))
    continuous_chunk = []
    current_chunk = []

    for subtree in chunked:
        if isinstance(subtree, Tree):
            current_chunk.append(" ".join(token for token, pos in subtree.leaves()))
        elif current_chunk:
            named_entity = " ".join(current_chunk)
            if named_entity not in continuous_chunk:
                continuous_chunk.append(named_entity)
            current_chunk = []

    if current_chunk:
        named_entity = " ".join(current_chunk)
        if named_entity not in continuous_chunk:
            continuous_chunk.append(named_entity)

    return continuous_chunk


df = pd.DataFrame({'text':['This is a foo, bar sentence with New York city.', 
                           'Another bar foo Washington DC thingy with Bruce Wayne.']})


df['text'].apply(lambda sent: get_continuous_chunks(sent, chunker.parse))

[out]:

0                  [bar sentence, New York city]
1    [bar foo Washington DC thingy, Bruce Wayne]
Name: text, dtype: object
Gateshead answered 31/3, 2018 at 4:33 Comment(3)
Fantastic answer! The links are super helpful as well. Thanks @Gateshead! Question: why do you write prev = None when defining get_continuous_chunks? – Hindward
Oh, that was a mistake, it's not necessary. I think I was using prev to check the history, but actually only current_chunk is needed for that. Thanks for catching it! – Gateshead
Hey @alvas, where did you come up with that regex (NP = "NP: {(<V\w+>|<NN\w?>)+.*<NN\w?>}")? Is this a standard noun-phrase detection pattern? – Comines

I suggest referring to this prior thread: Extracting all Nouns from a text file using nltk

It suggests using TextBlob as the easiest way to achieve this (if not the most efficient in terms of processing), and the discussion there addresses your question.

from textblob import TextBlob
txt = """Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages."""
blob = TextBlob(txt)
print(blob.noun_phrases)
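
Note that noun_phrases relies on NLTK corpora under the hood; if they are missing, TextBlob ships a one-time download command:

python -m textblob.download_corpora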
Cropdusting answered 20/11, 2020 at 0:5 Comment(1)
Thanks for contributing to the discussion on this question! TextBlob definitely has advantages over the sometimes-bulky NLTK. However, your solution doesn't allow for customized parsing, which ultimately could be a stronger point in NLTK's favor. – Hindward

The above methods didn't give me the results I needed. Following is the function I would suggest:

from nltk import word_tokenize, pos_tag
import re


def get_noun_phrases(text):
    pos = pos_tag(word_tokenize(text))
    half_chunk = ""
    for word, tag in pos:
        if re.match(r"NN.*", tag):
            # Any noun tag (NN, NNS, NNP, NNPS): keep the word
            half_chunk = half_chunk + word + " "
        else:
            # Non-noun: mark a boundary between runs of nouns
            half_chunk = half_chunk + "---"
    # Collapse the boundary markers and split into separate phrases
    half_chunk = re.sub(r"-+", "?", half_chunk).split("?")
    half_chunk = [x.strip() for x in half_chunk if x != ""]
    return half_chunk
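
A quick usage sketch (the sample sentence is mine; the exact chunks depend on the tagger's output):

print(get_noun_phrases("This is a foo, bar sentence with New York city."))
# e.g. ['foo', 'bar sentence', 'New York city']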
Annamaeannamaria answered 11/2, 2021 at 9:8 Comment(1)
Could you (briefly) elaborate on what you couldn't achieve with the other methods? – Theodore

The Constituent-Treelib library, which can be installed via pip install constituent-treelib, does exactly what you are looking for in a few lines of code. To extract noun (or any other) phrases, perform the following steps.

from constituent_treelib import ConstituentTree

# First, we have to provide a sentence that should be parsed
sentence = "I've got a machine learning task involving a large amount of text data."

# Then, we define the language that should be considered with respect to the underlying models 
language = ConstituentTree.Language.English

# You can also specify the desired model for the language ("Small" is selected by default)
spacy_model_size = ConstituentTree.SpacyModelSize.Medium

# Next, we must create the necessary NLP pipeline.
# If you wish, you can instruct the library to download and install the models automatically
nlp = ConstituentTree.create_pipeline(language, spacy_model_size) # , download_models=True

# Now, we can instantiate a ConstituentTree object and pass it the sentence and the NLP pipeline
tree = ConstituentTree(sentence, nlp)

# Finally, we can extract the phrases
tree.extract_all_phrases()

Result...

{'S': ["I 've got a machine learning task involving a large amount of text data ."],
 'PP': ['of text data'],
 'VP': ["'ve got a machine learning task involving a large amount of text data",
  'got a machine learning task involving a large amount of text data',
  'involving a large amount of text data'],
 'NML': ['machine learning'],
 'NP': ['a machine learning task involving a large amount of text data',
  'a machine learning task',
  'a large amount of text data',
  'a large amount',
  'text data']}

If you only want the noun phrases, just pick them out with tree.extract_all_phrases()['NP']:

['a machine learning task involving a large amount of text data',
 'a machine learning task',
 'a large amount of text data',
 'a large amount',
 'text data']
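
To mirror the pandas usage from the accepted answer, the same extraction can be applied per row (a sketch; the .get('NP', []) guards against sentences whose parse contains no noun phrase):

import pandas as pd

df = pd.DataFrame({'text': ['This is a foo, bar sentence with New York city.',
                            'Another bar foo Washington DC thingy with Bruce Wayne.']})

# Reuses the ConstituentTree class and the nlp pipeline created above
df['noun_phrases'] = df['text'].apply(
    lambda s: ConstituentTree(s, nlp).extract_all_phrases().get('NP', []))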
Hinkley answered 16/1, 2023 at 22:23 Comment(4)
Does this also work well on messy data (e.g. conversation transcripts, where people don't necessarily speak in full sentences), or does this method require "clean" sentences as input? – Theodore
I don't think so. To answer this conclusively, I would have to test/check it thoroughly... – Hinkley
You don't think that it works, or you don't think that it needs clean data? – Theodore
I don't think that it works. The parsing depends directly on the POS tags, so it is very difficult to construct the constituents if the POS tags are already incorrect, mixed up, or simply missing altogether... – Hinkley
