How to prevent splitting specific words or phrases and numbers in NLTK?

I have a problem with text matching when I tokenize text: the tokenizer splits specific words, dates, and numbers. How can I prevent phrases like "runs in my family", "30 minute walk" or "4x a day" from being split when tokenizing words with NLTK?

They should not end up split into separate tokens like:

['runs','in','my','family','4x','a','day']

For example:

Yes 20-30 minutes a day on my bike, it works great!!

gives:

['yes','20-30','minutes','a','day','on','my','bike',',','it','works','great']

I want '20-30 minutes' to be treated as a single word. How can I get this behavior?

Vidette answered 10/4, 2019 at 18:39 Comment(2)
Good first question! I think it would be worthwhile to clean this question up a little (punctuation and grammar), because I don't think this is a straightforward task. I offered a solution, but I fear it could be very computationally expensive. Getting some other users on this could help a lot.Malherbe
Good question! And there are functions for this written in NLTK; it works a little differently from spaCy's linguistic-steps / regex-patterns approach.Corrade

To my knowledge, you will be hard pressed to preserve n-grams of various lengths at the same time as tokenizing, but you can find these n-grams as shown here. Then you can replace the items in the corpus that you want kept as n-grams with some joining character, like dashes.

This is one example solution, but there are probably lots of ways to get there. Important note: I provided a way to find ngrams that are common in the text (you will probably want more than one, so there is a variable that lets you decide how many of the ngrams to collect; you might want a different number for each length, but for now there is only one variable). This may still miss ngrams you consider important. For those, add the ones you want to find to user_grams; they will be included in the search.

import nltk 

#an example corpus
corpus='''A big tantrum runs in my family 4x a day, every week. 
A big tantrum is lame. A big tantrum causes strife. It runs in my family 
because of our complicated history. Every week is a lot though. Every week
I dread the tantrum. Every week...Here is another ngram I like a lot'''.lower()

#tokenize the corpus
corpus_tokens = nltk.word_tokenize(corpus)

#create ngrams from n=2 to 5
bigrams = list(nltk.ngrams(corpus_tokens,2))
trigrams = list(nltk.ngrams(corpus_tokens,3))
fourgrams = list(nltk.ngrams(corpus_tokens,4))
fivegrams = list(nltk.ngrams(corpus_tokens,5))

This section finds the most common ngrams, up to five-grams.

#if you change this to zero you will only get the user chosen ngrams
n_most_common=1 #how many of the most common n-grams do you want.

fdist_bigrams = nltk.FreqDist(bigrams).most_common(n_most_common) #n most common bigrams
fdist_trigrams = nltk.FreqDist(trigrams).most_common(n_most_common) #n most common trigrams
fdist_fourgrams = nltk.FreqDist(fourgrams).most_common(n_most_common) #n most common four grams
fdist_fivegrams = nltk.FreqDist(fivegrams).most_common(n_most_common) #n most common five grams

#join each ngram tuple back into a space-separated string
fdist_bigrams=[' '.join(x[0]) for x in fdist_bigrams]
fdist_trigrams=[' '.join(x[0]) for x in fdist_trigrams]
fdist_fourgrams=[' '.join(x[0]) for x in fdist_fourgrams]
fdist_fivegrams=[' '.join(x[0]) for x in fdist_fivegrams]

#next 4 lines create a single list with important ngrams
n_grams=fdist_bigrams
n_grams.extend(fdist_trigrams)
n_grams.extend(fdist_fourgrams)
n_grams.extend(fdist_fivegrams)

This section lets you add your own ngrams to the list.

#Another option here would be to make your own list of the ones you want
#in this example I add some user ngrams to the ones found above
user_grams=['ngram1 I like', 'ngram 2', 'another ngram I like a lot']
user_grams=[x.lower() for x in user_grams]    

n_grams.extend(user_grams)

And this last part performs the replacement so that you can tokenize again and get the ngrams as single tokens.

#initialize the corpus that will have combined ngrams
corpus_ngrams=corpus

#here we go through the ngrams we found and replace them in the corpus with
#a version connected by dashes. That way we can find them when we tokenize.
for gram in n_grams:
    gram_r = gram.replace(' ', '-')
    corpus_ngrams = corpus_ngrams.replace(gram, gram_r)

#retokenize the new corpus so we can find the ngrams
corpus_ngrams_tokens= nltk.word_tokenize(corpus_ngrams)

print(corpus_ngrams_tokens)

Out: ['a-big-tantrum', 'runs-in-my-family', '4x', 'a', 'day', ',', 'every-week', '.', 'a-big-tantrum', 'is', 'lame', '.', 'a-big-tantrum', 'causes', 'strife', '.', 'it', 'runs-in-my-family', 'because', 'of', 'our', 'complicated', 'history', '.', 'every-week', 'is', 'a', 'lot', 'though', '.', 'every-week', 'i', 'dread', 'the', 'tantrum', '.', 'every-week', '...']
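
If you later need the merged tokens with their original spaces rather than dashes, one option (a minimal sketch building on the n_grams list above; dash_to_phrase is just an illustrative name) is to keep a mapping from the dashed form back to the original phrase and restore it after tokenizing:

#map each dashed form back to the original phrase (illustrative helper)
dash_to_phrase = {gram.replace(' ', '-'): gram for gram in n_grams}

#restore the original spacing in the merged tokens
restored = [dash_to_phrase.get(tok, tok) for tok in corpus_ngrams_tokens]
print(restored[:6])
#['a big tantrum', 'runs in my family', '4x', 'a', 'day', ',']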

I think this is actually a very good question.

Malherbe answered 10/4, 2019 at 19:23 Comment(4)
Thanks. If I want to match the n-grams which I found in my dataset, should I make my own list and only keep the n-grams which are in that list? Wouldn't that be more time consuming?Vidette
I included that option in the code. If you don't want to find the most common ones as well, just change n_most_common=1 to n_most_common=0. I wanted my solution to be self-contained and verifiable though. I'll edit this into a comment. Then you can just add the n-grams you want to the user_grams list.Malherbe
Also, it seems there can't be that many ngrams which are simultaneously uncommon and important. In other words, if you are going to bother tokenizing these ngrams, it should be because they are important to you, but if they don't occur often, that itself makes them not that important. You should get all the important common ones with the method above, and just need to add a few that are specific to your study, but that's just my hunch.Malherbe
Also, if that answered your question, please consider upvoting and marking it as answered. See What to do if someone answers my question?Malherbe

You can use the MWETokenizer:

from nltk import word_tokenize
from nltk.tokenize import MWETokenizer

tokenizer = MWETokenizer([('20', '-', '30', 'minutes', 'a', 'day')])
tokenizer.tokenize(word_tokenize('Yes 20-30 minutes a day on my bike, it works great!!'))

[out]:

['Yes', '20-30_minutes_a_day', 'on', 'my', 'bike', ',', 'it', 'works', 'great', '!', '!']
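
The underscore comes from MWETokenizer's default separator; if you prefer a different join character you can pass the separator argument (a small sketch, the '+' here is just an arbitrary example):

#separator defaults to '_'; here matched tokens would be joined with '+' instead
tokenizer = MWETokenizer([('20', '-', '30', 'minutes', 'a', 'day')], separator='+')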

A more principled approach, since you don't know how word_tokenize will split the words you want to keep:

from nltk import word_tokenize
from nltk.tokenize import MWETokenizer

def multiword_tokenize(text, mwe):
    # Initialize the MWETokenizer
    protected_tuples = [word_tokenize(word) for word in mwe]
    protected_tuples_underscore = ['_'.join(word) for word in protected_tuples]
    tokenizer = MWETokenizer(protected_tuples)
    # Tokenize the text.
    tokenized_text = tokenizer.tokenize(word_tokenize(text))
    # Replace the underscored protected words with the original MWE
    for i, token in enumerate(tokenized_text):
        if token in protected_tuples_underscore:
            tokenized_text[i] = mwe[protected_tuples_underscore.index(token)]
    return tokenized_text

mwe = ['20-30 minutes a day', '!!']
print(multiword_tokenize('Yes 20-30 minutes a day on my bike, it works great!!', mwe))

[out]:

['Yes', '20-30 minutes a day', 'on', 'my', 'bike', ',', 'it', 'works', 'great', '!!']
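
For the phrases from the question, a usage sketch with the multiword_tokenize helper above might look like this (the sentence and phrase list are just the examples given in the question):

#phrases taken from the question
mwe = ['runs in my family', '30 minute walk', '4x a day']
print(multiword_tokenize('It runs in my family and I do a 30 minute walk 4x a day.', mwe))

which should give something like:

['It', 'runs in my family', 'and', 'I', 'do', 'a', '30 minute walk', '4x a day', '.']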
Corrade answered 12/4, 2019 at 3:58 Comment(0)