How to get the probability of bigrams in a text of sentences?

I have a text that contains many sentences. How can I use nltk.ngrams to process it?

This is my code:

import nltk
from nltk.util import ngrams

sequence = nltk.tokenize.word_tokenize(raw)
bigram = ngrams(sequence, 2)
freq_dist = nltk.FreqDist(bigram)
prob_dist = nltk.MLEProbDist(freq_dist)
number_of_bigrams = freq_dist.N()

However, the above code assumes that all sentences form one sequence. But the sentences are separated, and I guess the last word of one sentence is unrelated to the first word of the next. How can I create bigrams for such a text? I also need prob_dist and number_of_bigrams, which are based on freq_dist.
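
What I have in mind is something like the sketch below, where each sentence is tokenized separately and the per-sentence bigrams are chained together, so that no bigram crosses a sentence boundary (assuming raw holds the whole text; I am not sure this is the right approach):

import nltk
from itertools import chain
from nltk.util import ngrams

# Split into sentences first, then tokenize each sentence on its own,
# so bigrams never span a sentence boundary.
sentences = nltk.sent_tokenize(raw)
bigrams = chain.from_iterable(
    ngrams(nltk.word_tokenize(sent), 2) for sent in sentences)

freq_dist = nltk.FreqDist(bigrams)
prob_dist = nltk.MLEProbDist(freq_dist)
number_of_bigrams = freq_dist.N()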

There are similar questions, such as What are ngram counts and how to implement using nltk?, but they are mostly about a single sequence of words.

Glaikit answered 2/3, 2019 at 20:7

You can use the new nltk.lm module. Here's an example. First, get some data and tokenize it:

import os
import requests
import io

from nltk import word_tokenize, sent_tokenize 

# Text version of https://kilgarriff.co.uk/Publications/2005-K-lineer.pdf
if os.path.isfile('language-never-random.txt'):
    with io.open('language-never-random.txt', encoding='utf8') as fin:
        text = fin.read()
else:
    url = "https://gist.githubusercontent.com/alvations/53b01e4076573fea47c6057120bb017a/raw/b01ff96a5f76848450e648f35da6497ca9454e4a/language-never-random.txt"
    text = requests.get(url).content.decode('utf8')
    with io.open('language-never-random.txt', 'w', encoding='utf8') as fout:
        fout.write(text)

# Tokenize the text.
tokenized_text = [list(map(str.lower, word_tokenize(sent)))
                  for sent in sent_tokenize(text)]

Then the language modelling:

# Preprocess the tokenized text for 3-grams language modelling
from nltk.lm.preprocessing import padded_everygram_pipeline
from nltk.lm import MLE

n = 3
train_data, padded_sents = padded_everygram_pipeline(n, tokenized_text)

model = MLE(n)  # Let's train a 3-gram maximum likelihood estimation model.
model.fit(train_data, padded_sents)
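
As a quick sanity check (the exact numbers depend on the text), you can inspect the fitted vocabulary; out-of-vocabulary words map to the <UNK> token:

len(model.vocab)                 # vocabulary size, including <s>, </s> and <UNK>
model.vocab.lookup('language')   # 'language' if known, otherwise '<UNK>'
model.vocab.lookup(['language', 'is', 'never', 'random'])  # works on sequences too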

To get the counts:

model.counts['language'] # i.e. Count('language')
model.counts[['language']]['is'] # i.e. Count('is'|'language')
model.counts[['language', 'is']]['never'] # i.e. Count('never'|'language is')

To get the probabilities:

model.score('is', 'language'.split())  # P('is'|'language')
model.score('never', 'language is'.split())  # P('never'|'language is')
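
Since this is an MLE model, each score is just the corresponding count ratio, and any unseen n-gram gets probability 0. To score a whole sentence (as asked in the comments below), one option is to pad it the same way the training data was padded and sum the per-word log scores. A minimal sketch, assuming the 3-gram model from above; the helper name sentence_logscore is mine, and with MLE any unseen n-gram makes the total -inf:

from nltk.lm.preprocessing import pad_both_ends
from nltk.util import ngrams

def sentence_logscore(model, tokens, n=3):
    # Pad with <s>/</s> the same way padded_everygram_pipeline does during
    # training, then sum log2 P(word | previous n-1 words) over the sentence.
    padded = list(pad_both_ends(tokens, n=n))
    return sum(model.logscore(word, tuple(context))
               for *context, word in ngrams(padded, n))

sentence_logscore(model, ['language', 'is', 'never', 'random'])

nltk.lm models also expose entropy() and perplexity() over an iterable of n-grams, if you want a length-normalized measure instead.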

There are some kinks on the Kaggle platform when loading the notebook, but the notebook should give a good overview of the nltk.lm module: https://www.kaggle.com/alvations/n-gram-language-model-with-nltk

Montiel answered 4/3, 2019 at 8:49
Thanks, how can I install nltk.lm via pip? It seems there is no such module when I install nltk. – Glaikit
pip install -U 'nltk>=3.4' – Montiel
Could you please take a look at this related question: #55000184 – Glaikit
Why does NLTK not provide a model.score(sentence) function? It is not trivial to compute the score of a sentence if we use backoff, for example. – Bongbongo
