First time poster - I am a new Python user with limited programming skills. Ultimately I am trying to identify and compare n-grams across numerous text documents found in the same directory. My analysis is somewhat similar to plagiarism detection: I want to calculate the percentage of text documents in which a particular n-gram can be found.

For now, I am attempting a simpler version of the larger problem by comparing n-grams across just two text documents. I have no problem identifying the n-grams, but I am struggling to compare them across the two documents. Is there a way to store the n-grams in a list so I can effectively check which ones are present in both documents?

Here's what I've done so far (forgive the naive coding). For reference, I use basic sentences below rather than the text documents I am actually reading in my code.
import nltk
from nltk.util import ngrams

text1 = 'Hello my name is Jason'
text2 = 'My name is not Mike'
n = 3

# ngrams() returns a generator, so store the results in lists so they can be reused
trigrams1 = list(ngrams(text1.split(), n))
trigrams2 = list(ngrams(text2.split(), n))

print(trigrams1)
for grams in trigrams1:
    print(grams)

def compare(trigrams1, trigrams2):
    # print each trigram from the first document that also appears in the second
    for grams1 in trigrams1:
        if grams1 in trigrams2:
            print(grams1)

compare(trigrams1, trigrams2)
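In case it helps make the question concrete, here is the kind of thing I imagine the answer might look like: converting the n-grams to sets and intersecting them for the two-document case, and counting document frequencies for the directory case. This is just a rough sketch I pieced together, not code I have verified; the directory argument, the '.txt' filter, and the lowercasing step are assumptions on my part.

import os
from collections import Counter
from nltk.util import ngrams

# Same example sentences as above
text1 = 'Hello my name is Jason'
text2 = 'My name is not Mike'
n = 3

# Two-document case: sets allow a direct overlap check.
# Lowercasing first so that 'My name is' and 'my name is' count as the same trigram.
set1 = set(ngrams(text1.lower().split(), n))
set2 = set(ngrams(text2.lower().split(), n))
print(set1 & set2)   # trigrams that appear in both documents

# Directory case: what percentage of documents contains each n-gram?
def ngram_document_percentages(directory, n=3):
    counts = Counter()
    doc_count = 0
    for filename in os.listdir(directory):
        if not filename.endswith('.txt'):   # assumed: documents are plain .txt files
            continue
        doc_count += 1
        with open(os.path.join(directory, filename)) as f:
            # a set means each n-gram is counted at most once per document
            doc_grams = set(ngrams(f.read().lower().split(), n))
        counts.update(doc_grams)
    return {gram: 100 * count / doc_count for gram, count in counts.items()}

Is something along these lines a reasonable way to go, or is there a better approach?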
Thanks to everyone for your help!