Word Frequency in text using Python but disregard stop words

This gives me a frequency of words in a text:

 fullWords = re.findall(r'\w+', allText)

 d = defaultdict(int)
 for word in fullWords:
     d[word] += 1

 finalFreq = sorted(d.iteritems(), key=operator.itemgetter(1), reverse=True)

 self.response.out.write(finalFreq)

This also gives me useless words like "the", "an", "a".

My question is: is there a stop-words library available in Python that can remove all these common words? I want to run this on Google App Engine.

Unvarnished answered 4/7, 2010 at 3:6 Comment(1)
Or do you want to compete in stackoverflow.com/questions/3169051?Sillabub

You can download lists of stopwords as files in various formats, e.g. from here -- all Python needs to do is read the file (these are in CSV format, easily read with the csv module), build a set, and test membership in that set (probably with some normalization, e.g. lowercasing) to exclude words from the count.
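A minimal sketch of that approach (the filename `stopwords.csv` and its one-word-per-row layout are assumptions; adjust for whichever list you download):

```python
import csv
import re
from collections import defaultdict

def count_words(all_text, stopword_file='stopwords.csv'):
    # Load the stopword list into a set for O(1) membership tests
    # (assumes one stopword per CSV row).
    with open(stopword_file, newline='') as f:
        stopwords = {row[0].strip().lower() for row in csv.reader(f) if row}

    d = defaultdict(int)
    for word in re.findall(r'\w+', all_text):
        word = word.lower()  # normalize case before the membership test
        if word not in stopwords:
            d[word] += 1
    return sorted(d.items(), key=lambda kv: kv[1], reverse=True)
```

On Python 2 (as on App Engine at the time), open the file without the `newline` argument and use `d.iteritems()` instead of `d.items()`.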

Roryros answered 4/7, 2010 at 3:25 Comment(0)

There's an easy way to handle this by slightly modifying the code you have (edited to reflect John's comment):

stopWords = set(['a', 'an', 'the', ...])
fullWords = re.findall(r'\w+', allText)
d = defaultdict(int)
for word in fullWords:
    if word not in stopWords:
        d[word] += 1
finalFreq = sorted(d.iteritems(), key=lambda t: t[1], reverse=True)
self.response.out.write(finalFreq)

This approach constructs the sorted list in two steps: first it filters out any words in your desired list of "stop words" (which has been converted to a set for efficiency), then it sorts the remaining entries.
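The same filter-then-sort idea can also be written with `collections.Counter` (available since Python 2.7), whose `most_common()` handles the sorting; a small sketch on made-up text:

```python
import re
from collections import Counter

stop_words = {'a', 'an', 'the'}
all_text = "The cat chased the dog and a second dog"

# Filter stop words out before counting; Counter tallies the rest,
# and most_common() returns (word, count) pairs sorted by count descending.
words = (w.lower() for w in re.findall(r'\w+', all_text))
final_freq = Counter(w for w in words if w not in stop_words).most_common()
```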

Valonia answered 4/7, 2010 at 3:19 Comment(3)
Ummmm: why insert the stopwords and then rip them out again? Two lines to fix: ` if word not in stopwords: d[word] += 1` followed by a simple finalFreq = d.items()Desalinate
@John: I missed that. Although the number of stopwords is by definition limited, so it's not such a big deal.Valonia
re your latest edit: you don't need the [] (sorted() takes any iterable), and (k,v) for k,v in d.iteritems() is just d.iteritems()Desalinate

I know that NLTK has a package with a corpus of stopwords for many languages, including English; see here for more information. NLTK also has a word frequency counter (FreqDist); it's a nice module for natural language processing that you should consider using.

Contumelious answered 4/7, 2010 at 3:45 Comment(0)
stopwords = set(['an', 'a', 'the']) # etc...
finalFreq = sorted(((k, v) for (k, v) in d.iteritems() if k not in stopwords),
                   key=operator.itemgetter(1), reverse=True)

This will filter out any keys which are in the stopwords set.
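For example, with a made-up counts dict (using `d.items()`, the Python 3 spelling of `iteritems()`):

```python
import operator

d = {'the': 4, 'cat': 2, 'a': 3, 'sat': 1}
stopwords = set(['an', 'a', 'the'])

# The generator expression drops stopword keys before sorted() sees them.
finalFreq = sorted(((k, v) for (k, v) in d.items() if k not in stopwords),
                   key=operator.itemgetter(1), reverse=True)
# finalFreq == [('cat', 2), ('sat', 1)]
```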

Culicid answered 4/7, 2010 at 3:19 Comment(2)
See my comment on DavidZ's answer, yours has the same problem.Desalinate
It's not really a problem, performance-wise: you're trading a set lookup for each resultant key for a set lookup for each word your regex matches. Which is more efficient will depend on the parameters of the problem set. You're already iterating over the set of result keys for output anyway, so the generator expression for filtering doesn't involve much additional overhead: there are no extra lists being created, and the dict isn't being modified (so you're not actually "ripping them out"; you're just filtering them so that they never make it into the sorted list).Culicid

© 2022 - 2024 — McMap. All rights reserved.