NLTK and language detection

How do I detect what language a text is written in using NLTK?

The examples I've seen use nltk.detect, but when I install it on my Mac, I cannot find this package.

Reagan answered 5/7, 2010 at 21:30 Comment(3)
The langid and langdetect libraries do the trick and are super easy to use: github.com/hb20007/hands-on-nltk-tutorial/blob/master/… – Miguelmiguela
langdetect is not very reliable (see github.com/Mimino666/langdetect/issues/51, for instance) and langid choked on a test Japanese string when I tested it. YMMV. In 2019, if you are not tied to NLTK, I'd recommend you take a look at cld2, cld3 or fastText instead. – Discordant
Nicely summarized here: https://mcmap.net/q/136690/-determine-if-text-is-in-english – Lindholm

Have you come across the following code snippet?

import nltk

# "text" is assumed to be an iterable of word tokens, e.g. from nltk.word_tokenize()
english_vocab = set(w.lower() for w in nltk.corpus.words.words())
text_vocab = set(w.lower() for w in text if w.lower().isalpha())
unusual = text_vocab.difference(english_vocab)

from http://groups.google.com/group/nltk-users/browse_thread/thread/a5f52af2cbc4cfeb?pli=1&safe=active

Or the following demo file?

https://web.archive.org/web/20120202055535/http://code.google.com/p/nltk/source/browse/trunk/nltk_contrib/nltk_contrib/misc/langid.py
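
Building on the same vocabulary idea, here is a minimal sketch (mine, not from that thread) that guesses a language by counting overlap with NLTK's stopword lists; it assumes the stopwords corpus has already been fetched with nltk.download('stopwords'), and the function name is just illustrative:

from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize

def guess_language_by_stopwords(text):
    # Score each language by how many of the text's tokens appear in its stopword list
    tokens = set(w.lower() for w in wordpunct_tokenize(text))
    scores = {lang: len(tokens & set(stopwords.words(lang)))
              for lang in stopwords.fileids()}
    # Return the language with the largest overlap
    return max(scores, key=scores.get)

print(guess_language_by_stopwords("Este libro ha sido uno de los mejores libros que he leido."))
# expected: 'spanish' (the stopwords corpus uses full language names, not ISO codes)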

Pistareen answered 2/8, 2010 at 2:34 Comment(5)
PS, it still relied on nltk.detect, though. Any idea on how to install that on a Mac? – Reagan
I don't believe detect is a native module for nltk. Here's the code: docs.huihoo.com/nltk/0.9.5/api/nltk.detect-pysrc.html You could probably download it and put it in your Python library, which may be in: /Library/Python/2.x/site-packages/nltk... – Pistareen
Check this out: blog.alejandronolla.com/2013/05/15/… – Ellene
The requested URL /p/nltk/source/browse/trunk/nltk_contrib/nltk_contrib/misc/langid.py was not found on this server. That's all we know. – Cooley
This is such a good answer. The simplicity of checking whether the words are in the vocab is an amazingly direct approach to this kind of task. Granted, it doesn't give you the actual language or a translation, but if you simply need to know whether the text is an outlier, this is brilliant. – Erebus

This library is not from NLTK either, but it certainly helps.

$ sudo pip install langdetect

Supported Python versions 2.6, 2.7, 3.x.

>>> from langdetect import detect

>>> detect("War doesn't show who's right, just who's left.")
'en'
>>> detect("Ein, zwei, drei, vier")
'de'

https://pypi.python.org/pypi/langdetect

P.S.: Don't expect this to always work correctly:

>>> detect("today is a good day")
'so'
>>> detect("today is a good day.")
'so'
>>> detect("la vita e bella!")
'it'
>>> detect("khoobi? khoshi?")
'so'
>>> detect("wow")
'pl'
>>> detect("what a day")
'en'
>>> detect("yay!")
'so'
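
As the comments below note, the algorithm is non-deterministic; here is a short sketch (not part of the original answer) showing how to pin the seed and get probability estimates instead of a single label:

from langdetect import detect_langs, DetectorFactory

# langdetect is non-deterministic by default; fixing the seed makes runs reproducible
DetectorFactory.seed = 0

# detect_langs() returns a ranked list of language:probability candidates
print(detect_langs("War doesn't show who's right, just who's left."))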
Trudeau answered 3/8, 2016 at 19:39 Comment(5)
Thank you for pointing out that it doesn't always work. detect("You made it home!") is giving me "fr". I'm wondering if there is anything better. – Chibouk
Here is another fun observation: it doesn't seem to give the same answer each time. >>> detect_langs("Hello, I'm christiane amanpour.") [it:0.8571401485770536, en:0.14285811674731527] >>> detect_langs("Hello, I'm christiane amanpour.") [it:0.8571403121803622, fr:0.14285888197332486] >>> detect_langs("Hello, I'm christiane amanpour.") [it:0.999995562246093] – Chibouk
langdetect works much better for longer strings where it can sample more n-grams; for short strings of a few words, it's extremely unreliable. – Quadrifid
@MarkCramer The algorithm is non-deterministic. If you want the same answer each time, set the seed: from langdetect import DetectorFactory; DetectorFactory.seed = 0 – Coelom
Quick to install, easy to use. Maybe not perfect, but for my usage it worked fine. Thank you! – Akel

Although this is not in NLTK, I have had great results with another Python-based library:

https://github.com/saffsd/langid.py

This is very simple to import and includes a large number of languages in its model.
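
A minimal usage sketch (mine, not the answerer's), assuming the library has been installed with pip install langid:

import langid

# classify() returns a (language_code, score) tuple
lang, score = langid.classify("Este libro ha sido uno de los mejores libros que he leido.")
print(lang)   # expected: 'es'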

Lowndes answered 30/6, 2013 at 3:43 Comment(0)

Super late, but you could use the TextCat classifier in nltk (nltk.classify.textcat). The algorithm is based on Cavnar & Trenkle's paper on N-gram-based text categorization.

It returns a language code in ISO 639-3, so I would use pycountry to get the full name.

For example, load the libraries:

import nltk
import pycountry
from nltk.stem import SnowballStemmer
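
If you have not used TextCat before, its language profiles come from the crubadan corpus, so a one-time download is likely needed first; the punkt tokenizer is included below as well, since some setups also want it for tokenization (treat this as a hedged setup sketch):

nltk.download('crubadan')
nltk.download('punkt')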

Now let's look at two phrases, and guess their language:

phrase_one = "good morning"
phrase_two = "goeie more"

tc = nltk.classify.textcat.TextCat() 
guess_one = tc.guess_language(phrase_one)
guess_two = tc.guess_language(phrase_two)

guess_one_name = pycountry.languages.get(alpha_3=guess_one).name
guess_two_name = pycountry.languages.get(alpha_3=guess_two).name
print(guess_one_name)
print(guess_two_name)

English
Afrikaans

You could then pass them into other nltk functions, for example:

stemmer = SnowballStemmer(guess_one_name.lower())
s1 = "walking"
print(stemmer.stem(s1))
walk

Disclaimer: obviously this will not always work, especially for sparse data.

An extreme example:

guess_example = tc.guess_language("hello")
print(pycountry.languages.get(alpha_3=guess_example).name)
Konkani (individual language)
Roan answered 17/10, 2019 at 12:7 Comment(0)

polyglot.detect can detect the language:

from polyglot.detect import Detector

foreign = 'Este libro ha sido uno de los mejores libros que he leido.'
print(Detector(foreign).language)

name: Spanish     code: es       confidence:  98.0 read bytes:   865
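
The Detector also exposes the list of candidate languages it considered, which can be useful for mixed-language input; a short sketch of my own (assuming polyglot and its pycld2/PyICU dependencies are installed):

from polyglot.detect import Detector

mixed = "Este libro ha sido uno de los mejores libros que he leido. It was translated into English later."
for language in Detector(mixed).languages:
    # each candidate carries name, code, confidence and read-bytes information
    print(language)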
Congregationalism answered 13/2, 2023 at 15:16 Comment(0)
