I'm writing a dissertation and using nltk.pos_tagger in my work. I can't find any information about the accuracy of this algorithm. Does anybody know where I can find such information?
What is the accuracy of nltk pos_tagger?
I don't think you can get an accuracy score anywhere, really. Like most NLP tools, this is very application-specific: it depends on how many ambiguous words you've got, whether you have ground truth to evaluate the model against, etc. I would design your dissertation in such a way that you can calculate precision and recall for your specific case. Say, use Mechanical Turk to generate human-tagged data from your corpus and then evaluate. –
Mutualism
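If you go that route, here is a rough sketch of the scoring step (my addition, not the commenter's code; the gold sentences below are hypothetical placeholders for your own hand-tagged data):

from collections import Counter
from nltk import pos_tag

# hypothetical hand-tagged gold data; replace with your own annotations
gold_sents = [
    [("The", "DT"), ("cat", "NN"), ("sleeps", "VBZ"), (".", ".")],
    # ... more hand-tagged sentences ...
]

tp, fp, fn = Counter(), Counter(), Counter()
correct = total = 0
for sent in gold_sents:
    words = [w for w, _ in sent]
    for (_, gold), (_, pred) in zip(sent, pos_tag(words)):
        total += 1
        if pred == gold:
            correct += 1
            tp[gold] += 1
        else:
            fp[pred] += 1
            fn[gold] += 1

print("accuracy:", correct / total)
for tag in sorted(set(tp) | set(fp) | set(fn)):
    prec = tp[tag] / (tp[tag] + fp[tag]) if tp[tag] + fp[tag] else 0.0
    rec = tp[tag] / (tp[tag] + fn[tag]) if tp[tag] + fn[tag] else 0.0
    print("%-4s precision %.2f recall %.2f" % (tag, prec, rec))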
NLTK's default POS tagger, pos_tag, is a MaxEnt tagger; see line 82 of https://github.com/nltk/nltk/blob/develop/nltk/tag/__init__.py
from nltk.corpus import brown
from nltk.data import load

sents = brown.tagged_sents()
# hold out the last 10% of the Brown corpus as a test set
numtest = len(sents) // 10
testsents = sents[-numtest:]

# load the pickled MaxEnt Treebank tagger that ships with NLTK
_POS_TAGGER = 'taggers/maxent_treebank_pos_tagger/english.pickle'
tagger = load(_POS_TAGGER)
print(tagger.evaluate(testsents))
[out]:
I think you forgot to paste the output. –
Taproot
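For what it's worth, here is one way to get a concrete number out of the current default tagger. This is my own sketch, not part of the answer above: it scores nltk.pos_tag against the Penn Treebank sample bundled with NLTK (whose tagset matches the default tagger's, unlike Brown's), and the result is optimistic because that sample overlaps with the tagger's training data.

from nltk import pos_tag
from nltk.corpus import treebank

gold_sents = treebank.tagged_sents()
# hold out the last 10% as a test set
numtest = len(gold_sents) // 10
test_sents = gold_sents[-numtest:]

correct = total = 0
for sent in test_sents:
    words = [w for w, _ in sent]
    for (_, gold), (_, pred) in zip(sent, pos_tag(words)):
        total += 1
        correct += (pred == gold)

print("accuracy:", correct / total)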
And how does using the MaxEnt tagger answer the question about its accuracy? –
Retrogradation
Regarding accuracy: I trained several taggers on the WSJ corpus (90% training / 10% test data). nltk-maxent-pos-tagger achieved an accuracy of 93.64% (100 iterations, rare feature cutoff = 5), while MXPOST reached 96.93% (100 iterations). Since both implementations use the same feature set, the results shouldn't differ that much. Unfortunately, there's no source code available for MXPOST, but comparing nltk-maxent-pos-tagger with OpenNLP's implementation should be helpful. Link: github.com/arne-cl/nltk-maxent-pos-tagger#todo –
Vaal
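A sketch of that 90/10 train/test workflow using only what ships with NLTK (my addition; it substitutes the bundled treebank sample for the full WSJ corpus and a simple n-gram tagger for MaxEnt, so the numbers will not match those quoted above):

from nltk.corpus import treebank
from nltk.tag import UnigramTagger, BigramTagger

sents = treebank.tagged_sents()
split = int(len(sents) * 0.9)
train_sents, test_sents = sents[:split], sents[split:]

# bigram tagger backing off to a unigram tagger, just to illustrate the
# train/evaluate workflow; training a MaxEnt tagger is analogous but slower
unigram = UnigramTagger(train_sents)
bigram = BigramTagger(train_sents, backoff=unigram)

correct = total = 0
for sent in test_sents:
    words = [w for w, _ in sent]
    for (_, gold), (_, pred) in zip(sent, bigram.tag(words)):
        total += 1
        correct += (pred == gold)

print("accuracy on held-out 10%:", correct / total)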