Has anyone parsed Wiktionary? [closed]
Wiktionary is a wiki dictionary that covers many languages; it even has translations. I would be interested in parsing it and playing with the data. Has anyone done anything like this before? Is there a library I can use? (Preferably Python.)

Shurlock answered 29/7, 2010 at 15:36 Comment(1)
en.wiktionary.org/wiki/Wiktionary:ParsingIsallobar

Wiktionary runs on MediaWiki, which has an API.

One of the subpages for the API documentation is Client code, which lists some Python libraries.
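For quick experiments you may not even need a client library: the standard MediaWiki action API can return a page's raw wikitext directly. A minimal stdlib-only sketch (the `action=parse`/`prop=wikitext` parameters are the standard MediaWiki ones; the User-Agent string is just a placeholder):

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wiktionary.org/w/api.php"

def wikitext_url(title):
    """Build an action-API URL that returns a page's raw wikitext as JSON."""
    params = {
        "action": "parse",
        "page": title,
        "prop": "wikitext",
        "format": "json",
        "formatversion": "2",  # v2 returns wikitext as a plain string
    }
    return API + "?" + urllib.parse.urlencode(params)

def fetch_wikitext(title):
    """Fetch and decode the wikitext of one page."""
    req = urllib.request.Request(
        wikitext_url(title),
        headers={"User-Agent": "wiktionary-example/0.1"},  # the API asks for a UA
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["parse"]["wikitext"]
```

For bulk work, though, the dumps mentioned in the other answers are kinder to the servers than crawling the API page by page.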

Parrott answered 29/7, 2010 at 15:40 Comment(0)

At one time I downloaded a Wiktionary dump, trying to gather words and definitions for Slavic languages. I approached it with ElementTree, going through the XML file that makes up the dump. I would avoid trying to scrape or crawl the site; just download the XML dump that Wikimedia provides for Wiktionary. Go to the Wikimedia downloads, look for the English Wiktionary dumps (enwiktionary), and open the most recent one. You'll probably want the pages-articles.xml.bz2 file, which is just the article content, with no history or comments. Parse this with whatever XML processing library you prefer in Python; I personally prefer ElementTree. Good luck.
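A minimal sketch of that approach, streaming the compressed dump with ElementTree's iterparse so it never has to fit in memory (the export namespace version varies between dumps — check the first line of yours and adjust `NS`):

```python
import bz2
import xml.etree.ElementTree as ET

# Namespace used by MediaWiki export dumps; the version number (here 0.10)
# differs between dump generations, so verify it against your file.
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def iter_pages(path):
    """Stream (title, wikitext) pairs from a pages-articles.xml.bz2 dump."""
    with bz2.open(path, "rb") as f:
        for _event, elem in ET.iterparse(f):  # default: fire on end tags
            if elem.tag == NS + "page":
                title = elem.findtext(NS + "title")
                text = elem.findtext(NS + "revision/" + NS + "text") or ""
                yield title, text
                elem.clear()  # free the finished page to keep memory flat
```

Note that, as the comment below points out, what you get back is wikitext, not XML — the XML layer only gets you to each page's title and raw markup.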

Rosewater answered 29/7, 2010 at 20:59 Comment(1)
How did you use ElementTree? As far as I can see, most of the data is not XML-tagged, i.e., you get everything under <text>: <text xml:space="preserve">==English== ===Etymology 1=== {{rfe}} ====Pronunciation==== * {{enPR|fēt}}, {{IPA|/fiːt/|lang=en}} * {{audio|en-us-feet.ogg|Audio (US)|lang=en}} * {{rhymes|iːt|lang=en}} * {{homophones|lang=en|feat}} ====Noun==== {{en-plural noun}}Marybellemarybeth

Wordnik has done a good job parsing out definitions, etc., and they have a great API.

Like the others have mentioned, Wiktionary is a formatting disaster and was not built to be computer-readable.

Couthie answered 16/3, 2012 at 9:51 Comment(3)
Thanks, wordnik works perfectly for me. I have a thin Python client for getting definitions and examples for a word.Numbing
Do you realize that the dump from Wikimedia is intentionally partial? In fact, it is also maliciously partial in that the dump misses very basic and often-used words while containing a lot of words that many of us don't even know exist.Claqueur
@Claqueur Link for "intentionally partial", please. If you found some page which is present on the wiki but not in the dumps, have you reported the bug?Destructive

Yes, many people have parsed Wiktionary. You can usually find past experiences in the Wiktionary-l mailing list archives.

A project not mentioned by the other answers is DBpedia's Wiktionary RDF extraction.

Dozens of other research projects have parsed Wiktionary: you can find some examples in a recent Wiktionary special and in other issues of the Wikimedia research newsletter.

Recently someone also made an English Wiktionary REST API, which includes an unspecified subset of the Wiktionary data; future plans for it are not yet known.

Destructive answered 29/7, 2010 at 15:36 Comment(0)

I had a crack at parsing the German Wiktionary. I ended up writing it off as too difficult, but I put my (not at all tidied up) code up at https://github.com/benreynwar/wiktionary-parser before I gave up. Although there are conventions used by the editors, they are not enforced by anything other than peer oversight. The diversity of templates used, along with all the typos in the pages, makes the parsing quite challenging.

I think the problem is that they've used the same system as for Wikipedia, which is great for ease of use by the editors but is not appropriate for the much more structured content of Wiktionary. It's a shame, because if Wiktionary could be easily parsed it would be a hugely useful resource.

Samuels answered 6/5, 2011 at 4:52 Comment(2)
Just saw this when looking at other slashdot wiktionary questions. It might be useful. en.wikipedia.org/wiki/…Samuels
This project is now hosted at github.com/benreynwar/wiktionary-parser. It remains neglected.Samuels

I just made a word list from the German dump like this:

bzcat pages-articles.xml.bz2 | grep '<title>[^[:space:][:punct:]]*</title>' | sed 's:.*<title>\(.*\)</title>.*:\1:' > words
Pusan answered 24/3, 2012 at 23:5 Comment(1)
I think the question was about parsing the wiki content, not the XML.Blankenship

You are welcome to play with the parsed Wiktionary MySQL databases. There are two (English Wiktionary and Russian Wiktionary), created by a parser written in Java: http://wikokit.googlecode.com

If you like PHP, then you are welcome to play with piwidict, a PHP API to this machine-readable Wiktionary.

Cooky answered 13/3, 2014 at 13:20 Comment(1)
This may be the most hopeful option of all written thus far. +1Mitis

You may be interested in the dbnary project — not Python, but interesting. It claims parsing support for 21 languages, and it powers WikDict.

Pearlstein answered 29/7, 2015 at 10:18 Comment(1)
WikDict also provides downloads of translation data which has been further processed to make it easier to use. See wikdict.com/page/about .Gurdwara

There is also JWKTL, which does a good job of parsing and extracting structured data from Wiktionary. It is written in Java and supports the English, German, and Russian editions.

Housebreaking answered 28/11, 2014 at 21:12 Comment(1)
I think it doesn't support French, but it does GermanMutazilite

It depends on how thoroughly you need to parse it. If you just need to get all the contents of a word in a language (definition, etymology, pronunciation, conjugation, etc.), then it's pretty easy. I had done this before, although in Java, using jsoup.

However, if you need to parse it down to the different components of the content (e.g. just getting the definitions of a word), then it will be much more challenging. A Wiktionary entry for a word in a language has no pre-defined template, so a header can be anything from <h3> to <h6>, the order of the sections may be jumbled, they can be repetitive, etc.
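As a sketch of how loose the structure is, here is a stdlib-only `html.parser` pass over a rendered entry that simply collects every heading from h3 down to h6, since you can't rely on a fixed level (the sample HTML is invented for illustration):

```python
from html.parser import HTMLParser

# Entry layout varies per page, so match any heading level from h3 to h6.
HEADINGS = {"h3", "h4", "h5", "h6"}

class SectionLister(HTMLParser):
    """Collect (level, title) pairs for every h3-h6 heading in a page."""
    def __init__(self):
        super().__init__()
        self._in_heading = None  # tag name while inside a heading, else None
        self._buf = []
        self.sections = []

    def handle_starttag(self, tag, attrs):
        if tag in HEADINGS:
            self._in_heading = tag
            self._buf = []

    def handle_data(self, data):
        if self._in_heading:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == self._in_heading:
            self.sections.append((tag, "".join(self._buf).strip()))
            self._in_heading = None

html = "<h3>Etymology 1</h3><p>...</p><h4>Pronunciation</h4><h4>Noun</h4>"
parser = SectionLister()
parser.feed(html)
# parser.sections == [("h3", "Etymology 1"), ("h4", "Pronunciation"), ("h4", "Noun")]
```

Mapping those headings back to *meaning* (which one is a definition block, which is an etymology) is the hard part the answer describes — the heading level alone doesn't tell you.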

Mutazilite answered 17/6, 2015 at 0:22 Comment(0)

I wrote a primitive parser for the German Wiktionary dump in Java that only extracts nouns and their articles, plus their Arabic translations, without any dependencies. Execution takes a long time, so be warned. If there's interest in parsing more or other data, please tell me; I might look into it as time permits.

Styria answered 19/5, 2018 at 11:7 Comment(0)
