How can I retrieve Wiktionary word content?

How may Wiktionary's API be used to determine whether or not a word exists?

Synge answered 5/5, 2010 at 4:5 Comment(3)
Anyone who has read the documentation will see that the API contains nowhere near enough functionality to "retrieve Wiktionary word content". I'd estimate it gets you roughly 1% of the way: you can retrieve raw wiki syntax or parsed HTML, and from there you have to do everything yourself. Having said that, there might be a very new experimental API that works only on the English Wiktionary.Chink
Get all Wiktionary articles in individual JSON files here: github.com/dan1wang/jsonbook-builderSundae
An even better parsed JSON version is here: kaikki.orgCompote

The Wiktionary API can be used to query whether or not a word exists.

Examples for existing and non-existing pages:

http://en.wiktionary.org/w/api.php?action=query&titles=test
http://en.wiktionary.org/w/api.php?action=query&titles=testx

The first link's response includes examples of other output formats that might be easier to parse.

To retrieve the word's data in a small XHTML format (should more than existence be required), request the printable version of the page:

http://en.wiktionary.org/w/index.php?title=test&printable=yes
http://en.wiktionary.org/w/index.php?title=testx&printable=yes

These can then be parsed with any standard XML parser.
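A minimal Python sketch of this existence check (standard library only; it relies on the `action=query` behavior where a missing title comes back flagged with a `missing` key, and the helper names are my own):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wiktionary.org/w/api.php"

def page_exists(query_result: dict) -> bool:
    """True if no page in an action=query response is flagged missing."""
    pages = query_result["query"]["pages"]
    return not any("missing" in page for page in pages.values())

def check_word(title: str) -> bool:
    """Ask the English Wiktionary whether a page with this exact title exists."""
    params = urlencode({"action": "query", "titles": title, "format": "json"})
    with urlopen(f"{API}?{params}") as resp:
        return page_exists(json.load(resp))
```

Per the example URLs above, `check_word("test")` should report the page as existing and `check_word("testx")` as missing.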

Kalli answered 5/5, 2010 at 4:8 Comment(9)
Thanks; the API itself is not what I was hoping for but the link you provided is what I was looking for.Synge
Now it accepts an additional format parameter for output other than XML, like so: en.wiktionary.org/w/…Totaquine
Might not work as you expect though en.wiktionary.org/wiki/Category:English_misspellings en.wiktionary.org/wiki/amatuerFrayne
Use: https://en.wiktionary.org/w/?curid=[page_id]&printable=yes, to redirect to the XHTML page using pageid.Alcibiades
If you need to fetch the data using the browser, you can use https://en.wiktionary.org/w/api.php?format=json&action=query&origin=*&export&exportnowrap&titles=test to avoid CORS-related problemsMargalo
How do you filter this API for only English words?Jemena
Use HTTPS with those examples; the plain HTTP versions aren't returning results.Dibasic
Sadly the printable XHTML seems poorly supported. There's a no longer supported warning shown. Also, I found that it gives me invalid XHTML, specifically an unclosed <input> tag. Here's the URL I used: en.wiktionary.org/w/?curid=103410&printable=yes , alternatively: en.wiktionary.org/w/index.php?title=test&printable=yesTendentious
I've been playing with this myself. I think, if you want to check whether a word is valid in English, you want to use https://en.wiktionary.org/w/api.php?action=query&format=xml&prop=categories&titles=WORDS%7CTO%7CCHECK&clcategories=Category%3AEnglish%20lemmas%7CCategory%3AEnglish%20non-lemma%20forms%7CCategory%3AEnglish%20eye%20dialect. Then, "valid in English" means a result that has the category "English lemmas" or "English non-lemma forms" but doesn't have the category "English eye dialect". However the set of words meeting these criteria may still be overly broad for many uses.Golf
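The category check in the comment above can be sketched like this (Python, standard library only; the category names come from that comment, and the helper names are my own):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wiktionary.org/w/api.php"

LEMMA_CATS = {"Category:English lemmas", "Category:English non-lemma forms"}
EXCLUDE_CATS = {"Category:English eye dialect"}

def valid_english(page: dict) -> bool:
    """Apply the comment's rule to one page object from a
    prop=categories response: valid if it is in a lemma/non-lemma
    category but not in the eye-dialect category."""
    cats = {c["title"] for c in page.get("categories", [])}
    return bool(cats & LEMMA_CATS) and not (cats & EXCLUDE_CATS)

def fetch_page(title: str) -> dict:
    """Fetch one page with only the categories of interest attached."""
    params = urlencode({
        "action": "query", "prop": "categories", "titles": title,
        "clcategories": "|".join(LEMMA_CATS | EXCLUDE_CATS),
        "format": "json",
    })
    with urlopen(f"{API}?{params}") as resp:
        data = json.load(resp)
    (page,) = data["query"]["pages"].values()
    return page
```

As the commenter notes, the resulting word set may still be broader than you want for some uses.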

2024 UPDATE!

It seems that a new MediaWiki REST API has appeared since I last played with this stuff. And the biggest news is that it includes a method to get definitions from the English Wiktionary!

/page/definition/{term} Get term definitions based on Wiktionary content. Experimental end point providing term definitions extracted from Wiktionary content. Currently, only English Wiktionary is supported. See this wiki page for background and considerations for further development.

Stability: stable

Please follow wikitech-l or mediawiki-api-announce for announcements of breaking changes.
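As a sketch, the endpoint can be called like this in Python (standard library only). The response shape assumed here (language codes mapping to lists of usages, each with a `partOfSpeech` and a `definitions` list) is based on my reading of the endpoint's documentation, so verify it against a live response; definition strings may also contain HTML markup.

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

REST = "https://en.wiktionary.org/api/rest_v1"

def english_glosses(defs: dict) -> list:
    """Flatten the non-empty definition strings for English usages
    out of a /page/definition response."""
    out = []
    for usage in defs.get("en", []):
        for d in usage.get("definitions", []):
            if d.get("definition"):
                out.append(d["definition"])
    return out

def fetch_definitions(term: str) -> dict:
    # Wikimedia asks API clients to send a descriptive User-Agent.
    req = Request(f"{REST}/page/definition/{quote(term)}",
                  headers={"User-Agent": "definition-check-sketch/0.1"})
    with urlopen(req) as resp:
        return json.load(resp)
```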


Old answer

There are a few caveats in just checking that Wiktionary has a page with the name you are looking for:

Caveat #1: All Wiktionaries including the English Wiktionary actually have the goal of including every word in every language, so if you simply use above API call you will know that the word you are asking about is a word in at least one language, but not necessarily English: http://en.wiktionary.org/w/api.php?action=query&titles=dicare

Caveat #2: Perhaps a redirect exists from one word to another word. It might be from an alternative spelling, but it might be from an error of some kind. The API call above will not differentiate between a redirect and an article: http://en.wiktionary.org/w/api.php?action=query&titles=profilemetry

Caveat #3: Some Wiktionaries including the English Wiktionary include "common misspellings": http://en.wiktionary.org/w/api.php?action=query&titles=fourty

Caveat #4: Some Wiktionaries allow stub entries which have little or no information about the term. This used to be common on several Wiktionaries but not the English Wiktionary. But it seems to have now spread also to the English Wiktionary: https://en.wiktionary.org/wiki/%E6%99%B6%E7%90%83 (permalink for when the stub is filled so you can still see what a stub looks like: https://en.wiktionary.org/w/index.php?title=%E6%99%B6%E7%90%83&oldid=39757161)

If these are not included in what you want, you will have to load and parse the wikitext itself, which is not a trivial task.
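Caveat #2 at least can be detected mechanically: with `prop=info`, a redirect page carries a `redirect` attribute in the response. A rough Python sketch (standard library only; helper names are my own):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wiktionary.org/w/api.php"

def classify(query_result: dict) -> str:
    """Classify the single page in an action=query&prop=info response
    as 'missing', 'redirect', or 'article'."""
    (page,) = query_result["query"]["pages"].values()
    if "missing" in page:
        return "missing"
    if "redirect" in page:
        return "redirect"
    return "article"

def lookup(title: str) -> str:
    params = urlencode({"action": "query", "prop": "info",
                        "titles": title, "format": "json"})
    with urlopen(f"{API}?{params}") as resp:
        return classify(json.load(resp))
```

The other caveats (non-English entries, misspellings, stubs) still require inspecting the page content itself.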

Chink answered 3/12, 2010 at 5:35 Comment(3)
What I really wanted to do was take a full dump of the data on one of the non-English Wiktionary sites, and then turn the contents into something I could use locally. It seems silly now, but I was hoping that I could request the list of all words, and then pull down their definitions/translations one at a time as needed.Synge
The fix to Caveat #2 is simple: add &prop=info to the query and check the response for redirect attribute.Brion
@svick: Yes it's true #2 is easier to circumvent when using the API but these basic caveats also cover trying to parse the Wiktionary data dump files, even though this question doesn't ask about that approach.Chink

You can download a dump of Wiktionary data. There's more information in the FAQ. For your purposes, the definitions dump is probably a better choice than the XML dump.

Thicken answered 18/8, 2011 at 8:15 Comment(3)
Those dump files are massive, and it's unclear which ones to download (all of them?). Probably not what most people are looking for if they just want to programmatically look up a handful of words.Liquescent
I explain which file to download - i.e. the definitions dump (the directory from my link is just different versions of the same file), and yes, if you programmatically want to look up words this is ideal. If you can guarantee the program will be executed only online, there are other options, but nevertheless I'm answering this part of the original question: "Alternatively, is there any way I can pull down the dictionary data that backs a Wiktionary?"Thicken
Definitions dump link is no longer available.Psilocybin

To keep it really simple, extract the words from the dump like this:

bzcat pages-articles.xml.bz2 | grep '<title>[^[:space:][:punct:]]*</title>' | sed 's:.*<title>\(.*\)</title>.*:\1:' > words
Crissie answered 24/3, 2012 at 23:14 Comment(3)
how do I get a copy of pages-articles.xml.bz2?Synge
It's just a generic name I used to describe the dumps of the form LANGwiktionary-DATE-pages-articles.xml.bz2 . Go to link, then click LANGwiktionary (LANG e.g. 'en', 'de'...).Crissie
That's great, thanks! If you want to get the words with a dash or space in it, you should use: bzcat pages-articles.xml.bz2 | grep '<title>\(.*\)</title>' | sed 's:.*<title>\(.*\)</title>.*:\1:' > wordsRosenstein

If you are using Python, you can use WiktionaryParser by Suyash Behera.

You can install it by

pip install wiktionaryparser

Example usage:

from pprint import pprint
from wiktionaryparser import WiktionaryParser
parser = WiktionaryParser()
word = parser.fetch('test')
pprint(word)
another_word = parser.fetch('test', 'french')
pprint(another_word)

# features
parser.set_default_language('french')
parser.exclude_part_of_speech('noun')
parser.include_relation('alternative forms')
Anatolian answered 20/3, 2018 at 19:43 Comment(0)

You could use the revisions API:

https://en.wiktionary.org/w/api.php?action=query&prop=revisions&titles=test&rvslots=*&rvprop=content&formatversion=2

Or the parse API:

https://en.wiktionary.org/w/api.php?action=parse&page=test&prop=wikitext&formatversion=2

More examples are provided in the documentation.
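A small Python sketch of the parse-API call (standard library only; with `formatversion=2` the wikitext comes back as a plain string under `parse.wikitext`, rather than the nested object older format versions use):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wiktionary.org/w/api.php"

def extract_wikitext(parse_result: dict) -> str:
    """Pull the raw wikitext out of an action=parse (formatversion=2) response."""
    return parse_result["parse"]["wikitext"]

def get_wikitext(page: str) -> str:
    params = urlencode({"action": "parse", "page": page, "prop": "wikitext",
                        "formatversion": "2", "format": "json"})
    with urlopen(f"{API}?{params}") as resp:
        return extract_wikitext(json.load(resp))
```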

Mont answered 14/8, 2020 at 4:11 Comment(1)
You can also add &format=json to the urls to have a formatted response.Cleancut

As mentioned earlier, the problem with this approach is that Wiktionary includes words from all languages. So checking whether a page exists via the MediaWiki API won't work, because there are many pages for non-English words. To overcome this, you need to parse each page to figure out whether it has a section describing the English word. Parsing wikitext isn't a trivial task, though in this case it's not that bad: to cover almost all cases you just need to check whether the wikitext contains the English heading. Depending on your programming language, you can find tools that build an AST from wikitext. This covers most cases, but not all, because Wiktionary includes some common misspellings.
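The English-heading check can be sketched as a small function (a heuristic that relies on Wiktionary's convention of using level-2 headings for language names):

```python
import re

def has_english_section(wikitext: str) -> bool:
    """Heuristic: an entry covers English if its wikitext contains a
    level-2 '==English==' language heading on its own line."""
    return re.search(r"^==\s*English\s*==\s*$", wikitext, re.MULTILINE) is not None
```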

As an alternative, you could try Lingua Robot or something similar. Lingua Robot parses the Wiktionary content and provides it via a REST API. A non-empty response means the word exists. Note that, unlike Wiktionary, the API itself doesn't include any misspellings (at least at the time of writing this answer). Also note that Wiktionary contains not only single words but also multi-word expressions.

Lenny answered 7/10, 2019 at 20:48 Comment(0)

You might want to try JWKTL out. I just found out about it ;)

Jaquenette answered 24/1, 2011 at 2:39 Comment(3)
The citation that you refer to is broken. Here is a link to the JWKTL page ukp.tu-darmstadt.de/software/jwktl. It's not really what I believe the OP is looking for though.Eddy
The second link is (effectively) broken. It redirects to a generic page, Welcome to the Ubiquitous Knowledge Processing (UKP) Lab!.Premedical
The Wikipedia reference leads to Extracting lexical semantic knowledge from Wikipedia and Wiktionary and "...JWKTL (Java-based WiKTionary Library)...".Premedical

Here's a start to parsing etymology and pronunciation data:

function parsePronunciationLine(line) {
  let val
  let type
  line.replace(/\{\{\s*a\s*\|UK\s*\}\}\s*\{\{IPA\|\/?([^\/\|]+)\/?\|lang=en\}\}/, (_, $1) => {
    val = $1
    type = 'uk'
  })
  line.replace(/\{\{\s*a\s*\|US\s*\}\}\s*\{\{IPA\|\/?([^\/\|]+)\/?\|lang=en\}\}/, (_, $1) => {
    val = $1
    type = 'us'
  })
  // Note: the '|' inside {{enPR|...}} must be escaped, or it acts as regex alternation.
  line.replace(/\{\{enPR\|[^\}]+\}\},?\s*\{\{IPA\|\/?([^\/\|]+)\/?\|lang=en\}\}/, (_, $1) => {
    val = $1
    type = 'us'
  })
  line.replace(/\{\{a\|GA\}\},?\s*\{\{IPA\|\/?([^\/\|]+)\/?\|lang=en\}\}/, (_, $1) => {
    val = $1
    type = 'ga'
  })
  line.replace(/\{\{a\|GA\}\},?.+\{\{IPA\|\/?([^\/\|]+)\/?\|lang=en\}\}/, (_, $1) => {
    val = $1
    type = 'ga'
  })
  // {{a|GA}} {{IPA|/ˈhæpi/|lang=en}}
  // * {{a|RP}} {{IPA|/pliːz/|lang=en}}
  // * {{a|GA}} {{enPR|plēz}}, {{IPA|/pliz/|[pʰliz]|lang=en}}

  if (!val)
    return

  return { val, type }
}

function parseEtymologyPiece(piece) {
  let parts = piece.split('|')
  parts.shift() // The first one is ignored.
  let ls = []
  if (langs[parts[0]]) {
    ls.push(parts.shift())
  }
  if (langs[parts[0]]) {
    ls.push(parts.shift())
  }
  let l = ls.pop()
  let t = parts.shift()
  return [ l, t ]
  // {{inh|en|enm|poisoun}}
  // {{m|enm|poyson}}
  // {{der|en|la|pōtio|pōtio, pōtiōnis|t=drink, a draught, a poisonous draught, a potion}}
  // {{m|la|pōtō|t=I drink}}
  // {{der|en|enm|happy||fortunate, happy}}
  // {{cog|is|heppinn||lucky}}
}

Here is a gist with it more fleshed out.

Haplosis answered 9/6, 2019 at 16:19 Comment(2)
thanks, tried to run it inside browser devtools console. what is langs?Petiolate
updated with a gist, langs is a few thousand lines, too big for SO.Haplosis

I created my own open source Wiktionary API project. It is based on the wiktextract data and in general has much more information than the official API: for example IPAs, etymology information, canonical forms of words (describing stress in many languages, for example), and translation tables.

It also contains information not only from the English Wiktionary but from 6 different Wiktionaries, and can translate any language pair, for example from French to Czech.

(I currently have it hosted, but I can't make any guarantees about uptime etc.; anyone can easily self-host it if needed.)

Compote answered 15/2 at 12:34 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.