Get Text Content from mediawiki page via API

I'm quite new to MediaWiki, and now I have a bit of a problem. I have the title of some wiki page, and I want to get just the plain text of that page using api.php, but all that I have found in the API is a way to obtain the wiki content of the page (with wiki markup). I used this HTTP request...

/api.php?action=query&prop=revisions&rvlimit=1&rvprop=content&format=xml&titles=test

But I need only the textual content, without the Wiki markup. Is that possible with the MediaWiki API?

Falconiform answered 26/10, 2009 at 14:32 Comment(2)
I don't have enough of whatever the microcurrency here is called to add an answer to a question this old, but for anyone searching, it's worth noting that the Mediawiki TextExtracts API ( mediawiki.org/wiki/… ) gives you just the text contents of an article. (It keeps article headings, but that's relatively easy to regex out.)Lise
Not enough microcurrency to edit: Actually, you can also remove the heading markup. Sample query: en.wikipedia.org/w/…Lise
C
6

I don't think it is possible using the API to get just the text.

What has worked for me was to request the HTML page (using the normal URL that you would use in a browser) and strip out the HTML tags under the content div.

EDIT:

I have had good results using HTML Parser for Java. It has examples of how to strip out HTML tags under a given DIV.
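For readers not tied to Java, the same div-scoped stripping can be sketched with Python's standard-library `html.parser`. The content div id `mw-content-text` is an assumption based on MediaWiki's default skin; adjust it for your install:

```python
from html.parser import HTMLParser

class ContentDivText(HTMLParser):
    """Collect only the text nodes inside <div id="mw-content-text">."""
    def __init__(self, content_id="mw-content-text"):
        super().__init__()
        self.content_id = content_id
        self.depth = 0        # nesting depth of <div>s inside the content div
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            if tag == "div":
                self.depth += 1
        elif tag == "div" and dict(attrs).get("id") == self.content_id:
            self.depth = 1

    def handle_endtag(self, tag):
        if self.depth and tag == "div":
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.parts.append(data)

def content_text(html):
    parser = ContentDivText()
    parser.feed(html)
    return "".join(parser.parts)
```

This walks the whole page but keeps only text that falls inside the content div, which is essentially what the HTML Parser library does for Java.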

Cand answered 26/10, 2009 at 14:51 Comment(4)
I have done the same thing: I have a Java app that must receive the text content of a wiki page. When I use the API and receive the wiki-syntax page it works very fast, but I need clear text. I have tried to request the HTML page and strip out the HTML tags, but it works slowly, therefore I have asked about this feature in the wiki API. Or maybe you know some good wiki-syntax-to-clear-text converter for Java, then I can convert it directly in Java?Falconiform
The real issue with wikipedia's language is that it is Turing complete. If you look closely at the code of a page, you will notice all sorts of custom functions. The definitions of those functions have to be fetched as well and then interpreted, which might expand to yet more functions. That is why I reverted to html parsing, which contains the complete, rendered text.Cand
MediaWiki's wikitext isn't quite Turing complete, since the developers have bravely fought off the editors' demands for looping constructs. But you are correct that to get plain text out of MediaWiki you need to get the HTML and then strip that. You might like to use this html2txt.pl tool I made in Perl for that job, or convert it to your favourite language: gist.github.com/751910Choline
A relatively new extension to the API (TextExtracts) now allows for plain text extraction from an article. See my answer.Anarchism
H
75

Use action=parse to get the html:

/api.php?action=parse&page=test

One way to get the text from the html would be to load it into a browser and walk the nodes, looking only for the text nodes, using JavaScript.
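Outside a browser, the same request can be built with Python's standard library. This is a minimal sketch; the `api_base` layout (`https://en.wikipedia.org/w/api.php`) is an assumption, and `prop=text` simply limits the response to the rendered HTML:

```python
from urllib.parse import urlencode

def parse_request_url(api_base, page):
    """Build an action=parse URL that returns the rendered HTML as JSON."""
    params = {
        "action": "parse",
        "page": page,
        "format": "json",   # as noted in the comments, JSON output is available
        "prop": "text",     # limit the response to the rendered HTML
    }
    return api_base + "?" + urlencode(params)
```

In the classic JSON response shape, the rendered HTML sits under `data["parse"]["text"]["*"]`; feed that string to whatever tag stripper you prefer.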

Hwahwan answered 27/5, 2011 at 16:50 Comment(3)
action=parse can also return JSON by adding format=json.Saltigrade
Getting links to the page in results for titles search would be nice. Not sure which query string that is. Also, Hi @gilly3.. :D This answer still helped after a decade.Pasteurizer
using the REST API is also an option, for getting a parsed html version of a MediaWiki page /rest.php/v1/page/<page name>/html working example: mediawiki.org/w/rest.php/v1/page/MediaWiki/htmlKinetics
A
47

The TextExtracts extension of the API does about what you're asking. Use prop=extracts to get a cleaned up response. For example, this link will give you cleaned up text for the Stack Overflow article. What's also nice is that it still includes section tags, so you can identify individual sections of the article.

Just to include a visible link in my answer, the above link looks like:

/api.php?format=xml&action=query&prop=extracts&titles=Stack%20Overflow&redirects=true

Edit: As Amr mentioned, TextExtracts is an extension to MediaWiki, so it won't necessarily be available for every MediaWiki site.
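As a minimal sketch, the same extracts query can be issued from Python with just the standard library. JSON is used here instead of XML for easier parsing, and the response shape (pages keyed by numeric page id) follows the standard action API:

```python
from urllib.parse import urlencode

def extracts_url(api_base, title):
    """Build a TextExtracts query URL (the extension must be installed on the wiki)."""
    return api_base + "?" + urlencode({
        "format": "json",
        "action": "query",
        "prop": "extracts",
        "explaintext": "1",   # plain text instead of limited HTML
        "redirects": "1",
        "titles": title,
    })

def extract_from_response(data):
    """Pull the plain-text extract out of the JSON response (pages keyed by page id)."""
    page = next(iter(data["query"]["pages"].values()))
    return page.get("extract", "")
```

Fetch `extracts_url(...)` with `urllib.request.urlopen` and pass the decoded JSON to `extract_from_response`.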

Anarchism answered 18/2, 2014 at 4:5 Comment(1)
TextExtracts is an extension to MediaWiki. It's available for Wikipedia but not for every MediaWiki installation. mediawiki.org/wiki/Extension:TextExtractsCrossbench
H
40

Adding ?action=raw to the end of a MediaWiki page URL returns the latest revision as raw wikitext. E.g.: https://en.wikipedia.org/wiki/Main_Page?action=raw
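As the comments below note, the pretty /wiki/Page URL form doesn't work on every install; the index.php?title=...&action=raw form is more portable. A small sketch (the script path "https://en.wikipedia.org/w" is an assumption; adjust for your wiki):

```python
from urllib.parse import urlencode

def raw_wikitext_url(script_path, title):
    """Build an action=raw URL via index.php.

    Using index.php directly avoids relying on pretty /wiki/Page
    URL rewriting being configured on the target wiki.
    """
    return script_path + "/index.php?" + urlencode({"title": title, "action": "raw"})
```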

Heng answered 6/3, 2014 at 12:49 Comment(4)
I tried this on a page not on wikipedia, and it didn't work. Does this require an extension?Chekiang
It seems only to work for the English Wikipedia - see exampleAspidistra
@MartinThoma If you change %26action%3Draw to ?action=raw, it works.Alister
Is there any way to also get page title in the same request using this method?Hughes
I
33

You can get the wiki data in text format from the API by using the explaintext parameter. Plus, if you need to access many titles' information, you can get all the titles' wiki data in a single call. Use the pipe character | to separate each title. For example, this API call will return the data from both the "Google" and "Yahoo" pages:

http://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exlimit=max&explaintext&exintro&titles=Yahoo|Google&redirects=

Parameters:

  • explaintext: Return extracts as plain text instead of limited HTML.
  • exlimit=max: Return more than one result. The max is currently 20.
  • exintro: Return only the content before the first section. If you want the full data, just remove this.
  • redirects=: Resolve redirect issues.
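Since one call can return several titles, the JSON response needs a little unpacking. A minimal sketch, assuming the standard response shape where pages are keyed by numeric page id:

```python
def extracts_by_title(data):
    """Map page title -> plain-text extract from a prop=extracts JSON response."""
    return {
        page["title"]: page.get("extract", "")
        for page in data["query"]["pages"].values()
    }
```

Note that because of `redirects=`, the title in the response may differ from the title you requested (e.g. a redirect's target), so keying by the returned title is safer than by your query string.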
Invention answered 10/6, 2015 at 18:31 Comment(3)
This will give you just the first section, not the whole article's textAmazon
We can also use exsectionformat=plain to remove wikitext-style formatting (== like this ==). Source: mediawiki.org/w/…Lacy
Can you get the data of a page by the id of this page?Whidah
L
11

That's the simplest way: http://en.wikipedia.org/w/api.php?format=xml&action=query&titles=Albert%20Einstein&prop=revisions&rvprop=content

Lecture answered 24/4, 2012 at 18:41 Comment(1)
Unfortunately, this returns MediaWiki markup, which needs to be parsed in order to retrieve the text.Thanatos
A
7

Python users coming to this question might be interested in the wikipedia module (docs):

import wikipedia
wikipedia.set_lang('de')
page = wikipedia.page('Wikipedia')
print(page.content)

All formatting, except for section headings (==), is stripped away.

Aspidistra answered 3/8, 2017 at 6:52 Comment(0)
D
-2

Once the contents are brought into your page, you can use the PHP function strip_tags() to remove the HTML tags.

Dichotomy answered 23/6, 2017 at 14:50 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.