As has been pointed out, you can't do it all in one request, but you can do it all in one line, broken out here for readability:
curl -s "http://myserver/mywiki/api.php?action=parse&format=json&page=Testpage&prop=sections" |\
jq -r '.parse.sections[] | .index' |\
xargs -I {} -n 1 curl -s "http://myserver/mywiki/api.php?action=parse&page=Testpage&format=json&prop=wikitext&section={}" |\
jq '.parse.wikitext."*"' | xargs -I {} -0 -n 1 echo -e {}
Explanation:
- curl -s keeps it quiet; you may need -k for HTTPS
- the first jq grabs the index of each entry in the returned sections array (sample output below)
- we use xargs to grab each section as JSON
- passing each index back to the API to get that section's wikitext
- finally each result is passed to echo -e to interpret the escape sequences
- the -0 stops metacharacters in the wikitext from being interpreted by xargs
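For reference, the first call returns one JSON object per section, roughly like this abridged entry (the "Usage" heading is purely illustrative, and the exact set of fields varies with the MediaWiki version); the second stage keeps only the index values:
{
  "toclevel": 1,
  "level": "2",
  "line": "Usage",
  "number": "1",
  "index": "1",
  "anchor": "Usage"
}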
This of course does not look much different from grabbing the whole page, but changing the first jq slightly to
jq -r ".parse.sections[] | select(.line == \"$section\") | .index"
limits the output to a single section, matched by its heading text.
You did not ask this, but it's useful as a poor man's supplement to man pages: written as a bash function (a sketch below), one could recall a specifically named, condensed section of a larger self-linked page at the command line. man doesn't cover everything, and it has been around since the start of Unix precisely because no one can remember everything and get it right, especially not ChatGPT. Thanks Nemo for your original answer.
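For completeness, a minimal sketch of such a function; the name wikisect is made up, the server URL is the same placeholder as above, and a real version would want URL-encoding of the arguments plus better error handling. Using jq -r on the wikitext also makes the echo -e step unnecessary, since -r prints the decoded string with its newlines intact.
wikisect () {
  # wikisect PAGE "Section heading" -- print the wikitext of one named section
  local page=$1 heading=$2
  local api="http://myserver/mywiki/api.php"
  local index
  # look up the numeric index of the section whose heading matches
  index=$(curl -s "$api?action=parse&format=json&page=$page&prop=sections" |\
    jq -r --arg line "$heading" '.parse.sections[] | select(.line == $line) | .index')
  [ -n "$index" ] || { echo "no section '$heading' on $page" >&2; return 1; }
  # fetch just that section and print its raw wikitext
  curl -s "$api?action=parse&format=json&page=$page&prop=wikitext&section=$index" |\
    jq -r '.parse.wikitext."*"'
}
Called as, say, wikisect Testpage "Usage".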