Script to get the HTTP status code of a list of URLs?

I have a list of URLs that I need to check, to see whether they still work or not. I would like to write a bash script that does that for me.

I only need the returned HTTP status code, i.e. 200, 404, 500 and so forth. Nothing more.

EDIT: Note that there is an issue if the page says "404 not found" but returns a 200 OK response. It's a misconfigured web server, but you may have to consider this case.

For more on this, see Check if a URL goes to a page containing the text "404"
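
For a quick guard against that case, here is a minimal sketch that checks the body as well as the status code (the URL, temp file, and matched phrase are all just illustrative):

url="http://example.com/some/page"   # placeholder URL
body=$(mktemp)
code=$(curl --silent --output "$body" --write-out '%{http_code}' "$url")
# a "soft 404": the server says 200 but the page itself claims not-found
if [ "$code" = "200" ] && grep -qi "404 not found" "$body"; then
  echo "$url: soft 404 (200 OK with error text)"
else
  echo "$url: $code"
fi
rm -f "$body"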

Dimeter answered 26/5, 2011 at 8:58 Comment(2)
To be fair, my script's "bug" is only when the server returns HTTP code 200 but the body text says "404 not found", which is a misbehaving webserver.Lundt
The exit status of wget will be 0 if the response code was 200, 8 if 404, 4 if 302... You can use the $? variable to access the exit status of the previous command.Quamash
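
A minimal sketch of the approach from that last comment (url is a placeholder variable; the exit-code meanings are from wget's man page: 0 success, 4 network failure, 8 server error response):

wget -q --spider "$url"
case $? in
  0) echo "$url: reachable" ;;
  4) echo "$url: network failure" ;;
  8) echo "$url: server returned an error response (e.g. 404)" ;;
  *) echo "$url: other failure" ;;
esac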

Curl has a specific option, --write-out, for this:

$ curl -o /dev/null --silent --head --write-out '%{http_code}\n' <url>
200
  • -o /dev/null throws away the usual output
  • --silent throws away the progress meter
  • --head makes a HEAD HTTP request, instead of GET
  • --write-out '%{http_code}\n' prints the required status code

To wrap this up in a complete Bash script:

#!/bin/bash
while read -r LINE || [ -n "$LINE" ]; do  # -r keeps backslashes; the || clause handles a missing trailing newline
  curl -o /dev/null --silent --head --write-out "%{http_code} $LINE\n" "$LINE"
done < url-list.txt

(Eagle-eyed readers will notice that this uses one curl process per URL, which imposes fork and TCP connection penalties. It would be faster if multiple URLs were combined in a single curl, but there isn't space to write out the monstrous repetition of options that curl requires to do this.)
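
One way around that repetition, sketched here, is to generate a curl config file and let a single curl process handle all the URLs itself; --parallel needs curl 7.66 or newer, and urls.cfg is an arbitrary name:

# one "url = ..." plus "output = /dev/null" pair per line of url-list.txt
awk '{print "url = \"" $0 "\""; print "output = \"/dev/null\""}' url-list.txt > urls.cfg
curl --parallel --silent --head --write-out '%{url_effective} %{http_code}\n' --config urls.cfg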

Lundt answered 26/5, 2011 at 10:7 Comment(9)
Very nice. Can I execute that command on every URL in my file?Dimeter
@Manu: Yes, I've edited my answer to show one possible way of wrapping up the curl command. It assumes url-list.txt contains one URL per line.Lundt
If you're wanting to POST, you can't use --head. In that case, the rest still applies but it would look like: curl -o /dev/null -s -w '%{http_code}\n' --data "key=value" <url>Kandi
I don't know why the script from the above answer always gets me 000 in the output, but when I run the command only once without the loop it works...Proulx
@KarolFiturski I had the same problem (which you've probably since fixed but just in case anyone else stumbles across this...) in my case I had carriage returns at the line ends of my input file, causing the urls to be like http://example.com/\r when going through the loopApulia
I had this issue and I was able to fix it by switching the line ending from the Windows type to the Linux type.Rubinrubina
Added example for Fish shell. @see github.com/fish-shell/fish-shell/issues/…Bhagavadgita
During my testing, an empty trailing line was necessary, otherwise, the last line would not be tested.Rhodian
What if I want the final response code after redirects?Gravante
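
For the carriage-return problem mentioned in the comments above, one quick fix is to normalize the list's line endings before looping over it (a sketch; file names are illustrative):

tr -d '\r' < url-list.txt > url-list.unix.txt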
wget --spider -S "http://url/to/be/checked" 2>&1 | grep "HTTP/" | awk '{print $2}'

prints only the status code for you
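
To run it over a whole file (a sketch assuming url-list.txt holds one URL per line; the final tail -1 keeps only the last code when a URL redirects, so drop it to see every hop):

while read -r url; do
  code=$(wget --spider -S "$url" 2>&1 | grep "HTTP/" | awk '{print $2}' | tail -1)
  echo "$url $code"
done < url-list.txt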

Morven answered 25/2, 2012 at 10:40 Comment(3)
+1 Shows multiple codes when a URL is redirected, each on a new line.Anachronistic
Had to get rid of the --spider for it to work with the request that I was trying to make, but works.Storyteller
you can use --max-redirect=0 if you do not want multiple codes: wget --max-redirect=0 --spider -S "https://miles4migrants.org/ukraine2canada/s" 2>&1 | grep "HTTP/" | awk '{print $2}'Kane

Extending the answer already provided by Lundt above: adding parallelism to it is a no-brainer in bash if you use xargs for the call.

Here is the code:

xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' < url.lst

-n1: use just one value (from the list) as argument to the curl call

-P10: Keep 10 curl processes alive at any time (i.e. 10 parallel connections)

Check the --write-out parameter in curl's manual for more data you can extract using it (times, etc).

In case it helps someone this is the call I'm currently using:

xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' < url.lst | tee results.csv

It just outputs a bunch of data into a CSV file that can be imported into any office tool.
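
If you want column names in that CSV, one option is to write a header row first and append to it (a sketch; the header simply mirrors the fields above):

echo "url;http_code;time_total;time_namelookup;time_connect;size_download;speed_download" > results.csv
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' < url.lst | tee -a results.csv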

Oira answered 13/3, 2014 at 13:20 Comment(6)
Parallelism, file input and csv. Exactly what i was looking for.Peninsula
Brilliant, made my day.Retrogression
This is awesome, just what I was looking for, thank you sir. One question, how could one include the page title of the page in the csv results?Ashly
@Oira - stackoverflow.com/users/1182464/estani how could one include getting the page title of a page into the .csv file. Sorry for repost, forgot to tag you so you would get notified about this question. Many thanks.Ashly
@Ashly this is not handling the contents of the http call at all. If the "page title" (whatever that is) is in the url, then you could add it. If not, you need to parse the whole page to extract the "title" of it (assuming you mean a html page retrieved by the http). Look for other answers at stack overflow or ask that specific question.Oira
the output is not the same as Lundt's scriptCoke

This relies on widely available wget, present almost everywhere, even on Alpine Linux.

wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'

The explanations are as follows:

--quiet

Turn off Wget's output.

Source - wget man pages

--spider

[ ... ] it will not download the pages, just check that they are there. [ ... ]

Source - wget man pages

--server-response

Print the headers sent by HTTP servers and responses sent by FTP servers.

Source - wget man pages

What they don't say about --server-response is that those headers are printed to standard error (stderr), hence the 2>&1 redirection to standard output.

With the headers now on standard output, we can pipe them to awk to extract the HTTP status code. That code is:

  • the second ($2) non-blank group of characters on the line
  • on the very first line of the header: NR==1

And because we want to print it... {print $2}.

wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
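
Note that on a redirect, NR==1 reports the first hop's code (e.g. 301). If you want the final code instead, here is a variant that keeps the last HTTP line (a sketch):

wget --server-response --spider --quiet "${url}" 2>&1 | awk '$1 ~ /^HTTP\// {code=$2} END {print code}'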
Creditor answered 18/11, 2018 at 5:25 Comment(1)
I used this one with 2>&1 | head -1 | awk '{ print $2 }'Malachi

Use curl to fetch the HTTP-header only (not the whole file) and parse it:

$ curl -I  --stderr /dev/null http://www.google.co.uk/index.html | head -1 | cut -d' ' -f2
200
Hexachlorophene answered 26/5, 2011 at 9:25 Comment(2)
curl tells me 200 when wget says 404 ... :(Dimeter
The -I flag causes curl to make a HTTP HEAD request, which is treated separately from a normal HTTP GET by some servers and can thus return different values. The command should still work without it.Benjie

wget -S -i file will get you the headers from each URL in the file.

Filter through grep for the status code specifically, as in the sketch below.
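
For example (a sketch; --spider is added here to avoid downloading the pages, and since -S writes to stderr you need the 2>&1):

wget -S --spider -i url-list.txt 2>&1 | grep "HTTP/" | awk '{print $2}'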

Shearer answered 26/5, 2011 at 9:10 Comment(0)

I found a tool called "webchk", written in Python, that returns the status code for a list of URLs: https://pypi.org/project/webchk/

Output looks like this:

▶ webchk -i ./dxieu.txt | grep '200'
http://salesforce-case-status.dxi.eu/login ... 200 OK (0.108)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.389)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.401)

Hope that helps!
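
Since it's on PyPI, installation should be the usual pip route, after which you can point it at your own list with -i as above:

pip install webchk
webchk -i url-list.txt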

Fevre answered 29/4, 2020 at 21:5 Comment(0)

Due to https://mywiki.wooledge.org/BashPitfalls#Non-atomic_writes_with_xargs_-P (output from parallel jobs in xargs risks being mixed), I would use GNU Parallel instead of xargs to parallelize:

cat url.lst |
  parallel -P0 -q curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' > outfile

In this particular case it may be safe to use xargs because the output is so short; the problem is rather that if someone later changes the code to do something bigger, it will no longer be safe. Or if someone reads this answer and thinks they can replace curl with something else, that may also not be safe.

Example url.lst:

https://fsfe.org
https://www.fsf.org/bulletin/2010/fall/gnu-parallel-a-design-for-life
https://www.fsf.org/blogs/community/who-actually-reads-the-code
https://publiccode.eu/
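
A variant folding in the fixes discussed in the comments below: -d '\r\n' tolerates Windows line endings and --keep-order makes the output order match the input order (a sketch):

cat url.lst |
  parallel -d '\r\n' --keep-order -P0 -q curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' > outfile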
Naevus answered 7/9, 2019 at 6:36 Comment(8)
what is the format of 'url.lst'? How are urls separated?Idden
thanks, but for some reason it returns the response status code only for the last URL in the list; all URLs above it get status code 000... Did you try it yourself? Is this working code for you?Idden
@Idden Code is working for me. I get: <url>: 200 for each of them.Naevus
@Idden I just tried putting in \r\n as newline, and then I get your 000. So you probably have an additional \r in your file. Try: -d '\r\n'Naevus
You are right, the infamous newlines between Windows and Linux :) I have fixed it and now it works for me too. However, I have noticed that the order of results in the output file is not the same as the order of urls in the input file.... interesting.Idden
@Idden If you want that: --keep-orderNaevus
Thanks, works great. I also managed to achieve similar result using curl native parallel option: curl --parallel --parallel-immediate --config config.txt --retry 3 --retry-delay 5 -s -w "%{url}: %{response_code}\n" > outfile. Wonder which one is better / uses more resourcesIdden
@Idden GNU Parallel rarely outperforms tools that have their own parallelization: The tools know how they can cut corners, whereas GNU Parallel will have to run the full program every time.Naevus

Keeping in mind that curl is not always available (particularly in containers), there are issues with this solution:

wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'

which will return an exit status of 0 even if the URL doesn't exist.

Alternatively, here is a reasonable container health check using wget:

wget -S --spider -q -t 1 "${url}" 2>&1 | grep "200 OK" > /dev/null

While it may not give you the exact status code, it will at least give you a valid exit-code-based health response (even with redirects on the endpoint).
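
Wrapped up as a small health-check script (a sketch; the endpoint URL is a placeholder, and the script's exit status is simply grep's, so something like a Docker HEALTHCHECK can invoke it directly):

#!/bin/sh
url="http://localhost:8080/health"   # placeholder endpoint
wget -S --spider -q -t 1 "$url" 2>&1 | grep -q "200 OK"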

Louralourdes answered 17/8, 2021 at 9:3 Comment(0)
