I'm working with urllib and urllib2 in Python and am using them to retrieve images from URLs, with something similar to:
try:
    buffer = urllib2.urlopen(urllib2.Request(url)).read()
    f.write(buffer)
    f.close()
except (...):  # which errors could occur here? network errors(?)
    print "Failed to retrieve " + url
What often happens is that the image does not load or shows up broken when using the site in a normal web browser, presumably because of high server load, or because the image does not exist or could not be retrieved by the server.
Whatever the reason may be, the image does not load, and the same situation is likely to occur when using the script. Since I do not know what error it might throw, how do I handle it?
Listing every possible error from the urllib2 and urllib libraries in the except statement seems like overkill, so I need a better way.
(I may also have to handle a broken Wi-Fi connection, an unreachable server and the like at times, so even more errors.)
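For reference, this is roughly the kind of handling I have been considering. It is only a sketch: it assumes Python 2.6+, that catching urllib2.URLError (which urllib2.HTTPError subclasses) covers the network-level failures, and that IOError covers problems writing the local file. The fetch_image name and the 30-second timeout are just illustrative.

import urllib2

def fetch_image(url, path):
    try:
        # urllib2.HTTPError is a subclass of urllib2.URLError, so URLError
        # catches both HTTP status errors and lower-level network failures.
        response = urllib2.urlopen(urllib2.Request(url), timeout=30)
        data = response.read()
        with open(path, 'wb') as f:
            f.write(data)
        return True
    except urllib2.URLError as e:
        print "Failed to retrieve " + url + ": " + str(e)
    except IOError as e:  # problems opening or writing the local file
        print "Failed to save " + url + ": " + str(e)
    return False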
[e for e in dir(urllib2) if 'rror' in e] gives me ['HTTPDefaultErrorHandler', 'HTTPError', 'HTTPErrorProcessor', 'URLError']. Does that help at all? – Baggy
Those are the error and handler classes defined in urllib2. So if you want to catch "all" errors explicitly, those are the ones that you should list. If you're having trouble catching multiple exceptions in one exception block, look at this question – Baggy
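Building on that comment, here is a minimal sketch of catching several of those exceptions in one except block. It assumes Python 2.6+ for the "as" syntax; note that urllib2.HTTPError is a subclass of urllib2.URLError, so listing both is explicit rather than strictly necessary, and the URL is only a placeholder.

import urllib2

url = "http://example.com/image.jpg"  # placeholder URL for illustration
try:
    data = urllib2.urlopen(url).read()
except (urllib2.HTTPError, urllib2.URLError) as e:
    # a single except block handles both exception types
    print "Failed to retrieve " + url + ": " + str(e)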