How can I catch 404 and 403 errors for pages in Python, using urllib(2) for example?
Are there any fast ways that don't require big class wrappers?
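For context, this is roughly the shape of solution I'm hoping for (just a minimal sketch around the same urlopen call as in the trace below, assuming plain urllib2 on Python 2; HTTPError has to be caught before URLError because it is a subclass):

    import urllib2

    try:
        page = urllib2.urlopen("http://localhost:4444")
    except urllib2.HTTPError as e:
        # raised for HTTP error statuses such as 404 and 403; the status is in e.code
        print "HTTP error:", e.code
    except urllib2.URLError as e:
        # raised for lower-level failures such as "Connection refused"
        print "Failed to reach the server:", e.reason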
Added info (stack trace):
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    page = urllib2.urlopen("http://localhost:4444")
  File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.6/urllib2.py", line 391, in open
    response = self._open(req, data)
  File "/usr/lib/python2.6/urllib2.py", line 409, in _open
    '_open', req)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 1161, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.6/urllib2.py", line 1136, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno 111] Connection refused>
– Millsap