HTTP requests via the Python Requests module fail through a proxy where curl succeeds. Why?

Using this curl command from Bash I am able to get the response I am looking for:

curl -v -u z:secret_key --proxy http://proxy.net:80  \
-H "Content-Type: application/json" https://service.com/data.json

I have already seen this other post on proxies with the Requests module.

It helped me formulate my code in Python, but I need to make the request via a proxy, and even while supplying the proper proxies it isn't working. Perhaps I'm just not seeing something?

>>> requests.request('GET', 'https://service.com/data.json',
...                  headers={'Content-Type': 'application/json'},
...                  proxies={'http': 'http://proxy.net:80', 'https': 'http://proxy.net:80'},
...                  auth=('z', 'secret_key'))

Furthermore, at the same Python console I can use urllib to make a request and have it succeed.

>>> import urllib
>>> urllib.urlopen("http://www.httpbin.org").read()
---results---

Even trying Requests on a plain non-HTTPS address fails:

>>> requests.get('http://www.httpbin.org')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.6/site-packages/requests/api.py", line 79, in get
    return request('get', url, **kwargs)
  File "/Library/Python/2.6/site-packages/requests/api.py", line 66, in request
    prefetch=prefetch
  File "/Library/Python/2.6/site-packages/requests/sessions.py", line 191, in request
    r.send(prefetch=prefetch)
  File "/Library/Python/2.6/site-packages/requests/models.py", line 454, in send
    raise ConnectionError(e)
requests.exceptions.ConnectionError: Max retries exceeded for url:

Requests is so elegant and awesome, but how could it be failing in this instance?

Limp answered 13/12, 2011 at 0:30 Comment(5)
pycurl.sourceforge.net – Nagpur
I know that I could probably set up and use pycurl on my Mac without too much trouble (or likely any at all). I was just trying to go for the more elegant solution of using Requests, which is pretty awesome and clean. Thank you for the suggestion though. – Limp
Setting up a proxy for use with Requests works just fine here. Ideally we could reproduce what you're seeing... otherwise telling us why it doesn't work is the only other option. Are you getting a stack trace from Requests? You could also monitor your network and check the actual requests, since I can only guess they have to be different for a different effect to be observed between curl/requests. – Nagpur
I'm now noticing that HTTPS requests of any kind, using any library/module, are not working within Python. However, a normal HTTP request works fine. Think it could be my environment variables? How would I check what is wrong? – Limp
Requests does HTTPS cert validation by default. Perhaps it's failing to validate the cert for your proxy? – Benzoin
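
The two hypotheses in the comments (stale proxy environment variables, and certificate validation) can be checked quickly. A minimal diagnostic sketch, keeping the question's placeholder host and credentials, and assuming a Requests version that accepts the verify keyword:

import os
import requests

# Check the proxy environment variables first: urllib honours these by
# default, and a stale value can break one tool while another still works.
for var in ('HTTP_PROXY', 'HTTPS_PROXY', 'http_proxy', 'https_proxy'):
    print(var + '=' + repr(os.environ.get(var)))

# Then rule out certificate validation, which Requests does by default.
# verify=False is a diagnostic only -- do not ship it.
r = requests.get('https://service.com/data.json',
                 proxies={'https': 'http://proxy.net:80'},
                 auth=('z', 'secret_key'),
                 verify=False)
print(r.status_code)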

The problem actually lies with Python's standard URL-access libraries (urllib/urllib2/httplib). I can't remember which library is the exact culprit, but for simplicity's sake let's just call it urllib. Unfortunately, urllib doesn't implement the HTTP CONNECT method, which is required for accessing an HTTPS site through an HTTP(S) proxy. My efforts to add the functionality using urllib have not been successful (it has been a while since I tried). So, unfortunately, the only option I know to work in this case is to use pycurl.
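
For reference, a minimal pycurl sketch of the same request as the question's curl command; the option names are standard libcurl ones, and the host and credentials are the question's placeholders:

import pycurl
from StringIO import StringIO  # Python 2, to match the traceback in the question

buf = StringIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'https://service.com/data.json')
c.setopt(pycurl.PROXY, 'http://proxy.net:80')    # libcurl issues CONNECT for https:// URLs
c.setopt(pycurl.USERPWD, 'z:secret_key')         # origin auth, same as curl -u
c.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json'])
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.perform()
c.close()
print(buf.getvalue())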

However, there is a relatively clean solution with almost exactly the same API as Python Requests; it just uses a pycurl backend instead of the Python standard libraries.

The library is called human_curl. I've used it myself and have had great results.
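
For illustration, a sketch of what that usage might look like; the auth tuple follows human_curl's Requests-like API, but the proxy argument shape here is an assumption from memory, so check the project README:

import human_curl as hurl

# Assumed usage based on human_curl's Requests-like API; the exact
# proxy argument shape may differ -- consult the project README.
r = hurl.get('https://service.com/data.json',
             auth=('z', 'secret_key'),
             proxy=('http', ('proxy.net', 80)))
print(r.status_code)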

Crapulent answered 11/1, 2012 at 9:15 Comment(1)
That is not correct. urllib2 does support HTTP CONNECT (bugs.python.org/issue1424152), while Requests didn't support it until 2.0 (github.com/kennethreitz/requests/pull/1515). – Wiatt
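
On that note, a minimal sketch of the same request on Requests 2.0 or later, which should tunnel the HTTPS call through the proxy via CONNECT (host and credentials are the question's placeholders):

import requests

# Requests >= 2.0 issues HTTP CONNECT for https:// URLs routed through
# an HTTP proxy, mirroring what the curl command in the question does.
r = requests.get('https://service.com/data.json',
                 headers={'Content-Type': 'application/json'},
                 proxies={'http': 'http://proxy.net:80',
                          'https': 'http://proxy.net:80'},
                 auth=('z', 'secret_key'))
print(r.status_code)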

Believing the above answer, we tried human_curl.

human_curl gave unhelpful errors ("Unknown error"), whereas urllib3 gave meaningful ones like "Request timed out" and "Max retries exceeded with url".

So we went back to urllib3, which is also thread-safe. We are happy with urllib3.

The only problem now is that we still get "Max retries exceeded". We can't solve it; we're guessing it might have to do with the server/proxy, but we're not sure.
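
For what it's worth, a hedged sketch of driving urllib3 through a proxy with an explicit retry budget; "Max retries exceeded" surfaces once that budget is spent, so a small total exposes the underlying error sooner (Retry lives in urllib3.util.retry in modern releases; host and credentials are the question's placeholders):

import urllib3
from urllib3.util.retry import Retry

# Route every request through the proxy from the question.
http = urllib3.ProxyManager('http://proxy.net:80')

# curl's -u z:secret_key is Basic auth for the origin server, so it
# belongs in the request headers rather than the proxy headers.
headers = urllib3.make_headers(basic_auth='z:secret_key')
headers['Content-Type'] = 'application/json'

# An explicit retry budget: "Max retries exceeded" is raised once these
# attempts are spent, so a small total surfaces the real error sooner.
r = http.request('GET', 'https://service.com/data.json',
                 headers=headers,
                 retries=Retry(total=3, backoff_factor=0.5))
print(r.status)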

Giacobo answered 3/8, 2012 at 3:26 Comment(1)
I am using Requests at work and everything seems to work fine, including communication over HTTPS connections. Furthermore, we use proxies for debugging HTTP requests. If you can shed some light on your issue, I might be able to help you. – Fracture
