How to avoid HTTP error 429 (Too Many Requests) in Python

I am trying to use Python to log in to a website and gather information from several web pages, and I get the following error:

Traceback (most recent call last):
  File "extract_test.py", line 43, in <module>
    response=br.open(v)
  File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in open
    return self._mech_open(url, data, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 255, in _mech_open
    raise response
mechanize._response.httperror_seek_wrapper: HTTP Error 429: Unknown Response Code

I used time.sleep() and it works, but it seems unintelligent and unreliable. Is there any other way to dodge this error?

Here's my code:

import mechanize
import cookielib
import re

first = "http://example.com/page1"
second = "http://example.com/page2"
third = "http://example.com/page3"
fourth = "http://example.com/page4"
## I have seven URLs I want to open

urls_list = [first, second, third, fourth]

br = mechanize.Browser()

# Cookie jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)

# Browser options
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)

# Log in with credentials
br.open("http://example.com")
br.select_form(nr=0)
br["username"] = "username"
br["password"] = "password"
br.submit()

for url in urls_list:
    response = br.open(url)
    # Search the page contents for the pattern
    print re.findall("Some String", response.read())
Joslin answered 1/4, 2014 at 12:35 Comment(2)
There's no way around it; this is enforced on the server side, which keeps track of how many requests per time unit you make. If you exceed this limit you'll be temporarily blocked. Some servers send this information in the header, but those occasions are rare. Check the headers received from the server and use the information available. If not, check how fast you can hammer without getting caught and use a sleep.Doloritas
#15648772Doloritas

Receiving a status 429 is not an error; it is the other server "kindly" asking you to please stop spamming requests. Obviously, your rate of requests has been too high and the server is not willing to accept this.

You should not seek to "dodge" this, or even try to circumvent server security settings by spoofing your IP; you should simply respect the server's answer by not sending too many requests.

If everything is set up properly, you will also have received a "Retry-after" header along with the 429 response. This header specifies the number of seconds you should wait before making another call. The proper way to deal with this "problem" is to read this header and to sleep your process for that many seconds.
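
For illustration, a minimal sketch of that approach using the requests library; the function name get_with_retry, the URL argument, and the 10-second fallback are just placeholders for this example:

import time
import requests

def get_with_retry(url, max_attempts=5):
    """Fetch a URL, sleeping for the advertised Retry-After period on a 429."""
    for _ in range(max_attempts):
        response = requests.get(url)
        if response.status_code != 429:
            return response
        # Retry-After is usually a number of seconds; fall back to 10 if absent.
        # (It may also be an HTTP date, which this sketch does not handle.)
        time.sleep(int(response.headers.get("Retry-After", 10)))
    raise RuntimeError("Still rate limited after %d attempts" % max_attempts)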

You can find more information on status 429 here: https://www.rfc-editor.org/rfc/rfc6585#page-3

Green answered 29/4, 2014 at 14:14 Comment(4)
Well, no one ever said that all web servers are configured correctly. Also, since most rate limiters identify visitors by IP, this might lead to problems in a scenario where IPs are shared dynamically. If you keep receiving status 429 even though you are confident that you have not sent too many requests at all, you might consider contacting the site's administrator.Green
Thanks for mentioning the "Retry-after" header. I would love a code example to see how to get that value (I was using urllib, the OP mechanize; in either case I don't think the headers are included in the raised exception)Molding
@Molding I don't have any particular Python code examples ready, but I assume some examples about how to retrieve response headers in general can be taken from the answers to this question: https://mcmap.net/q/152574/-python-get-http-headers-from-urllib2-urlopen-callGreen
Thanks @MRA. I found that the headers are available in the exception too: after catching HTTPError as my_exception, it is available in my_exception.headers, at least for urllib2.Molding

Writing the request this way fixed my problem:

requests.get(link, headers={'User-Agent': 'your bot 0.1'})

This works because sites sometimes return a 429 (Too Many Requests) error when no user agent is provided. For example, Reddit's API only works when a user agent is set.
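
For context, a slightly fuller sketch assuming the requests library; the URL and the User-Agent string here are placeholders:

import requests

# Placeholder URL; the User-Agent string should describe your client.
response = requests.get(
    "http://example.com/page1",
    headers={"User-Agent": "my-scraper/0.1 (contact: you@example.com)"},
)
print(response.status_code)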

Danella answered 3/11, 2016 at 4:14 Comment(6)
This answer is downvoted, but some sites automatically return error code 429 if the user agent is banned due to abuse from other people. If you get error code 429 even if you've only sent a few requests, try setting the user agent to something else.Commination
Would also like to add, some sites plainly refuse requests unless a user-agent is sent, and you may get a myriad of other responses: 503 / 403 / some generic index page.Hoedown
Can confirm this. Just trying to interface python with reddit and without setting the user agent I was always getting error code 429.Clinician
can you add some explanation please ?Swaim
Where do you "write this piece of code"? This solution needs more details.Charbonnier
This code is using bs4.Giffy

As MRA said, you shouldn't try to dodge a 429 Too Many Requests but instead handle it accordingly. You have several options depending on your use-case:

1) Sleep your process. The server usually includes a Retry-after header in the response with the number of seconds you are supposed to wait before retrying. Keep in mind that sleeping a process might cause problems, e.g. in a task queue, where you should instead retry the task at a later time to free up the worker for other things.

2) Exponential backoff. If the server does not tell you how long to wait, you can retry your request using increasing pauses in between. The popular task queue Celery has this feature built right in.

3) Token bucket. This technique is useful if you know in advance how many requests you are able to make in a given time. Each time you access the API you first fetch a token from the bucket. The bucket is refilled at a constant rate. If the bucket is empty, you know you'll have to wait before hitting the API again. Token buckets are usually implemented on the other end (the API) but you can also use them as a proxy to avoid ever getting a 429 Too Many Requests. Celery's rate_limit feature uses a token bucket algorithm; a standalone client-side sketch is shown after the Celery example below.

Here is an example of a Python/Celery app using exponential backoff and rate-limiting/token bucket:

import requests
from celery import shared_task
from requests.exceptions import ConnectTimeout

class TooManyRequests(Exception):
    """Too many requests"""

# shared_task keeps the example self-contained; with an app instance,
# @app.task(...) takes the same options.
@shared_task(
    rate_limit='10/s',
    autoretry_for=(ConnectTimeout, TooManyRequests),
    retry_backoff=True)
def api(*args, **kwargs):
    # 'placeholder-external-api' stands in for the real endpoint URL.
    r = requests.get('placeholder-external-api')

    if r.status_code == 429:
        raise TooManyRequests()
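
For option 3, here is a minimal client-side token bucket sketch; the TokenBucket class is a hypothetical standalone helper (not part of Celery or requests) that simply blocks long enough to stay under a chosen request rate:

import time

class TokenBucket:
    """Client-side rate limiter: allow at most `rate` requests per second."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens added per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self):
        """Take one token, sleeping until one is available if necessary."""
        now = time.monotonic()
        # Refill in proportion to the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            # Wait until exactly one token has accumulated, then spend it.
            time.sleep((1 - self.tokens) / self.rate)
            self.last = time.monotonic()
            self.tokens = 0
        else:
            self.tokens -= 1

# bucket = TokenBucket(rate=10, capacity=10)   # roughly 10 requests per second
# bucket.acquire()                             # call before each API request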
Ichthyornis answered 11/5, 2018 at 14:26 Comment(0)

if response.status_code == 429:
  time.sleep(int(response.headers["Retry-After"]))
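
As a comment below notes, Retry-After may also be an HTTP date rather than a number of seconds. A sketch handling both forms (Python 3 standard library; retry_after_seconds is just a hypothetical helper name):

import time
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def retry_after_seconds(value):
    """Interpret a Retry-After value that is either seconds or an HTTP date."""
    try:
        return max(0, int(value))
    except ValueError:
        # e.g. "Wed, 21 Oct 2015 07:28:00 GMT"
        retry_at = parsedate_to_datetime(value)
        return max(0.0, (retry_at - datetime.now(timezone.utc)).total_seconds())

if response.status_code == 429:
    time.sleep(retry_after_seconds(response.headers["Retry-After"]))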
Gader answered 1/9, 2020 at 8:58 Comment(2)
Way, way too simple an implementation. The "Retry-After" could be a timestamp instead of a number of seconds. See developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-AfterBlackfish
This may be a simple example, but it points to the general shape of how to handle rate limiting: check for a 429 status, then use the info in the headers to respond. It was useful to me.Phial

Another workaround would be to spoof your IP using some sort of public VPN or the Tor network. This assumes the rate limiting on the server is at the IP level.

There is a brief blog post demonstrating a way to use tor along with urllib2:

http://blog.flip-edesign.com/?p=119
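
A rough requests-based equivalent (instead of urllib2), assuming a Tor daemon is running locally on its default SOCKS port 9050 and requests is installed with SOCKS support (pip install requests[socks]); the URL is a placeholder:

import requests

# Route traffic through the local Tor SOCKS proxy (default port 9050).
# 'socks5h' makes DNS resolution happen through the proxy as well.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

response = requests.get("http://example.com/page1", proxies=proxies)
print(response.status_code)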

Odellodella answered 1/4, 2014 at 13:8 Comment(1)
This is why I always require users of my APIs to register for a key to make requests. This way I can limit requests by key rather than by IP. Registering for another key would be the only way to get a higher limit.Emelda

I've found a nice workaround to IP blocking when scraping sites. It lets you run a scraper indefinitely by running it from Google App Engine and redeploying it automatically when you get a 429.

Check out this article

Alethiaaletta answered 7/11, 2020 at 12:3 Comment(3)
Haha wow... using Google to scrape Google. And then changing your Google IP when Google blocks it.Ge
Thanks -_- Google is now blocking Google for legitimate users. stackoverflow.com/questions/74237192Understrapper
There is an API to get information from Google services. This is much more convenient than parsing HTML most of the time. serpapi.comGiffy

In many cases, continuing to scrape data from a website even when the server is requesting you not to is unethical. However, in the cases where it isn't, you can utilize a list of public proxies in order to scrape a website with many different IP addresses.
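
A minimal sketch of that idea using the requests library; the proxy addresses and the helper name below are made-up placeholders:

import random
import requests

# Hypothetical list of public HTTP proxies (host:port).
PROXIES = [
    "203.0.113.10:8080",
    "203.0.113.11:3128",
    "203.0.113.12:8000",
]

def get_via_random_proxy(url):
    """Send the request through a randomly chosen proxy from the list."""
    proxy = random.choice(PROXIES)
    return requests.get(
        url,
        proxies={"http": "http://" + proxy, "https": "http://" + proxy},
        timeout=10,
    )

# response = get_via_random_proxy("http://example.com/page1")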

T answered 22/11, 2020 at 1:42 Comment(0)
