urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=59587): Max retries exceeded with url using Selenium GeckoDriver Firefox

Late last night my code was working perfectly, but when I woke up today it no longer works, even though I didn't change a single line of code. I also checked whether Firefox had updated, and it hadn't. I have no idea what the cause might be; I've been reading the urllib3 documentation but couldn't find any useful information.

from asyncio.windows_events import NULL
from ctypes.wintypes import PINT
from logging import root
from socket import timeout
from string import whitespace
from tkinter import N
from turtle import color
from urllib.request import Request
from hyperlink import URL
from selenium import webdriver
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import presence_of_element_located
#from webdriver_manager.firefox import GeckoDriverManager
import time
from datetime import datetime
import telebot

#driver = webdriver.Firefox(service=Service(GeckoDriverManager().install()))

colors = NULL
api = "******"
url = "https://blaze.com/pt/games/double"
bot = telebot.TeleBot(api)

chat_id = "*****"

firefox_driver_path = "/Users/Antônio/Desktop/roletarobo/geckodriver.exe"
firefox_options = Options()
firefox_options.add_argument("--headless")
webdriver = webdriver.Firefox(
executable_path = firefox_driver_path,
options = firefox_options)

with webdriver as driver:

    driver.get(url)
    wait = WebDriverWait(driver, 25)

wait.until(presence_of_element_located((By.CSS_SELECTOR, "div#roulette.page.complete")))
time.sleep(2)

results = driver.find_elements(By.CSS_SELECTOR, "div#roulette-recent div.entry")
for quote in results:
      quote.text.split('\n')

data = [my_elem.text for my_elem in driver.find_elements(By.CSS_SELECTOR, "div#roulette-recent div.entry")][:8]

# convertElements method: converts list elements into the declared elements
def convertElements( oldlist, convert_dict ):
    newlist = []
    for e in oldlist:
      if e in convert_dict:
        newlist.append(convert_dict[e])
      else:
        newlist.append(e)
    return newlist
# end of method

colors = convertElements(data, {'':"white",'1':"red",'2':"red",'3':"red",'4':"red",'5':"red",'6':"red",'7':"red",'8':"black",'9':"black",'10':"black",'11':"black",'12':"black",'13':"black",'14':"black"})
print(colors)

It was working perfectly; I've been coding since Sunday and it had always worked. This is the traceback:

 File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\support\wait.py", line 78, in until
    value = method(self._driver)
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\support\expected_conditions.py", line 64, in _predicate
    return driver.find_element(*locator)
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 1248, in find_element      
    return self.execute(Command.FIND_ELEMENT, {
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 423, in execute
    response = self.command_executor.execute(driver_command, params)
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 333, in execute    
    return self._request(command_info[0], url, body=data)
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 355, in _request   
    resp = self._conn.request(method, url, body=body, headers=headers)        
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\request.py", line 78, in request
    return self.request_encode_body(
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\request.py", line 170, in request_encode_body
    return self.urlopen(method, url, **extra_kw)
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 813, in urlopen
    return self.urlopen(
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 785, in urlopen    retries = retries.increment(
  File "C:\Users\Antônio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\retry.py", line 592, in increment    raise MaxRetryError(_pool, url, error or ResponseError(cause))urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=59587): Max retries exceeded with url: /session/b38be2fe-6d92-464f-a096-c43183aef6a8/element (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000173145EF520>: Failed to establish a new connection: [WinError 10061] No connections could be made because the target machine actively refused them'))
Impact answered 15/4, 2022 at 15:26 Comment(1)
The last line states that the server rejected the connection. Have you tried actually accessing the site? It seems the Spanish government has shut down the gambling website in question... – Neoptolemus

This error message...

urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=59587): Max retries exceeded with url: /session/b38be2fe-6d92-464f-a096-c43183aef6a8/element (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000173145EF520>: Failed to establish a new connection: [WinError 10061] No connections could be made because the target machine actively refused them'))

...implies that GeckoDriver was unable to initiate/spawn a new Browsing Context, i.e. a session.


Root cause

The root cause of this error can be either of the following:

  • This error may surface if you have closed the Browsing Context manually with brute force while the driver had already initiated a lookout for an element/elements (see the minimal reproduction below).
  • There is a possibility that the application you are trying to access is throttling the requests from your system/machine/IP address/network.
  • There is also a possibility that the application has identified the Selenium-driven, GeckoDriver-initiated Browsing Context as a bot and is denying it any access.
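
For instance, the first cause above can be reproduced in a few lines: once quit() has been called, the geckodriver server process is no longer listening, so any further command is retried against localhost and eventually fails with this same MaxRetryError. A minimal sketch, assuming geckodriver is available on the PATH and using a placeholder URL:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()           # spawns geckodriver and a new session
    driver.get("https://example.com")      # placeholder URL
    driver.quit()                          # stops the geckodriver server process

    # Any command issued after quit() has no server to talk to, so urllib3
    # keeps retrying against localhost and raises MaxRetryError.
    driver.find_element(By.CSS_SELECTOR, "body")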

Solution

Ensure that:

  • To evade detection as a bot, pass the argument --disable-blink-features=AutomationControlled as follows:

    from selenium.webdriver.firefox.options import Options
    
    options = Options()
    options.add_argument('--disable-blink-features=AutomationControlled')
    
  • Always invoke driver.quit() within the tearDown() method to close and destroy the WebDriver and Web Client instances gracefully (a short example follows this list).

  • Induce WebDriverWait to synchronize the fast-moving WebDriver with the Browsing Context.
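
A minimal unittest-style sketch of the tearDown() point (the class name, URL, and test body are placeholders chosen for illustration); quitting in tearDown() guarantees the WebDriver and the geckodriver process are shut down gracefully even when a test fails:

    import unittest
    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options


    class DoubleGameTest(unittest.TestCase):

        def setUp(self):
            # Spawn a fresh Browsing Context for every test
            options = Options()
            options.add_argument("--headless")
            self.driver = webdriver.Firefox(options=options)

        def test_page_loads(self):
            self.driver.get("https://example.com")  # placeholder URL
            self.assertIn("Example", self.driver.title)

        def tearDown(self):
            # Always runs, even if the test body raised, so the WebDriver
            # and Web Client instances are closed and destroyed gracefully
            self.driver.quit()


    if __name__ == "__main__":
        unittest.main()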

Borne answered 16/4, 2022 at 23:33 Comment(2)
What do you mean by the tearDown function? Could you give an example? – Whiteman
A Python example of "Always invoke driver.quit() within the tearDown() method to close & destroy the WebDriver and Web Client instances gracefully" would be very nice. – Salmonella

I got the same issue while building a flow where I needed to restart my driver if it failed for some reason (while locating an HTML element), so I simply re-declared the driver configuration (as shown below):

from selenium import webdriver
import time

# service, options and run_script are assumed to be defined earlier in the script
try:

    ## driver config
    driver = webdriver.Chrome(
        service=service,
        options=options
    )

    run_script(driver)

except:

    # if the driver fails to load an element, then quit the driver
    driver.quit()

    print("\nScrapper stopped, launching again in 4 seconds...")
    time.sleep(4)

    ## driver config
    driver = webdriver.Chrome(
        service=service, 
        options=options
    )
    time.sleep(3)

    run_script(driver)
Delvalle answered 12/4, 2023 at 10:0 Comment(0)

In my case, I was scraping multiple web pages by passing the URLs in a dictionary, and I hit the same error. What resolved it for me was simply not quitting the driver between iterations and waiting for the next iteration to start.
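
A rough sketch of that pattern, with placeholder URLs and a placeholder selector: the same driver session is reused for every iteration, and quit() is only called once at the very end.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # placeholder URLs standing in for the dictionary of pages to scrape
    urls = {
        "first": "https://example.com",
        "second": "https://example.org",
    }

    driver = webdriver.Firefox()
    try:
        for name, url in urls.items():
            driver.get(url)
            # reuse the same session instead of quitting inside the loop,
            # which would stop geckodriver and trigger MaxRetryError
            print(name, driver.find_element(By.TAG_NAME, "h1").text)
    finally:
        driver.quit()  # quit once, after all iterations are done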

Smilax answered 19/12, 2023 at 6:54 Comment(0)

I encountered the same error yesterday. Try calling quit() on the driver and then re-creating it. That worked for me.
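
A bare-bones sketch of that, assuming geckodriver is available on the PATH and using a placeholder URL:

    from selenium import webdriver

    driver = webdriver.Firefox()
    # ... the session gets into a bad state ...
    driver.quit()                       # shut the old geckodriver down cleanly

    driver = webdriver.Firefox()        # re-create a fresh driver and session
    driver.get("https://example.com")   # placeholder URL; continue with the new session
    driver.quit()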

Firstly answered 24/7, 2023 at 13:30 Comment(0)

I got this error when I accidentally called driver.quit() and then used the same driver to do the scraping (the quit method closes the driver, so the error means you have closed the driver and then tried to perform an action with it).

Example of the error:

driver = get_driver()
driver.quit()
rows_urls = WebDriverWait(driver, 5).until(
        EC.presence_of_all_elements_located(
             (By.XPATH, '//div[@class="song-blocks"]//ul//li/a'))
    )

urllib3.exceptions.MaxRetryError

urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12): Max retries exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000212A7AB1AC0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))


The solution is to scrape first and only then close the driver:

import sys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    driver = get_driver()  # helper defined elsewhere that returns a WebDriver instance
    rows_urls = WebDriverWait(driver, 5).until(
            EC.presence_of_all_elements_located(
                 (By.XPATH, '//div[@class="song-blocks"]//ul//li/a'))
        )
except:
    print(sys.exc_info())
finally:
    driver.quit()
Involved answered 12/4 at 21:50 Comment(0)
