ResultSet object has no attribute 'find_all'
I always hit this problem when scraping a web page:

AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?

Can anyone tell me how to solve this? My code is below:

import requests  
r = requests.get('https://www.example.com')
from bs4 import BeautifulSoup  
soup = BeautifulSoup(r.text, 'html.parser')  
results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})
records = []  
for result in results:  
    name = results.find('div', attrs={'class':'name'}).text 
    price = results.find('div', attrs={'class':'price'}).text[13:-11]
    records.append((name, price,))

I also want to ask a closely related question. If I want to scrape multiple pages with the URL pattern below, I use the following code, but it still scrapes the first page only. Can you solve this issue?

import requests  
for i in range(100):   
    url = "https://www.example.com/a/a_{}.format(i)"
    r = requests.get(url)
from bs4 import BeautifulSoup  
soup = BeautifulSoup(r.text, 'html.parser')  
results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})
Jacquetta answered 27/2, 2018 at 6:46 Comment(1)
Does this answer your question? Beautiful Soup: 'ResultSet' object has no attribute 'find_all'?Catherin
Try this. You mixed up results with result:

import requests  
r = requests.get('https://www.example.com')
from bs4 import BeautifulSoup  
soup = BeautifulSoup(r.text, 'html.parser')  
results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})
records = []  
for result in results:  
    name = result.find('div', attrs={'class':'name'}).text # result not results
    price = result.find('div', attrs={'class':'price'}).text[13:-11]
    records.append((name, price,))
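For the follow-up about looping over multiple pages: the request, parsing, and extraction all have to happen inside the loop, otherwise only the last response ever gets parsed. Also, `.format(i)` must be called on the string, not written inside the quotes. A sketch, assuming the pages really are `example.com/a/a_1` through `a_100`:

```python
import requests
from bs4 import BeautifulSoup

records = []
for i in range(1, 101):  # pages a_1 .. a_100
    # call .format() on the string; putting it inside the quotes
    # produces the literal text "a_{}.format(i)"
    url = "https://www.example.com/a/a_{}".format(i)
    r = requests.get(url)
    # parse each page inside the loop, or only the final page is used
    soup = BeautifulSoup(r.text, 'html.parser')
    results = soup.find_all('div', attrs={'class': 'product-item item-template-0 alternative'})
    for result in results:
        name = result.find('div', attrs={'class': 'name'}).text
        price = result.find('div', attrs={'class': 'price'}).text[13:-11]
        records.append((name, price))
```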
Grazia answered 27/2, 2018 at 7:1 Comment(7)
Thanks, it works; I just changed results to result.Jacquetta
I want to ask a close question.I want to scrap multiple pages.example.com/a/a_1 example.com/a/a_2 example.com/a/a_3 -------.I use the code as below,but still scrap the first page only Can you solve this issue.import requests for i in range(100): url = "example.com/a/a_{}.format(i)" r = requests.get(url) from bs4 import BeautifulSoup soup = BeautifulSoup(r.text, 'html.parser') results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})Jacquetta
You should post another question for this. Why are you looping 100 times?Grazia
because it has 100 pagesJacquetta
Oh, I see. Instead of looping 100 times, loop through the list of pages (and index it if necessary).Grazia
Can you show me the code for reference. high appreciated!Jacquetta
can you help answer my question?#49106272Jacquetta
Try this: remove the 's' from results, specifically in the line that assigns name.

Your erroneous line: "name = results.find('div', attrs={'class':'name'}).text"

With one change: "name = result.find('div', attrs={'class':'name'}).text"

Well, nice try!

import requests  
r = requests.get('https://www.example.com')
from bs4 import BeautifulSoup  
soup = BeautifulSoup(r.text, 'html.parser')  
results = soup.find_all('div', attrs={'class':'product-item item-template-0 alternative'})
records = []  
for result in results:  
    name = result.find('div', attrs={'class':'name'}).text 
    price = result.find('div', attrs={'class':'price'}).text[13:-11]
    records.append((name, price,))
Kisner answered 4/1, 2022 at 14:22 Comment(0)
