Parsing a web page in Python using Beautiful Soup

I'm having some trouble getting data from a website. The page source is here:

view-source:http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO

There's something like this:

INFORMACJE O FILMIE

Tytuł............................................: La mer à boire
Ocena.............................................: IMDB - 6.3/10 (24)
Produkcja.........................................: Francja
Gatunek...........................................: Dramat
Czas trwania......................................: 98 min.
Premiera..........................................: 22.02.2012 - Świat
Reżyseria........................................: Jacques Maillot
Scenariusz........................................: Pierre Chosson, Jacques Maillot
Aktorzy...........................................: Daniel Auteuil, Maud Wyler, Yann Trégouët, Alain Beigel

I want to extract this data from the website into a Python list of string pairs:

[[Tytuł, "La mer à boire"]
[Ocena, "IMDB - 6.3/10 (24)"]
[Produkcja, Francja]
[Gatunek, Dramat]
[Czas trwania, 98 min.]
[Premiera, "22.02.2012 - Świat"]
[Reżyseria, "Jacques Maillot"]
[Scenariusz, "Pierre Chosson, Jacques Maillot"]
[Aktorzy, "Daniel Auteuil, Maud Wyler, Yann Trégouët, Alain Beigel"]]

I wrote some code using BeautifulSoup, but I can't get any further. I just don't know how to get the rest of the data from the page source and how to convert it to strings. Please help!

My code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import urllib2
from bs4 import BeautifulSoup

try:
    web_page = urllib2.urlopen("http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO").read()
    soup = BeautifulSoup(web_page)
    c = soup.find('span', {'class': 'vi'}).contents
    print(c)
except urllib2.HTTPError:
    print("HTTPERROR!")
except urllib2.URLError:
    print("URLERROR!")
Yogh answered 27/6, 2012 at 20:48
HTML is structured: if you look at the source code of the page, you'll notice patterns (look for classes, or an h2 following a div, etc.), and then try to work out what logic you need to extract the data. If you still have problems writing the code, someone will be able to help. – Noncontributory
Good point :) I wrote something like this: c = soup.find('span', {'class':'vi'}).contents, but it finds only the first 'span' element. What about the rest of them? How do I get them out and convert them to string values? – Yogh
Have a look at soup.findAll. – Noncontributory

The secret of using BeautifulSoup is to find the hidden patterns of your HTML document. For example, your loop

for ul in soup.findAll('p'):
    print(ul)

is in the right direction, but it will return all paragraphs, not only the ones you are looking for. The paragraphs you are looking for, however, have the helpful property of carrying the class i. Inside these paragraphs one can find two spans, one with the class i and another with the class vi. We are lucky, because those spans contain the data you are looking for:

<p class="i">
    <span class="i">Tytuł............................................</span>
    <span class="vi">: La mer à boire</span>
</p>

So, first get all the paragraphs with the given class:

>>> ps = soup.findAll('p', {'class': 'i'})
>>> ps
[<p class="i"><span class="i">Tytuł... <LOTS OF STUFF> ...pan></p>]

Now, using list comprehensions, we can generate a list of pairs, where each pair contains the first and the second span from the paragraph:

>>> spans = [(p.find('span', {'class': 'i'}), p.find('span', {'class': 'vi'})) for p in ps]
>>> spans
[(<span class="i">Tyt... ...</span>, <span class="vi">: La mer à boire</span>), 
 (<span class="i">Ocena... ...</span>, <span class="vi">: IMDB - 6.3/10 (24)</span>),
 (<span class="i">Produkcja.. ...</span>, <span class="vi">: Francja</span>),
 # and so on
]

Now that we have the spans, we can get the texts from them:

>>> texts = [(span_i.text, span_vi.text) for span_i, span_vi in spans]
>>> texts
[(u'Tytu\u0142............................................', u': La mer \xe0 boire'),
 (u'Ocena.............................................', u': IMDB - 6.3/10 (24)'),
 (u'Produkcja.........................................', u': Francja'), 
  # and so on
]

Those texts are still not quite right, but they are easy to correct. To remove the dots from the first one, we can use rstrip():

>>> u'Produkcja.........................................'.rstrip('.')
u'Produkcja'

The leading ': ' can be removed with lstrip():

>>> u': Francja'.lstrip(': ')
u'Francja'
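
As a side note, lstrip() treats its argument as a set of characters rather than a literal prefix, so it keeps removing leading characters as long as they are ':' or ' ':

>>> u'::  :Francja'.lstrip(': ')
u'Francja'

That is harmless for this page; the final snippet below sidesteps the issue by using replace(': ', ''), which removes that exact substring instead.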

To apply it to all content, we just need another list comprehension:

>>> result = [(text_i.rstrip('.'), text_vi.replace(': ', '')) for text_i, text_vi in texts]
>>> result
[(u'Tytu\u0142', u'La mer \xe0 boire'),
 (u'Ocena', u'IMDB - 6.3/10 (24)'),
 (u'Produkcja', u'Francja'),
 (u'Gatunek', u'Dramat'),
 (u'Czas trwania', u'98 min.'),
 (u'Premiera', u'22.02.2012 - \u015awiat'),
 (u'Re\u017cyseria', u'Jacques Maillot'),
 (u'Scenariusz', u'Pierre Chosson, Jacques Maillot'),
 (u'Aktorzy', u'Daniel Auteuil, Maud Wyler, Yann Tr&eacute;gou&euml;t, Alain Beigel'),
 (u'Wi\u0119cej na', u':'),
 (u'Trailer', u':Obejrzyj zwiastun')]

And that is it. I hope this step-by-step example can make the use of BeautifulSoup clearer for you.
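
For reference, here is a minimal sketch that strings these steps together (the same urllib2/bs4 calls as above; the error handling from the question is left out for brevity):

# -*- coding: utf-8 -*-
import urllib2
from bs4 import BeautifulSoup

url = "http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO"
soup = BeautifulSoup(urllib2.urlopen(url).read())

# Pair each label span (class 'i') with its value span (class 'vi')
ps = soup.findAll('p', {'class': 'i'})
spans = [(p.find('span', {'class': 'i'}), p.find('span', {'class': 'vi'})) for p in ps]

# Extract the texts, then strip the dot leaders and ': ' prefixes
texts = [(span_i.text, span_vi.text) for span_i, span_vi in spans if span_i and span_vi]
result = [(text_i.rstrip('.'), text_vi.replace(': ', '')) for text_i, text_vi in texts]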

Policlinic answered 27/6, 2012 at 21:22
Geez, thank you so much for the explanation :) I will do some more exercises with this. I guess the problem is solved. Thank you all :) – Yogh

This will get you the list you want. You'll still have to write some code to strip the trailing '....'s and to convert the values to plain strings.

import urllib2
from bs4 import BeautifulSoup

try:
    web_page = urllib2.urlopen("http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO").read()
    soup = BeautifulSoup(web_page)
    LIST = []
    for p in soup.findAll('p'):
        s = p.find('span', {"class": 'i'})
        t = p.find('span', {"class": 'vi'})
        if s and t:
            p_list = [s.string, t.string]
            LIST.append(p_list)
except urllib2.HTTPError:
    print("HTTPERROR!")
except urllib2.URLError:
    print("URLERROR!")
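
As mentioned, the entries still carry the dot leaders and ': ' prefixes; a minimal cleanup sketch (assuming LIST was built as above) could be:

# Strip the trailing dots from labels and the leading ': ' from values
cleaned = [[s.rstrip('.'), t.lstrip(': ')] for s, t in LIST]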

Lynnett answered 27/6, 2012 at 21:15

Here is a cleaner version using the requests library:

import requests
from bs4 import BeautifulSoup

try:
    # Send an HTTP GET request to the URL
    url = "http://release24.pl/wpis/23714/%22La+mer+a+boire%22+%282011%29+FRENCH.DVDRip.XviD-AYMO"
    response = requests.get(url)

    # Check if the request was successful (status code 200)
    if response.status_code == 200:
        soup = BeautifulSoup(response.content, 'html.parser')

        # Find the span elements with class 'vi'
        vi_elements = soup.find_all('span', class_='vi')

        # Initialize a list to store the data
        data_list = []

        # Iterate through the 'vi' elements and extract the information
        for vi_element in vi_elements:
            # The label lives in the preceding span with class 'i'
            label_element = vi_element.find_previous('span', class_='i')
            if label_element is None:
                continue

            # Extract the label and value as clean strings
            label = label_element.get_text(strip=True).rstrip('.')
            value = vi_element.get_text(strip=True).lstrip(': ')

            # Append the label and value as a list to the data_list
            data_list.append([label, value])

        # Print the data_list
        for item in data_list:
            print(item)
    else:
        print('Failed to retrieve the webpage. Status code:', response.status_code)

except requests.exceptions.RequestException as e:
    print('Error:', e)

This code sends an HTTP GET request to the specified URL, parses the HTML content, finds the vi spans, pairs each with its preceding label span, and stores the pairs in data_list. Finally, it prints the data list, which should resemble the desired format.
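
Note that requests and BeautifulSoup 4 are third-party packages (requests being the usual modern replacement for urllib2); if they are not installed, something like pip install requests beautifulsoup4 should bring them in.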

Hector answered 27/9, 2023 at 1:32
This example applies the same requests + BeautifulSoup approach to a different site, collecting article links from habr.com search results and then downloading the text of each article:

import requests
from bs4 import BeautifulSoup

# Read the first results page to find out how many pages there are
page = requests.get('https://habr.com/ru/search/page1/?q=Ютуб&target_type=posts&order=relevance').text
page_soup = BeautifulSoup(page, 'html.parser')
count_pages = int(page_soup.find_all('div', 'tm-pagination__page-group')[-1].text.split()[0])

# Collect the article links from every results page
hrefs = []
for i in range(1, count_pages + 1):
    print(i)
    page = requests.get(f'https://habr.com/ru/search/page{i}/?q=Новости&target_type=posts&order=relevance').text
    page_s = BeautifulSoup(page, 'html.parser')
    links = page_s.find_all('article', 'tm-articles-list__item')
    for link in links:
        hrefs.append(f'https://habr.com/ru/news/{link["id"]}/')

# Download the body text of each article
texts = [''] * len(hrefs)
for ind, href in enumerate(hrefs):
    print(ind)
    pagex = requests.get(href).text
    page_su = BeautifulSoup(pagex, 'html.parser')
    try:
        text = page_su.find_all('div', 'article-formatted-body article-formatted-body article-formatted-body_version-1')[0].text
        texts[ind] = text
    except IndexError:
        # Skip articles whose body block is missing
        pass
Fecula answered 29/2 at 12:11
