How to save "complete webpage" not just basic html using Python

I am using the following code to save a webpage using Python:

import urllib   # Python 2; on Python 3 this is urllib.request.urlretrieve
import sys
from bs4 import BeautifulSoup

url = 'http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html'
f = urllib.urlretrieve(url, 'test.html')   # saves only the raw HTML document

Problem: This code saves the HTML as basic HTML only, without the JavaScript, images, etc. I want to save the webpage as a complete webpage (like the option we have in a browser).

Update: I am now using the following code to save all the JS/image/CSS files of the webpage so that it can be saved as a complete webpage, but my output HTML is still getting saved as basic HTML:

import pycurl
import StringIO  # Python 2; on Python 3 use io.BytesIO

c = pycurl.Curl()
c.setopt(pycurl.URL, "http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html")

b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)   # write the response body into the buffer
c.setopt(pycurl.FOLLOWLOCATION, 1)        # follow redirects
c.setopt(pycurl.MAXREDIRS, 5)
c.perform()
html = b.getvalue()
#print html
fh = open("file.html", "w")
fh.write(html)
fh.close()

Unessential answered 25/1, 2013 at 6:34
Then you would have to write code to parse the HTML, grab all of the linked resources, and download them individually, just like a browser does. – Slat
Can I do that using Beautiful Soup? – Unessential
Try Scrapy, an open-source, portable Python web scraping framework. – Scepter
How do I use it? I am very new to programming; I have some experience with Beautiful Soup. – Unessential
Similar: Is it possible to get the complete source code of a website, including CSS, by just providing the URL of the website? + Python – Uitlander
@AnneLagang I tried using PyCurl without success; please check out the updated code. – Unessential
Have you tried what @Slat said? In the link I provided, I gave all the steps that can help you get started. – Uitlander
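
For the "can I do that using Beautiful Soup?" question above, here is a minimal sketch of just the resource-enumeration part, assuming requests and BeautifulSoup are available (downloading the files and rewriting the references is what the answers below flesh out):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = 'http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html'
html = requests.get(url).text
soup = BeautifulSoup(html, 'html.parser')

# collect absolute URLs of the scripts, stylesheets and images the page links to
resources = []
for tag, attr in (('script', 'src'), ('link', 'href'), ('img', 'src')):
    for node in soup.find_all(tag):
        if node.has_attr(attr):
            resources.append(urljoin(url, node[attr]))
print(resources)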
23

Try emulating your browser with Selenium. This script will pop up the Save As dialog for the webpage. You will still have to figure out how to emulate pressing Enter to start the download, as the file dialog is out of Selenium's reach (how you do that is also OS-dependent).

from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

br = webdriver.Firefox()
br.get('http://www.google.com/')

# send Ctrl+S to the page to trigger the browser's Save As dialog
save_me = ActionChains(br).key_down(Keys.CONTROL)\
         .key_down('s').key_up(Keys.CONTROL).key_up('s')
save_me.perform()

Also, I think following @Amber's suggestion of grabbing the linked resources may be simpler, and thus a better solution. Still, I think using Selenium is a good starting point, as br.page_source will get you the entire DOM along with the dynamic content generated by JavaScript.
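
For reference, a minimal sketch of the page_source route, assuming Firefox with geckodriver is available and that a fixed wait is enough for the site's JavaScript to finish rendering (the URL, filename and wait time are placeholders):

import time
from selenium import webdriver

br = webdriver.Firefox()
br.get('http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html')
time.sleep(5)  # crude wait for JS-generated content; see the comment below about async downloads

# page_source is the rendered DOM, including JavaScript-generated markup
with open('page.html', 'w', encoding='utf-8') as fh:
    fh.write(br.page_source)

br.quit()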

Tripe answered 25/1, 2013 at 7:43
This code is giving me a "WindowsError: [Error 2] The system cannot find the file specified" error. – Unessential
@atams – On what line are you getting the error? I tried it out and it worked on my machine... – Tripe
I am getting the error on this line: br = webdriver.Firefox(). Is it because I am using a portable version of Firefox? – Unessential
How are you going to click the 'Save' button after Ctrl+S is triggered? – Bordereau
How do you press the Save key? – Street
@Street @Bordereau Just press the Enter key to save: save_me = ActionChains(driver).key_down(Keys.ENTER).key_up(Keys.ENTER); save_me.perform() – Ladykiller
And since most JS downloads are async, depending on your site, you may have to include a generous time.sleep(x) after your get() request before br.page_source will include the content you're seeking! – Likeness
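
Instead of a fixed time.sleep(x), you can also wait explicitly for a specific element to appear before reading the DOM. A minimal sketch, assuming br is the webdriver from the answer above and where the element ID is a placeholder for whatever marks the dynamic content on your page:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the JS-generated element to show up, then read the DOM
WebDriverWait(br, 10).until(
    EC.presence_of_element_located((By.ID, 'some-dynamic-element'))
)
html = br.page_source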
12

You can do that easily with the simple Python library pywebcopy.

For the current version, 5.0.1:

from pywebcopy import save_webpage

url = 'http://some-site.com/some-page.html'
download_folder = '/path/to/downloads/'    

kwargs = {'bypass_robots': True, 'project_name': 'recognisable-name'}

save_webpage(url, download_folder, **kwargs)

You will have the HTML, CSS and JS all in your download_folder, working just like the original site.

Mania answered 26/7, 2018 at 13:04
This library is really useful! Is there, however, a way to locate the HTML file of the webpage and launch it in a browser without manually searching for it? I need to download the complete webpage and then launch the page from the HTML file via a .py script. – Hermetic
Try the version 6 beta from GitHub; it automatically opens the HTML in the browser: github.com/rajatomar788/pywebcopy/tree/Beta?files=1 – Mania
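
If you need to stay on version 5, here is a minimal sketch of finding and opening the saved page with the standard library, assuming pywebcopy puts its output in a folder named after project_name inside the download folder (the paths below are placeholders):

import os
import glob
import webbrowser

download_folder = '/path/to/downloads/'   # same folder passed to save_webpage
project_name = 'recognisable-name'        # same project_name passed in kwargs

# find the first .html file anywhere under the project folder and open it locally
pattern = os.path.join(download_folder, project_name, '**', '*.html')
matches = glob.glob(pattern, recursive=True)
if matches:
    webbrowser.open('file://' + os.path.abspath(matches[0]))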
0

Try saveFullHtmlPage below, or adapt it.

It will save a modified *.html file and save the JavaScript, CSS and images referenced by the script, link and img tags (the tags_inner dict keys) in a _files folder.

import os, sys, re
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def saveFullHtmlPage(url, pagepath='page', session=requests.Session(), html=None):
    """Save web page html and supported contents        
        * pagepath : path-to-page   
        It will create a file  `'path-to-page'.html` and a folder `'path-to-page'_files`
    """
    def savenRename(soup, pagefolder, session, url, tag, inner):
        if not os.path.exists(pagefolder): # create only once
            os.mkdir(pagefolder)
        for res in soup.findAll(tag):   # images, css, etc..
            if res.has_attr(inner): # check inner tag (file object) MUST exists  
                try:
                    filename, ext = os.path.splitext(os.path.basename(res[inner])) # get name and extension
                    filename = re.sub(r'\W+', '', filename) + ext # clean special chars from name
                    fileurl = urljoin(url, res.get(inner))
                    filepath = os.path.join(pagefolder, filename)
                    # rename html ref so can move html and folder of files anywhere
                    res[inner] = os.path.join(os.path.basename(pagefolder), filename)
                    if not os.path.isfile(filepath): # was not downloaded
                        with open(filepath, 'wb') as file:
                            filebin = session.get(fileurl)
                            file.write(filebin.content)
                except Exception as exc:
                    print(exc, file=sys.stderr)
    if not html:
        html = session.get(url).text
    soup = BeautifulSoup(html, "html.parser")
    path, _ = os.path.splitext(pagepath)
    pagefolder = path+'_files' # page contents folder
    tags_inner = {'img': 'src', 'link': 'href', 'script': 'src'} # tag&inner tags to grab
    for tag, inner in tags_inner.items(): # saves resource files and rename refs
        savenRename(soup, pagefolder, session, url, tag, inner)
    with open(path+'.html', 'wb') as file: # saves modified html doc
        file.write(soup.prettify('utf-8'))

Example saving google.com as google.html with its contents in a google_files folder (in the current folder):

saveFullHtmlPage('https://www.google.com', 'google')

Neuroglia answered 23/9, 2022 at 23:51
