Download a working local copy of a webpage [closed]

I would like to download a local copy of a web page and get all of the css, images, javascript, etc.

In previous discussions (e.g. here and here, both of which are more than two years old), two suggestions are generally put forward: wget -p and httrack. However, these suggestions both fail. I would very much appreciate help with using either of these tools to accomplish the task; alternatives are also lovely.


Option 1: wget -p

wget -p successfully downloads all of the web page's prerequisites (css, images, js). However, when I load the local copy in a web browser, the page is unable to load the prerequisites because the paths to those prerequisites haven't been modified from the version on the web.

For example:

  • In the page's html, <link rel="stylesheet" href="/stylesheets/foo.css" /> will need to be corrected to point to the new relative path of foo.css
  • In the css file, background-image: url(/images/bar.png) will similarly need to be adjusted.

Is there a way to modify wget -p so that the paths are correct?
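Absent such an option, the paths could be patched by hand after the download; a rough sed sketch (the file locations are assumptions based on wget's default host-named directory layout and the example paths above):

```shell
# Rewrite the absolute paths from the examples above to relative ones.
# Assumes `wget -p` saved everything under ./www.example.com/ and the
# page links its CSS from /stylesheets/ and images from /images/.
sed -i 's|href="/stylesheets/|href="stylesheets/|g' www.example.com/index.html
sed -i 's|url(/images/|url(../images/|g' www.example.com/stylesheets/foo.css
```

Obviously this is brittle (every path pattern needs its own rule), which is why a built-in option would be preferable.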


Option 2: httrack

httrack seems like a great tool for mirroring entire websites, but it's unclear to me how to use it to create a local copy of a single page. There is a great deal of discussion in the httrack forums about this topic (e.g. here) but no one seems to have a bullet-proof solution.
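For the record, the closest httrack invocation I have found for a single page is capping the mirror depth; a sketch based on my reading of the httrack man page, not a verified bullet-proof recipe:

```shell
# Mirror a single page plus its prerequisites:
#   -O  output directory for the mirror
#   -r1 limit recursion depth to 1 (just this page, no crawling)
#   -n  "near" mode: also fetch non-HTML files referenced by the
#       downloaded page (images, CSS, JS)
httrack "http://www.example.com/page.html" -O ./page-copy -r1 -n
```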


Option 3: another tool?

Some people have suggested paid tools, but I just can't believe there isn't a free solution out there.

Vickery answered 14/6, 2011 at 18:32 Comment(5)
If the answer doesn't work, try: wget -E -H -k -K -p http://example.com - only this worked for me. Credit: superuser.com/a/136335/94039 – Delatorre
There is also software to do that, Teleport Pro. – Larimer
wget --random-wait -r -p -e robots=off -U mozilla http://www.example.com – Uzzi
Possible duplicate of download webpage and dependencies, including css images. – Bolitho
Despite being closed, this question (203K views to date) has explicit requirements beyond the other proposed and linked solutions. – Travelled

wget is capable of doing what you are asking. Just try the following:

wget -p -k http://www.example.com/

The -p flag will get you all the required elements to view the site correctly (CSS, images, etc.). The -k flag will convert all links (including those for CSS and images) to allow you to view the page offline as it appeared online.

From the Wget docs:

‘-k’
‘--convert-links’
After the download is complete, convert the links in the document to make them
suitable for local viewing. This affects not only the visible hyperlinks, but
any part of the document that links to external content, such as embedded images,
links to style sheets, hyperlinks to non-html content, etc.

Each link will be changed in one of the two ways:

    The links to files that have been downloaded by Wget will be changed to refer
    to the file they point to as a relative link.

    Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also
    downloaded, then the link in doc.html will be modified to point to
    ‘../bar/img.gif’. This kind of transformation works reliably for arbitrary
    combinations of directories.

    The links to files that have not been downloaded by Wget will be changed to
    include host name and absolute path of the location they point to.

    Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to
    ../bar/img.gif), then the link in doc.html will be modified to point to
    http://hostname/bar/img.gif. 

Because of this, local browsing works reliably: if a linked file was downloaded,
the link will refer to its local name; if it was not downloaded, the link will
refer to its full Internet address rather than presenting a broken link. The fact
that the former links are converted to relative links ensures that you can move
the downloaded hierarchy to another directory.

Note that only at the end of the download can Wget know which links have been
downloaded. Because of that, the work done by ‘-k’ will be performed at the end
of all the downloads. 
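If -p -k alone still leaves resources behind (for example CSS or images hosted on other domains, or files blocked by robots.txt), the extra flags people suggest can be combined; a sketch, not a guaranteed recipe:

```shell
# A more aggressive single-page grab:
#   -p  download page requisites (CSS, images, JS)
#   -k  convert links for local viewing
#   -E  save HTML files with an .html extension
#   -H  span hosts, so CDN-hosted CSS/images are fetched too
#   -K  keep the original file alongside the converted one (.orig)
#   -e robots=off  ignore robots.txt exclusions
wget -p -k -E -H -K -e robots=off http://www.example.com/
```

Note that -H without a -D domain whitelist can pull in far more than you expect, so use it with care.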
Poole answered 28/6, 2011 at 16:54 Comment(13)
I tried this, but somehow internal links such as index.html#link-to-element-on-same-page stopped working. – Plumber
For the whole site: snipplr.com/view/23838/downloading-an-entire-web-site-with-wget – Cyclist
Some servers will respond with a 403 code if you use wget without a user agent; you can add -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4' – Antakiya
Doesn't this work if the website's menus/links are implemented using JavaScript? – Joost
Doesn't get background images for me. – Cementite
If you're finding you're still missing images etc., then try adding this: -e robots=off ..... wget actually reads and respects robots.txt - this really made it hard for me to figure out why nothing worked! – Johannajohannah
To get resources from foreign hosts use -H, --span-hosts – Manchester
I made this for the same purpose: webjay.github.io/webtography It uses Wget and pushes the site to a repository on your GitHub account. – Numerable
Hey, what if the page I want to replicate requires a session to work? For example, if it's behind a login? – Jann
@SabaAhang To fetch pages that are behind a login page, replace user and password with the actual form fields; the URL should point to the form's submit (action) page: wget --cookies=on --save-cookies cookies.txt --keep-session-cookies --post-data 'user=labnol&password=123' http://example.com/login.php then wget --cookies=on --load-cookies cookies.txt --keep-session-cookies http://example.com/paywall Full details here – Guardianship
I have tried this to save a reddit page and it doesn't seem to save any of the images, graphics or css files on the webpage. – Whitebait
No parameters work for me at all; plain wget ... url goes through, but adding any parameter throws an error, and I can't download the full sources. – Kentkenta
The documentation (gnu.org/software/wget/manual/wget.html#Examples) gives, e.g.: wget -m -k -K -E gnu.org -o /home/me/weeklog and it worked for me with api.jquery.com – Dismember