How do you Programmatically Download a Webpage in Java
Asked Answered
I

11

122

I would like to be able to fetch a web page's HTML and save it to a String, so I can do some processing on it. Also, how can I handle various types of compression?

How would I go about doing that using Java?

Imaginative answered 26/10, 2008 at 20:16 Comment(1)
This is basically a special case of #921762Alec
R
117

Here's some tested code using Java's URL class. I'd recommend doing a better job than I do here of handling the exceptions, or passing them up the call stack, though.

public static void main(String[] args) {
    URL url;
    InputStream is = null;
    BufferedReader br;
    String line;

    try {
        url = new URL("http://stackoverflow.com/");
        is = url.openStream();  // throws an IOException
        br = new BufferedReader(new InputStreamReader(is));

        while ((line = br.readLine()) != null) {
            System.out.println(line);
        }
    } catch (MalformedURLException mue) {
         mue.printStackTrace();
    } catch (IOException ioe) {
         ioe.printStackTrace();
    } finally {
        try {
            if (is != null) is.close();
        } catch (IOException ioe) {
            // nothing to see here
        }
    }
}
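As the comments note, printing each line loses the line terminators and collecting the page with repeated String concatenation is slow; a StringBuilder plus an explicit charset addresses both. This is a sketch, not part of the original answer; the ByteArrayInputStream in main stands in for url.openStream() so it runs without network access:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class PageToString {

    // Reads an entire stream into a String using the given charset.
    // StringBuilder avoids the cost of repeated String concatenation, and
    // passing the charset explicitly avoids the "strange characters" problem
    // mentioned in the comments when the platform default doesn't match.
    public static String readAll(InputStream in, Charset charset) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (Reader reader = new InputStreamReader(in, charset)) {
            char[] buf = new char[4096];
            int n;
            while ((n = reader.read(buf)) != -1) {
                sb.append(buf, 0, n);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // In real use you'd pass url.openStream() here; this fake stream
        // lets the example run offline.
        InputStream fake = new ByteArrayInputStream(
                "<html>héllo</html>".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(fake, StandardCharsets.UTF_8));
    }
}
```

In real code you'd also want to pick the charset from the response's Content-Type header rather than hard-coding it.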
Rod answered 26/10, 2008 at 21:9 Comment(7)
DataInputStream.readLine() is deprecated, but other than that very good example. I used an InputStreamReader() wrapped in a BufferedReader() to get the readLine() function.Garboard
This doesn't take character encoding into account, so while it'll appear to work for ASCII text, it will eventually result in 'strange characters' when there's a mismatch.Contiguous
In the 3rd line replace DataInputStream to BufferedReader. And replace "dis = new DataInputStream(new BufferedInputStream(is));" to "dis = new BufferedReader(new InputStreamReader(is));"Lamonicalamont
@akapelko Thank you. I updated my answer to remove the calls to deprecated methods.Rod
what about closing the InputStreamReader?Imbrication
if you need to get all lines together use StringBuilder append("line") method instead of System.out.println(line); - it will be the most efficient way to put together all linesApiece
This is not closing its socket.Guncotton
C
185

I'd use a decent HTML parser like Jsoup. It's then as easy as:

String html = Jsoup.connect("http://stackoverflow.com").get().html();

It handles GZIP and chunked responses and character encoding fully transparently. It offers more advantages as well, like HTML traversal and manipulation by CSS selectors, much as jQuery does. You only have to grab it as a Document, not as a String.

Document document = Jsoup.connect("http://google.com").get();

You really don't want to run basic String methods or even regex on HTML to process it.


Caulicle answered 31/12, 2010 at 17:57 Comment(3)
Good answer. A little late. ;)Imaginative
Why did noone tell me about .html() before. I looked so hard into how to easily store the html fetched by Jsoup and that helps a lot.Steerageway
for newcomers , if you use this library in android you need to use this in different thread because it runs by default on same application thread which will cause the application to throw NetworkOnMainThreadExceptionEsque
I
28

Bill's answer is very good, but you may want to do some things with the request, like handling compression or setting a user agent. The following code shows how to add support for various types of compression to your requests.

URL url = new URL(urlStr);
HttpURLConnection conn = (HttpURLConnection) url.openConnection(); // Cast shouldn't fail
HttpURLConnection.setFollowRedirects(true);
// allow both GZip and Deflate (ZLib) encodings
conn.setRequestProperty("Accept-Encoding", "gzip, deflate");
String encoding = conn.getContentEncoding();
InputStream inStr = null;

// create the appropriate stream wrapper based on
// the encoding type
if (encoding != null && encoding.equalsIgnoreCase("gzip")) {
    inStr = new GZIPInputStream(conn.getInputStream());
} else if (encoding != null && encoding.equalsIgnoreCase("deflate")) {
    inStr = new InflaterInputStream(conn.getInputStream(),
      new Inflater(true));
} else {
    inStr = conn.getInputStream();
}

To also set the user agent, add the following:

conn.setRequestProperty("User-agent", "my agent name");
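The stream-wrapping logic above can be pulled into a small helper so it's easy to test without a network connection. This is a sketch I'm adding for illustration (the wrap method name is mine, not from the original answer); the main method round-trips a gzip-compressed body to show the wrapper works:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

public class EncodingWrapper {

    // Chooses the decompressing wrapper based on the Content-Encoding
    // header value, mirroring the if/else chain in the answer above.
    public static InputStream wrap(InputStream raw, String encoding) throws IOException {
        if ("gzip".equalsIgnoreCase(encoding)) {
            return new GZIPInputStream(raw);
        } else if ("deflate".equalsIgnoreCase(encoding)) {
            // Inflater(true) means "raw" deflate, no zlib header
            return new InflaterInputStream(raw, new Inflater(true));
        }
        return raw; // identity encoding, or one we don't handle
    }

    public static void main(String[] args) throws IOException {
        // Simulate a gzip-compressed response body.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write("hello".getBytes("UTF-8"));
        }
        // Decompress it through the wrapper, reading until EOF.
        InputStream in = wrap(new ByteArrayInputStream(bos.toByteArray()), "gzip");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        System.out.println(out.toString("UTF-8"));
    }
}
```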
Imaginative answered 6/4, 2010 at 5:17 Comment(2)
For those looking to convert the InputStream to string, see this answer.Bilection
setFollowRedirects helps, I use setInstanceFollowRedirects in my case, I was getting empty web pages in many cases before using that. I assume that you try to use compression to download the file faster.Mordacious
L
13

Well, you could go with the built-in libraries such as URL and URLConnection, but they don't give very much control.

Personally I'd go with the Apache HTTPClient library.
Edit: HttpClient has been declared end-of-life by Apache. The replacement is Apache HttpComponents.

Legged answered 26/10, 2008 at 20:20 Comment(3)
There is no java version of System.Net.WebRequest?Upstanding
Sort of, that would be URL. :-) For example: new URL("google.com").openStream() // => InputStreamDrin
@Jonathan: What Daniel said, for the most part - although WebRequest gives you more control than URL. HTTPClient is closer in functionality, IMO.Legged
S
9

None of the approaches mentioned above downloads the web page text as it appears in the browser. These days a lot of data is loaded into the page by scripts running after the initial HTML arrives; the techniques above fetch only the raw HTML and do not execute scripts. HtmlUnit does support JavaScript, so if you want the page text as it looks in the browser, use HtmlUnit.

Subtenant answered 30/5, 2014 at 10:30 Comment(0)
K
2

You may need to extract HTML from a secure web page (HTTPS). In the following example, the HTML is saved into c:\temp\filename.html. Enjoy!

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;

import javax.net.ssl.HttpsURLConnection;

/**
 * <b>Get the Html source from the secure url </b>
 */
public class HttpsClientUtil {
    public static void main(String[] args) throws Exception {
        String httpsURL = "https://stackoverflow.com";
        String FILENAME = "c:\\temp\\filename.html";
        BufferedWriter bw = new BufferedWriter(new FileWriter(FILENAME));
        URL myurl = new URL(httpsURL);
        HttpsURLConnection con = (HttpsURLConnection) myurl.openConnection();
        con.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0");
        InputStream ins = con.getInputStream();
        // NB: choose the charset to match the page's actual encoding
        InputStreamReader isr = new InputStreamReader(ins, "Windows-1252");
        BufferedReader in = new BufferedReader(isr);
        String inputLine;

        // Write each line into the file (readLine() strips the line
        // terminator, so add it back explicitly)
        while ((inputLine = in.readLine()) != null) {
            System.out.println(inputLine);
            bw.write(inputLine);
            bw.newLine();
        }
        in.close();
        bw.close();
    }
}
Kimberli answered 27/10, 2018 at 17:55 Comment(0)
C
1

To do it with NIO.2's Files.copy(InputStream in, Path target):

URL url = new URL("http://download.me/");
Files.copy(url.openStream(), Paths.get("downloaded.html"));
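A self-contained sketch of the same idea, using a ByteArrayInputStream in place of url.openStream() so it runs offline (note that Files.copy throws FileAlreadyExistsException unless you pass REPLACE_EXISTING when the target file already exists, as a temp file does):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NioDownload {
    public static void main(String[] args) throws IOException {
        // In real use the stream would come from url.openStream().
        InputStream in = new ByteArrayInputStream(
                "<html>ok</html>".getBytes(StandardCharsets.UTF_8));

        // createTempFile creates the file, so REPLACE_EXISTING is required.
        Path target = Files.createTempFile("downloaded", ".html");
        Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(new String(Files.readAllBytes(target), StandardCharsets.UTF_8));
        Files.deleteIfExists(target);
    }
}
```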
Cerellia answered 15/6, 2020 at 19:23 Comment(0)
C
0

On a Unix/Linux box you could just run 'wget', but this is not really an option if you're writing a cross-platform client. Of course, this assumes you don't need to do much with the data between downloading it and its hitting the disk.

Convertible answered 26/10, 2008 at 20:43 Comment(1)
i would also start with this approach and refactor it later if insufficientMetamorphism
R
0

Jetty has an HTTP client which can be used to download a web page.

package com.zetcode;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class ReadWebPageEx5 {

    public static void main(String[] args) throws Exception {

        HttpClient client = null;

        try {

            client = new HttpClient();
            client.start();
            
            String url = "http://example.com";

            ContentResponse res = client.GET(url);

            System.out.println(res.getContentAsString());

        } finally {

            if (client != null) {

                client.stop();
            }
        }
    }
}

The example prints the contents of a simple web page.

In a Reading a web page in Java tutorial I have written six examples of downloading a web page programmatically in Java using URL, JSoup, HtmlCleaner, Apache HttpClient, Jetty HttpClient, and HtmlUnit.

Ruzich answered 18/8, 2016 at 16:42 Comment(0)
M
0

This class may help: it downloads a page's source code and filters out some information.

public class MainActivity extends AppCompatActivity {

    EditText url;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate( savedInstanceState );
        setContentView( R.layout.activity_main );

        url = ((EditText)findViewById( R.id.editText));
        DownloadCode obj = new DownloadCode();

        try {
            String tag1 = "<div class=\"description\">";
            String l = obj.execute( "http://www.nu.edu.pk/Campus/Chiniot-Faisalabad/Faculty" ).get();

            // Take the text between the opening tag and the next </div>;
            // split(tag1)[1] is the content *after* the tag
            String[] t1 = l.split(tag1);
            String[] t2 = t1[1].split( "</div>" );
            url.setText( t2[0] );
        }
        }
        catch (Exception e)
        {
            Toast.makeText( this,e.toString(),Toast.LENGTH_SHORT ).show();
        }

    }
                                        // input, extrafunctionrunparallel, output
    class DownloadCode extends AsyncTask<String,Void,String>
    {
        @Override
        protected String doInBackground(String... WebAddress) // web addresses, one per argument
        {
            // Accumulate in a StringBuilder; repeated String
            // concatenation in a loop would be quadratic
            StringBuilder htmlcontent = new StringBuilder();
            try {
                URL url = new URL( WebAddress[0] );
                HttpURLConnection c = (HttpURLConnection) url.openConnection();
                c.connect();
                InputStreamReader reader = new InputStreamReader( c.getInputStream() );

                int data;
                while ((data = reader.read()) != -1)
                {
                    htmlcontent.append((char) data);
                }
            }
            catch (Exception e)
            {
                Log.i("Status : ", e.toString());
            }
            return htmlcontent.toString();
        }
    }
}
Microhenry answered 16/12, 2017 at 17:23 Comment(0)
L
-1

I used the accepted answer to this post and wrote the output into a file.

package test;

import java.net.*;
import java.io.*;

public class PDFTest {
    public static void main(String[] args) throws Exception {
        try {
            URL oracle = new URL("http://www.fetagracollege.org");
            BufferedReader in = new BufferedReader(new InputStreamReader(oracle.openStream()));

            String fileName = "D:\\a_01\\output.txt";
            PrintWriter writer = new PrintWriter(fileName, "UTF-8");
            String inputLine;

            while ((inputLine = in.readLine()) != null) {
                System.out.println(inputLine);
                writer.println(inputLine);
            }
            in.close();
            writer.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Legpull answered 26/10, 2017 at 8:42 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.