Without the use of any external library, what is the simplest way to fetch a website's HTML content into a String?
I'm currently using this:
String content = null;
URLConnection connection = null;
try {
    connection = new URL("http://www.google.com").openConnection();
    Scanner scanner = new Scanner(connection.getInputStream());
    scanner.useDelimiter("\\Z");
    content = scanner.next();
    scanner.close();
} catch (Exception ex) {
    ex.printStackTrace();
}
System.out.println(content);
But not sure if there's a better way.
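For comparison, here is a small sketch of the same Scanner idea that uses the "\\A" delimiter (so next() returns the whole stream as a single token) and names the charset explicitly; this assumes the page is served as UTF-8:

String content = null;
try {
    URLConnection connection = new URL("http://www.google.com").openConnection();
    // "\\A" matches the beginning of input, so the single token is the entire stream
    Scanner scanner = new Scanner(connection.getInputStream(), "UTF-8");
    scanner.useDelimiter("\\A");
    content = scanner.hasNext() ? scanner.next() : "";
    scanner.close();
} catch (Exception ex) {
    ex.printStackTrace();
}
System.out.println(content);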
This has worked well for me:
URL url = new URL(theURL);
InputStream is = url.openStream();
int ptr = 0;
StringBuffer buffer = new StringBuffer();
while ((ptr = is.read()) != -1) {
    buffer.append((char) ptr);
}
Not sure as to whether the other solution(s) provided are any more efficient or not.
After the while loop, you should display the buffer's content too, or write a method where you read it! – Raseda
Close the InputStream. – Brancusi
Why ptr? – Shiah
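Taking those comments together, a minimal sketch of the same loop that closes the stream and actually prints the buffer might look like this (reading byte by byte and casting to char still ignores the page's character encoding):

URL url = new URL(theURL);
InputStream is = url.openStream();
StringBuilder buffer = new StringBuilder();
int ch;
while ((ch = is.read()) != -1) {
    buffer.append((char) ch);   // one byte at a time; fine for ASCII pages
}
is.close();                     // close the stream, as suggested above
System.out.println(buffer.toString());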
I just left this post in your other thread, though what you have above might work as well. I don't think either would be any easier than the other. The Apache packages can be accessed by just adding import org.apache.commons.httpclient.HttpClient at the top of your code.
Edit: Forgot the link ;)
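As a rough sketch of how that could look with the old Commons HttpClient 3.x API (this assumes the commons-httpclient jar and its dependencies are on the classpath; newer Apache HttpComponents versions use different package and class names):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class FetchPage {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        GetMethod get = new GetMethod("http://www.google.com");
        try {
            client.executeMethod(get);                       // send the GET request
            String content = get.getResponseBodyAsString();  // response body as a String
            System.out.println(content);
        } finally {
            get.releaseConnection();                         // return the connection to the pool
        }
    }
}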
Whilst not vanilla-Java, I'll offer up a simpler solution. Use Groovy ;-)
String siteContent = new URL("http://www.google.com").text
try {
    // Open a connection to the page and print it byte by byte
    URL u = new URL("https://www.Samsung.com/in/");
    URLConnection urlconnect = u.openConnection();
    InputStream stream = urlconnect.getInputStream();
    int i;
    while ((i = stream.read()) != -1) {
        System.out.print((char) i);
    }
    stream.close();
} catch (Exception e) {
    System.out.println(e);
}
It's not a library but a tool named curl, which is generally installed on most servers; on Ubuntu you can easily install it with
sudo apt install curl
Then fetch any HTML page and store it in a local file, for example:
curl https://www.facebook.com/ > fb.html
You will get the home page HTML. You can open the saved file in your browser as well.
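If you want the result back inside a Java String rather than in a file, one option is to run curl as an external process and read its standard output; a sketch, assuming curl is installed and on the PATH:

import java.util.Scanner;

public class CurlFetch {
    public static void main(String[] args) throws Exception {
        // -s makes curl silent so only the page body is written to stdout
        Process p = new ProcessBuilder("curl", "-s", "https://www.facebook.com/").start();
        // Read the whole of curl's stdout into one String (assuming UTF-8 content)
        String html = new Scanner(p.getInputStream(), "UTF-8").useDelimiter("\\A").next();
        p.waitFor();
        System.out.println(html);
    }
}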