Simulate infinite scrolling in C# to get the full HTML of a page

There are lots of sites that use this (imo) annoying "infinite scrolling" style. Examples are sites like Tumblr, Twitter, 9GAG, etc.

I recently tried to scrape some pics off of these sites programmatically with HtmlAgilityPack, like this:

using System.Linq;
using HtmlAgilityPack;

HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load(url);
// Note: SelectNodes returns null when nothing matches, so guard this in real code.
var primary = doc.DocumentNode.SelectNodes("//img[@class='badge-item-img']");
var picstring = primary.Select(r => r.GetAttributeValue("src", null)).FirstOrDefault();

This works fine, but when I tried to load in the HTML from certain sites, I noticed that I only got back a small amount of content (let's say the first 10 "posts" or "pictures", or whatever). This made me wonder if it would be possible to simulate "scrolling down to the bottom" of the page in C#.

This isn't just the case when I load the HTML programmatically. When I simply go to sites like Tumblr and check Firebug or just "view source", I expected all the content to be in there somewhere, but a lot of it seems to be hidden/inserted with JavaScript. Only the content that is actually visible on my screen is present in the HTML source.

So my question is: is it possible to simulate infinitely scrolling down a page and load in the resulting HTML with C# (preferably)?

(I know that I can use APIs for Tumblr and Twitter, but I'm just trying to have some fun hacking stuff together with HtmlAgilityPack.)

Ajar answered 24/7, 2013 at 18:46

There is no way to reliably do this for all such websites in one shot, short of embedding a web browser (which typically won't work in headless environments).

What you should consider doing instead is looking at the site's JavaScript to see which AJAX requests are used to fetch content as the user scrolls down.

Alternatively, use a web debugger in your browser (such as the one included in Chrome). These debuggers usually have a "network" pane you can use to inspect AJAX requests performed by the page. Looking at these requests as you scroll down should give you enough information to write C# code that simulates those requests.
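
Here is a minimal sketch of what replaying those requests can look like, assuming a hypothetical paged JSON endpoint. The URL and the offset/limit parameters are placeholders; substitute whatever the site's own scroll requests actually use:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ScrollSimulator
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Some endpoints reject requests that don't look like a browser.
        client.DefaultRequestHeaders.UserAgent.ParseAdd("Mozilla/5.0");

        // Each iteration stands in for one "scroll to the bottom".
        for (int offset = 0; offset < 50; offset += 10)
        {
            string json = await client.GetStringAsync(
                $"https://example.com/api/posts?offset={offset}&limit=10");
            Console.WriteLine(json); // in real code, parse this instead of printing it
        }
    }
}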

You will then have to parse the response from those requests as whatever type of content that particular API delivers, which will probably be JSON or XML, but almost certainly not HTML. (This may be better for you anyway, since it will save you having to parse out display-oriented HTML, whereas the AJAX API will give you data objects that should be much easier to use.)
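
As a rough example of that parsing step, here is one way to pull image URLs out of such a JSON response with System.Text.Json. The "posts" array and "image_url" property are made-up names; match them to the payload you actually see in the debugger:

using System;
using System.Text.Json;

class JsonParsingExample
{
    static void Main()
    {
        // Stand-in for a response body captured from the site's AJAX endpoint.
        const string json = @"{""posts"":[{""image_url"":""https://example.com/a.jpg""},{""image_url"":""https://example.com/b.jpg""}]}";

        using var doc = JsonDocument.Parse(json);
        foreach (var post in doc.RootElement.GetProperty("posts").EnumerateArray())
        {
            Console.WriteLine(post.GetProperty("image_url").GetString());
        }
    }
}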

Timmi answered 24/7, 2013 at 18:49

Those sites make asynchronous HTTP requests to load the subsequent page contents. Since the Html Agility Pack doesn't have a JavaScript interpreter (thank heavens for that), you will need to make those requests yourself. Most sites will most likely return not HTML fragments but JSON, and for that you'll need a JSON parser, not the Html Agility Pack.
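
If a particular site does answer with HTML fragments instead of JSON, you can still hand those to the Html Agility Pack via HtmlDocument.LoadHtml, which parses a string directly. A sketch, with a placeholder endpoint:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using HtmlAgilityPack;

class FragmentExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Hypothetical endpoint that returns the next "page" of markup.
        string fragment = await client.GetStringAsync("https://example.com/posts?page=2");

        var doc = new HtmlDocument();
        doc.LoadHtml(fragment); // parse the fragment like any other document

        var imgs = doc.DocumentNode.SelectNodes("//img[@class='badge-item-img']");
        if (imgs != null) // SelectNodes returns null when nothing matches
            foreach (var img in imgs)
                Console.WriteLine(img.GetAttributeValue("src", null));
    }
}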

Gauzy answered 24/7, 2013 at 18:50
