Scraping data to Google Sheets from a website that uses JavaScript

I am trying to import data from the following website to Google Sheets. I want to import all the matches for the day.

https://www.tournamentsoftware.com/tournament/b731fdcd-a0c8-4558-9344-2a14c267ee8b/Matches

I have tried importxml and importhtml, but it seems this does not work as the website uses JavaScript. I have also tried to use Apipheny without any success.

When using Apipheny, the error message is

'Failed to fetch data - please verify your API Request: {DNS error'

Rue answered 28/10, 2022 at 15:35 Comment(2)
Specify what data you need. Each line contains a lot of data, for example: BS U13 Semi final : [W] Uvin Gunawardena [5] - Pubudu Sathsara Melon [6] : 15-13 15-4Clothilde
This question is being discussed in Canonical question for importing content from dynamic websites using IMPORTHTML / IMPORTXML Google Sheets functionsPhosphatase

TL;DR

Adapted from my answer to How to know if Google Sheets IMPORTDATA, IMPORTFEED, IMPORTHTML or IMPORTXML functions are able to get data from a resource hosted on a website? (also posted by me)

Please spend some time learning how to use your browser's developer tools so you will be able to identify

  • whether the data is already included in the source code of the webpage as JSON / a literal JavaScript object or in some other form
  • whether the webpage is doing GET or POST requests to retrieve the data, and when those requests are done (i.e. at some point during page parsing, or on an event)
  • whether the requests require data from cookies

Brief guide about how to use the web browser to find useful details about the webpage / data to import

  1. Open the source code and check whether the required data is included. Sometimes the data is included as JSON and added to the DOM using JavaScript. In this case it might be possible to retrieve the data by using the Google Sheets functions or the URL Fetch Service from Google Apps Script (see the sketch after this list).
  2. Let's say that you use Chrome. Open the Dev Tools, then look at the Elements tab. There you will see the DOM. It might be helpful to check whether the data that you want to import, besides appearing in visible elements, is included in hidden / not visible elements like <script> tags.
  3. Look at Sources; there you might be able to see the JavaScript code. It might include the data that you want to import as a JavaScript object (commonly referred to as JSON).
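
For case 1, below is a minimal Google Apps Script sketch of the idea, assuming the page embeds its data as a JSON literal inside a <script> tag. The variable name matchData, the regular expression, and the field names are assumptions for illustration only, not something taken from tournamentsoftware.com.

    function importEmbeddedJson() {
      var url = 'https://www.tournamentsoftware.com/tournament/b731fdcd-a0c8-4558-9344-2a14c267ee8b/Matches';
      var html = UrlFetchApp.fetch(url, { muteHttpExceptions: true }).getContentText();

      // Look for a JSON literal assigned in a script tag (the pattern and the
      // variable name "matchData" are assumptions for illustration).
      var found = html.match(/var\s+matchData\s*=\s*(\{[\s\S]*?\});/);
      if (!found) {
        throw new Error('No embedded JSON found; the data is probably loaded by a separate request.');
      }
      var data = JSON.parse(found[1]);

      // Write one row per match to the active sheet (field names are assumptions).
      var rows = data.matches.map(function (m) {
        return [m.event, m.round, m.player1, m.player2, m.score];
      });
      SpreadsheetApp.getActiveSheet()
        .getRange(1, 1, rows.length, rows[0].length)
        .setValues(rows);
    }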

There are a lot of questions that mention problems using importhtml and/or importxml and already have answers, and many of them even include code (JavaScript snippets, Google Apps Script functions, etc.) that might save you from having to use a specialized web-scraping tool with a steeper learning curve.

The question Is there a way to get a single response from a text/event-stream without using event listeners? asks about using EventSource. While this can't be used in server-side code, the answer shows how to use the HtmlService to use it in client-side code and retrieve the result to Google Sheets.
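
A rough sketch of that idea (not the linked answer's exact code) is shown below; the stream URL is a placeholder and the dialog assumes the spreadsheet UI is available.

    function openStreamDialog() {
      // Client-side page that listens for a single server-sent event and hands
      // it back to the server side. The stream URL below is a placeholder.
      var page =
        '<script>\n' +
        'var es = new EventSource("https://example.com/stream");\n' +
        'es.onmessage = function (e) {\n' +
        '  es.close();\n' +
        '  google.script.run.saveEvent(e.data);\n' +
        '};\n' +
        '<\/script>';
      SpreadsheetApp.getUi().showModalDialog(HtmlService.createHtmlOutput(page), 'Listening');
    }

    // Server-side receiver: appends the received payload to the active sheet.
    function saveEvent(data) {
      SpreadsheetApp.getActiveSheet().appendRow([data]);
    }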


As you already realized, the Google Sheets built-in functions importhtml(), importxml(), importdata() and importfeed() only work with static pages that do not require signing in or other forms of authentication.

When the content of a public page is created dynamically by using JavaScript, it cannot be accessed with those functions; on the other hand, the website's webmaster may also have purposefully prevented web scraping.


How to identify if content is added dynamically

To check if the content is added dynamically, using Chrome,

  1. Open the URL of the source data.
  2. Press F12 to open Chrome Developer Tools
  3. Press Control+Shift+P to open the Command Menu.
  4. Start typing javascript, select Disable JavaScript, and then press Enter to run the command. JavaScript is now disabled.

JavaScript will remain disabled in this tab so long as you have DevTools open.

Reload the page to see if the content that you want to import is shown. If it is, it can be imported by using the Google Sheets built-in functions; otherwise it's not possible with them, but it might still be possible by other means of web scraping.
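
If the content turns out to be dynamic, one of those other means is to look in the DevTools Network tab for the request that actually returns the data and call it directly from Google Apps Script. The sketch below assumes such a GET endpoint returning JSON exists; the endpoint URL and the field names are placeholders.

    function importFromDataEndpoint() {
      var url = 'https://example.com/api/matches?date=2022-10-28'; // placeholder endpoint
      var response = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
      if (response.getResponseCode() !== 200) {
        throw new Error('Request failed with status ' + response.getResponseCode());
      }
      var matches = JSON.parse(response.getContentText());

      // One row per match; the field names are placeholders.
      var rows = matches.map(function (m) {
        return [m.time, m.event, m.players, m.score];
      });
      SpreadsheetApp.getActiveSheet()
        .getRange(1, 1, rows.length, rows[0].length)
        .setValues(rows);
    }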

According to Wikipedia, Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites.

Use of robots.txt to block Web crawlers

Webmasters can use a robots.txt file to block crawler access to the website. In that case the result will be #N/A Could not fetch URL.
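
For illustration only (this is not tournamentsoftware.com's actual file), a robots.txt that disallows every path for every crawler looks like this:

    # Illustrative robots.txt: blocks every path for every crawler.
    User-agent: *
    Disallow: /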

Use of User agent

The webpage could be designed to check the request's user agent and return a special or custom message instead of the data.
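
A quick way to see whether that is happening is to fetch the page from Google Apps Script and log what the server actually returns; a site that filters on the user agent will often send back a short placeholder page instead of the data. This is only a diagnostic sketch.

    function checkResponseBody() {
      var url = 'https://www.tournamentsoftware.com/tournament/b731fdcd-a0c8-4558-9344-2a14c267ee8b/Matches';
      var response = UrlFetchApp.fetch(url, { muteHttpExceptions: true });
      Logger.log(response.getResponseCode());
      // Log only the first 500 characters to keep the output readable.
      Logger.log(response.getContentText().slice(0, 500));
    }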


Below there are more details about how the Google Sheets built-in "web-scraping" functions work.

IMPORTDATA, IMPORTFEED, IMPORTHTML and IMPORTXML are able to get content from resources hosted on websites that are:

  • Publicly available. This means that the resource doesn't require authorization / being logged in to any service to access it.
  • The content is "static". This means that if you open the resource using the view source code option of modern web browsers it will be displayed as plain text.
    • NOTE: Chrome's Inspect tool shows the parsed DOM; in other words, the actual structure/content of the web page, which could be dynamically modified by JavaScript code or browser extensions/plugins.
  • The content has the appropriate structure.
    • IMPORTDATA works with structured content such as CSV or TSV, regardless of the file extension of the resource.
    • IMPORTFEED works with marked-up content such as Atom/RSS.
    • IMPORTHTML works with marked-up content such as HTML that includes properly marked-up lists or tables.
    • IMPORTXML works with marked-up content such as XML or any of its variants, like XHTML.
  • The content doesn't exceed the maximum size. Google hasn't disclosed this limit, but the error below will be shown when the content exceeds it:

    Resource at url contents exceeded maximum size.

  • Google servers are not blocked by means of robots.txt or the user agent.

On the W3C Markup Validator site there are several tools to check whether resources have been properly marked up.

Regarding CSV, check out Are there known services to validate CSV files?

It's worth noting that the spreadsheet

  • should have enough room for the imported content; Google Sheets has a 10 million cell limit per spreadsheet, a limit of 18,278 columns according to this post, and a 50,000-character limit for cell content, whether as a value or a formula (see the sketch after this list).
  • doesn't handle large in-cell content well; the "limit" depends on the user's screen size and resolution, since it's now possible to zoom in/out.
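
As a defensive sketch for that cell-content limit (the 50,000-character figure comes from the list above; the helper name is hypothetical), values can be trimmed before writing them:

    // Trims every value to Sheets' cell-content limit before writing, so a
    // large imported payload doesn't make setValues() fail.
    function writeRowsSafely(sheet, rows) {
      var MAX_CELL_CHARS = 50000;
      var trimmed = rows.map(function (row) {
        return row.map(function (value) {
          var text = String(value);
          return text.length > MAX_CELL_CHARS ? text.slice(0, MAX_CELL_CHARS) : text;
        });
      });
      sheet.getRange(1, 1, trimmed.length, trimmed[0].length).setValues(trimmed);
    }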

Phosphatase answered 29/10, 2022 at 13:34 Comment(5)
When the values are returned with a content type of text/event-stream, at the current stage UrlFetchApp, IMPORTXML, IMPORTHTML and IMPORTDATA cannot be used because of a timeout issue. I thought that checking the response content type, and/or a workaround for retrieving the values from a server that uses the content type of text/event-stream, might also be useful to include in your answer. RefStaats
If there are questions in your list of similar Qs here that are closed as duplicates, they don't need to be mentioned in the post. They will be listed on the right-hand sidebar in the "Linked Questions" list.Wendolyn
@Wendolyn Initially the only links to questions closed as duplicates included also a comment.Phosphatase
My point is they're not needed in the answer body because they're not part of the answer, just noise that requires users to scroll twice as far as they otherwise would. It's especially redundant given that most, if not all, of these links are going to show up in the 'related' or 'linked' questions lists on the right. Opt for less content duplication, in short.Wendolyn
@Wendolyn I added them because they were not showing.Phosphatase

ImportXML is good for basic tasks, but it won't get you too far if you are serious about scraping:

  • The approach only works with the most basic websites (SPAs rendered in the browser can't be scraped this way; any basic web-scraping protection or connectivity issue breaks the process, and there isn't any control over HTTP request geolocation or the number of retries) - and the site in question here is not a simple website.

  • If the target website's data requires some cleanup post-processing, things get very complicated, since you are now "programming with Excel formulas" - a rather painful process compared to writing code in a conventional programming language.

  • There isn't any proper launch and cache control, so the function can be re-triggered occasionally, and if the HTTP request fails, cells will be populated with ERR! values.

I recommend using proper tools (an automation framework and a scraping engine that can render JavaScript-powered websites) and using Google Sheets just for basic storage purposes: https://youtu.be/uBC752CWTew (Pipedream for automation and the ScrapeNinja engine for scraping).

Gerard answered 22/11, 2022 at 6:8 Comment(0)
