reactjs - Fetch as Google displays a blank page only

I've just coded my first website using ReactJS, but when I check how Google sees my website, I receive the following result: [screenshot: Fetch as Google renders only a blank page]

My HTML file looks like this:

<!DOCTYPE html>
<html>
<head>
    <title>MySite</title>
</head>
<body>
    <div id="root"></div>
    <script async type="text/javascript" src="index.browser.js"></script>
</body>
</html>

I've deactivated all AJAX calls for testing, and ReactDOM.render is executed right after its JS file is loaded. The JS file itself is compiled, compressed and less than 300 KB (including all libraries, like React itself).
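For reference, this is roughly what the entry point does; a minimal sketch, where App is a placeholder name for my root component rather than the exact code:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App'; // placeholder root component

// Render immediately on script load; no AJAX calls before the first paint.
ReactDOM.render(<App />, document.getElementById('root'));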

At this point, I don't understand what changes I can make so that Google renders my page correctly. As far as I understand, Google's rendering issues with ReactJS commonly stem from AJAX calls or other long-running work done in application code before the site itself is rendered and the DOM updated. But after removing the big libraries (apart from i18next and React itself), and minifying and compressing the code, I don't see what else I could do to improve the performance or rendering time significantly. PageSpeed Insights gives the site 99/100 points (desktop; its only complaint is that I could minify the HTML to save 110 bytes).

Any ideas where my mistake could be? Server-side rendering is not really a suitable option for me.

You can inspect the demo page here: http://comparo.com.mx

As you can see, there is not much there, but the displayed HTML content gets rendered right after loading index.browser.js, a file of less than 300 KB, which should therefore not keep Google Search Console from rendering the page correctly.

EDIT: My server is located in Europe, and AFAIK the Google servers crawl from the US. Could that be an issue in any way?

Cockscomb answered 21/2, 2018 at 9:15 Comment(2)
Are you sure your index.browser.js is working correctly? – Carolinian
Yes, I'm sure. I removed all AJAX calls from it; that's why the result looks so awkward and is not translated. But my point is: Google should display those few HTML elements, not a blank page. – Cockscomb

Add babel-polyfill to your project:

npm install --save babel-polyfill

And then import it in your index.js (entry point):

import 'babel-polyfill';
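
Alternatively, if you use webpack, a sketch of the same idea is to load the polyfill before your bundle by listing it first in the entry array (the entry path below is a placeholder, adjust it to your project):

// webpack.config.js
module.exports = {
  entry: ['babel-polyfill', './src/index.js'], // polyfill loads first
  // ...the rest of your configuration
};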

Hopefully, this will solve your problem.

Reluctance answered 7/4, 2018 at 22:20 Comment(0)

I would not be sure that this is exactly how Google sees your website, as most simulators simply strip out JavaScript.

Did you use https://www.google.com/webmasters/tools/googlebot-fetch ?

In general, JavaScript support is limited for search engines, so if you really want crawlers to index your site, you will have to implement server-side rendering for React, as sketched below.
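
As a rough sketch of the idea (Express and an App component in ./App are assumptions here, not your actual setup), server-side rendering boils down to something like this:

const express = require('express');
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const App = require('./App'); // assumed root component

const app = express();

app.get('/', function (req, res) {
  // Render the React tree to an HTML string so crawlers receive
  // real markup instead of an empty <div id="root">.
  const markup = ReactDOMServer.renderToString(React.createElement(App));
  res.send('<!DOCTYPE html><html><head><title>MySite</title></head>' +
    '<body><div id="root">' + markup + '</div>' +
    '<script async src="index.browser.js"></script></body></html>');
});

app.listen(3000);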

I've used https://github.com/kriasoft/react-starter-kit to generate http://gifhub.net. It was a somewhat complicated experience, but it worked in the end.

There are also frameworks like NextJS https://github.com/zeit/next.js/ that you can leverage to ensure you have server-rendered content.

A third option is to use Google's headless Chrome browser to generate content for crawlers: https://github.com/GoogleChrome/puppeteer
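
A minimal sketch of that approach (the URL and the output handling are placeholders):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait until the network is idle so client-side rendering has finished.
  await page.goto('http://comparo.com.mx', { waitUntil: 'networkidle0' });
  const html = await page.content(); // fully rendered HTML
  console.log(html); // serve or cache this markup for crawlers
  await browser.close();
})();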

Implementing one of the options above makes sure crawlers see everything you want them to. Relying on client-side JavaScript rendering will not give you the expected results.

Sudoriferous answered 21/2, 2018 at 9:50 Comment(0)

In one of my legacy projects, I use Angular.js to insert dynamic content into a backend-rendered page. The Google crawler is smart enough to render the dynamic JavaScript content and index it (e.g. a table that is rendered entirely from AJAX data).

[screenshot: Google search result showing the dynamically rendered content indexed]

So I strongly doubt that your problem is related to the lack of server-side rendering.

I wouldn't suggest spending time on SSR as @AlexGvozden suggested; it's quite tedious, especially the Webpack setup, and probably still is even with Next.js or Create React App.

Carolinian answered 21/2, 2018 at 12:12 Comment(0)

This appears to be a known issue with the Google bot's JS engine. I'm still trying to understand what exactly the problem is, but it seems that adding babel-polyfill to your app solves it.

Medium post detailing a fix

Punitive answered 7/4, 2018 at 21:54 Comment(0)

I had the same issue with blank pages in "Fetch as Google". The babel-polyfill advice above didn't solve the trouble, so I dug deeper into it:

  1. I spent hours searching for a portable Google Chrome v41 (which is claimed to be the rendering engine of the Google Search bot) to see what error was halting the Google crawler. JIC: https://rutracker.org/forum/viewtopic.php?t=4817317
  2. Chrome refused to run on Windows 10, so I had to find a Windows 7 VM, where I finally discovered two APIs that babel-polyfill didn't cover: URLSearchParams and fetch().
  3. I accidentally discovered that exactly the same errors halted IE11 (part of Windows 10); I could have saved a couple of hours by debugging the site in IE11 right away instead of searching for and troubleshooting Chrome v41.
  4. I found and added all the required polyfills, which made the app render under "Fetch as Google".

Long story short, here's the fix that worked for me:

  1. Install the 3 polyfills:

npm install --save babel-polyfill
npm install --save url-search-params-polyfill
npm install --save whatwg-fetch

  2. Import all 3 at the top of your entry-point JS file (index.js):

import 'babel-polyfill';
import 'url-search-params-polyfill';
import 'whatwg-fetch';

import React from 'react';
import ReactDOM from 'react-dom';
...
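
To verify the polyfills actually took effect, you can run a quick feature test in the console of the oldest browser you can find (a sanity-check sketch, not part of the original debugging session):

// Both should print "function" once the polyfills have loaded.
console.log(typeof window.fetch);
console.log(typeof window.URLSearchParams);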

[screenshot: the app rendering correctly in "Fetch as Google" after adding the polyfills]

Baccy answered 27/11, 2018 at 14:41 Comment(0)

For Google to see your page as-is, you should implement server-side rendering. Looking at your code, you are using client-side rendering: the browser has to run JavaScript to build your DOM.

Harness answered 21/2, 2018 at 10:31 Comment(0)

I don't know if this is still an issue, but...

For each project there can be different reasons. First of all, I would recommend running your project in dev mode (including console logs) and testing it with PhantomJS v2.1.1. The result can show you some useful errors.

Below is my PhantomJS sample (called website.js):

var system = require('system');
var page = require("webpage").create();
var homePage = "http://<link to your localhost>";
var captureName = "result.png";

// Forward console messages from the page to stderr.
page.onConsoleMessage = function(msg) {
  system.stderr.writeLine('console: ' + msg);
};

// Print any JavaScript error from the page with its stack trace, then abort.
page.onError = function(msg, trace) {
  var msgStack = ['PHANTOM ERROR: ' + msg];
  if (trace && trace.length) {
    msgStack.push('TRACE:');
    trace.forEach(function(t) {
      msgStack.push(' -> ' + (t.file || t.sourceURL) + ': ' + t.line + (t.function ? ' (in function ' + t.function + ')' : ''));
    });
  }
  console.log(msgStack.join('\n'));
  phantom.exit(1);
};

// Once the page has loaded, give the app 5 seconds to render,
// then save a screenshot and exit.
page.onLoadFinished = function(status) {
  var url = page.url;
  console.log("Status:  " + status);
  console.log("Loaded:  " + url);
  window.setTimeout(function () {
    page.render(captureName);
    phantom.exit();
  }, 5000);
};

page.open(homePage);
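
You can then run the script with the PhantomJS CLI:

phantomjs website.js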

By the way, as a result you will get a result.png snapshot in the same directory where website.js is located.

Stereobate answered 29/12, 2018 at 20:19 Comment(0)

Try adding browser shims. Note that even if you use Babel to compile your code, you still need polyfills for older browsers and for headless browsers such as the Google bot or PhantomJS.

npm install --save es5-shim es6-shim

// in your frontend/index.js, as early as possible
import 'es5-shim';
import 'es6-shim';

You can read more here

Smokejumper answered 7/1, 2019 at 11:58 Comment(0)
