Using DOMContentReady considered anti-pattern by Google

A Google Closure library team member asserts that waiting for the DOMContentReady event is a bad practice.

The short story is that we don't want to wait for DOMContentReady (or worse the load event) since it leads to bad user experience. The UI is not responsive until all the DOM has been loaded from the network. So the preferred way is to use inline scripts as soon as possible.

Since they still don't provide more details on this, I wonder how they deal with the Operation Aborted dialog in IE. This dialog is the only critical reason I know of to wait for the DOMContentReady (or load) event (a minimal sketch of the pattern that triggers it follows the questions below).

  1. Do you know any other reason?
  2. How do you think they deal with that IE issue?
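
For context, a minimal sketch (the structure is illustrative) of the kind of markup that triggered IE's Operation Aborted dialog: an inline script that modifies an ancestor of its own still-open parent element while the parser is inside it.

<body>
    <div>
        <script type="text/javascript">
            // Appending to document.body, an ancestor of the still-open <div>,
            // before the parser has closed that <div> crashed IE6/IE7 with
            // the "Operation Aborted" dialog.
            document.body.appendChild( document.createElement("span") );
        </script>
    </div>
</body>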
Nolie answered 7/1, 2010 at 22:4 Comment(3)
Also, inline JavaScript mixes JS code with the HTML, which is a very bad thing in terms of code maintenance and collaboration. Very strange advice, really...Linden
BYK, it might seem less strange when considering that Google wants people to be using GWT. Google doesn't really care how much of a mess an application's "document" is, they're all about squeezing every last millisecond out of every dark corner of an app. For their purposes, this makes a lot of sense. In other cases, maybe not so much.Septimal
@Marcel Korpel: I intended it to be a small comment first, but later I realized that I had more to talk about. :) Will post it as an answer and remove the comments.Buckhound

A little explanation first: the point of inline JavaScript is to include it as soon as possible. However, that "possible" depends on the DOM nodes that the script requires having already been declared. For example, if you have some navigation menu that requires JavaScript, you would include the script immediately after the menu is defined in the HTML.

<ul id="some-nav-menu">
    <li>...</li>
    <li>...</li>
    <li>...</li>
</ul>
<script type="text/javascript">
    // Initialize menu behaviors and events
    magicMenuOfWonder( document.getElementById("some-nav-menu") );
</script>

As long as you only address DOM nodes that you know have been declared, you won't run into DOM unavailability problems. As for the IE issue, the developer must strategically include their scripts so that this doesn't happen. It's not really that big of a concern, nor is it difficult to address. The real problem with this is the "big picture", as described below.

Of course, everything has pros and cons.

Pros

  1. As soon as a DOM element is displayed to the user, whatever functionality JavaScript adds to it is available almost immediately as well (instead of waiting for the whole page to load).
  2. In some cases, Pro #1 can result in faster perceived page load times and an improved user experience.

Cons

  1. At worst, you're mixing presentation and business logic; at best, you're scattering your script includes throughout your presentation. Both can be difficult to manage, and neither is acceptable in my opinion or in the opinion of a large portion of the community.
  2. As eyelidlessness pointed out, if the scripts in question have external dependencies (a library, for example), then those dependencies must be loaded first, which will block page rendering while they are parsed and executed (see the sketch below).
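
A hypothetical sketch of that dependency problem (the library file name is made up): the external file that defines magicMenuOfWonder() must be included before the inline call, and that include blocks rendering of everything after it while it downloads, parses, and executes.

<head>
    <!-- Must be loaded before any inline script that calls magicMenuOfWonder();
         rendering of the rest of the page is blocked while it downloads and runs. -->
    <script type="text/javascript" src="menu-widgets.js"></script>
</head>
<body>
    <ul id="some-nav-menu">
        <li>...</li>
    </ul>
    <script type="text/javascript">
        // Works only because the dependency above has already been loaded.
        magicMenuOfWonder( document.getElementById("some-nav-menu") );
    </script>
</body>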

Do I use this technique? No. I prefer to load all scripts at the end of the page, just before the closing </body> tag. In almost every case, this is sufficiently fast for perceived and actual initialization performance of effects and event handlers.
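
A minimal sketch of that end-of-body approach (the file name is a placeholder): the markup renders first, and every script, external and inline, sits just before the closing </body> tag, so no DOMContentReady handler is needed.

<body>
    <ul id="some-nav-menu">
        <li>...</li>
    </ul>
    <!-- All script loading happens last, just before </body>. -->
    <script type="text/javascript" src="site.min.js"></script>
    <script type="text/javascript">
        // Everything above is already in the DOM at this point.
        magicMenuOfWonder( document.getElementById("some-nav-menu") );
    </script>
</body>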

Is it okay for other people to use it? Developers are going to do what they want/need to get the job done and to make their clients/bosses/marketing department happy. There are trade-offs, and as long as you understand and manage them, you should be okay either way.

Arcuation answered 7/1, 2010 at 22:35 Comment(0)

The biggest problem with inline scripts is that they cannot be cached properly. If you have all your scripts stored away in one js file that is minified (using a compiler), then that file can be cached just once, for the entire site, by the browser.

That leads to better performance in the long run if your site tends to be busy. Another advantage of having a separate file for your scripts is that you tend not to repeat yourself and instead declare reusable functions as much as possible. DOMContentReady does not lead to a bad user experience. At least it provides the user with the content beforehand rather than making the user wait for the UI to load, which might end up being a big turn-off for the user.

Also, using inline scripts does not ensure that the UI will be more responsive than it would be with DOMContentReady. Imagine a scenario where you are using inline scripts for making AJAX calls. If you have one form to submit, that's fine. Have more than one form and you end up repeating your AJAX calls, hence repeating the same script every time. In the end, the browser downloads and parses more JavaScript than it would have if the code were separated out into a js file loaded when the DOM is ready.
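
A hypothetical sketch of that point (the function and file names are made up): instead of repeating the same AJAX submit code inline next to every form, declare one reusable handler in an external, cacheable file such as forms.js and attach it to each form once.

// forms.js -- downloaded once, cached by the browser, reused on every page.
function submitViaAjax(form) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", form.action, true);
    xhr.send(new FormData(form)); // the browser sets the multipart content type
}

// Attach the same handler to every form instead of repeating inline scripts.
var forms = document.getElementsByTagName("form");
for (var i = 0; i < forms.length; i++) {
    forms[i].onsubmit = function () {
        submitViaAjax(this);
        return false; // suppress the normal full-page submit
    };
}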

Another big disadvantage of having inline scripts is that you need to maintain two separate code bases: one for development and another for production, and you must ensure that both are kept in sync. The development version contains the non-minified version of your code and the production version contains the minified version. This is a big headache in the development cycle: you have to manually replace all the code snippets hidden away in those bulky HTML files with the minified version and, in the end, hope that no code breaks. With a separate file during the development cycle, however, you just need to replace that file with the compiled, minified version in the production codebase.
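
A tiny sketch of that difference (file names are placeholders): with an external file, switching from development to production is a single include swap rather than edits scattered through the HTML.

<!-- Development -->
<script type="text/javascript" src="app.js"></script>

<!-- Production: the only change is pointing at the compiled, minified file -->
<script type="text/javascript" src="app.min.js"></script>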

If you use YSlow you see that:

Using external JavaScript and CSS files generally produces faster pages because the files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded each time the HTML document is requested. This reduces the number of HTTP requests but increases the HTML document size. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the HTML document size is reduced without increasing the number of HTTP requests.

I can vouch for inline scripts if and only if the code changes so often that having it in a separate js file is immaterial and would in the end have the same impact as these inline scripts. Still, these scripts are not cached by the browser. The only way the browser can cache scripts is if they are stored away in an external js file with an ETag.

However, this is not jQuery vs. Google Closure in any manner. Closure has its own advantages. However, the Closure library makes it hard to have all your scripts in external files (not that it's impossible, it just makes it hard), so you tend to use inline scripts.

Buckhound answered 12/7, 2010 at 2:51 Comment(2)
“The only way the browser can cache scripts is if its stored away in an external js file with an etag” is not true: use a Cache-Control: max-age=3155760000 to store a file in cache for a very long time without the browser re-validating the page (see Cache Control Directives Demystified). If you use this HTTP header in conjunction with a version number in the URL (e.g., script.js?v=23), you can just increment the version number when you change the file. Also see developer.yahoo.com/performance/rules.html#etagsTalmudist
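A small sketch of that setup (the values are taken from the comment above and are illustrative): the server sends a far-future Cache-Control header for the file, and the version number in the URL is bumped whenever the file changes so clients fetch the new copy.

<!-- Bump v= whenever script.js changes; the server sends
     Cache-Control: max-age=3155760000 for this file. -->
<script type="text/javascript" src="script.js?v=23"></script>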
Yes, but the scripts will still need to be cached every time you hit a new page, correct? So caching perf wins are only achieved on re-visited pages, but with external linking your global JS gets cached once per site visit.Gilding

One reason to avoid inline scripts is that it requires you place any dependency libraries before them in the document, which will probably negate the performance gains of inline scripts. The best practice I'm familiar with is to place all scripts (in a single HTTP request!) at the very end of the document, just before </body>. This is because script-loading blocks the current request and all sub-requests until the script is completely loaded, parsed, and executed.

Short of a magic wand, we'll always have to make these trade-offs. Thankfully, the HTML document itself is increasingly going to become the least resource-intensive request made (unless you're doing silly stuff like huge data: URLs and huge inline SVG documents). To me, the trade-off of waiting for the end of the HTML document seems the most obvious choice.

Septimal answered 7/1, 2010 at 22:18 Comment(0)

I think this advice is not really helpful. DOMContentReady may only be considered bad practice because it is currently over-used (maybe because of jQuery's easy-to-use ready event). Many people use it as the "startup" event for any JavaScript action, though even jQuery's ready() event was only meant to be used as the starting point for DOM manipulations.

By inference, DOM manipulations on page load lead to a bad user experience!! They are not necessary, because the server side could just have generated the initial page completely.

So, maybe the Closure team members are just trying to steer in the opposite direction and prevent people from doing DOM manipulations on page load at all?
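
A sketch of the over-use being described (the selectors are made up): work that the server could have baked into the initial HTML, but that is instead deferred to jQuery's ready event.

$(document).ready(function () {
    // Both of these could have been generated server-side in the initial
    // page instead of being patched in after DOMContentReady.
    $("#greeting").text("Welcome back, Alice");
    $("#nav .current").addClass("highlighted");
});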

Counteraccusation answered 7/1, 2010 at 22:27 Comment(3)
DOM manipulations on page load are essential if you provide a different UI in the course of progressive enhancement. Consider, for example, a sortable list. A fully static-functional UI would provide, say, up/down buttons which submit to the server to reorder the list. A progressively enhanced UI may, instead, provide "grippers" and hide those buttons. This would be desirable at the earliest possible point loading the page. Obviously this small example could be just as simply accomplished with a class switch, but it's not hard to think of how it grows more complex from there.Septimal
DOM manipulation includes event binding and other stuff that can't be done on the server side.Nolie
@eye, @thorn: you both are right, I was a bit over-enthusiastic in my conclusion..Counteraccusation

If parsing, loading, and rendering the layout (the point at which domready should fire), without considering image loads, takes so long that it causes a noticeable UI delay, you've probably got something really ugly building that page on the back end, and that's your real problem.

Furthermore, JavaScript placed before the end of the page halts HTML parsing/DOM evaluation until the JS is parsed and evaluated. Google should look to their Java before giving advice on JavaScript.

Gilding answered 22/8, 2011 at 20:45 Comment(0)

If you need to lay out your HTML on your own, Google's approach seems very awkward. They use the compiled approach and can spend their brain cycles on other issues, which is sane given the complex UI of their apps. If you're heading for something with more relaxed requirements (read: almost everything else), maybe it's not worth the effort.

It's a pity that GWT only speaks Java, though.

Centigrade answered 5/1, 2012 at 23:40 Comment(0)

That's probably because Google doesn't care whether you have JavaScript; they require it for just about everything they do. If you use JavaScript as an addition on top of your already-functioning website, then loading scripts on DOMContentReady is just fine. The point is to use JavaScript to enhance the user's experience, not to exclude users who don't have it.

Meill answered 7/1, 2010 at 22:13 Comment(3)
This question is not about whether to use javascript for progressive enhancement or not, it's about how to implement the JavaScript that you are using.Arcuation
Justin Johnson, the question of whether and how to use "graceful degradation" or "progressive enhancement" does affect the outcome of the question of how to implement the Javascript that you are using.Septimal
Let me rephrase that then. The question wasn't about whether or not to enhance an already working site with JavaScript or to make the site require JavaScript. @Shawn's answer suggests that only by using DOMContentReady can we gracefully degrade. This is not the case. Progressive enhancement doesn't revolve around the DOMContentReady or onload events, and can still be achieved with inline scripts while maintaining graceful degradation. In other words, graceful degradation is not about JavaScript, it's about not JavaScript, so I don't see how this answer applies or is relevant to the OP.Arcuation
