What are the pros and cons of adding <script> and <link> elements using JavaScript?
A

9

10

Recently I saw some HTML with only a single <script> element in its <head>...

<head>
    <title>Example</title>
    <script src="script.js" type="text/javascript"></script>
    <link href="plain.css" type="text/css" rel="stylesheet" />
</head>

This script.js then adds any other necessary <script> and <link> elements to the document using document.write(...) (though it could use document.createElement(...), etc.):

document.write("<link href=\"javascript-enabled.css\" type=\"text/css\" rel=\"styleshet\" />");
document.write("<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js\" type=\"text/javascript\"></script>");
document.write("<script src=\"https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.10/jquery-ui.min.js\" type=\"text/javascript\"></script>");
document.write("<link href=\"http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.0/themes/trontastic/jquery-ui.css\" type=\"text/css\" rel=\"stylesheet\" />")
document.write("<script src=\"validation.js\" type=\"text/css\"></script>")

Note that plain.css is linked directly in the document <head>, and script.js adds only the CSS and JavaScript that would be used by a JS-enabled user agent.
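
For comparison, this is roughly what the document.createElement(...) variant might look like (the helper names here are made up for illustration; the filenames are the ones from the example above):

// Build and append the elements with DOM methods instead of document.write.
// Note: scripts inserted this way may execute in any order relative to one
// another, which matters when one file (jquery-ui) depends on another (jquery).
function addStylesheet(href) {
    var link = document.createElement("link");
    link.rel = "stylesheet";
    link.type = "text/css";
    link.href = href;
    document.getElementsByTagName("head")[0].appendChild(link);
}

function addScript(src) {
    var script = document.createElement("script");
    script.type = "text/javascript";
    script.src = src;
    document.getElementsByTagName("head")[0].appendChild(script);
}

addStylesheet("javascript-enabled.css");
addScript("https://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js");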

What are some of the pros and cons of this technique?

Abernethy answered 14/3, 2011 at 18:15 Comment(0)
B
12

The blocking nature of document.write

document.write will pause everything the browser is doing with the page (including parsing). It is highly recommended that you avoid it because of this blocking behavior: the browser has no way of knowing what you're going to shove into the HTML text stream at that point, or whether the write will totally trash the DOM tree, so it has to stop until you're finished.

Essentially, loading scripts this way forces the browser to stop parsing HTML. If your script is inline, then the browser will also execute those scripts before it goes on. Therefore, as a side note, it is always recommended that you defer loading scripts until after your page is parsed and you've shown a reasonable UI to the user.
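
For example, a minimal sketch of such deferral (using the question's validation.js as an illustrative, non-critical script):

// Inject non-essential scripts only once the page has been parsed and rendered
window.onload = function () {
    var script = document.createElement("script");
    script.type = "text/javascript";
    script.src = "validation.js";
    document.body.appendChild(script);
};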

If your scripts are loaded from separate files via the "src" attribute, then the scripts may not be executed consistently across all browsers.

Losing browser speed optimizations and predictability

Loading scripts this way, you lose a lot of the performance optimizations made by modern browsers, and when your scripts execute may become unpredictable.

For example, some browsers will execute the scripts right after you "write" them. In such cases, you lose parallel downloading of scripts, stylesheets and other resources, because the browser doesn't see the second script tag until it has downloaded and executed the first (whereas many browsers can otherwise download stylesheets, scripts and other resources all at the same time).

Other browsers defer the written scripts and execute them only after the page has finished loading.

The browser cannot continue to parse the HTML while document.write is running -- and, in certain cases, while the written scripts are executing -- because of the blocking behavior of document.write, so your page shows up much more slowly.

In other words, your site becomes as slow as if it were loading in a decades-old browser with no optimizations.

Why would somebody do it like this?

The reason you may want to use something like this is usually for maintainability. For instance, you may have a huge site with thousands of pages, each loading the same set of scripts and stylesheets. However, when you add a script file, you don't want to edit thousands of HTML files to add the script tags. This is particularly troublesome when loading JavaScript libraries (e.g. Dojo or jQuery) -- you have to change each HTML page when you upgrade to the next version.

The problem is that JavaScript doesn't have an @include or @import statement for you to include other files.

Some solutions

The solution to this is probably not injecting scripts via document.write, but rather one of the following:

  1. Using @import directives in stylesheets
  2. Using a server-side scripting language (e.g. PHP) to manage your "master page" and generate all the other pages (however, if you can't use this and must maintain many HTML pages individually, it is not a solution)
  3. Avoiding document.write and instead loading the JavaScript files via XHR, then eval()-ing them -- this may have security concerns, though (see the sketch below)
  4. Using a JavaScript library (e.g. Dojo) that has module-loading features, so that you can keep a master JS file which loads the other files. You won't be able to avoid updating the version number of the library file itself, though...
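
For illustration, a bare-bones sketch of option 3 above (synchronous for simplicity; note that plain XHR is limited to same-origin URLs, and see the comments below for eval's debugging drawbacks):

// Fetch the script text over XHR, then evaluate it.
function loadScriptViaXhr(url) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, false);      // false = synchronous request
    xhr.send(null);
    if (xhr.status === 200) {
        // Indirect eval, so the code runs in the global scope (ES5 semantics)
        (0, eval)(xhr.responseText);
    }
}

loadScriptViaXhr("validation.js");    // same-origin file from the question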
Bobbe answered 27/3, 2011 at 14:26 Comment(7)
There are some problems with using eval, however, and it's not just about security. The biggest issue, IMO, is that scripts loaded by XHR and eval will not show up in Firebug's list of available scripts. This means debugging this code will be very hard. This is not the case with document.write(). Dojo's package management uses eval, btw.Immigration
Doesn't the eval here simply evaluate the code that generates the actual HTML script tags, rather than eval the contents of the files themselves? I think saying this blocks parallel downloads may be misleading, but I may be wrong... Citation?Master
Also, you suggest using @import which does block parallel downloadsMaster
@Madmartigan, I don't think you need to attach a script tag. After loading the DOM, just XHR the script text and eval at that point.Bobbe
@Madmartigan, @import embedded inside a CSS file will be downloaded only after that CSS file is downloaded. However, if that CSS file is short (and contains only @import directives) it shouldn't delay the @import'ed files too much. And I believe multiple @import's do download in parallel. And of course, a CSS download does not block downloads of scripts.Bobbe
@Stephen Chung: I'm just saying that this technique and the OP's question are not related to parallel downloads. There is nothing in this context stopping the browser from downloading all scripts/CSS at once, unless I am mistaken.Master
@Madmartigan, you are probably right. I only mean that document.write will pause HTML interpretation right at that point (because the browser will never know whether you are trashing the entire DOM tree with the write). However, it may not affect downloads. I'll edit.Bobbe
F
8

One major disadvantage is browser incompatibility. Not all browsers correctly fetch and incorporate the resources into the DOM, so it's risky to use this approach. This is more true of stylesheets than scripts.

Another issue is one of maintainability. Concatenating and writing strings to add DOM elements on the client side can become a maintenance nightmare. It's better to use DOM methods such as createElement to semantically create elements.

One obvious advantage is that it makes conditional use of resources much easier. You can have logic that determines which resources to load, thereby reducing the bandwidth consumption and overall processing time for the page. I would use a library function such as jQuery's $.getScript() to load scripts rather than document.write. The advantage is that such an approach is cleaner and also allows you to have code executed when the request completes or fails.
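
For example, a sketch assuming jQuery 1.5+, where $.getScript() returns a jqXHR object supporting .done() and .fail() handlers (the feature test and filename are purely illustrative):

// Load a script only when some condition holds, with completion handlers
if (window.JSON) {                    // illustrative feature test
    $.getScript("validation.js")
        .done(function (script, textStatus) {
            // the script has been fetched and executed at this point
        })
        .fail(function (jqxhr, settings, exception) {
            // handle the failed request here
        });
}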

Florentinoflorenza answered 14/3, 2011 at 18:22 Comment(6)
Can you be a tad more specific about browser incompatibility? Perhaps mention one such browser which would get upset with this sort of thing?Abernethy
... and in your second paragraph; what do you mean "semantically create elements"? I feel like that perhaps isn't the correct use of "semantically"...Abernethy
Here's a good post on the performance results of using different techniques to inject scripts: brunosouza.posterous.com/… Candidly, it has been so long since I abandoned document.write() because it is such a poor coding practice that I would have to do some research to describe specific situations. Sorry. I used "semantically" correctly. "createElement" conveys clearly the intent to "create an element" in the DOM.Florentinoflorenza
@techbubble - Ohhhh, now I see. I was thinking semantic like semantic HTML; now I see you meant it in a clarity-of-code context.Abernethy
There are some problems with getScript(), however. One, it loads JavaScript via Ajax, and that means no cross-domain requests - i.e. no loading JavaScript that is not from your site. This will be a problem if you want to include https://ajax.googleapis.com as listed in the description. Second, I'm pretty sure included scripts will not show up in Firebug's list of page scripts, making debugging quite challenging.Immigration
Actually, that's not quite accurate. You can load scripts from any server with getScript. Also, the scripts show up fine in Firebug.Florentinoflorenza
I
8

Well, I may as well throw my hat in the ring on this one...

If you examine Google's Closure Library, base.js, you will see that document.write is used in its writeScriptTag_() function. This is an essential part of the dependency-management system which Closure provides, and is an enormous advantage when creating a complicated, multi-file, library-based JavaScript application: it lets the file/code prerequisites determine the loading order. We currently use this technique and have had little trouble with it. TBH, we have not had a single issue with browser compatibility, and we test regularly on IE 6/7/8, FF 3/2, Safari 4/5 and the latest Chrome.

The only disadvantage we have had so far is that it can be challenging to track down issues caused by loading a resource twice, or failing to load one at all. Since the act of loading resources is programmatic, it is subject to programmatic errors, and unlike adding the tags directly to the HTML, it can be hard to see what the exact loading order is. However, this issue can be largely overcome by using a library with some form of dependency-management system, such as Closure or Dojo.
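
To illustrate the pattern (a simplification for this answer, not Closure's actual base.js code; the file names are hypothetical):

// Write a <script> tag for each dependency, in prerequisite order, while
// the document is still being parsed. The closing tag is split so that it
// doesn't terminate the <script> element this code itself lives in.
function writeScriptTag(src) {
    document.write('<script src="' + src + '" type="text/javascript"><' + '/script>');
}

writeScriptTag("deps.js");
writeScriptTag("app.js");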

EDIT: I have made a few comments to this effect, but I thought it best to summarize in my answer: there are some problems with dojo.require() and jQuery.getScript() (both of which ultimately perform an Ajax request and eval):

  1. Loading via Ajax means no cross-domain requests - i.e. no loading JavaScript that is not from your site. This will be a problem if you want to include https://ajax.googleapis.com as listed in the description.
  2. Eval'd scripts will not show up in a JavaScript debugger's list of page scripts, making debugging quite challenging. Recent releases of Firebug will show you eval'd code, but filenames are lost, making the act of setting breakpoints tedious. AFAIK, the WebKit JavaScript console and IE8's developer tools do not show eval'd scripts.
Immigration answered 27/3, 2011 at 8:34 Comment(2)
+1 - I love this answer! You name specific examples of JS libraries which use this technique, and name specific browsers it has worked on. Thank you.Abernethy
No worries, glad to help! Be sure to read my other comments - there are some problems with using ajax for loading scripts...Immigration
L
4

It has the advantage that you don't need to repeat the script references in each HTML file. The disadvantage is that the browser must fetch and execute the main JavaScript file before it can load the others.

Lagomorph answered 14/3, 2011 at 18:19 Comment(4)
Could you elaborate a little bit on the disadvantage? I don't quite see how it's necessarily a disadvantage.Abernethy
It means that page loading will be slower. The browser first loads the HTML page, reads it, then reads the master script, which in turn reads all the other scripts. If they were referenced directly in the HTML file, the browser could fetch them in parallel (especially since some of them come from an external server).Lagomorph
Fetching and loading the "main javascript file" in this case is pretty trivial, since all it does is output a few <script> and <link> tags which should be instantly fast. After that, the browser should be seeing normal <script> tags which it will download in parallel, as it would if there were "hardcoded" script tags.Master
That's true - but it still results in reduced parallelism and that's what matters.Lagomorph
G
2

I guess one advantage I could think of would be if you use these scripts on multiple pages, you only have to remember to include one script, and it saves some space.

Ginkgo answered 14/3, 2011 at 18:19 Comment(0)
A
2

Google's Page Speed documentation strongly discourages this technique, because it makes things slower. Apart from the sequential loading of your script.js before all the others, there's another catch:

Modern browsers use speculative parsers to more efficiently discover external resources [...] Thus, using JavaScript's document.write() to fetch external resources makes it impossible for the speculative parser to discover those resources, which can delay the download, parsing, and rendering of those resources.

Alesandrini answered 27/3, 2011 at 12:14 Comment(0)
B
2

It's also possible that this was written as a recommendation by an SEO firm, to make the head element shorter so that unique content sits closer to the top of the document - also creating a higher text-to-HTML ratio. All in all, though, this doesn't sound like a very good way of doing it; even though it would make maintenance more time-consuming, a better approach would probably be to condense the JavaScript into a single .js file and the CSS into a single .css file, if reducing the size of the head element were deemed utterly necessary.

Bard answered 27/3, 2011 at 13:42 Comment(0)
B
1

A great disadvantage is that adding scripts into <head> will pause the processing of the document until those scripts have been completely downloaded, parsed and executed (because the browser thinks they may use document.write). This hurts responsiveness.

Nowadays it is recommended that you put your script tags right before </body>. Of course this is not possible 100% of the time, but if you are using unobtrusive JavaScript (as you should), all scripts can be repositioned at the end of the document.

HTML5 introduced the async attribute, which tells the browser that it may download the script without blocking the parser and execute it as soon as it is available. This is the default behaviour of script-inserted scripts in many browsers, but not in all of them.
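
As a sketch of the script-inserted case (using the question's validation.js):

// Script-inserted scripts don't block the parser; setting async makes the
// intent explicit, and the script executes as soon as it has downloaded.
var script = document.createElement("script");
script.src = "validation.js";
script.async = true;
document.body.appendChild(script);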

I advise you to avoid document.write at all costs. Even without it, this approach results in one extra request to the server. (We like to minimize the number of requests, for example with CSS sprites.)

And yes, as others mentioned earlier, if scripting is disabled your page will be shown without the script-added CSS (which makes it potentially unusable).

Barnard answered 27/3, 2011 at 14:40 Comment(0)
J
0
  1. If JavaScript is disabled, the <script> and <link> elements won't be added at all.

  2. If you place JavaScript init functions at the bottom of your page (which is good practice) and link CSS with JavaScript, it may cause some delay before the CSS loads (a broken layout will be visible for a short time).

Janssen answered 26/3, 2011 at 23:9 Comment(5)
With respect to 1 - that's part of the point! Any JavaScript and CSS which is required only when JS is enabled aren't present in the document unless JS is enabled.Abernethy
... and with respect to 2, in the question the JS is explicitly in the document <head>.Abernethy
If the JS library contains self-init (which is a bad practice), then point 2 doesn't apply.Janssen
with respect to 1 - what I meant is: users with JS off won't have CSS linked either.Janssen
Yes they will have CSS links: they will have <link href="plain.css" type="text/css" rel="stylesheet" /> which is not added with JavaScript. If they don't have JS they don't need any others. What do you mean by self-init?Abernethy
