Most Efficient Multipage RequireJS and Almond setup

I have multiple pages on a site using RequireJS, and most pages have unique functionality. All of them share a host of common modules (jQuery, Backbone, and more); all of them have their own unique modules, as well. I'm wondering what is the best way to optimize this code using r.js. I see a number of alternatives suggested by different parts of RequireJS's and Almond's documentation and examples -- so I came up with the following list of possibilities I see, and I'm asking which one is most recommended (or if there's another better way):

  1. Optimize a single JS file for the whole site, using Almond, which would load once and then stay cached. The downside of this simplest approach is that I'd be loading code onto each page that the user doesn't need for that page (i.e. modules specific to other pages). For each page, the JS loaded would be bigger than it needs to be.
  2. Optimize a single JS file for each page, which would include both the common and the page-specific modules. That way I could include Almond in each page's file and would only load one JS file on each page -- which would be significantly smaller than a single JS file for the whole site would be. The downside I see, though, is that the common modules wouldn't be cached in the browser, right? For every page the user goes to she'd have to re-download the bulk of jQuery, Backbone, etc. (the common modules), as those libraries would constitute large parts of each unique single-page JS file. (This seems to be the approach of the RequireJS multipage example, except that the example doesn't use Almond.)
  3. Optimize one JS file for common modules, and then another for each specific page. That way the user would cache the common modules' file and, browsing between pages, would only have to load a small page-specific JS file. Within this option I see two ways to finish it off, to include the RequireJS functionality: a. Load the file require.js before the common modules on all pages, using the data-main syntax or a normal <script> tag -- not using Almond at all. That means each page would have three JS files: require.js, common modules, and page-specific modules. b. It seems that this gist is suggesting a method for plugging Almond into each optimized file -- so I wouldn't have to load require.js, but would instead include Almond in both my common modules AND my page-specific modules. Is that right? Is that more efficient than loading require.js upfront? (A build-config sketch of option 3 follows after this list.)
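
For concreteness, here is roughly what I imagine the r.js build profile for option 3 would look like -- just an illustrative sketch, where the module and file names (common, pages/home, pages/dashboard) are made-up placeholders:

    // build.js -- run with: node r.js -o build.js
    // "common" bundles the shared libraries; each page module excludes "common",
    // so it ends up containing only its page-specific code.
    ({
        baseUrl: 'js',
        dir: 'js-built',
        paths: {
            jquery: 'lib/jquery',
            underscore: 'lib/underscore',
            backbone: 'lib/backbone'
        },
        modules: [
            { name: 'common', include: ['jquery', 'underscore', 'backbone'] },
            { name: 'pages/home', exclude: ['common'] },
            { name: 'pages/dashboard', exclude: ['common'] }
        ]
    })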

Thanks for any advice you can offer as to the best way to carry this out.

Suffruticose answered 11/6, 2013 at 2:22 Comment(1)
Note that 8 years later option 3 is the de facto industry standard :) -- Burson

I think you've answered your own question pretty clearly.

For production, we - as well as most companies I've worked with - use option 3.

Here are advantages of solution 3, and why I think you should use it:

  • It makes the most of caching: all common functionality is loaded once, which means the least traffic and the fastest load times when browsing multiple pages. Load times across multiple pages matter, and while the traffic saved on your side might not be significant compared to other resources you're loading, your users will really appreciate the faster load times. (A rough per-page markup sketch follows after this list.)
  • It's the most logical, since most pages on the site share common functionality.
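
To make that concrete, the markup on each page under option 3 (using plain require.js, as in the question's 3a) could look roughly like this -- the file and module names are hypothetical placeholders:

    <script src="js/lib/require.js"></script>
    <script>
        // common.js holds the require.config and pulls in the shared libraries;
        // once it's loaded (and cached by the browser), fetch the small
        // page-specific bundle.
        require(['js/common'], function () {
            require(['pages/dashboard']);
        });
    </script>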

Here is an interesting advantage for solution 2:

  • You send the least data to each page. If a lot of your visitors are one-time visitors - for example, on a landing page - this is your best bet. The importance of load times in conversion-oriented scenarios cannot be overstated.

  • Do your visitors come back? Some studies suggest that 40% of visitors arrive with an empty cache.

Other considerations:

  • If most of your visitors only visit a single page - consider option 2. Option 3 is great for sites where the average user visits multiple pages, but if a user visits a single page and that's all they see - option 2 is your best bet.

  • If you have a lot of JavaScript, consider loading just enough of it to give the user a visual indication, and then loading the rest in a deferred, asynchronous way (with script tag injection, or directly with require if you're already using it); a small sketch follows below. The threshold at which people notice that a UI is 'clunky' is normally about 100ms. An example of this is GMail's 'Loading...' screen.
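
A minimal sketch of that deferred pattern with RequireJS (the module names here are made up):

    // Load a lightweight shell first so the user sees something immediately,
    // then fetch the heavy modules asynchronously once the shell is visible.
    require(['app/shell'], function (shell) {
        shell.showLoadingIndicator();   // e.g. a "Loading..." message

        require(['app/editor', 'app/charts'], function (editor, charts) {
            shell.hideLoadingIndicator();
            editor.init();
            charts.init();
        });
    });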

  • Given that HTTP connections are Keep-Alive by default in HTTP/1.1, or with an additional header in HTTP/1.0, sending multiple files is less of a problem than it was 5-10 years ago. Make sure you're sending the Keep-Alive header from your server for HTTP/1.0 clients.

Some general advice and reading material:

  • JavaScript minification is a must; r.js, for example, does this nicely, and your thought process in using it was correct. r.js also combines JavaScript files, which is a step in the right direction.
  • As I suggested, deferring JavaScript is really important too, and can drastically improve loading times. Deferring execution will make your page feel fast, which is very important, and in some scenarios a lot more important than actually loading fast.
  • Anything you can load from a CDN, like external libraries, you should load from a CDN. Some libraries people use today, like jQuery, are pretty big (about 80kb); fetching them from a CDN cache the user may already have could really benefit you. In your example, I would not load Backbone, Underscore and jQuery from your site; rather, I'd load them from a CDN (see the config sketch below).
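
As a sketch of that last point, RequireJS path fallbacks let you try a CDN first and fall back to a local copy -- the CDN URLs and library versions below are only illustrative:

    // Hypothetical require.config: try the CDN copy of each library first,
    // and fall back to the bundled local copy if the CDN is unreachable.
    require.config({
        baseUrl: 'js',
        paths: {
            jquery: [
                'https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min',
                'lib/jquery'   // local fallback
            ],
            underscore: [
                'https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.5.2/underscore-min',
                'lib/underscore'
            ],
            backbone: [
                'https://cdnjs.cloudflare.com/ajax/libs/backbone.js/1.0.0/backbone-min',
                'lib/backbone'
            ]
        },
        shim: {
            underscore: { exports: '_' },
            backbone: { deps: ['jquery', 'underscore'], exports: 'Backbone' }
        }
    });
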
Burson answered 29/6, 2013 at 23:9 Comment(5)
Thanks for the explanations. I've been wondering about that last point lately -- loading libraries from CDNs as opposed to concatenating and minifying them with the rest of your scripts. Can you link me to a really good explanation of why some consider it better to use a CDN? -- Suffruticose
#2180891 sitepoint.com/7-reasons-to-use-a-cdn johnchow.com/… . If this solved your issue, please consider accepting it. -- Burson
I believe I accepted your answer and awarded the bounty a few minutes ago. Let me know if that didn't happen, I guess. Thanks again. -- Suffruticose
I also found this question on Stack Overflow: #2180891 -- Suffruticose
@daveclark Cool! I didn't even realize there was a bounty, I just ran into the question, thought it was an interesting one, and couldn't find a duplicate :) I'm glad I helped. The Yahoo and Google guidelines for page speed are also a must-read. -- Burson

I created an example repository to demonstrate these 3 kinds of optimization.

It can help us get a better understanding of how to use r.js.

https://github.com/cloudchen/requirejs-bundle-examples

Rochette answered 9/9, 2013 at 9:1 Comment(0)

FYI, I prefer to use option 3, following the example in https://github.com/requirejs/example-multipage-shim

I am not sure whether it is the most efficient.

However, I find it convenient because:

  • You only need to configure require.config (for the various libraries) in one place
  • During r.js optimization, you then decide which modules to group as common (see the common.js sketch below)
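
For example, following the example-multipage-shim layout, the shared config lives in one ordinary module (the concrete paths below are placeholders); every page loads it before its page-specific module, and the r.js build can pick it up too via its mainConfigFile option:

    // js/common.js -- the single place where require.config is defined
    requirejs.config({
        baseUrl: 'js/lib',
        paths: {
            app: '../app'
        },
        shim: {
            backbone: {
                deps: ['jquery', 'underscore'],
                exports: 'Backbone'
            }
        }
    });
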
Cesar answered 23/6, 2013 at 11:57 Comment(0)

I prefer to use option 3, and I can tell you why:

  1. It's the most logical.
  2. It utilizes the most caching: all common functionality is loaded once, which means the least traffic and the fastest load times when browsing multiple pages. Load times across multiple pages matter, and while the traffic saved on your side might not be significant compared to other resources you're loading, your users will really appreciate the faster load times.

I have listed much better options for the same.

Iciness answered 17/8, 2014 at 10:24 Comment(0)

You can use a content delivery network (CDN) like MaxCDN to make sure your JS files are served quickly to everyone. I'd also suggest putting your script tags at the bottom of your HTML, just before the closing body tag, so the page can render before the scripts load. Hope that helps.
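
For instance, a generic sketch of that placement (the URLs are placeholders):

    <body>
        <!-- ...page content renders first... -->

        <!-- Scripts at the end of the body, served from the CDN -->
        <script src="https://cdn.example.com/js/common.js"></script>
        <script src="https://cdn.example.com/js/pages/home.js"></script>
    </body>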

Twenty answered 10/5, 2015 at 6:14 Comment(0)
