Which CSS selectors or rules can significantly affect front-end layout / rendering performance in the real world?

Is it worth worrying about CSS rendering performance, or should we ignore efficiency entirely and just focus on writing elegant, maintainable CSS?

This question is intended to be a useful resource for front-end developers on which parts of CSS can actually have a significant impact on device performance, and which devices / browsers or engines may be affected. This is not a question about how to write elegant or maintainable CSS, it's purely about performance (although hopefully what's written here can inform more general articles on best-practice).

Existing evidence

Google and Mozilla have written guidelines on writing efficient CSS and CSSLint's set of rules includes:

Avoid selectors that look like regular expressions .. don't use the complex equality operators to avoid performance penalties

but none of them provide any evidence (that I could find) of the impact these have.
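For context, the "selectors that look like regular expressions" are the substring-matching attribute selectors; a few illustrative examples (the class and attribute names are hypothetical):

[class*="grid-"] {
    /* class attribute contains the substring "grid-" anywhere */
}
input[name^="user_"] {
    /* name attribute begins with "user_" */
}
a[href$=".pdf"] {
    /* href attribute ends with ".pdf" */
}

Each of these forces a per-element string scan of the attribute value rather than a simple class or ID lookup, which is presumably the penalty the guidelines have in mind.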

A css-tricks.com article on efficient CSS argues (after outlining a load of efficiency best practices) that we should not .. sacrifice semantics or maintainability for efficient CSS these days.

A Perfection Kills blog post suggested that border-radius and box-shadow rendered orders of magnitude slower than simpler CSS rules. This was hugely significant in Opera's engine, but insignificant in WebKit. Further, a Smashing Magazine CSS benchmark found that rendering time for CSS3 display rules was insignificant, and significantly faster than rendering the equivalent effect using images.

Know Your Mobile tested various mobile browsers and found that they all rendered CSS3 equally fast, in a negligible 12ms, but it looks like they ran the tests on a PC, so we can't infer anything about how hand-held devices perform with CSS3 in general.

There are many articles on the internet on how to write efficient CSS. However, I have yet to find any comprehensive evidence that badly considered CSS actually has a significant impact on the rendering time or snappiness of a site.

Background

I offered a bounty for this question to try to use the community power of SO to create a useful, well-researched resource.

Alleviate answered 5/9, 2012 at 10:37 Comment(11)
One thing I can surely tell you: use IDs when IDs should be used, and classes when classes should be used. Performance difference is negligible, semantics isn't. IDs for elements which — by definition — appear only once; classes for those which can repeat throughout the page. Just consider the extreme case when you use a class for a fixed CSS position.Oceanus
@MichałGórny IDs should be used in markup where they are appropriate, but many people believe (myself included) that IDs should never be used in CSS selectors. Read this article for a (hopefully unbiased) elaboration: screwlewse.com/2010/07/dont-use-id-selectors-in-cssAlleviate
Well, that article agrees with me on when IDs can and should be used. And my extreme example of position: fixed is an example of when CSS simply shouldn't be reused. Not that I'm advocating doing something like that.Oceanus
Remember that most browsers already try to optimize selectors as best as they can. Take the well-known example of right-to-left matching on a per-element basis. Most selectors are only as slow as there are elements on your page. If you have a very simple page with just three children of body and nothing else, any selector you throw at it shouldn't cause a browser to glitch out or even freeze.Pronation
Also, does this question pertain only/mainly to selectors, or every aspect of CSS? CSS performance doesn't just lie in selectors, you know. Browsers have to worry about things like layout, positioning, computing and drawing gradients, alpha compositing, and all kinds of other stuff. And that's rendering. Selector matching is not rendering; it's just figuring out which elements to render in what ways.Pronation
@RobinWinslow Amusingly enough, the page you mentioned with the article about ID selectors styles elements by ID many times :)Caves
@Pronation I'm interested in all elements of styling that can significantly affect rendering performance. Selectors are easier to define best practices for, though, so those tips will probably have the most traction.Alleviate
I don't have the source for this information anymore, but one site I was reading a month or so ago while researching mobile devices claimed that certain CSS properties are more costly to render (i.e. eat more battery life). They mentioned 3 specific ones as being the worst offenders, but the only one that stuck out in my mind as likely to be used with this generation of browsers is box-shadow.Pippo
This is just a general benchmark tool for websites; I normally use it to optimize mine. PageSpeed (by Google): chrome.google.com/webstore/detail/… The browser reads every line break you leave in your CSS, so the more compact the file the better; there are loads of tools that can do it. Though I don't know which selectors perform better.Graceless
I like the first sentence of article #2 from Simon West's post below: "You probably didn’t notice it..." He's right: I didn't. I wanted there to be a silver bullet here; there isn't. I'd rather write less code, more semantic code, and save the millisecond-hunting for those with lots of free time. I don't believe it's selectors, or IDs vs classes; the only thing I've ever found to impact performance is file size and line breaks/count in CSS files (even those bloated with class selectors, which bloat the markup as well) <-- taking us back to semantics.Scully
Possible duplicate of Choosing efficient selectors based on computational complexityCudlip

The first thing that comes to mind here is: how clever is the rendering engine you're using?

That, generic as it sounds, matters a lot when questioning the efficiency of CSS rendering/selection. For instance, suppose the first rule in your CSS file is:

.class1 {
    /*make elements with "class1" look fancy*/
}

When a very basic engine sees that (and since this is the first rule), it goes and looks at every element in your DOM, checking each for the existence of class1. Better engines probably map class names to lists of DOM elements, using something like a hashtable for efficient lookup.

.class1.class2 {
    /*make elements with both "class1" and "class2" look extra fancy*/
}

Our example "basic engine" would go and revisit each element in DOM looking for both classes. A cleverer engine will compare n('class1') and n('class2') where n(str) is number of elements in DOM with the class str, and takes whichever is minimum; suppose that's class1, then passes on all elements with class1 looking for elements that have class2 as well.

In any case, modern engines are clever (way more clever than the discussed example above), and shiny new processors can do millions (tens of millions) of operations a second. It's quite unlikely that you have millions of elements in your DOM, so the worst-case performance for any selection (O(n)) won't be too bad anyhow.


Update:

To get some practical, illustrative proof, I decided to run some tests. First of all, to get an idea of how many DOM elements we can expect in real-world applications, let's look at how many elements some popular sites' pages have:

Facebook: ~1900 elements (tested on my personal main page).
Google: ~340 elements (tested on the main page, no search results).
Google: ~950 elements (tested on a search result page).
Yahoo!: ~1400 elements (tested on the main page).
Stackoverflow: ~680 elements (tested on a question page).
AOL: ~1060 elements (tested on the main page).
Wikipedia: ~6000 elements, 2420 of which aren't spans or anchors (Tested on the Wikipedia article about Glee).
Twitter: ~270 elements (tested on the main page).

Averaging those, we get roughly 1500 elements per page. Now it's time to do some testing. For each test, I generated 1500 divs (nested within some other divs for some tests), each with appropriate attributes depending on the test.


The tests

The styles and elements are all generated using PHP. I've uploaded the PHP files I used and created an index, so that others can test locally: little link.


Results:

Each test is performed 5 times on three browsers (the average time is reported): Firefox 15.0 (A), Chrome 19.0.1084.1 (B), Internet Explorer 8 (C):

                                                                        A      B      C
1500 class selectors (.classname)                                      35ms   100ms  35ms
1500 class selectors, more specific (div.classname)                    36ms   110ms  37ms
1500 class selectors, even more specific (div div.classname)           40ms   115ms  40ms
1500 id selectors (#id)                                                35ms   99ms   35ms
1500 id selectors, more specific (div#id)                              35ms   105ms  38ms
1500 id selectors, even more specific (div div#id)                     40ms   110ms  39ms
1500 class selectors, with attribute (.class[title="ttl"])             45ms   400ms  2000ms
1500 class selectors, more complex attribute (.class[title~="ttl"])    45ms   1050ms 2200ms

Similar experiments:

Apparently other people have carried out similar experiments; this one has some useful statistics as well: little link.


The bottom line:

Unless you care about saving a few milliseconds when rendering (1ms = 0.001s), don't bother giving this too much thought. On the other hand, it's good practice to avoid using complex selectors to select large subsets of elements, as that can make a noticeable difference (as we can see from the test results above). All common CSS selectors are reasonably fast in modern browsers.

Suppose you're building a chat page, and you want to style all the messages. You know that each message is in a div which has a title and is nested within a div with a class .chatpage. It is correct to use .chatpage div[title] to select the messages, but it's also bad practice efficiency-wise. It's simpler, more maintainable, and more efficient to give all the messages a class and select them using that class.
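A minimal sketch of the two approaches (the class names and styling are hypothetical):

/* works, but every div under .chatpage must be checked for a title attribute */
.chatpage div[title] {
    padding: 5px;
}

/* simpler and cheaper: one class check per element */
.message {
    padding: 5px;
}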


The fancy one-liner conclusion:

Anything within the limits of "yeah, this CSS makes sense" is okay.

Caves answered 5/9, 2012 at 10:50 Comment(12)
I'm afraid I was hoping for a more in-depth answer than this (I'll probably add a bounty when I can in a couple of days if I haven't got a great answer). Obviously it depends on the rendering engine, but I'm unsurprisingly particularly interested in how recent versions of Webkit (Chrome / Safari), Gecko (Firefox) and Trident (IE) (and to a lesser extent, Presto) perform. And as to your general point that rendering performance doesn't matter, are you sure that applies to complex CSS3 queries, like the regular-expression-like ones mentioned in my question?Alleviate
@RobinWinslow It's not that it doesn't matter; you just can't optimize it much by changing minor stuff (like 'avoiding IDs'). Regular expressions aren't as evil as you're connoting -- again, don't forget you're dealing with strings that are almost never more than 10 characters long. On the other hand, avoiding using more complex selectors when you can gives you: A) a cleaner CSS file. B) a boost in performance. If IDs did suck as much as some articles are claiming, the CSS specification wouldn't have included them.Caves
@Abody I really don't want to discuss the "should you use IDs" thing 'cos it's off-topic, but you can't surely be suggesting that the CSS spec was flawless? In response to A) yes it makes CSS cleaner (which is good), but once again I'm specifically asking about performance effects. I'd still welcome some concrete examples of people actually measuring rendering performance.Alleviate
@RobinWinslow Okay, I've updated the answer with actual test results.Caves
@Pronation I can do more tests if needed.Caves
@RobinWinslow I also added a link to the tests if you'd like to test for yourself.Caves
I find it surprising that both Firefox and IE outperform Chrome (fairly significantly) in pretty much all of those benchmarks. Firefox must also have a really good rendering engine to outperform Chrome and IE by such a significant amount in the last two tests.Damiendamietta
@SeanDunwoody I found it surprising as well. I have performed some JavaScript benchmarking a couple of weeks ago, and Chrome was way faster than both Firefox and IE (although Firefox was faster than IE). It's weird that Chrome has worse CSS-rendering performance. I haven't tested on any other WebKit browser, though, so I don't know if the issue is WebKit or Chrome-specific.Caves
Yeah, I know Chrome's JavaScript engine demolishes pretty much every other engine out there (with the possible exception of IonMonkey). I guess that's a LOT more important (and noticeable) on popular sites like Facebook, Google etc. that make very heavy use of JavaScript; hence why they may have neglected their CSS engine a little.Damiendamietta
@SeanDunwoody Yes. I guess the point here is that most common CSS selectors are reasonably fast enough in all browsers (100ms isn't too bad), as long as you don't do reluctant-fanciness like using complex selectors to select large sets of elements. What matters the most is that your CSS "makes sense". If you're building a chat page, and you want to style the messages, and all message divs have a title. One could do: .chatpage div[title], but it's surely better to just give all the messages a class, and then style them using .message. It's simpler, more maintainable, and more efficient.Caves
@Abody97: this is even more impressive after checking your profile. +1Cudlip
Having just stumbled upon this, I find those tests rather odd. Why do you think this timing method actually measures what you want to measure? Just because the script is in the <head> and just before the end of the document does not necessarily mean the CSS layouting processing happens between them. I guess the reason those numbers are a bit weird is that at least Firefox just executes the scripts independently of CSS layouting. This would explain why it gets almost constant results. Reliably measuring the time it takes until a 'visually complete' page is probably going to be hard.Cairns

Most answers here focus on selector performance as if it were the only thing that matters. I'll try to cover some spriting trivia (spoiler alert: sprites are not always a good idea), CSS used-value performance, and the rendering cost of certain properties.

Before I get to the answer, let me get an IMO out of the way: personally, I strongly disagree with the stated need for "evidence-based data". It simply makes a performance claim appear credible, while in reality the field of rendering engines is heterogeneous enough to make any such statistical conclusion inaccurate to measure and impractical to adopt or monitor.

As original findings quickly become outdated, I'd rather see front-end devs have an understanding of foundation principles and their relative value against maintainability/readability brownie points - after all, premature optimization is the root of all evil ;)


Let's start with selector performance:

Shallow, preferably one-level, specific selectors are processed faster. Explicit performance metrics are missing from the original answer, but the key point remains: at runtime an HTML document is parsed into a DOM tree containing N elements with an average depth D, which then has a total of S CSS rules applied. To lower the computational complexity O(N*D*S), you should:

  1. Have the right-most keys match as few elements as possible - selectors are matched right-to-left for individual rule eligibility, so if the right-most key does not match a particular element, there is no need to process the selector further and it is discarded.

    It is commonly accepted that the * selector should be avoided, but this point should be taken further. A "normal" CSS reset does, in fact, match most elements - when this SO page is profiled, the reset is responsible for about 1/3 of all selector-matching time - so you may prefer normalize.css (still, that only adds up to 3.5ms; the point against premature optimisation stands strong)

  2. Avoid descendant selectors as they require up to ~D elements to be iterated over. This mainly impacts mismatch confirmations - for instance a positive .container .content match may only require one step for elements in a parent-child relationship, but the DOM tree will need to be traversed all the way up to html before a negative match can be confirmed (see the sketch below this list).

  3. Minimize the number of DOM elements as their styles are applied individually (worth noting, this gets offset by browser logic such as reference caching and recycling styles from identical elements - for instance, when styling identical siblings)

  4. Remove unused rules since the browser ends up having to evaluate their applicability for every element rendered. Enough said - the fastest rule is the one that isn't there :)

These will result in quantifiable (but, depending on the page, not necessarily perceivable) improvements from a rendering-engine performance standpoint; however, there are always additional factors such as traffic overhead, DOM parsing, etc.
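A small sketch of points 1 and 2 (the selectors are illustrative, not taken from any profiled page):

/* expensive right-most key: "*" matches every element */
* {
    margin: 0;
}

/* expensive-ish: the right-most key "a" matches many elements, and each
   candidate then has its ancestors walked in search of #nav */
#nav ul li a {
    color: #06c;
}

/* cheap: the right-most (and only) key is a specific class, so
   non-matching elements are discarded after a single check */
.nav-link {
    color: #06c;
}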


Next, CSS3 properties performance:

CSS3 brought us (among other things) rounded corners, background gradients and drop-shadow variations - and with them, a truckload of issues. Think about it: by definition, a pre-rendered image performs better than a set of CSS3 rules that has to be rendered first. From the WebKit wiki:

Gradients, shadows, and other decorations in CSS should be used only when necessary (e.g. when the shape is dynamic based on the content) - otherwise, static images are always faster.

If that's not bad enough, gradients etc. may have to be recalculated on every repaint/reflow event (more details below). Keep this in mind until the majority of users can browse a CSS3-heavy page like this without noticeable lag.
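For illustration, the kind of rules in question (the class name and values are hypothetical); each must be rasterized by the engine, rather than decoded once from a static image:

.fancy-box {
    /* every one of these decorations is computed at paint time */
    border-radius: 8px;
    box-shadow: 0 2px 6px rgba(0, 0, 0, 0.4);
    background: linear-gradient(#fdfdfd, #d8d8d8);
}

The image-based equivalent trades all three rules for a plain background-image.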


Next, spriting performance:

Avoid tall and wide sprites, even if their traffic footprint is relatively small. It is commonly forgotten that a rendering engine cannot work with gif/jpg/png directly: at runtime, all graphical assets are held as uncompressed bitmaps. At least it's easy to calculate: this sprite's width times height times four bytes per pixel (RGBA) is 238*1073*4 ≅ 1MB. Use it on a few elements across different simultaneously open tabs, and it quickly adds up to a significant value.

A rather extreme case of it has been picked up on mozilla webdev, but this is not at all unexpected when questionable practices like diagonal sprites are used.

An alternative to consider is individual base64-encoded images embedded directly into CSS.
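Such an embedded image looks like the following; the base64 payload is truncated here, and the class name is hypothetical:

.icon-logo {
    /* the image ships inside the stylesheet: no extra request, and no
       oversized sprite bitmap kept decoded in memory */
    background-image: url("data:image/png;base64,iVBORw0KGgoAAAANSU...");
}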


Next, reflows and repaints:

It is a misconception that a reflow can only be triggered by JS DOM manipulation - in fact, applying any layout-affecting style triggers one, affecting the target element, its children, the elements following it, and so on. The only way to prevent unnecessary iterations is to try to avoid rendering dependencies. A straightforward example of this would be rendering tables:

Tables often require multiple passes before the layout is completely established because they are one of the rare cases where elements can affect the display of other elements that came before them on the DOM. Imagine a cell at the end of the table with very wide content that causes the column to be completely resized. This is why tables are not rendered progressively in all browsers.
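One common mitigation (my addition, not from the quoted source) is to opt out of the content-driven layout algorithm entirely, which lets the table be laid out in a single pass:

table.chat-log { /* hypothetical class */
    /* column widths come from the first row / explicit widths,
       not from scanning every cell's content */
    table-layout: fixed;
    width: 100%;
}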


I'll make edits if I recall something important that has been missed. Some links to finish with:

http://perfectionkills.com/profiling-css-for-fun-and-profit-optimization-notes/

http://jacwright.com/476/runtime-performance-with-css3-vs-images/

https://developers.google.com/speed/docs/best-practices/payload

https://trac.webkit.org/wiki/QtWebKitGraphics

https://blog.mozilla.org/webdev/2009/06/22/use-sprites-wisely/

http://dev.opera.com/articles/view/efficient-javascript/

Cudlip answered 15/9, 2012 at 8:24 Comment(3)
I certainly agree that the "field of rendering engines is heterogeneous", but that doesn't seem any reason not to have statistics. How is a front-end dev to decide the "relative value" of "foundation principles" against "maintainability/readability" without being informed by some statistics? Just because the field is diverse and always changing is no excuse to act without evidence.Alleviate
@RobinWinslow: you're misinterpreting my answer - I'm not supportive of seeking evidence-based data when the original processing logic is readily available for analysis instead. For instance, say you want to argue against deep, over-specified selectors - you could run hundreds of tests which are ultimately influenced by the browser and its version, system hardware, and test-case specifics... or you could read up on RTL processing and see that such selectors will increase processing overhead no matter how many optimizations a browser engine introduces.Cudlip
TL;DR: don't analyse the outcomes, analyse the model. Anyway I did warn you this was an IMO ;)Cudlip

While it's true that

computers were way slower 10 years ago.

you also have a much wider variety of devices capable of accessing your website these days. And while desktops/laptops have come on in leaps and bounds, the devices in the mid- and low-end smartphone market are, in many cases, not much more powerful than the desktops we had ten years ago.

But having said that, CSS selection speed is probably near the bottom of the list of things you need to worry about in terms of providing a good experience to as broad a device range as possible.

Expanding upon this, I was unable to find specific information about more modern browsers or mobile devices struggling with inefficient CSS selectors, but I was able to find the following:

  1. http://www.stevesouders.com/blog/2009/03/10/performance-impact-of-css-selectors/

    Quite dated now (IE8, Chrome 2), but it makes a decent attempt at establishing the efficiency of various selectors in some browsers, and also tries to quantify how the number of CSS rules impacts page rendering time.

  2. http://www.thebrightlines.com/2010/07/28/css-performance-who-cares/

    Again quite dated (IE8, Chrome 6), but goes to extremes with inefficient CSS selectors (* * * * * * * * * { background: #ff1; }) to establish performance degradation.

Mafaldamafeking answered 5/9, 2012 at 11:8 Comment(3)
+1 for mentioning the proliferation of devices, but while smartphones are less powerful, rendering engines have become waaaay more efficient. I'd particularly welcome concrete examples of rendering problems that smartphones struggle with.Alleviate
I wasn't able to find any examples online of mobile browsers struggling with rendering based on inefficient selectors, but I did manage to find a couple of slightly dated examples where people tried to actually put some numbers to the various inefficient CSS selectors. I've updated my answer accordingly, and hopefully you'll find it useful.Mafaldamafeking
Awesome, those are exactly the sort of resources I was looking for. It seems the main conclusion from those two articles is that even if you're really trying to craft inefficient queries, it's only going to make a negligible difference, which is exactly the sort of conclusion I was looking for. It would still be awesome if we could find any tests including mobile devices. I'm going to leave this question open for a while to see what other people can come up with, but this is definitely the top candidate answer.Alleviate

For such a large bounty I am willing to risk the Null answer: there are no official CSS selectors that cause any appreciable slow-downs in the rendering, and (in this day of fast computers and rapid browser iteration) any that are found are quickly solved by browser makers. Even in mobile browsers there is no problem, unless the unwary developer is willing to use non-standard jQuery selectors. These are marked as risky by the jQuery developers, and can indeed be problematic.

In this case the lack of evidence is evidence of the lack of problems. So, use semantic markup (especially OOCSS), and report any slow-downs that you find when using standard CSS selectors in obscure browsers.

People from the future: CSS performance problems in 2012 were already a thing of the past.

Microbalance answered 13/9, 2012 at 19:12 Comment(0)

Isn't CSS an irrelevant place to look for speed? It should be the last thing you look at when you look at performance. Write your CSS in whatever way suits you, compile it, and then put it in the head. This might be rough, but there are loads of other things to look for when you're looking into browser performance. If you work at a digital bureau, you won't get paid for that extra 1ms of load time.

As I commented, use PageSpeed for Chrome: it's a Google tool that analyzes a website on 27 parameters, and CSS is one of them.

My point is exactly this: wouldn't you rather have around 99% of web users able to open the website and see it correctly, even the people on IE7 and such, than shut out around 10% by using CSS3 (even if it turns out you can gain an extra 1-10ms of performance)?

Most people have at least a 1Mbit/512Kbit connection or better, and a heavy site takes around 3 seconds to load; yet you can maybe save 10ms on CSS?

And when it comes to mobile devices, you should make sites just for mobile: when a device has a screen narrower than a given width, serve a separate site, as sketched below.
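In CSS terms that split is usually expressed with a media query; the breakpoint and class name below are hypothetical:

@media (max-width: 480px) {
    /* hide or simplify heavyweight desktop-only components on small screens */
    .sidebar {
        display: none;
    }
}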

Please comment below; this is my perspective and my personal experience with web development.

Uncurl answered 11/9, 2012 at 11:19 Comment(4)
These performance practices are well known and accepted. This question is about rendering performance. Given that rendering concerns are less important than transfer concerns, I am trying to find out how much rendering performance matters and just how complex selectors or rendering rules have to get before it matters. Thanks for adding your voice on the side of "it doesn't matter at all", but other than that this answer is not actually contributing to the debate.Alleviate
It contributes in the sense that all devices are so fast at the rendering process that it doesn't make sense to really look into it, unless you are using pictures at 150dpi or higher, which is irrelevant because the web only displays at 72dpi. And I might add that if browsers can render 3D, 2D is far too fast to care about. But I hope you find some proof that it's significantly faster; I added this as a favorite for that reason.Graceless
Okay so your 150dpi comment is exactly the sort of thing I'm looking for - but I want evidence rather than just your assertion: evidence that 150dpi makes a difference and evidence that other rendering concerns aren't an issue. I personally believe that there must be some websites out there whose design is so complex that rendering the CSS is at least a little slow, at least on mobile devices.Alleviate
I see where you're going, but it's still only 72dpi on the web, and to render 150dpi you of course have to render twice as many pixels. If you add download speed to your rendering time you might have a case: if you make rounded corners with CSS3 versus CSS2 (images), the latter costs download time plus render time, compared to just rendering.Graceless
R
1

While not directly code-related, using <link> over @import to include your stylesheets provides much faster performance.

'Don’t use @import' via stevesouders.com

The article contains numerous speed-test examples between each type, as well as one type combined with another (e.g. a CSS file called via <link> that itself contains an @import of another CSS file).
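The pattern being warned against looks roughly like this (the file names are hypothetical):

/* main.css, itself included from the HTML via <link rel="stylesheet" href="main.css"> */
@import url("typography.css");
/* typography.css cannot begin downloading until main.css has been
   fetched and parsed, so the two requests happen in series instead
   of in parallel */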

Ransdell answered 14/9, 2012 at 13:38 Comment(3)
Off Topic I'm afraid, and also one of the more simplistic performance tweaks that I imagine most front-end devs already know.Alleviate
Perhaps, but one shouldn't assume. Covering the basics is never a bad idea.Ransdell
Except when it's off topic :pAlleviate
