I am going to play devil's advocate for a moment. I have always wondered why browser detection (as opposed to feature detection) is considered flat-out bad practice. If I test a certain version of a certain browser and confirm that a certain piece of functionality behaves in some predictable way, then it seems OK to special-case it. The reasoning is that this will stay foolproof in the future, because that particular browser version is never going to change. On the other hand, if I detect that a DOM element has a function X (both approaches are sketched after the list below), it does not necessarily mean that:
- This function works the same way in all browsers, and
- More crucially, it will keep working the same way in all future browsers.
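To make the contrast concrete, here is a minimal sketch of the two approaches. The user-agent regex, the version cut-off, and the variable names are my own invention for illustration, not a recommendation:

```javascript
// Browser detection: tie the workaround to a specific browser version whose
// behaviour has been verified and will never change (hypothetical check).
var isOldIE = /MSIE [67]\./.test(navigator.userAgent);

// Naive feature detection: this only proves the method exists today,
// not that it behaves identically in every browser that exposes it.
var hasAddEventListener = typeof document.addEventListener === "function";

if (isOldIE) {
  // special-cased path for that frozen browser version
} else if (hasAddEventListener) {
  // generic path, resting on the assumption of identical behaviour everywhere
}
```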
I just peeked into the jQuery source, and they do feature detection by inserting a carefully constructed snippet of HTML into the DOM and then checking it for certain features. It's a sensible and solid approach, but I would say it is a bit too heavy to replicate in a small piece of personal JavaScript (without jQuery). They also have the advantage of practically unlimited QA resources. On the other hand, what you often see people do is check for the existence of function X and then, based on that alone, assume the function will behave a certain way in every browser that has it.
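For what it's worth, a stripped-down sketch of that jQuery-style approach might look like the following. This is my own approximation of the technique, not jQuery's actual code, and the property names in `support` are made up:

```javascript
// Build a throwaway element, fill it with known markup, and inspect
// what the browser actually produced from it.
var div = document.createElement("div");
div.innerHTML = "  <link/><table></table><a href='/a'>a</a><input type='checkbox'/>";

var support = {
  // Is leading whitespace in innerHTML preserved as a text node?
  leadingWhitespace: div.firstChild.nodeType === 3,
  // Does getAttribute("href") return the literal attribute value?
  hrefNormalized: div.getElementsByTagName("a")[0].getAttribute("href") === "/a",
  // Does an unchecked checkbox default its value to "on"?
  checkOn: div.getElementsByTagName("input")[0].value === "on"
};
```

The point of testing behaviour on real markup, rather than just checking that a method exists, is that it verifies what the browser actually does rather than what it claims to offer.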
I’m not saying that feature detection is a bad thing (when used correctly), but I wonder why browser detection is usually dismissed out of hand even when it sounds logical. I wonder whether it is just another trendy thing to say.