Assumptive Development
As web developers, we want a way to ask a browser, "Can you do this?" There are varying degrees of certainty with which we can answer that question.
One of those ways is user agent (UA) detection. We ask the browser some information about itself and it tells us. Based on what we know about a browser, we can make certain assumptions. If a browser tells you it is Internet Explorer, chances are it supports the HTML, CSS and JavaScript that Internet Explorer supports. This detection could happen on the server side or the client side.
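As a minimal sketch of what client-side UA detection might look like (the substring checks here are illustrative assumptions, not a recommended production test):

```js
// A minimal, illustrative sketch of client-side UA detection.
// The substring checks are assumptions for demonstration, not a robust test.
var ua = navigator.userAgent;

if (ua.indexOf("MSIE") !== -1) {
  // Assume the HTML, CSS and JavaScript support that Internet Explorer offers
} else if (ua.indexOf("Firefox") !== -1) {
  // Assume Gecko-level support
} else {
  // Unknown browser: we can't safely assume much of anything
}
```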
Another way is feature detection. Feature detection uses JavaScript to test for a feature before using it. If I can access document.getElementById then almost assuredly I can use document.getElementById. Of course, to test for every feature before we use it would be extremely redundant. Generally, we'll test for a few known scenarios and with that, assume that other features are also available. For example, if I can access document.attachEvent, I'm almost definitely in IE and therefore can expect that all of the other event-related features of IE will work.
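As a quick sketch of that test-then-use pattern (the handleClick handler is a hypothetical placeholder):

```js
// Feature detection: test for the feature, then use it.
function handleClick() {
  // hypothetical handler
}

if (document.addEventListener) {
  // Standards event model
  document.addEventListener("click", handleClick, false);
} else if (document.attachEvent) {
  // Old IE event model; from here we assume its other event features work too
  document.attachEvent("onclick", handleClick);
}
```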
Circumvention
What happens when a bunch of web developers decide to use a particular approach to determine the capabilities of the browser? A new browser comes out and now has to mimic or work around that approach just to get past the detection mechanism.
With UA detection, browsers have only one recourse: change the UA string to claim to be another browser, and by extension claim to support that browser's features.
User agent strings are extremely limited, though, in the amount of information they can convey. They can't possibly describe the myriad of features a browser does or doesn't support.
Feature detection excels by being able to provide information at a micro level. Test the feature, use the feature. If a future browser comes out that implements the feature then the test should pass and the developer should (theoretically) not have any more work on their hands.
Sometimes a browser may sidestep feature detection with a partial implementation. For example, Firefox developed a limited implementation of document.all to get around feature detection. It only works in quirks mode and behaves differently than IE: if(document.all){} would fail but if(document.all.maincontent){} (assuming maincontent is an element on the page) would succeed.
At worst, a browser may test positive for the existence of a property, even when it doesn't really support it. Modernizr, for example, runs into the occasional false positive and has to use alternate methods that test other specific expectations before determining a more accurate true or false. Do a search for "false positive" in the source and you'll see a few cases where things aren't always as they seem.
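As an illustrative example (not Modernizr's actual code), a bare property check can report support that isn't really there, while a test that exercises the feature is more trustworthy:

```js
// A property check alone can be a false positive: the constructor may exist
// on machines where WebGL is actually unavailable (e.g. blacklisted drivers).
var looksSupported = !!window.WebGLRenderingContext;

// A stronger test exercises the feature instead of just checking for it.
function webglWorks() {
  try {
    var canvas = document.createElement("canvas");
    return !!(canvas.getContext("webgl") ||
              canvas.getContext("experimental-webgl"));
  } catch (e) {
    return false;
  }
}
```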
What does it mean?
We need to accept that, as web developers, we cannot test every permutation and, therefore, have to make assumptions somewhere.
While my argument in the debate between UA detection and feature detection leans towards feature detection, it shouldn't be our only recourse.
Alex Russell, for example, speaks of using UA detection as a first line of offence. Use it to determine capabilities among a known subset of browsers and then fall back to using feature detection for the unknown browsers. In doing so, you gain the performance advantage of avoiding client-side feature detection (and avoid downloading an additional resource, in the case of Modernizr).
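A rough sketch of that hybrid approach might look like the following; the version checks and the hasKnownCapabilities/runFeatureTests helpers are hypothetical placeholders for your own capability profiles and test suite.

```js
// Hybrid approach: trust the UA for browsers we know, feature-test the rest.
function detectCapabilities() {
  var ua = navigator.userAgent;

  // Known browsers: skip the runtime cost of client-side feature tests
  if (/Firefox\/3\.6/.test(ua) || /Chrome\/5\./.test(ua)) {
    return hasKnownCapabilities(ua); // hypothetical precomputed profile
  }

  // Unknown browsers: fall back to feature detection
  return runFeatureTests(); // hypothetical suite of feature tests
}
```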
Client-side feature detection also fails in environments where JavaScript isn't enabled, which could be 2% of your audience or more. Yes, I understand that 2% may not sound like much, but it's another set of users whose experience you should plan for.
Missing the point
Those wishing to debate user agent detection versus feature detection may be missing the bigger conundrum: At the end of the day, you still have to decide what to do with a user who fails that test.
What happens to users who do not have JavaScript enabled? What happens to users who do not support the feature you're testing for? What happens to users on browsers that you can't test for?
There are a few options as to how to handle a failed test:
- Deny the user
- Warn the user
- Limit functionality
- Hope for the best
On Yahoo! Mail, for example, we do a combination of the first two options. Some browsers we deny outright. Sorry, Internet Explorer 6. You're just out of luck. For other browsers, you'll get a warning page letting you know that you might run into a quirk or two. Yes, we use user agent detection to do this. Once you're in the application, however, we generally want you to be able to access everything. Feature and UA testing may be used to fork past differing browser implementations. For example, file uploads could use the Flash uploader or the basic POST upload.
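As a rough sketch of that kind of fork (not Yahoo! Mail's actual code; the uploader initializers are hypothetical and the Flash check is simplified):

```js
// Fork the upload implementation based on a capability check.
function hasFlash() {
  if (navigator.plugins && navigator.plugins["Shockwave Flash"]) {
    return true;
  }
  if (window.ActiveXObject) {
    try {
      return !!new ActiveXObject("ShockwaveFlash.ShockwaveFlash");
    } catch (e) {}
  }
  return false;
}

var uploader = hasFlash() ? initFlashUploader() : initBasicFormUpload();
```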
For most of us, though, we just hope for the best. When was the last time you tested your project on Firefox 2 or Safari 3?
Relying on UA detection alone isn't likely to be very resilient in the long run, nor is relying on feature detection likely to solve every problem in every situation for every user. Like much of web development, it's a series of choices we have to make along the way to craft the best experience for as many users as possible, even if that means making some assumptions along the way.
Conversation
One of the most important things to remember is that we as web developers are partly responsible for this mess. Coding a site to say "Sorry bud, you're out of luck!" to users of downlevel browsers based on UA or feature sniffing often encourages browser developers to hack around our detection scripts.
The more you block users based on detection (and without graceful degradation), the more this crap happens. Some is unavoidable, but it bears keeping in mind.
I haven't tested Firefox 2 and some others in a while, but most of my clients don't typically want to pay for such thorough (and time consuming) accuracy.
For projects of mine, new product launches and the like, I test everything I can get my hands on.
Nice post, Jonathan.
I dig most of the post. I would like to point out that Opera also supports `document.attachEvent` so when thinking of feature testing (an umbrella term for testing, detection, and weak inference) we are thinking less of which browser we are in and more of which feature is available. The more I use feature testing (FT) for bug/feature checks the less I worry about the specific browser (I usually add comments to remind me of the current browsers it happens to work in).
Also, when devs point to false positives with FT it's usually because of using a weak inference. I cover that a bit in the comments of my recent screencasts. Weak inferences require more assumptions, not as much as UA sniffing, but more than the other techniques I list.
I'm glad you mentioned fallbacks. Usually if you try a series of FT you fall back to a basic implementation in JS. For example, selector engines will try `document.querySelectorAll` and then fall back to manual DOM traversal.
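A small sketch of that fallback pattern (class selectors only, and note the fallback returns an array rather than a NodeList):

```js
function getByClass(className, root) {
  root = root || document;
  if (root.querySelectorAll) {
    return root.querySelectorAll("." + className);
  }
  // Fallback: walk every element and filter by class name
  var all = root.getElementsByTagName("*");
  var matches = [];
  for (var i = 0; i < all.length; i++) {
    if ((" " + all[i].className + " ").indexOf(" " + className + " ") !== -1) {
      matches.push(all[i]);
    }
  }
  return matches;
}
```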
There is always a level of assumed support. For example, I usually don't test for a working `document.getElementById` or a working `Array#push`. However, I'm not opposed to doing a series of tests/inferences for minimum requirements and warning the user if some key basic functionality is missing.
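Something like this, as a hedged sketch; the exact checks and the warning are illustrative:

```js
// A few cheap minimum-requirement checks up front.
var meetsMinimum = !!(document.getElementById &&
                      Array.prototype.push &&
                      Function.prototype.apply);

if (!meetsMinimum) {
  alert("Your browser is missing basic features this page relies on.");
}
```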
Very good point about not testing Firefox 2 or Safari 3. I'm guilty of this, but I guess there's a fair proportion of people still using FF2 (I'm thinking of my dear grandmother, whom I introduced to Firefox years ago and who has probably denied updates ever since).
Something to consider next time we do a thorough test of one of our front end layouts...