What browsers really need is most likely not what you think it is
Saturday, November 24th, 2012 at 4:30 pm

Disclaimer: While I work there, this post is on my personal blog and does not necessarily reflect the opinion of Mozilla. Feedback bickering about how Firefox fails to meet your needs will be blocked out by me by means of looking at cute pictures online (this one for example, which kills me every single time). Thank you.
“What should a browser do to really make you more effective and allow you to build great products?” is an interesting question to ask. I’ve asked it a few times in different environments and got a lot of different answers. It gets interesting when you dig deeper and ask people why browsers need what they asked for – most of the time they can’t say why.
New toys with older and better implementations
In many cases we as developers want new toys: we see a cool experimental feature in one browser and want it, right here, right now, regardless of its usefulness.
Many of these hot new features have been in Flash for years, often implemented in a more flexible way, so to a degree we are still playing catch-up. This is annoying, as we could have learned from what Flash gave the web and how it moved it ahead (yes, without Flash we would have no video, audio or gaming on the web, let’s face facts) and copied and enhanced instead of starting a parallel innovation stream.
A battlefield with not-so-hidden agendas
In addition to this urge for the new and shiny, some companies need to sell their browser and win developer mindshare to point them to their other products. This results in a browser battlefield that tries to impress with the cool and new rather than building a predictable and stable platform. Granted, this is mostly the impression you get from marketing materials rather than from talking to the browser engineers, but, for example, I don’t see any of the players making a real effort to get all HTML5 input types supported and style-able – a need we really have, as forms will not go away any time soon.
Regardless of the lack of support for a few of the basics in HTML5, when it comes to moving browsers forward I think the main thing we need is feature testing APIs in the browser. What do I mean by that and why do we need them? Well, here goes.
We need native feature testing
Web development has always come with a choice:
- Do you want to use a feature, demand that your users have the ability to consume it, and risk broken experiences, or
- Do you want to test before you use a certain feature and give it only to those who can consume it?
The former are the sites that come with disclaimers like “this web page uses feature $x and needs a modern browser like $y” and can look extremely silly a few years down the line. The latter is what we call progressive enhancement: giving everybody what they can deal with and not more. There is also an unhappy middle ground called graceful degradation, which means you build something with all the features plus a fallback solution and hopefully maintain both.
If you go for the progressive enhancement approach you either need to provide several solutions or you need to test. The several solutions approach, for example, would be to ensure you do not end up with a button with white text on a white background by providing a fallback background colour for browsers that don’t render gradients:
button {
  color: #fff;
  border: none;
  /* defining a background colour for all browsers */
  background: #0c0;
  /* defining a gradient for those who can */
  background: linear-gradient(0deg, #000, #0c0);
}
This is not much work and should be easy for everybody (that there are a lot of buttons out there with only -webkit-linear-gradient backgrounds is baffling to me). It gets harder, though, when you want to use more advanced features.
Testing mechanisms
What we want to do is test a feature before we apply it. The worst way of doing that is trying to detect the browser and assuming that it really is the one you are targeting and that it can do what it promises. Reading out the user agent string of the browser is the standard way of doing that, and let me just tell you now: that way madness lies. Forget trying to detect the browser – browsers will lie to you, and you lock yourself into an ever-changing game of hide and seek with new browsers coming out all the time.
The more robust way of testing is feature detection: you plainly and simply ask the user agent if it can do something before you apply it. This could be as simple as asking if navigator.geolocation is an object before trying to call its methods. This can result in a lot of code though, and there are browser quirks to consider, which is why we have to be very thankful for the people who build and maintain Modernizr to do all that work for us. Using Modernizr we have handles in the form of class names on the root element and a very handy JavaScript API to test for features; under the hood it checks thoroughly and forks around several browser quirks – pain we don’t need to inflict on ourselves.
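As a minimal sketch of what that looks like in practice (showMap and showStaticMap are hypothetical placeholders for your own code):

if ('geolocation' in navigator) {
  // ask for the position only when the API is really there
  navigator.geolocation.getCurrentPosition(function (position) {
    showMap(position.coords.latitude, position.coords.longitude);
  });
} else {
  // no geolocation - fall back to something everybody can use
  showStaticMap();
}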
To me the existence and success of Modernizr shows that there is a massive need for being able to test features in browsers before applying them and getting a definite answer back. This is where browsers don’t do too well yet.
Native testing mechanisms
As shown with the background example, our native testing in most cases just means applying things the browser might not understand and relying on it skipping the bits it doesn’t know about. This works in CSS, as CSS doesn’t throw errors and stop executing. JavaScript, on the other hand, does.
CSS is doing a lot in that respect at the moment. Media queries, for example, are a feature detection mechanism. I love that I can apply a style only when the browser window is a certain size or in a certain orientation. It is simple and it works. More of that.
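Media queries are even scriptable: window.matchMedia gives JavaScript the same conditional power. A small sketch (loadPortraitStyles is a hypothetical placeholder):

if (window.matchMedia &&
    window.matchMedia('(orientation: portrait)').matches) {
  // the window is currently taller than it is wide
  loadPortraitStyles();
}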
Well, funnily enough there is more already. The current Opera and Firefox Aurora support the CSS Conditional Rules Module Level 3, which means you can test for CSS features with an @supports block:
@supports (-webkit-transform: rotateX(0deg)) or
          (-moz-transform: rotateX(0deg)) or
          (-ms-transform: rotateX(0deg)) or
          (-o-transform: rotateX(0deg)) or
          (transform: rotateX(0deg)) {
  /* this will only apply to browsers with 3D transforms */
}
This also comes with a JavaScript API to test for features, whose exact syntax is still under discussion. You could end up with window.supportsCSS('display:flex') or CSS.supports('display:flex') respectively. Still, the power of this idea is staggering: it could mean Modernizr would not be needed in the future (or could be a lot smaller, as we could test for @supports and lazy load the rest of Modernizr only on demand).
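A rough sketch of how you could use it defensively while the syntax is in flux – both function names come straight from the proposals above, and the flexbox class name is an arbitrary choice:

// pick whichever form of the proposed API the browser ships, if any
function cssSupports(declaration) {
  if (window.CSS && window.CSS.supports) {
    return window.CSS.supports(declaration);
  }
  if (window.supportsCSS) {
    return window.supportsCSS(declaration);
  }
  return false; // no native support - fall back to Modernizr et al
}

if (cssSupports('display:flex')) {
  document.documentElement.className += ' flexbox';
}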
If you want to learn more about @supports check Chris Mills’ Native CSS feature detection via the @supports rule.
Pointers in the right direction
All of this makes me hopeful that the next generation of browsers will give us developers what we really need: a way to test that what we do can be applied before we do it. Right now we spend far too much of our development time adding polyfills and libraries to test for – and, if needed, simulate – functionality we should be able to just use. A great example is touch and mouse support in browsers, which right now is no fun to handle, which is why we have to use libraries like pointer.js.
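This is the kind of forking pointer.js abstracts away for us – a simplified sketch, assuming only the standard touch and mouse events of the day (and note that checking for ontouchstart tells you nothing about whether a mouse is also present, which is exactly the problem):

var element = document.body;
var hasTouch = 'ontouchstart' in window;

// listen to the "same" interaction via two different event models
element.addEventListener(hasTouch ? 'touchstart' : 'mousedown',
  function (event) {
    var point = hasTouch ? event.touches[0] : event;
    console.log(point.clientX, point.clientY);
  }, false);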
Fact is that we will never have one browser and one environment for the web. That would neuter it and violate its main principles. We can dupe ourselves into believing that we can control and demand what our users use to look at our products, but this is a fool’s gambit. The hundreds of web sites and enterprise systems out there that in 2012 still demand Internet Explorer 6 for the best experience should teach us that this is a mistake not to repeat.
The much more likely future is that we will have a much more diverse environment to support. The web design community has been banging this drum for quite a while under the Future Friendly label. Seeing the number of smartphones, tablets, web-enabled TV sets and game consoles out there, it is not hard to grasp that testing and being flexible in our approach to web development is indeed the future.
Native testing for all the things!
What’s happening in CSS is encouraging; we need to move that idea further ahead. Some other parts of browsers, less so. For example, when we got native video we realised early on that it would be great to have a feature test for which video formats are supported. I agree, this is very much needed, but getting an API that returns “”, “maybe” or “probably” is a joke and not a helpful addition. On the other hand, the Page Visibility API and its under-the-hood use in requestAnimationFrame is a wonderful addition and exactly how it should be: I can ask the browser if the user sees what I do, and if she doesn’t, it stalls until it needs to animate again. The WebAPI work of course is also very encouraging – we just need to get these APIs supported cross-browser.
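In case that sounds like an exaggeration, this is what the format test really looks like – canPlayType genuinely answers with “”, “maybe” or “probably”:

var video = document.createElement('video');
if (video.canPlayType) {
  // each of these logs "", "maybe" or "probably" - nothing firmer
  console.log(video.canPlayType('video/webm; codecs="vp8, vorbis"'));
  console.log(video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"'));
  console.log(video.canPlayType('video/ogg; codecs="theora, vorbis"'));
}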
Retina displays and their need for large (in dimensions and file size) images are the big new issue. Should you send large images and risk people on a mobile device with a small screen burning through their data allowance, or should you risk a blurry view and load larger files later on demand? When on demand, though? When the display really can show larger images, or when the connection is fast enough to load them?
Right now there is no way of detecting the speed of a connection and whether it is OK to load lots of data. PPK is doing some research into what that could look like, and it is not easy. But we need this, I am not kidding here. Simply assuming people can load lots of data is not delivering a good service.
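There are early, experimental drafts of a Network Information API – at the time of writing only prefixed implementations exist, and the attribute names (bandwidth in megabytes per second, metered) come from the current draft and may well change. A sketch, with loadHighResolutionImages as a hypothetical helper:

var connection = navigator.connection ||
                 navigator.mozConnection ||
                 navigator.webkitConnection;

if (connection && !connection.metered && connection.bandwidth > 1) {
  // fast, unmetered connection - probably safe to send the big images
  loadHighResolutionImages();
}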
Quick example: I got this a lot on my blog when I embedded HTML5 videos. I thought it was a good dogfooding exercise, but got feedback that I should switch back to YouTube embeds, as their Flash player delivers lower quality streaming versions of the video to people on bad connections. HTML5 video doesn’t do connection negotiation yet (which is a dirty hack of chunking a video into 4k bits and sending as many as possible at a time). So in order to deliver a good service I needed to go back to “closed” technology, as I had no means of testing if my readers could consume what I gave them.
We push browsers to their limits these days and we don’t get much feedback as to what is possible. Say you do a massive CSS animation with complex transitions, drop shadows and transparency. How can you be sure this will be smooth and not an annoying, flickering slide show of an experience? Imagine a browser telling you the FPS it runs the animation at; imagine – even better – that it had an internal threshold to stop animations and automatically jump to their end state when the browser is too slow (something YUI has had for quite a while now).
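Today we have to gather that feedback ourselves. A rough sketch of measuring frames per second with requestAnimationFrame (still prefixed in several browsers at the time of writing; the 30 FPS threshold and the low-fps class name are arbitrary choices):

function requestFrame(callback) {
  var raf = window.requestAnimationFrame ||
            window.mozRequestAnimationFrame ||
            window.webkitRequestAnimationFrame;
  if (raf) {
    raf.call(window, callback); // .call keeps the browser happy
  } else {
    setTimeout(callback, 1000 / 60); // crude fallback at ~60fps
  }
}

var frames = 0;
var start = Date.now();

function measure() {
  frames++;
  var elapsed = Date.now() - start;
  if (elapsed >= 1000) {
    // below 30 frames a second: flag it so we can tone animations down
    if (frames * 1000 / elapsed < 30) {
      document.documentElement.className += ' low-fps';
    }
    frames = 0;
    start = Date.now();
  }
  requestFrame(measure);
}
requestFrame(measure);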
There are many things we need to be able to know in the near future, and I wish browsers would add a few of those testing features:
- What is the connection speed/reliability?
- How busy is the machine? Does it make sense to start this animation or will it be laggy anyways?
- What is the battery status? Should my app offer a leaner experience to stay alive longer? (See the sketch after this list.)
- Is the audio/video channel of the machine busy already? Why play a sound when the user is listening to music anyways?
- Are there already fonts available or do I need to load a web font?
- Can I store content locally and what is the quota already used up?
- Is this a touch device with gestures, a Kinect, a mouse-enabled device, or what?
- What codecs are installed, what media can be played?
- What is hardware accelerated and what isn’t?
- Is the browser talking to an assistive device layer and what assistive technology is in use?
- What is going on in the browser cache?
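Some of these questions already have experimental answers. The battery one, for example, is covered by the Battery Status API, which at the time of writing ships prefixed in Firefox – a sketch, assuming the draft’s charging and level attributes (the save-power class name is an arbitrary choice):

var battery = navigator.battery ||
              navigator.mozBattery ||
              navigator.webkitBattery;

if (battery && !battery.charging && battery.level < 0.2) {
  // below 20% and not plugged in - switch off the expensive extras
  document.documentElement.className += ' save-power';
}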
A lot can be added to this list, and I really hope that browser vendors understand that we can only work effectively if we have the right tools. Right now I feel we need to resort to libraries, polyfills and shims for far too many things. That might be an alternative future, but I for one would love the browser to do that work for me so I can concentrate on building apps that run in it.