Current advancements in ECMAScript are a great opportunity, but also a challenge for the web. Whilst adding new, important features, we also run the risk of breaking backwards compatibility.
These are my notes for a talk I gave at the MunichJS meetup last week. You can see the slides on Slideshare and watch a screencast on YouTube. There will also be a recording of the talk available once the organisers are done with post-production. Update: the video of the talk is now live on YouTube.
JavaScript – the leatherman of the web
[Image: accurate visualisation of the versatility of JavaScript]
JavaScript is an excellent tool, made for the web. It is incredibly flexible, light-weight, has a low learning threshold and a simple implementation mechanism. You add a SCRIPT element to an HTML document, include some JS directly or link to a JS file and you are off to the races.
JavaScript needs no compilation step and is independent of IDE or development environment. You can write it in any text editor, be it Notepad, VI, Sublime Text, Atom, Brackets, or even using complex IDEs like Eclipse, Visual Studio or whatever else you use to put text into a file.
JavaScript – the enabler of new developers
JavaScript doesn’t force you to write organised code. From a syntax point of view and when it comes to type safety and memory allocation it is an utter mess. This made JavaScript the success it is now. It is a language used in client environments like browsers and apps. For example, you can script Illustrator and Photoshop with JavaScript and you can now also automate OS X with it. Using Node.js or io.js you can use JavaScript server-side and write APIs and bespoke servers. You can even run JS directly on hardware.
The forgiveness of JS is what made it the fast-growing success it is. It allows people to write quick and dirty things and get a great feeling of accomplishment. It drives a fast-release economy of products. PHP did the same thing server-side when it came out. It is a templating language that grew into a programming language because people used it that way, and it was easier to implement than Perl or Java at that time.
JavaScript broke with conventions and challenged existing best practices. It didn’t follow an object-oriented approach, and its prototypal nature and scope trickery can make it look like a terribly designed hack to people who come from an OO world. It can also make it a very confusing construct of callbacks and anonymous functions to people who come to it from CSS or the design world.
But here’s the thing: every one of these people is cordially invited to write JavaScript – for better or worse.
JavaScript is here to stay
The big success of JavaScript amongst developers is that it was available in every browser since we moved on from Lynx. It is out there and people use it and – in many cases – rely on it. This is dangerous, and I will come back to this later, but it is a fact we have to live with.
As it is with everything that is distributed on the web once, there is no way to get rid of it again. We also cannot dictate that our users switch to a different browser that supports another language or runtime we prefer. The fundamental truth of the web is that the user controls the experience. That’s what makes the web work: you write your code for the Silicon Valley dweller on an 8-core state-of-the-art mobile device with an evergreen and very capable browser on a fast wireless connection and plenty of money to spend. The same code, however, should work for the person who saved up their money to have half an hour in an internet cafe in an emerging country on a Windows XP machine with an old Firefox connected over a very slow and flaky connection. Or the person whose physical condition makes them unable to see, speak, hear or use a mouse.
Our job is not to tell that person to keep up with the times and upgrade their hardware. Our job is to use our smarts to write intelligent solutions: solutions that test which of their parts can execute and only give those to that person. Web technologies are designed to be flexible and adaptive, and if we don’t understand that, we shouldn’t pretend that we are web developers.
The web is a distributed system of many different consumers. This makes it a very hostile development environment, as you need to prepare for a lot of breakage and unknowns. It also makes it the platform that reaches many more people than any – more defined and closed – environment could ever reach. It is also the one that allows the next consumers to get to us. Its hardware independence means people don’t have to wait for availability of devices. All they need is something that speaks HTTP.
New challenges for JavaScript
This is all fine and well, but we have reached a point in the evolution of the web where we use JavaScript to such an extent that we need to start organising it better. It is possible to hack together large applications and even server-side solutions in JavaScript, but in order to control and maintain them we need to consider writing cleaner JavaScript and be more methodical in our approach. We could invent new ways of using it. There is no shortage of that happening, seeing that new JavaScript frameworks and libraries are published almost weekly. Or we could try to tweak the language itself to play more by rules that have proven themselves over decades in other languages.
In other words, we need JavaScript to be less forgiving. This means we will lose some of the first-time users as stricter rules are less fun to follow. It also means though that people coming from other, higher and more defined languages can start using it without having to re-educate themselves. Seeing that there is a need for more JavaScript developers than the job market can deliver, this doesn’t sound like a bad idea.
JavaScript – the confused layer of the web
Whilst JS is a great solution to making the web respond more immediately to our users, it is also very different to the other players like markup and style sheets. Both of these are built to be forgiving without stopping execution when encountering an error.
A browser that encounters an unknown element shrugs, doesn’t do anything to it and moves on in the DOM to the next element it does understand and knows what to do with. The HTML5 parser encountering an unclosed or wrongly nested element will fix these issues under the hood and move on, turning the DOM into an object collection and a visual display.
A CSS parser encountering a line with a syntax error or a selector it doesn’t understand skips that instruction and moves on to the next line. This is why we can use browser-prefixed selectors like -webkit-gradient without having to test if the browser really is WebKit.
JavaScript isn’t that forgiving. When a script encounters a syntax error, or you try to access a method, object or property that doesn’t exist, it stops executing and throws an error. This makes sense seeing that JavaScript is much more powerful than the others and can even replace them. You are perfectly able to create a web product with a single empty BODY element and let JavaScript do the rest.
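For example, a single typo is enough to stop the whole script (the method name below is deliberately wrong):

var nav = document.getElementByID('nav'); // TypeError: getElementByID is not a function
console.log('this line never runs');      // execution stopped at the error above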
JavaScript – playing it safe by enhancing progressively
This makes JavaScript a less reliable technology than the other two. We punish our end users for mistakes made by us, the developers. Ironically, this is exactly the reason why we shunned XHTML and defined HTML5 as its successor.
Many things can go wrong when you rely on JavaScript, and end users deliberately turning it off is a ridiculously small part of that. That’s why it is good practice not to rely on JavaScript, but instead to test for it and enhance a markup and page-reload based solution when and if the browser was able to load and execute our script. This is called progressive enhancement and it has been around for a long time. We even use it in the physical world.
When you build a house where the only way to get to the higher floors is a lift, the house is broken when the lift stops working. If you also have stairs to get up there, the house still functions. Of course, people need to put in more effort to get up and it is not as convenient. But it is possible. We even have moving stairs called escalators that give us convenience and a fallback option. A broken-down escalator is a set of stairs.
Our code can work the same. We build stairs and we move them when the browser can execute our JavaScript and we didn’t make any mistakes. We can even add a lift later if we wanted to, but once we built the stairs, we don’t need to worry about them any more. Those will work – even when everything else fails.
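Here is a minimal sketch of such JavaScript stairs (the IDs are made up for illustration): the link points to a real page and works on its own, and the script only intercepts it once the browser has proven it can run our code.

// In the HTML: <a id="more-trigger" href="more.html">Show more</a>
var trigger = document.getElementById('more-trigger');
var extra = document.getElementById('more-content');
if (trigger && extra && 'addEventListener' in document) {
  extra.hidden = true; // the script, not the markup, hides the extra content
  trigger.addEventListener('click', function (event) {
    event.preventDefault(); // only intercepted when our code really runs
    extra.hidden = false;
  }, false);
}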
JavaScript – setting a sensible baseline
The simplest way to ensure our scripts work is to test for capabilities of the environment. We can achieve that with a very simple IF statement. By testing for properties and objects only newer browsers provide, we can block out those we don’t want to support any longer. As we created an HTML/server solution to support those, this is totally acceptable and a very good idea.
There is no point in punishing ourselves as developers by having to test in browsers used by a very small percentage of our users, browsers that aren’t even available on our current operating systems any longer. By not giving these browsers any JavaScript we have them covered. We don’t bother them with functionality that the hardware they run on is most likely not capable of supporting anyway.
The developers at the BBC call this “cutting the mustard” and have published a few articles on it. The current test they use to not bother old browsers is this:
if ('querySelector' in document &&
'localStorage' in window &&
'addEventListener' in window) {
// Capable browser.
// Let's add JavaScript functionality
}
Recently, Jake Archibald of Google found an even shorter version to use:
if ('visibilityState' in document) {
// Capable browser.
// Let's add JavaScript functionality
}
This prevents JavaScript from running in versions of Internet Explorer older than 10 and in older WebKit-based Android browsers. It is also extensible to other upcoming technologies in browsers and simple to tweak to your needs:
if ('visibilityState' in document) {
// Capable browser.
// Let's load JavaScript
if ('serviceWorker' in navigator) {
// Let's add offline support
navigator.serviceWorker.register('sw.js', {
scope: './'
});
}
}
This, however, fails to work when we start changing the language itself.
Strict mode – giving JavaScript new powers
In order to make JavaScript safer and cleaner, we needed its parser to be less forgiving. To ensure we didn’t break the web by flagging up all the mistakes developers made in the past, we needed a way to opt in to these stricter parsing rules.
A rather ingenious way of doing that was to add the “use strict” parser instruction. This means that we precede our scripts with a simple string followed by a semicolon. For example, the following JavaScript doesn’t cause an error in a browser:
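x = 3;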
The lenient parser doesn’t care that the variable x was never declared; it just sees a new x and defines it. If you use strict mode, the browser doesn’t keep as calm about this:
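'use strict';
x = 3;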
In Firefox’s console you’ll get a “ReferenceError: assignment to undeclared variable x”.
This opt-in works to advance JavaScript to a more defined and less memory-consuming language. In a recent presentation, Andreas Rossberg of the V8 team at Google proposed using this approach to advance JavaScript to safer and cleaner versions called SaneScript and subsequently SoundScript. All of these are just proposals, and – after legitimate complaints from the mental health community – the name is now changing to StrongScript. Originally the idea was to opt in to this new parser using a string called ‘use sanity’, which is cute, but also arrogant and insulting to people suffering from cognitive problems. As you can see, advancing JS isn’t easy.
ECMAScript – changing the syntax
Opting in to a new parser with a string, however, doesn’t work when we change the syntax of the language. And this is what we are doing right now with ECMAScript 6, which is touted as the future of JavaScript and covered in many a post and conference talk. For a history lesson on all of this, check out Florian Scholz’s talk at jFokus called “What’s next for JavaScript – ES6 and beyond”.
ECMAScript 6 has a lot of excellent new features. You can see all of them in the detailed documentation on MDN, and this post also has a good overview. It brings classes to JavaScript, sanitises scope issues, allows for template strings that span several lines and have variable replacement, adds promises and does away with the need for a lot of anonymous functions, to name just a few.
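As a taster, here is a made-up snippet (not from the MDN documentation) combining a few of them:

class Developer {
  constructor(name) {
    this.name = name;
  }
}

// an arrow function instead of an anonymous function expression;
// the template string spans two lines and replaces ${dev.name}
const greet = (dev) => `Hello,
${dev.name}!`;

console.log(greet(new Developer('Ada'))); // logs "Hello," and "Ada!" on two lines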
It does, however, change the syntax of JavaScript, and by including it in a document or putting it inside a script element in a browser that doesn’t support it, all you do is create a JavaScript error.
There is no progressive enhancement way around this issue, and an opt-in string doesn’t do the job either. In essence, we break the backwards compatibility of scripting on the web. This wouldn’t be a big issue if browsers supported ES6, but we’re not quite there yet.
ES6 support and what to do about it
The current support table of ECMAScript 6 in browsers, parsers and compilers doesn’t look too encouraging. A lot is red, and in many cases it is unknown if the makers of the products we rely on to run JavaScript will take the plunge.
In the case of browsers, the ECMAScript test suite to run your JavaScript engine against is publicly available on GitHub. That also means you can run your current browser against it and see how it fares.
If you want to help with the adoption of ECMAScript in browsers, please contribute to this test suite. It is the one place all of them test against, and the better the tests we supply, the more reliable our browsers will become.
Ways to use the upcoming ECMAScript right now
The very nature of the changes to ECMAScript makes it more or less impossible to use it across browsers and other JavaScript-consuming tools. As a lot of the changes to the language are syntax errors in today’s JavaScript, and the parser is not lenient about them, advancing the language means shipping code that legacy environments can only see as erroneous.
If we consider the use cases of ECMAScript, this is not that much of an issue. Many of the problems solved by the new features are either enterprise problems that only pay high dividends when you build huge projects or features needed by upcoming functionality of browsers (like, for example, promises).
The changes mostly mean that JS gets real OO features, becomes more memory-optimised, and turns into a more enticing compile target for developers coming from other languages. In other words, it is targeted at an audience that is not likely to start writing code from scratch in a text editor, but one already coming from a build environment or IDE.
That way we can convert the code to JavaScript in a build process or on the fly. This is nothing shocking these days – after all, we do the same when we convert Sass to CSS or Jade to HTML.
Quite some time ago, new languages like TypeScript were introduced that give us the functionality of ECMAScript 6 now. Another tool to use is Babel.js, which even has a live editor that lets you see what your ES6 code gets converted to in order to run in legacy environments.
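As a rough illustration (the exact output depends on the transpiler’s version and settings), here is a one-line ES6 arrow function and roughly what a tool like Babel turns it into:

// ES6 source
const double = (x) => x * 2;

// roughly what gets generated for legacy environments
"use strict";
var double = function double(x) {
  return x * 2;
};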
Return of the type attribute?
Another way to get around the issue of browsers not supporting ECMAScript and choking on the new syntax could be to use a type attribute. Every time you add a type value to a script element the browser doesn’t understand, it skips the element and doesn’t bother the script engine with its content. In the past we used this to create HTML templates, and Microsoft had its own JS derivative called JScript. That one gave you much more power to use Windows internals than JavaScript did.
One way to ensure that all of us could use the ECMAScript of now and tomorrow safely would be to get browsers to support a type of ‘ES’ or something similar. The question is whether that is really worth it, given the problems ECMAScript is trying to solve.
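A sketch of what that could look like (the ‘text/es6’ type value here is hypothetical – no browser treats it specially today):

<script type="text/es6">
  // browsers that don't recognise this type skip the element entirely;
  // a browser taught to understand it could safely parse the ES6 inside
  const greet = (name) => `Hello, ${name}!`;
</script>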
We’ve moved on in the web development world from embedding scripts and view-source to development toolchains and debugging tools in browsers. If we stick with those, switching from JavaScript to ES6 is much less of an issue than trying to get browsers to either parse or ignore syntactically wrong JavaScript.
Update: Axel Rauschmayer proposes something similar for ECMAScript modules: a MODULE element that contains a SCRIPT element with a type of ‘module’ as a fallback for older browsers.
It doesn’t get boring
In any case, this is a good time to chime in when there are discussions about ECMAScript. We need to ensure that we are using new features when they make sense, not because they sound great. The power of the web is that everybody is invited to write code for it. There is no 100% right or wrong.