Christian Heilmann

Web Truths: JavaScript can’t be trusted

Tuesday, September 26th, 2017 at 12:34 pm

This is part of the web truths series of posts: a series where we look at true-sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of JavaScript and how much we should rely on it.

Good vs. eval()

JavaScript is the love/hate topic of the “modern web”. People who have been around for a long time have learned the hard way not to rely on it blindly. People who have just started don’t even know how to turn it off or why that could be a problem.

This gives us an endless opportunity to rant in one direction or the other. The old guard points out that the new guard builds fragile products that demand too much of the users. The new guard points out that a web without JavaScript is neither fun for users nor easy to maintain. A lot of the time, this argument is about developer convenience trumping the sturdiness of the web. And we keep going in circles, viewing it from both ends of the spectrum.

When we got JavaScript, it ran in the browser and made a lot of things possible that HTML and CSS alone could not do yet. We could react to things our users were doing without reloading the page. We could read out and react to the environment the script was running in. JavaScript was the interactive glue of the web. We could make less able browsers do things others could. We could create interaction models that HTML forms didn’t provide. And we could create a consistent experience across browsers and platforms.
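
To make that glue tangible, here is a minimal sketch in the spirit of those early scripts; the button ID is made up for the example:

    // Hypothetical markup: assumes a <button id="more"> exists on the page.
    const button = document.querySelector('#more');
    button.addEventListener('click', () => {
      // react to the user without reloading the page
      button.textContent = 'Loading…';
      // read out the environment before relying on a feature
      if ('geolocation' in navigator) {
        navigator.geolocation.getCurrentPosition((position) => {
          console.log('Latitude:', position.coords.latitude);
        });
      }
    });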

And this is where the main issue came up: for many, the web isn’t meant to look and work the same everywhere. Instead, it should give a working experience to all users and get fancier the more capable the user agent is. By relying on JavaScript in our solutions, we put up a barrier for the sake of control. This was especially bad when we didn’t deliver any functionality at all once JavaScript failed.

And JavaScript can fail in many ways. From non-supporting environments to coding errors and connectivity issues – anything can break.

JavaScript isn’t like CSS or HTML. Both of these building blocks of the web are fault-tolerant. This means that when you write invalid HTML, the browser tries to fix it. If you use bleeding-edge CSS in an old browser, it ignores it. Not so with JavaScript. A syntax error or access to an unknown object causes the interpreter to give up and say no. Even the most capable environments don’t support JavaScript until the first script has loaded and run without a problem.
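
A minimal illustration of the difference, runnable in any browser console:

    // CSS: an unknown declaration like `some-future-property: 1` is skipped.
    // HTML: the parser repairs an unclosed tag for you.
    // JavaScript: one unexpected value halts the whole script.
    const data = undefined;
    console.log('this line runs');
    console.log(data.items.length); // TypeError: execution stops here
    console.log('this line never runs');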

Stuart Langridge explains what can go wrong before your script does what you want it to do on his Everyone has JavaScript, right? site.

The main power of JavaScript is that it executes on the fly and in the browser. It doesn’t need any compilation and it doesn’t need any fancy development environment. That was a huge part of its success. Compared to other languages of similar power, it is much more approachable. And if you’re not used to CSS syntax, it seems easy to fix a problem with JavaScript that looks complex in CSS. JavaScript can do everything: it can load extra information, it can create HTML and it can change styles and images. It is the jack-of-all-trades of the web.
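
As a sketch of that jack-of-all-trades quality – the endpoint and element ID here are hypothetical – one small script can load data, create HTML and change styles:

    // Hypothetical endpoint and element, purely for illustration.
    fetch('/api/articles.json')
      .then((response) => response.json())
      .then((articles) => {
        const list = document.querySelector('#articles');
        // create HTML on the fly
        list.innerHTML = articles
          .map((article) => '<li>' + article.title + '</li>')
          .join('');
        // and change styles, too
        list.style.display = 'block';
      })
      .catch(() => {
        // if anything fails, the static HTML underneath should still work
      });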

When something is easy to apply, there is always a danger that people over-use it. It is tempting to see your well-connected development device as the world, and to expect your users to have the same speed and computing power. In this scenario, loading a few megabytes of JavaScript is not a high price to pay for easy maintenance. When you’re on a metered, slow or unreliable connection or on a low-end device, this convenience can soon turn into frustration. This is an even bigger problem as it is hard – if not impossible – to detect these conditions.
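
The few signals browsers do expose are non-standard or unevenly supported, which is exactly the problem. A sketch of what partial detection looks like:

    // Network Information API: non-standard, missing in many browsers.
    const connection = navigator.connection;
    if (connection && (connection.saveData || connection.effectiveType === '2g')) {
      // the user asked for less data or is on a slow network:
      // skip the heavy bundle and serve the lightweight experience
    }
    // Device Memory API: also limited support, reports approximate GiB.
    if (navigator.deviceMemory && navigator.deviceMemory < 1) {
      // likely a low-end device
    }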

So, yes, JavaScript is a fair-weather friend and can break in many ways. You may also block out a lot of users because you crave more control over things you aren’t meant to control.

There is a flip-side to this truth though. JavaScript has evolved from a scripting language in the browser to a development environment in its own right. The rise of Node and other server-side and embedded systems put JavaScript on the map as a key skill of our market.

JavaScript isn’t only a client-side concern; it is a much bigger set of offerings these days. I talked about this at the beginning of the year at ScriptConf in Austria.

Universal, isomorphic JavaScript – or whatever other buzzword we come up with – is the answer to the lack of fault tolerance of the language. We can run the JavaScript in a space we control, like the server or a build process, and render out plain HTML. We can use client-side JavaScript in a fair-weather situation. If that fails, we can rely on the same JavaScript-based API and routing mechanism on the server to still give the user the content they came for.
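
A minimal sketch of the idea, using only Node’s built-in http module: the server renders complete HTML, and client-side JavaScript only enhances it if it arrives and runs (serving enhance.js itself is left out of the sketch):

    // server.js: sends usable HTML whether or not client-side JS works
    const http = require('http');

    const renderPage = (items) => `<!DOCTYPE html>
    <html lang="en">
    <body>
    <ul id="articles">
    ${items.map((item) => `<li>${item}</li>`).join('\n')}
    </ul>
    <!-- enhancement only; the list above works without it -->
    <script src="/enhance.js" defer></script>
    </body>
    </html>`;

    http.createServer((request, response) => {
      response.writeHead(200, { 'Content-Type': 'text/html' });
      response.end(renderPage(['First article', 'Second article']));
    }).listen(8080);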

The real, pragmatic approach to the flimsiness of JavaScript is much simpler though: people use it anyway, so let’s concentrate on keeping it safe and reliable.

Whilst we still complain that JavaScript breaks in the client, a huge group of developers uses JavaScript in everything. Whilst we worry about support for a certain new browser feature, people rely on hundreds of package dependencies to build very basic functionality. Whilst we worry about DOM bugs, people use libraries with virtual DOMs and scripted routing instead of HTTP.

JavaScript is a given, and a language that empowers and inspires hundreds of new developers each day. Our job as lovers of the web is not to tell people that they are wrong for using it in the first place. Our job is to let these developers be as creative with this new use of it as we were when a standardised DOM was still a distant dream.

We’re not here to call the shots. We’re here to embrace a new use of a valid technology and to help, with our knowledge, not to repeat age-old mistakes. But we need to make sure that we learn in that process, too. It is far too easy to find glaring mistakes in new applications of old technology. It is much harder to help people solve new problems with the guidance of past experience. But it is much more rewarding, as it doesn’t create an “us old sages vs. those new cowboy-coders” world.

With more defined and controlled environments, JavaScript becomes much easier to trust. What we need to worry about more now is ensuring that it doesn’t get too complex.

Instead of worrying about the non-fault-tolerance of JavaScript, here are some other things to worry about:

  • How safe is it to rely on a loosely curated package repository for our projects? How can we make sure that of the dozens of npm modules we use, none is malware? How can we ensure people use packages safely, keep them up-to-date and don’t face disaster when one of them breaks?
  • How can we reap the rewards of abstractions without creating an unhealthy dependency? The vue.js of tomorrow may well be the jQuery UI of today. Yes, we create more, faster, with an abstraction. But we miss out on understanding how the things we create work. We don’t want a lot of developers and products to become ineffective once an abstraction is out of fashion.
  • How can we create a professional development environment for JavaScript without overwhelming new developers? Back in the day, we needed a text editor and a browser. Now we need command-line knowledge, toolchains, unit tests, continuous integration and heavily customised editors. Each of these things makes sense, but they can look daunting to a new developer.
  • How can we move the language itself ahead without relying on transpilation? JavaScript is finally standardised, and new functionality should be usable by anyone, not only inside a compilation step (see the sketch after this list).
  • How can we still reap the rewards of the just-in-time compilation of JavaScript when we use it like a compiled language?
  • How can our tooling help new and experienced developers without overwhelming one group and boring the other? Is linting the answer or is it expecting developers to be experts in browser tools?
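
As an example for the transpilation question above: syntax like this now runs natively in every current browser and in Node, with no build step involved:

    // ES2015+ features, no transpiler required:
    const greet = (name = 'web') => `Hello, ${name}`; // arrows, defaults, template literals

    const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    const main = async () => { // async/await
      await wait(100);
      console.log(greet());
    };

    main();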
