Christian Heilmann


Web Truths: Publishing on the web using web standards is easy and amazing

Monday, November 27th, 2017

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to talk about the notion of the web as an open publication platform and HTML as the way to publish.

Publishing on the web using web standards is easy and amazing

Those of us who were lucky enough to be around when the web started growing cherish this concept. It was exciting to open a text editor – any text editor – write some tags and see our text come to life in a browser. Adding an H1 gave us a headline, adding a P described a paragraph and gave us free margins. Adding an A allowed us to point to other resources or to a target in the same page. Adding an HR gave us a horizontal rule. The latter later turned out to be a pain in the backside to style and was generally a bad idea. We learned by doing and we learned by trying things out.

We also smirked at the people who looked down on us as not being “real developers”. These people considered us crazy for relying on a browser we didn’t control to render our output. Maybe we were, but at least we didn’t have to use a convoluted IDE and follow a slow build process to get a result. Our work was immediate and satisfaction was only a ctrl+s, shift+tab and ctrl+r (or F5) away.

We also loved the fact that all we needed was some hosting space and an FTP client to publish our work to the world.

If we disliked what our server companies did, we moved to the next one. After all, your domain is the thing you send to people, not your IP address. To get people to read what we wrote we published on mailing lists, in forums and on IRC. We linked to each other in webrings and banner exchanges. When Google came around we were the old guard that Google loved. Easy to index, with sensible TITLE and proper text content. Not like Flash pages that tried to trick Google with META keyword spamming or hidden text.

All the good things about HTML and publishing on your own server are still valid. It is a beautiful, free and open way to publish with nobody to answer to. You are your own marketing department and your server is your playground. That is, if you are vigilant and have a lot of time to delete spam and defend yourself against attacks.

But here is the problem with telling this story over and over again. It doesn’t quite work any longer. And it is not as fascinating for people these days as it was for us. I remember the exhilaration of hearing a successful modem handshake and seeing my HTML render in Netscape 3. I remember explaining to people that “when it sounds like everything is broken, you are online”. People these days don’t expect to be offline. Often they only experience being offline when they are on the go and the mobile connection dies. Those lucky enough to have a home with a fat wireless connection never have to put in any effort to reach content. They consume, much like we did when we watched TV.

The same happens to publication. It isn’t about writing the perfect article or blog post. It is about creating something fast that gets a lot of eyeballs. And if you want eyeballs, you keep publishing. Faster and faster, more and more. And as it is hard to create your own, you re-hash what other people have done instead and ride any success train of the day. There is no place for HTML or proper standards publication in this world.

I’m not saying at all that this is a good thing. But it is where we are. The web hasn’t won against Facebook, WhatsApp and other closed environments because it takes more effort to use. You still need to show interest and skill to build a web site. Writing three words and picking a GIF from a collection is easier.

The web we love and explain as the amazing publishing platform will survive. It will be a playground of enthusiasts and specialists. And old people. The disruptive platform of the past hasn’t become the mainstream. Everyone has the potential to be a creator and maker. But the marketing machine of the world wants us to be consumers instead. And the best way to keep people consuming is to lock them into a place that is ridiculously easy to publish in. More importantly, you need to give them the feeling of being part of a community of cool people. The web, with its explosive growth and search algorithms tweaked to show the “new” instead of the “correct”, doesn’t feel like a group of cool people. It is hard work. There are no likes, kudos, claps or whatever on the web. Adding an immediate feedback channel for people is 90% removing horrible content and spam. The web isn’t a small group of cool people, and mainstream media makes pretty sure to tell us it is full of dangers and wrong information. Better stay where it is safe. In a controlled environment that has very enticing immediate feedback.

If you think this is dark, check out André Staltz’ The Web began dying in 2014, here’s how, where he paints a pretty bleak future for the mainstream web. And darn him, there is some pretty good evidence in there that, in the future, finding web content not published inside Google, Amazon or Facebook products will be close to impossible.

So, yes, publishing on the web is amazing. Nobody denies that. But we’re dealing with a new generation of people who grew up with the web and don’t care about it. It is there, like water is when you open the tap. You don’t think about how it gets there or what is involved until it stops coming out. And that might be the same for the web.

Instead of painting a romantic view of how the open web keeps prevailing, it may be time to tell people more about what their use of closed platforms does. How much they give away for the convenience of publishing something and harvesting some fake internet points. The format of the publication is no longer the point. We need to get people excited again about owning their data. For our sake and theirs.

Web Truths: the web is broken and backwards compatibility is holding us back

Tuesday, November 21st, 2017

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of the web not moving fast enough for people and clinging on to seemingly terrible ideas from the past.

The web is broken and backwards compatibility is holding us back

This is the counter-argument to the one I discussed in the last post. Much like the exaggerated praise for the web and its distributed nature, it has been around for as long as I can remember online discussions.

There is no doubt that many things about the web are sub-optimal. It is also true that carrying the burden of never blocking out old content can slow us down. Yes, there are many features of CSS and JavaScript that in hindsight were terrible ideas. And it is true that by sticking to the bleeding edge, you have a lot more fun as a developer. First, there are more things to play with. And more modern environments also come with better tooling and deeper insights.

There is a problem with this though: as Calvin put it, the problem with the future is that it always turns into the present.

[Calvin and Hobbes strip]

So, whenever we embrace new and bleeding edge technology and damn the consequences, we create debt. Often arguments against backwards compatibility stem from actions like these. Some standards we have to keep supporting in browsers were rash decisions or were based on the needs of one player in the W3C at the time.

Breaking changes in any new version of software aren’t ever fun for users and maintainers. The more popular your software becomes and the more people use it, the worse this gets. And I’d argue that the web is the most used piece of software out there.

Back in the day the argument against the web stack was always Flash. It seemed to be the right thing to use. There was 99% coverage in browsers. Its tooling was far more advanced than Firebug (RIP). And there was a sort of built-in code protection. People couldn’t look at and steal your code without clearing a few barriers first.

Turns out, Flash wasn’t the amazing platform these arguments made it out to be. In the Flash Games Post Mortem keynote at GDC, John Cooney of Kongregate talks about the story of indie gaming and Flash.

I love this talk. It shows that Flash and web developers weren’t that different. Except Flash developers were more pragmatic about wanting to make money. And they had fewer delusions about their code lasting forever but knew that they had a short window of opportunity.

And this is what this argument boils down to. When it comes to betting on the web, there’s a lot of good faith and wanting to create something lasting involved. If that is your thing, it will make you more understanding of the failures of the web as a software platform. And it makes backwards compatibility a no-brainer, as this is what ensures the longevity of the web. When Flash changed and the support from the one company that owned it faded, a lot of developers felt forgotten. We now have the problem that a lot of creativity and a lot of work will go away as the platform to execute it is gone. Backwards compatibility ensures that this isn’t the case on the web.

If your thing is to release something quick, make some money and know it will go away, the web isn’t as interesting. Even worse, those defending it can come across as evangelical or condescending. But there is nothing wrong with what you want to do. A lot of innovation stems from this approach, and the web can learn from its successes and failures. Much like HTML5 learned a lot from Flash. But that doesn’t mean your approach is better or that the web is broken – it just doesn’t fit your goals. Without the web, Flash wouldn’t have happened the way it did either. AIR proved that. The distribution model of the web works. And you can benefit from that without having to replace it.

There are of course some valid arguments for the abandonment of older ideas and non-support of broken platforms. Seeing how fast JavaScript moves, it seems detrimental to the cause to support older browsers. And some of the new ideas we have now solve important performance and security issues.

But all in all, the backwards compatibility of the web is what made it survive all the other platforms that set out to replace it. And thanks to it, there will not be a time when we need to run the web in emulators. That is what makes the argument that the web is broken and that backwards compatibility is holding us back invalid. Of course we can do better, but are we also 100% sure that what we think is amazing now really stands the test of time?

There are more pressing matters to consider:

  • How can we ensure that despite backwards compatibility we get people to upgrade their environments? Seeing that malware targeting Windows XP was hugely successful in 2017 is more than worrying.
  • How can we enhance older solutions to become better without breaking them? Chrome’s passive event listener extension to addEventListener seems to break backwards compatibility (see the detection sketch after this list). Arrow functions are arguably only syntactic sugar (despite fixing “this”), but they will always be just a syntax error for older browsers.
  • How can we make developers embrace newer solutions to old problems that have less side effects? It seems there is a certain point where we stop caring to keep up-to-date and use whatever worked in the past instead.
  • How can we make newer developers embrace the idea of the web as a platform without overloading them with borderline evangelical and philosophical messages? How can we make the web speak for itself?
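
To make the passive listener point concrete, here is a minimal sketch – my own, not from the post – of how developers commonly detect support for the passive option before relying on it. Older engines expect a boolean third argument and would otherwise misinterpret an options object:

```js
// Hedged sketch: probe whether the `passive` option of addEventListener is
// read at all before using it.
var supportsPassive = false;
try {
  var opts = Object.defineProperty({}, 'passive', {
    get: function () {
      supportsPassive = true;
      return true;
    }
  });
  window.addEventListener('testpassive', null, opts);
  window.removeEventListener('testpassive', null, opts);
} catch (e) {
  // very old engines may throw here – we simply fall back to a boolean
}

document.addEventListener(
  'touchstart',
  function () {
    // scroll-related work; with `passive: true` we promise not to call
    // preventDefault(), which lets the browser scroll without waiting for us
  },
  supportsPassive ? { passive: true } : false
);
```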

Web Truths: The web is better than any other platform as it is backwards compatible and fault tolerant

Saturday, November 18th, 2017

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of the web as a publication platform and how we keep repeating its virtues that may not apply to a publisher audience.

The web is better than any other platform as it is backwards compatible and fault tolerant

This has been the mantra of any web standards fan for a very long time. The web gets a lot of praise as it is, to a degree, the only platform that has future-proofing built in. This isn’t a grandiose statement. We have proof. Web sites older than many of today’s engineers still work in the newest browsers and on the newest devices. Many are still online, and those that are gone are often still available in cached form. Both search engines and the fabulous Wayback Machine take care of that – whether you want it or not. Betting on the web and standards means you have a product that is consumable now and in the future.

This longevity of the web stems from a few basic principles. Openness, standardisation, fault tolerance and backwards compatibility.

Openness

Openness is the thing that makes the web great. You publish in the open. How your product is consumed depends on what the user can afford – both on a technical and a physical level. You don’t expect your users to have a certain device or browser. You can’t force your users to be able to see or overcome other physical barriers. But as you published in an open format, they can, for example, translate your web site with an online system to read it. They can also zoom into it or even use a screenreader to hear it when they can’t see.

One person’s benefit can be another’s annoyance, though. Not everybody wants to allow others to access and change their content to their needs. Even worse – be able to see and use their code. Clients have always asked us to “protect their content”. But they also wanted to reap the rewards of an open platform. It is our job to make both possible and often this means we need to find a consensus. If you want to dive into a messy debate about this, follow what’s happening around DRM and online video.

Standardisation

Standardisation gave us predictability. Before browsers agreed on standards, web development was a mess. Standards allowed us to predict how something should display. Thus we knew when it was the browser’s fault and when it was ours when things went wrong. Strictly speaking, standards weren’t necessary for the web to work. Font tags, center tags, table layouts and all kinds of other horrible ideas did an OK job. What standards allow us to do is write quality code and make our lives easier. We don’t paint with HTML. Instead, we structure documents. We embed extra information and thus enable conversion into other formats. We use CSS to define the look and feel in one central location for thousands of documents.

The biggest beneficiaries of standards-driven development are developers. It is a matter of code quality. Standards-compliant code is easier to read, makes more sense and has a predictable outcome.

It also comes with lots of user benefits. A button element is keyboard, touch and mouse accessible and is available even to blind users. A DIV needs a lot of developer love to become an interactive element.
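
As an illustration – my own sketch, with a hypothetical .fake-button element – this is roughly the developer love a DIV needs before it behaves anything like a native BUTTON:

```js
// A rough sketch of the extra work a DIV needs to act like a BUTTON.
// A native <button> gives you all of this for free.
const fakeButton = document.querySelector('.fake-button'); // hypothetical element

// announce it as a button to assistive technology
fakeButton.setAttribute('role', 'button');
// make it reachable with the keyboard
fakeButton.setAttribute('tabindex', '0');

function activate() {
  console.log('activated');
}

fakeButton.addEventListener('click', activate);

// native buttons react to Enter and Space – a DIV does not
fakeButton.addEventListener('keydown', (event) => {
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault();
    activate();
  }
});
```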

But that doesn’t mean we need to have everything follow standards. If we had enforced that, the web wouldn’t be where it is now. Again, for better or worse. XHTML died because it was too restrictive. HTML5 and lenient parsers were necessary to compete with Flash and to move the web forward.

Backwards compatibility

Backwards compatibility is another big part of the web platform. We subscribed to the idea of older products being available in the future. That means we need to cater for old technology in newer browsers. Table layouts from long ago need to render as intended. There are even sites publishing in that format these days, like Hacker News. For browser makers this is a real problem, as it means maintaining a lot of old code. Code that not only has a diminishing use on the web, but is often even a security or performance issue. Still, we can’t break the web. Anything that becomes a “de facto standard” of web usage becomes a maintenance item. For a horror story on that, just look at all the things that can go into the head of a document. Most of these are non-standard, but people do rely on them.

Fault tolerance

Fault tolerance is a big one, too. From the very beginning, web standards like HTML and CSS have allowed for developer errors. The “Priority of Constituencies” in the HTML design principles states it as such:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity

This idea is there to protect the user. A mistake made by a developer, or a third-party piece of code like an ad causing a problem, should not block out users. The worrying part is that in a world where we’re asked to deliver more in a shorter amount of time, this leniency makes developers sloppy.
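
Because JavaScript does not share this fault tolerance, one defensive habit – a sketch of my own, with a hypothetical loadAdWidget function – is to isolate non-essential third-party code so a failure in it cannot take the rest of the page down:

```js
// Hedged sketch: run non-essential third-party code in isolation so an
// error in it does not stop the page's own scripts.
try {
  loadAdWidget(document.querySelector('.ad-slot')); // hypothetical third-party call
} catch (error) {
  // the page keeps working without the ad
  console.warn('Ad widget failed, continuing without it:', error);
}
```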

The web is great, but not simple to measure or monetise

What we have with the web is an open, distributed platform that grants users all the rights to convert content to their needs. It makes it easy to publish content, as it is forgiving of developer and publisher errors. This is the reason why it grew so fast.

Does this make it better than any other platform or does it make it different? Is longevity always the goal? Do we have to publish everything in the open?

There is no doubt that the web is great and was good for us. But I am getting less and less excited about what’s happening to it right now. Banging on and on about how great the web as a platform is doesn’t help with its problems.

It is hard to monetise something on the web when people either don’t want to pay or block your ads. And the fact that highly intrusive ads and trackers exist is not an excuse for that but a result of it. The more we block, the more aggressive advertising gets. I don’t know anyone who enjoys interstitials and popups. But they must work – or people wouldn’t use them.

The web is not in a good way. Sure, there is an artisanal, indie movement that creates great new and open ways to use it. But the mainstream web is terrible. It is bloated, boringly predictable and seems to try very hard to stay relevant whilst publishers get excited about Snapchat and other, more ephemeral platforms.

Even the father of the WWW is worried: Tim Berners-Lee on the future of the web: “The system is failing”.

If we love the web as much as we keep saying we do, we need to find a solution for that. We can’t pretend everything is great because the platform is sturdy and people could publish in an accessible way. We need to ensure that the output of any way to publish on the web results in a great user experience.

The web isn’t the main target for publishers any longer and not the cool kid on the block. Social media lives on the web, but locks people in a very cleverly woven web of addiction and deceit. We need to concentrate more on what people publish on the web and how publishers manipulate content and users.

Parimal Satyal’s excellent Against a User Hostile Web is a great example of how you can convey this message and think further.

In a world of big numbers and fast turnaround, longevity isn’t a goal, it is a nice-to-have. We need to bring the web back to being the first publishing target, not a place to advertise your app or redirect to a social platform.

Web Truths: We need granular control over web APIs, not abstractions

Monday, October 16th, 2017

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of offering new functionality to the web. Should we deliver low-level APIs to functionality to offer granular control? Or should we have abstractions that get people started faster? Or both?

In a perfect scenario, both is the obvious answer. We should have low-level APIs for those working “close to the metal”. And we should offer abstractions based on those APIs that allow for easier access and use.

In reality there is quite a disconnect between the two. There is no question that newer web standards learned a lot from abstractions. For example, jQuery influenced many additions to the DOM specification. When browsers finally got querySelector and classList we expected this to be the end of the need for abstractions. Except, it wasn’t and still isn’t. What abstractions also managed to do is to even out implementation bugs and offer terser syntax. Both of these things resonate well with developers. That’s why we have a whole group of developers that are happy to use an abstraction and trust it to do the right thing for them.
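
A quick sketch of that influence (the selectors are made up): what used to require jQuery now has terse native equivalents, yet an abstraction still papers over implementation bugs for you:

```js
// jQuery: $('.menu a').addClass('active');
// NodeList.forEach needs a reasonably modern browser
document.querySelectorAll('.menu a').forEach(function (link) {
  link.classList.add('active');
});

// jQuery: $('#nav').toggleClass('open');
document.querySelector('#nav').classList.toggle('open');
```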

Before we had a standardised web, we had to develop to the whims of browser makers. With the emergence of standards this changed. Web standards were our safeguard. By following them we had a predictable way of debugging. We knew what browsers were supposed to do. Thus we knew when we made a mistake and when it was a bug in the platform. This worked well for a textual and forms-driven web. When HTML5 broke into the application space, web standards became much more complex. Add the larger browser and platform fragmentation, and working with standards on the web became much harder. It doesn’t help when some of the standards feel rushed. An API that returns an empty string, “maybe” or “probably” when asked if the current browser can play a video doesn’t fill you with confidence. For outsiders and beginners, web standards are no longer the “use this and it will work” approach. They seem convoluted in comparison with other offers, and a lot of the changes mean a lot of work for developers to keep up with. Maybe too much work.
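
The API in question is canPlayType, and this minimal illustration shows all the confidence it offers:

```js
var video = document.createElement('video');

// the spec only allows '', 'maybe' or 'probably' as answers
console.log(video.canPlayType('video/mp4'));
console.log(video.canPlayType('video/webm; codecs="vp9"'));
```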

Here’s what it boils down to:

  • Abstractions shield developers from a lot of implementation quirks and help them work on what they want to achieve instead
  • Low-level APIs allow for leaner solutions, but expect developers to know them, keep track of changes and to deal in a sensible way with non-supporting environments (see: Progressive Enhancement)

What do developers want?

As web developers in the know, we want granular control. We’ve been burnt too often by “magical” abstractions. We want to know what we use and see where it comes from. That way we can create a lot of different solutions and ensure that what we want to standardise works. We also want to be able to fix and replace parts of our solutions when a part becomes problematic. What we don’t want is to be unable to trace back where a certain issue comes from. We also want to ensure that new functionality of the web stays transparent and secure. We achieve this by creating smaller, specialised components that can get mixed and matched.

As new developers who haven’t gone through the pains of the browser wars and don’t need to know how browsers work, we see things differently. We want to code and reach our goal instead of learning about all the different parts along the way. We want to re-use what works and worry about our product instead. We’re not that bothered about the web as a platform and its future. For us, it is only one form factor to build against and release products on. Much like iOS or gaming platforms are.

This is also where our market is going: we’re not paid to understand what we do – we’re expected to already know. We’re paid to create a viable product in the shortest amount of time and with the least effort.

The problem is that the track record of the web shows that we often have to start over whenever there is a new technology. And that instead of creating web-specific functionality, we got caught up trying to emulate what other platforms did.

The best case in point here is offline functionality. When HTML5 became the thing and Flash was declared dead, we needed a fast solution to offer offline content. AppCache was born, and it looked like a simple solution to the issue. As it turns out, once again what looked too good to be true wasn’t that great at all. A lot of AppCache’s functionality was unreliable. In retrospect, it also turned out to be more of a security issue than we anticipated.

There was too much “magic” going on that browsers did for us, and we didn’t have enough insight as implementers. That’s how Service Workers came about. We wanted to do the right thing and offer a much more granular way of defining what browsers cache, where and when. And we wanted to give developers a chance to intercept network requests and act on them. This is a huge endeavour, as in essence we replicate the networking stack of a browser with an API. By now, Service Workers do much more than just offline functionality. They are also meant to deal with push notifications and app updates in the background.
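
A minimal cache-first sketch (the file name and cached paths are illustrative) shows the kind of granular control this hands to developers:

```js
// sw.js – decide ourselves what gets cached and how requests are answered
var CACHE = 'offline-v1';

self.addEventListener('install', function (event) {
  // cache a few known URLs up front
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      return cache.addAll(['/', '/styles.css', '/app.js']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // intercept every request: answer from the cache, fall back to the network
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});
```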

This makes Service Workers tougher to work with, as they seem complex. Add to that the lack of support in Safari (which is now changing) and you lose a lot of developer enthusiasm.

There is more use in abstractions like Workbox, as they promise to keep you up-to-date whilst the changes in the spec are ironed out. Instead of a “here are all the Lego bricks, build your own car” approach, it takes a “so you want to build a car, here are some ways to do so” approach.
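
As a rough sketch of that difference (the exact API has changed across Workbox versions, so treat this as illustrative of the approach rather than a copy-and-paste recipe): you describe what you want cached and pick a strategy, and the library handles the fetch-handler plumbing:

```js
// Illustrative use of Workbox's module-based API, bundled into the
// Service Worker: cache images with a stale-while-revalidate strategy
// instead of writing the fetch handler by hand.
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';

registerRoute(
  ({ request }) => request.destination === 'image',
  new StaleWhileRevalidate({ cacheName: 'images' })
);
```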

This is a good thing. Of course we need to define more granular and transparent standards and solutions to build the web on. But there is a reluctance in developers to take part in the definition phase and keep an eye on changes. We cannot expect everybody who wants to build for the web to care that much. That is not how the web grew – not everybody had to be a low-level engineer or know JavaScript. We should consider that the web has outgrown the time when everyone was deeply involved with the standards world.

We need to face the fact that the web has become much more complex than it used to be. We demand a lot from developers if we want them all to keep up to date with standards. Work that often isn’t as appreciated by employers or clients as shipping products is.

This isn’t good. This isn’t maintainable or future facing. And it shouldn’t have come to this. But it is a way of development we allowed to take over. Development has become a pretty exhausting and competitive environment. Deliver fast and in a short cadence. Move fast and break things. If you can re-use something, do it, don’t worry too much if you don’t know what it does or if it is secure to do so. If you don’t deliver it first to market someone else will.

This attitude is not healthy and we’re rubbing ourselves raw following it. It also ensures that diversity in our market is tough to achieve. It is an aggressive game that demands a lot of our time and an unhealthy amount of competitiveness.

We need to find a way to define what’s next on the web and make it available as soon as possible. Waiting for all players to support a new feature makes it hard for developers to use things in production.

Relying on abstractions seems to be the way things are going anyway. That means that as standards creators and browser makers we need to work more with abstraction developers. It seems less and less likely that people are ready to give up their time to follow specs as they change and work with functionality behind flags. Sure, at conferences and in our talks everyone gets excited. The hardware and OS configurations we have support all the cool new things. But we need to get to market faster and reach those who aren’t already sold on our ideas.

So, the question isn’t about granular definition of specifications, small parts that work together or abstractions. It is about getting new and sensible, more secure and better performing solutions into production code quicker. And this means we need both. And abstractions should have a faster update cycle to incorporate new APIs under the hood. We should work on abstractions using standards, not patching them.

Web Truths: JavaScript can’t be trusted

Tuesday, September 26th, 2017

This is part of the web truths series of posts. A series where we look at true sounding statements that we keep using to have endless discussions instead of moving on. Today I want to tackle the issue of JavaScript and how much we should rely on it.

Good vs. eval()

JavaScript is the love/hate topic of the “modern web”. People who have been around for a long time have learned the hard way not to blindly rely on it. People who just started don’t even know how you could turn it off or why it could be a problem.

This gives us an endless opportunity to rant in one direction or the other. The old guard points out that the new builds fragile products that demand too much of the users. The new guard points out that a web without JavaScript is neither fun for users nor easy to maintain. A lot of the time, this argument is about developer convenience trumping the sturdiness of the web. And we keep going in circles, viewing it from both ends of the spectrum.

When we got JavaScript, it ran in the browser and made a lot of things possible that HTML and CSS alone could not do yet. We could react to things our users were doing without reloading the page. We could read out and react to the environment the script was running in. JavaScript was the interactive glue of the web. We could make less able browsers do things others could. We could create interaction models that HTML forms didn’t provide. And we could create a consistent experience across browsers and platforms.

And this is where the main issue came up: for many, the web isn’t meant to look and work the same everywhere. Instead it should give a working experience for all users and get fancier the more able the user agent is. By relying on JavaScript in our solutions we put up a barrier for the sake of control. This was especially bad when we didn’t deliver any functionality when JavaScript failed.

And JavaScript can fail in many ways. From non-supporting environments to coding errors and connectivity issues – anything can break.

JavaScript isn’t like CSS or HTML. Both of those building blocks of the web are fault tolerant. This means that when you write invalid HTML, the browser tries to fix it. If you use bleeding-edge CSS in an old browser, it ignores it. Not so with JavaScript. A syntax error or access to an unknown object causes the interpreter to give up and say no. Even the most capable environments don’t give you any JavaScript functionality until the first script has loaded and run without a problem.
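
A small sketch of my own (moreImportantCode is a hypothetical stand-in for the rest of a page's scripting) of how one unguarded line stops everything, and what the defensive alternative looks like:

```js
function moreImportantCode() {
  // hypothetical stand-in for the rest of the page's functionality
  console.log('rest of the page keeps working');
}

// In a browser without Service Worker support, an unguarded call like
//   navigator.serviceWorker.register('/sw.js');
// throws, and nothing after it in the same script runs.

// The defensive alternative: only enhance when the feature is really there.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
moreImportantCode(); // runs either way
```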

Stuart Langridge explains what can go wrong until your script does what you wanted it to do in his Everyone has JavaScript, right? site.

The main power of JavaScript is that it executes on the fly and in the browser. It doesn’t need any compilation and it doesn’t need a fancy development environment. That was a huge part of its success. Compared to other languages of similar power, it is much more approachable. And if you’re not used to CSS syntax, a problem that seems complex in CSS can look easy to fix with JavaScript. JavaScript can do everything: it can load extra information, it can create HTML and it can change styles and images. It is the jack-of-all-trades of the web.

When something is easy to apply, there is always a danger that people over-use it. It is tempting to see your well-connected development device as the world. And to expect the same speeds and computing power of your users. In this scenario, loading a few megabytes of JavaScript is not a high price to pay for easy maintenance. When you’re on a metered, slow or unreliable connection or on a low-end device, this convenience can soon turn into frustration. This is an even bigger problem as it is hard – if not impossible – to detect these conditions.

So, yes, JavaScript is a fair-weather friend and can break in many ways. You may also block out a lot of users because you crave more control over things you aren’t meant to control.

There is a flip-side to this truth though. JavaScript has evolved from a scripting language in the browser to a development environment in its own right. The rise of Node and other server-side and embedded systems put JavaScript on the map as a key skill of our market.

JavaScript isn’t only a client-side concern any longer; it is a whole bigger set of offerings these days. I talked about this at the beginning of the year at ScriptConf in Austria.

Universal, isomorphic JavaScript – or whatever other buzzword we come up with – is the answer to the lack of fault tolerance of the language. We can run the JavaScript in a space we control, like the server or a build process, and render out plain HTML. We can use client-side JavaScript in fair-weather situations. If that fails, we can fall back to a JS-based API and routing mechanism on the server to still give the user the content they came for.
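
A minimal sketch of that idea (the renderArticle template and the data are placeholders of mine, not a recommended setup): run the JavaScript on the server where we control the environment and ship plain HTML to the client:

```js
// server.js – render HTML in an environment we control
const http = require('http');

function renderArticle(article) {
  // the same template function could also run in the browser as an enhancement
  return '<article><h1>' + article.title + '</h1><p>' + article.body + '</p></article>';
}

http.createServer(function (request, response) {
  const article = { title: 'Hello', body: 'Rendered on the server.' };
  response.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
  response.end('<!DOCTYPE html><html><body>' + renderArticle(article) + '</body></html>');
}).listen(8080);
```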

The real, pragmatic approach to the flimsiness of JavaScript is much easier though: people use it anyway, so let’s concentrate on keeping it safe and reliable.

Whilst we still complain that JavaScript breaks in the client, we have a huge group of developers who use JavaScript in everything. While we worry about support for a certain new browser feature, people rely on hundreds of package dependencies to build very basic functionality. Whilst we worry about DOM bugs, people use libraries with virtual DOMs and scripted routing instead of HTTP.

JavaScript is a given, and a language that empowers and inspires hundreds of new developers each day. Our job as lovers of the web is not to tell people that they are wrong for using it in the first place. Our job is to allow these developers to be as creative with this new use of it as we were when a standardised DOM was still a distant dream.

We’re not here to call the shots. We’re here to embrace a new use of a valid technology and to help with our knowledge so we don’t repeat age-old mistakes. But we need to make sure that we learn in that process, too. It is far too easy to find glaring mistakes in new applications of old technology. It is much harder to help people solve the new problems they face with the guidance of past experience. But it is much more rewarding, as it doesn’t create an “us old sages vs. those new cowboy-coders” world.

With more defined and controlled environments, JavaScript becomes much easier to trust. The thing we need to worry more about now is to ensure that it doesn’t get too complex.

Instead of worrying about the non-fault-tolerance of JavaScript, here are some other things to worry about:

  • How safe is it to rely on a loosely curated package repository for our projects? How can we make sure that in the dozens of NPM modules we use none of them is malware? How can we ensure people use packages safely, keep them up-to-date and not face disaster when one of them breaks?
  • How can we reap the rewards of abstractions without creating an unhealthy dependency? The vue.js of tomorrow may well be the jQuery UI of today. Yes, we create faster and more with an abstraction. But we miss out on understanding how what we create works. We don’t want to have a lot of developers and products that become ineffective once an abstraction is out of fashion.
  • How can we create a professional development environment for JavaScript without overwhelming new developers? Back in the day we needed a text editor and a browser. Now we need command-line knowledge, toolchains, unit tests, continuous integration and heavily customised editors. Each of these things makes sense, but can look daunting to a new developer.
  • How can we move the language itself ahead without relying on transpilation? JavaScript is finally standardised and new functionality should be used by anyone, not only in a compilation step.
  • How can we still reap the rewards of the just-in-time compilation of JavaScript when we use it like a compiled language?
  • How can our tooling help new and experienced developers without overwhelming one group and boring the other? Is linting the answer or is it expecting developers to be experts in browser tools?