Christian Heilmann


Archive for the ‘General’ Category

A billion new developers thanks to AI?

Thursday, September 12th, 2024

This is a translation of my German article for the AI mag.

Demetris Cheatham of GitHub on stage with the 1 billion developer road map

At the WeAreDeveloper World Congress in Berlin in July, GitHub announced that the company will use artificial intelligence and assistants to turn a billion people into developers in a very short time. Amazon’s Cloud CEO, on the other hand, explained in an internal fireside chat that soon no one will have to develop software anymore, as machines can do it better anyway. Two big opposing statements, and you have to ask yourself who will be right and what that means for developers and those who want to enter this market. So let’s take a quick look back, a look at the current situation and what may come.

Being a software developer in the past

I myself have been a professional, well-paid and sought-after developer since 1995. I worked for several years at Yahoo, Mozilla and Microsoft and worked with Google on the Chromium project. At the beginning of my career, it was frowned upon to make software I wrote during working hours and on company computers available to others for free, and much of what I worked on was solely for internal use.

Open Source as an introduction

But soon Open Source came around and changed everything. First it was a tool and idea only for geeks, but later it proved itself as a sensible approach to software development. The cloud runs mostly on Linux, Android is numerically superior to iOS, and a huge part of the web is based on WordPress.

Open Source and Creative Commons were a thing that just made sense to me. I didn’t want to be the only one who understood and edited the software. I wanted others to look at my work, check it, and take over when I no longer had the time or energy to keep the project going.

Open Source allowed thousands of developers to start their careers. As a lead developer, it meant that I didn’t have to look for new employees, but could hire them from inside the project. I could see who used or contributed to the project, and I already had insight not only into what these new hires were technically capable of, but also into how they documented their work, dealt with criticism, and communicated within the team.

When I started with Open Source, Microsoft was still the evil empire. When I left Mozilla to join Microsoft in 2015, the main reason was the promise that I would help bury Internet Explorer, a dream I had had as a web developer for a long time. At the same time, Microsoft released Visual Studio Code. An Open Source code editor that would completely revolutionise the developer world in a very short time.

When rumours started that Microsoft would buy GitHub, grumblings went through the developer community and many predicted that this would be the end of the platform. I saw it differently. GitHub was a wonderful platform that simplified version control and allowed anyone to create a community around their software product in a very short time. But it was a startup from San Francisco, and many traditional European companies would never put their software or data there. With Microsoft as the corporation behind it, that was a completely different story.

What I’m trying to say is that Open Source has long since made it possible to democratise software development and allow anyone to start a new career as a developer. Of course, the web is the other big invention that helped with this before, but now you can use professional tools that the big players in the market use for free and also collaborate on them.

So far, so good. But then came the AI hype, and suddenly we are back in the middle of a boom reminiscent of the .com bubble at the beginning of the millennium.

Welcome to the world of AI hype

The idea of artificial intelligence is nothing new, but two things are different this time:

  • Computers are fast enough to provide the power that AI systems need
  • ChatGPT, LLMs and RAG have made AI accessible to everyone and are currently being used everywhere, whether it makes sense or not.

Conversing with a seemingly intelligent machine and obtaining information this way is a dream for anyone who grew up with Star Trek. It is also a major change in the general approach to computers and knowledge. While people used to read books and then find websites via portals and search engines, today they ask the machine and get an answer immediately. And if the answer is not right, they can ask for more information.

Software as a mass product generated by AI?

Whenever there is a change in user behaviour, CEOs of large companies turn into fortune tellers predicting the future. When smartphones came along, everything had to be an app, because only apps could provide the end user with the best service. When digital assistants such as Siri, Cortana, Bixby, Alexa and so on were introduced, the prediction was that soon there would be no more apps, because these assistants would be able to fulfil all our wishes. The model here was WeChat in China, which really is the solution for everything there. However, this is also a market where the free Internet is not available.

Now many people predict that every software solution could be an extension for ChatGPT.

Every time that happens, there is immediately a marketplace where you can offer your extensions or apps. These marketplaces soon turn into digital rubbish heaps, as companies create hundreds of apps automatically, digital attackers present viruses and trojans as legitimate offers, and there are hundreds of cheap copies of every successful product.

In other words, software is becoming a mass product and many people are being fooled into believing that they, too, could become millionaires tomorrow with the killer app.

Often, however, only the marketplace operators benefit from successful offers, and in the case of AI extensions there have recently been a lot of cases where a successful idea was simply built into the system itself, and the entrepreneur suddenly saw all their users disappear. This is nothing new, however; the same has often happened with browser plugins and developer environments.

If you look at it from the outside, it is similar to streaming services. In the past, you bought the CD or DVD; today you have immediate access to everything. But you also have no claim to the content and cannot rely on finding it again if you want to watch it once more. Just like you don’t always get the same answer from ChatGPT, but sometimes an odd one: the correct thing wasn’t available, so here’s, well, something.

Whenever a new technology is supposed to conquer the market, you hear the same statements. One is that it will very soon be possible to create great software solutions without having to program a single line. This promise was already made in the days of Visual Basic and later with WYSIWYG (“What You See Is What You Get”) environments such as Frontpage or Dreamweaver. Today there are a whole lot of “low code” or “no code” solutions with the same promise, which make it easier to create products, but also deliver highly unoptimised results.

GPT demo turning a hand drawn app on paper into HTML, CSS and JS

Of course, it was predictable that the same claim would be made in the AI field, and one of the first “wow” presentations of ChatGPT was creating a web application from a design scribbled on paper. Later, “Devin” was presented as the first fully capable AI software developer. Both brought a lot of big headlines and applause, but with Devin in particular it quickly became clear that it was a nice presentation, but not really a solution.

Who needs developers?

Whether we even need developers anymore depends on what we want to create. ChatGPT’s “From Paper to Code” demo application was a website that displays a joke at the touch of a button. Nobody needs this application, and it feels much more like an exercise from a programming course. Even as an interview question, this app would be 15 years too late to test the knowledge of candidates.

If our job is to create solutions like this, we don’t need a professional developer. But we don’t need AI either, because low- and no-code products have been able to do this for years.

It is true that a lot of the work you do as a developer is based on existing products. And if it really is just about assembling existing components, then an AI can do that, too.

However, there are also a lot of problems that require more complex, human solutions, and for those, we need trained developers. Throughout my career, I have noticed more and more that writing a program is the smallest part of the work. Rather, it is about developing software that is understandable and accessible to humans, and that is a task that AI cannot do for us. Accessibility and usability cannot be automated, no matter what advertising for great new software promises.

How do developers learn?

For every job, you need the right tool. In the case of software, that is the development environment. It should make it easy for me to write code, find errors, make changes and – if possible – immediately see the results. A good development environment tells me while I am writing that I am making a mistake or how to use a method. Similar to how a word processor underlines errors whilst I type.

If I want to learn about syntax, names of methods or how to approach a problem, I can consult documentation. Books, online documentation, courses, and also videos. And there is a lot of it available. It is almost a full-time job to distinguish the good from the bad.

And that is why there are forums and social media on which you can exchange ideas.

When GitHub came up with the idea of GitHub Copilot for VS Code, I was immediately hooked: from day one I was a tester, helping to find bugs and requesting new functionality.

The great thing was that I didn’t have to go to a website like ChatGPT to ask questions about programming. Instead, it happened inside my development environment, as suggestions on how to continue the feature I was just starting. I can also highlight part of the source code and ask the AI what it’s all about. I used to do this on forums or as a comment on GitHub. I learned whilst I was programming, and thus created a lot more in less time. I could also tell the system to only refer to the current project and not to give me some result from the internet. Furthermore, the system learns from me what I expect. The more I used Copilot, the more it gave me suggestions in a format I would have written anyway. It started to mimic my style instead of offering random suggestions.

In other words, the research tasks are automated and part of the work. And that’s where GitHub has a clear advantage over others, which is why they have the chance to fulfil the big task of turning a billion people into developers.

GitHub is where I store source code, I can edit the code in the browser with one keypress, and I have access to a huge number of experts who also communicate on the same platform. All the learning steps inside one environment. There are more players that offer that now, but GitHub has the advantage of being a huge community as well as a platform.

But the technical part of development is only a fraction of the task. A large part of my job as a developer is to filter and convert data. You never get perfect data, and good software is written defensively, testing the input, expecting false information and filtering it. And that’s where AI in its current marketing form is a real problem.
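To make “defensive” concrete, here is a minimal sketch of the kind of input checking that rarely shows up in a first forum answer or an AI suggestion (the record shape is made up for illustration):

function getTags(record) {
  // never assume the data has the shape you expect
  if (!record || !Array.isArray(record.tags)) {
    return [];
  }
  // keep only non-empty strings and normalise stray whitespace
  return record.tags
    .filter((tag) => typeof tag === 'string' && tag.trim() !== '')
    .map((tag) => tag.trim());
}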

AI creates naive code

If you present a great new solution to the world, it is not allowed to make any mistakes. And that’s a problem with the AI hype at the moment. Instead of a chat bot not giving me an answer, or simply admitting that more information is needed, most systems return something. In chat and when creating images or videos, these are so-called “hallucinations”. In code generation, these are either the first results from the official documentation, or those that were chosen as the best by most developers. And that is not the best solution, but the simplest.

Many AI code generators are based on data from forums such as Stack Overflow, Reddit, official documentation, and the personal blogs of well-known developers. The problem is that most of the time the solution shown represents the simplest and fastest result, not the safest or most optimised one.

For decades, I have written courses and books, and every publisher or platform wanted exactly that: give the participant a quick, positive experience by showing a simple example without pointing out all the nuances right away.

These examples were also the ones that were voted as the best by the community on forums, because they are simple and give an immediate result. Forum participants did not want the “why”, only the “how”. And these are the code examples that an AI chat bot shows as the first result. And even if users tell the bot that this is not a good result, the underlying model is not changed because that would be too expensive and time-consuming.

A lack of transparency

The hardest thing is to find out where the bot got the solution it is offering. Instead of laying its cards on the table, the AI sector is currently thriving on secrecy. When millions in investment are at stake, people like to hide what makes their product special. It actually started well: OpenAI, as a prime example, was initially open and only changed course later to attract higher investments. But it would be in the interest of end users to know which data the models are based on, so that you, as the original developer, could explain why something is a bad example, or update and fix known security or performance problems. For example, I know which of my Open Source GitHub repositories have been read and taken over by AI bots, and some of them were very successful, but only because they were a funny trick or a very dirty shortcut.

Who owns the source code?

There is currently an arms race on the Internet about how to protect your open content from being ingested by AI bots. A lot of companies have already been sued for, for example, ingesting YouTube content without paying attention to the license or asking the owner. While many developers releasing their work as Open Source have no problem with others building on it, it is a different matter when a machine comes along and uses your code in chatbot answers as part of a paid-for service without context or attribution. There are a lot of blocker lists that are supposed to protect your own blog or source code repository from getting indexed. However, AI providers do not identify their crawler bots and mask themselves as normal browsers. After all, it’s about being able to offer the most data, not about ethics or adhering to licenses.

In a lecture at Stanford, the former CEO of Google recently explained without any beating around the bush that it’s totally OK to steal content when it comes to innovation and getting to market quickly. Entrepreneurs shouldn’t worry about it, but leave it to the lawyers. Brave new world.

Europe as a second-class market?

Europe has many rules and laws that some Silicon Valley startups consider detrimental, and in my work with American companies I have spent a lot of time explaining GDPR and similar things to my colleagues and apologising for not being able to show user information because it is illegal to record it in Germany without the users’ knowledge. That’s a good thing: the privacy of our users and their security is the most important thing. But it just doesn’t fit into a world of explosive growth and rapid software distribution. We are currently at a crossroads where more and more AI systems and products are either not offered in Europe at all, or only months later.

Politics isn’t helping either. Historically, Europe has always had many Open Source companies and developers, but with the cut in OSS funding from the European Union, many of these providers will have to find other ways to pay the bills. And that will make it difficult to compete against companies in other countries with fewer laws when it comes to finding investors.

In general, the problem is still that many people think that Open Source is free. An old idiom goes that OSS is “free as in puppy”: if you get a free puppy, that’s great, but you also have to take care of it. You have to train the animal and there may be accidents on the carpet.

One of these accidents recently shook the OSS world. An important Open Source component present in almost all systems, xz, was almost replaced by malware that could have infected all Linux machines. The problem was that the original developer no longer had time to maintain the product and handed it over to a maintainer. This is completely normal behaviour in the OSS world. But the maintainer turned out to be someone who planned to replace the component with malware and took their time covering their intent. We now have to ask ourselves how to ensure the maintenance of system-relevant components without running into similar security problems in the future. And that will be difficult without financial support.

The European Artificial Intelligence Act (AI Act) came into force on August 1st and is intended to regulate the AI world, bring more transparency and allow European companies to be well positioned in international competition. However, it also poses a major problem for Open Source offerings, as these are exempt. One of the reasons given was security, as open systems are easier to attack and can be used for nefarious purposes as well as legitimate ones, without any feedback or asking for permission.

Security through obscurity?

In the IT security sector, there is one idea that has always been a fallacy: security through obscurity. Just because you can’t analyse closed source systems directly doesn’t mean that they are more secure.

Recently, there have been increasing reports of closed AI systems being attacked and data being lost. Many code generators have also been abused via prompt injection to offer unsafe code to end users and thus install malware. Microsoft in particular has been in the crossfire of the media and has now even made bonus payments dependent on the impact employees have on the company’s security. Interestingly, this came a few months after many security experts were let go in the 11,000-employee layoff wave.

These and other problems, such as the Azure master key loss and the CrowdStrike outage, have also damaged developers’ trust in the cloud and in large companies, and almost all lectures or articles about AI warn against relying on just one provider. Which of course also means that you either have to spend more or rely on locally installed systems. These would then have to be Open Source.

What can happen now…

GitHub has set itself an ambitious task and is well positioned to achieve it. The only question is what a “developer” really is in the age of AI. Discussing this requires a whole separate article, as there are many facets to this.

What most companies are hiding, however, is that the AI business model does not work right now. Most providers are currently losing money on it – the revenue of the various Copilots and systems is not enough to cover the computation cost. The technical costs are insanely high. Before the GenAI revolution, almost all large companies advertised that they would soon be “carbon neutral” or run only on green energy, but that has not been mentioned for quite some time and all of them have become suspiciously silent about the topic. Generative AI is currently an insane waste of energy – every image created requires as much electricity as charging a cell phone.

Therefore, the AI-on-device idea will become more and more interesting. Instead of hosting the models in the cloud, all Open Source AI models can also be used locally, and Google, for example, is already toying with integrating Gemini into Chrome. There are also some Open Source projects that offer AI chat systems without cloud dependency.

In general, however, these are very interesting times, and the market always needs more developers. I don’t think developers can be replaced yet, and I do think that intelligent and easily accessible development environments give a lot of new people the chance to get involved.

The question is: how do I turn these newbies into developers who can also be proud of their work, and what can we do to make the next learning steps appealing to them after the AI says “take this and everything will work”?

Link resources:

Quick tip: using flatMap() to extract data from a huge set without any loop

Friday, September 6th, 2024

A capybara wearing a flat cap and holding a pint with the name Flat Cap crossed out and .flatMap() instead.

I just created a massive dataset of all the AI generated metadata of the videos of the WeAreDeveloper World Congress and I wanted to extract only the tags.

The dataset is a huge array, with each item containing a description, a generated title, an array of tags, and the original title, like this:

{
  "description": "The talk begins with an introduction to Twilio…",
  "generatedtitle": "Enhancing Developer Experience: Strategies and Importance",
  "tags": ["Twilio", "DeveloperExperience", "CognitiveJourney"],
  "title": "Diving into Developer Experience"
}

What I wanted was an alphabetical list of all the tags in the whole dataset, and this is a one-liner if you use flatMap():

data.flatMap(d => d.tags);

You can sort them alphabetically with sort():

data.flatMap(d => d.tags).sort();

And you can de-dupe the data and only get unique tags when you use Set():

new Set(data.flatMap(d => d.tags).sort());

You can try this in this codepen.
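If you need the unique tags as an array again – for example to join() them into a string – you can spread the Set back into one. A quick sketch using the same data:

const uniqueTags = [...new Set(data.flatMap(d => d.tags))].sort();
console.log(uniqueTags.join(', '));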

No more “Expert, Intermediate, Beginner”: Classifying talks in Call for Papers/Conference agendas

Friday, September 6th, 2024

Old crack intro offering a level skipper for a game

I am currently working on creating the new Call for Papers for the next WeAreDevelopers World Congress and one of the feedback items we got was that levels like “Expert, Intermediate and Beginner” don’t make much sense. First of all, speakers do not choose the right level, as they are worried that a beginner or expert talk will not attract enough of an audience. Secondly, attendees might feel peer pressure not to watch the “beginner” talk, as that might be more suited to a workshop.

So I thought that instead of levels, I’d ask speakers for classifications:

  • Case Study – “How we use Kokolores.js at the company Blumentopferde and how it made us 30% more effective”
  • Deep Dive – “Looking under the hood of Kokolores.js and why it works so well”
  • Technology Introduction – “How Databaserandomising will change the way you think about structured databases”
  • Tool Explanation – “Taking the pain out of Kokolores.js with Pillepalle – a visual interface and API to get you started quicker”
  • Thought Piece – “Kokolores.js isn’t the answer – we need to approach this in a different way”
  • Expert Advice – “How we scaled Kokolores.js to 231242 users and what to look out for”
  • Level Up – “So you started using Kokolores.js – here is how to become more efficient with it”
  • Learnings – “How we got rid of Kokolores.js and what it meant for our users”
  • Creative – “Did you know you can use Kokolores.js to do Pillepalle?”

This should make it easier for audiences to pick a talk without having to rate themselves. What do you think?

A billion new developers thanks to AI? Link list

Monday, September 2nd, 2024

Here are the resources from my article in the AI magazine, so you can read them for yourself:

Most of these articles were part of the Dev Digest newsletter, which I send to 150,000 subscribers every week.

Talk notes: Let’s make a simpler, more accessible web

Monday, August 5th, 2024

I am just on my way back home from presenting at the Typo3 Developer Days in Karlsruhe, Germany. I had a great time and met a lot of interesting people. I also had the opportunity to present my talk on making the web simpler. The talk was well received and there were some requests to share the slides. So here is a write-up of what I talked about:

An annoying, broken web

I traveled to the event by train and used the free Wifi services offered on German trains called WIFIonICE, which always sparks a stupid image in my head of an ice skating Wifi signal, but that’s not what I wanted to talk about.

I wanted to talk about the web still being a bad experience on patchy connections. It isn’t that things don’t load or show up. It was, for example, not a problem to read through my feeds using Feedly, check the conference web site or this blog. The problems I had were all on web applications that try to give me a native experience by loading and replacing content in-app.

Web pages and apps showing up empty or with an indefinite loading bar

I tried, for example, to buy a plane ticket and all I got was endless loading screens and ghost screens promising me interaction but failing to deliver. Instead, I looked at “please wait” interaction patterns that left me wondering if I had already bought tickets or not. The promise of app-like convenience turned into frustration. Switching from my laptop to my phone also didn’t help, as the native app also loads the same, flakily designed web solution.

The web is built on resilient technologies – we just don’t use them

The weird thing is that the web should not fail that easily. It is built using HTML to structure things and CSS to apply visuals. Both of these technologies have highly forgiving parsers. If I nest HTML wrongly it will show one element after the other. If I write elements that don’t exist, browsers show them much like they would do with a DIV. If I have a syntax error in CSS, browsers go to the next line and keep trying there. If a browser doesn’t support CSS I use, it doesn’t apply it – but it also doesn’t stop rendering.

Instead of building our products on these technologies, we create everything with JavaScript. A highly powerful technology, but also a brittle one. Of the things we use to build web apps, JavaScript is the one that throws in the towel and stops executing at the first syntax problem, or the first time it can’t access or alter an element we told it to. So why do we rely on it that much?
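A trivial illustration of that brittleness, before we get to the reasons – if the element in the first line doesn’t exist, the whole script block stops right there:

// throws a TypeError when '.does-not-exist' matches nothing…
document.querySelector('.does-not-exist').classList.add('active');
// …and nothing after that line runs any more
console.log('never reached');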

There are a few reasons. The first one is that JavaScript puts us into the driver’s seat. We control everything it does and write the instructions how to show and alter content. With CSS and HTML we need to rely on the browser to do things right and we don’t get a way to validate the success or find out what went wrong. With JavaScript, we have a full debugging environment and we can even halt and continue execution to inspect the current state. This gives us a feeling of control the other technologies don’t give us. And not having full control can feel strange.

Another thing JavaScript allows us to do is to use upcoming, standardised features of the web right now by simulating them in other ways. And as we are impatient or want to constantly match what other platforms offer, we keep doing that. We keep simulating interaction patterns or UI elements foreign to the web in less stable ways rather than waiting for the platform to provide them.

As JavaScript allows us to generate content or alter the look and feel in a programmatic manner, it gives us a feeling of more control. For example, many developers want to control the frames per second of animations instead of defining animations in CSS and letting the browser do the job for them. And as you can generate HTML with JavaScript and alter styles, it feels to some that learning HTML and CSS is not necessary when you can do it all with one technology.

The other reason people rely on JavaScript is that we used to fix faulty standard implementations in browsers with it. Internet Explorer was, of course, the biggest culprit there, but even now you often find patch code for Safari and other browsers. One of the biggest selling points of abstractions like jQuery was that they made browser support predictable. Often new standards go through a few rounds of ropey implementation, and many a “using X considered harmful” blog post has not helped inspire trust in the platform among developers. Instead, they know that by using JavaScript they can make things happen without worrying about browser versions.

Remembering Unobtrusive JavaScript

Screenshot of the unobtrusive javascript course

Twenty years ago I published a course called Unobtrusive JavaScript. In it, I shared my experience moving from browser-specific script and markup in the DHTML days to web-standards-based development. Specifically, I explained how we stopped hacking together web content display and moved on to separation of concerns, saving lots of unneeded code in the process. Instead of using tables and font elements, we had semantic HTML and CSS.

I then proceeded to explain how to stop mixing HTML and JavaScript and how to not rely on the latter as it could fail in many different ways. I put it this way:

Javascript is an enhancement, not a secure functionality. We only use Javascript to enhance a functionality that is already given, we don’t rely on it. Javascript can be turned off or filtered out by proxies or firewalls of security aware companies. We can never take it for granted. This does not mean we cannot use Javascript, it only means we add it as an option rather than a requirement.

This has not changed. JavaScript still is an unreliable technology and yet the apps that frustrated me on my journey made the mistake of relying on it without providing a more stable fallback solution.

The concept of unobtrusive JavaScript is closely related to that of progressive enhancement. Another sensible idea that fell out of fashion. Instead of enhancing working solutions to become better when and if certain criteria are met, we build solutions hoping that everything will be fine.

Instead of using JavaScript and non-web-standard things sparingly, we go all in. And by doing so we have made web development much harder and more complex than it needs to be.

Starting a new web project now

Back in the days, you started a web project with an index.html file and built on that. These days, things are different. You need to:

  • Get the right editor with all the right extensions
  • Set up your terminal with the right font and all the cool dotfiles
  • Install framework flügelhorn.js with bundler wolperdinger.io
  • Go to the terminal and run packagestuff -g install
  • Look at all the fun warning messages and update dependencies
  • Doesn’t work? Go SUDO, all the cool kids are …
  • Don’t bother with the size of the modules folder
  • Learn the abstraction windfarm.css – it does make you so much more effective
  • Use the templating language funsocks – it is much smaller than HTML
  • Check out the amazing hello world example an hour later…

Of course, this is glib, but there is a lot of truth to it.

A simpler web for developers…

I am constantly amazed how hard we made it for people to get started as web developers by advocating complex build processes and development tool chains. If you look at the state of the web though, we live in exciting times as developers and maybe many of these abstractions are not needed.

  • Browsers are constantly updated.
  • The web standardisation process is much faster than it used to be.
  • We don’t all need to build the next killer app. Many a framework promises scaling to infinity and only a few of us will ever need that.
  • Our goal should not be to optimise our developer experience.
  • Our goal should be satisfied visitors using working, resilient products.

The current focus on making front-end work highly architected frustrates seasoned developers and deters new ones. People use low-code or no-code environments to build products instead of using web technologies. My current company uses one of these tools to build our marketing pages and the resulting product is pretty and well maintained, but also bloated and much more complex than it needs to be.

A simpler web for users…

We shouldn’t deliver complex solutions because they are easier to maintain or develop. What ends up on our user’s devices is what’s important, and how easy it is to consume, regardless of setup, connectivity or – most importantly – physical ability.

Browsers are damn good at optimising the user experience. The reason is that browser makers are measured by how speedy the browser is and how resource hungry it gets. But it can only do so much. If we, for example, animate things in JavaScript instead of allowing the CSS engine to optimise animations under the hood, we miss out on some really convenient browser behaviour.

Operating systems allow users to customise the experience to their needs. People can use light or dark mode, quite a few people need to turn off animations as it would distract them and others even use their systems in high contrast modes.

Users spend a lot of time customising their operating systems and devices. We should always value that effort and build on top of it. And our solutions should build on existing interaction patterns like loading pages and going back in the browser history, instead of building a UI we need to explain to people.

So, how do we make a simpler web for developers and end users alike?

Optimise what you control

What our users end up getting is in our control. We own the server and basically the whole experience until it is delivered to the end user. From there on, it is theirs to customise: they can change the look and feel to their needs, translate our content into other languages, block certain content and many other things. But until then, we can do quite a few things to ensure a great basic experience:

  • Send lots of semantic HTML – it cannot break.
  • Use the newest server setups – servers can auto-optimise a lot of things.
  • Use the best image formats – WebP, AVIF, PNG and others – and optimise them before integration or on the fly via a CDN service.
  • Pick servers close to where our users are – a long trip through cables is still slow.
  • If we don’t understand what that great helper library does, or if we only use 10% of what it does, we should not use it.

Cache and offer offline content

Using Service Workers we can prevent our users from having to load content over and over again. This helps them get a snappier experience, and lowers our traffic bills. MDN has a great in-depth guide on Offline and Background Operation of web apps. This was hit and miss for a long time, but pretty solid to use now across browsers and devices. In any case, these are solutions designed to enhance, not to rely on.
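As an enhancement, the basic pattern is small. A minimal sketch, assuming a worker file called sw.js next to your pages and a couple of core assets to pre-cache:

// in your page script: only register if the browser supports it
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// in sw.js: pre-cache a few core files and fall back to the network
const CACHE = 'site-v1';
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/styles.css']))
  );
});
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});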

Remove old band aids

There was a longer period in web development where browsers innovated faster than the standard bodies and other browsers refused to die. During that time we relied on polyfills and libraries to even the playing field and empower us to concentrate on developing our products rather than fixing cross-browser issues. These are a thing of the past now and the helper library of yesterday is the security, performance or maintenance issue of today. So let’s do some spring cleaning:

  • If it fixes things that aren’t an issue anymore – bin it.
  • If it makes things more convenient but has a web platform native equivalent – bin it.
  • If it is only used in one interaction in a part of your app – load it on demand instead of bundling it upfront (see the sketch below).
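Loading on demand can be as simple as a dynamic import() behind the interaction that needs it. A rough sketch – the button id and the chart-helper module are made-up names:

document.querySelector('#show-stats').addEventListener('click', async () => {
  // the module is only fetched the first time someone asks for the chart
  const { drawChart } = await import('./chart-helper.js');
  drawChart(document.querySelector('#stats'));
});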

Outdated versions of jQuery and the like, and plugins that automatically bundled and minified CSS and JavaScript, keep showing up at the top of attack reports. Don’t let an old helper become the door opener for evildoers in your products.

Don’t take on any responsibility you shouldn’t take on…

There is a fabulous saying: “not my circus, not my monkeys”. By leaving some things in the control of the browser and even more things for the end user to change to their needs, we give up control, but also responsibility. Some things you don’t want to be responsible for and browsers are great at are:

  • Keeping the history state
  • Allowing for interception and reload
  • Telling you when a connection fails
  • Caching and preloading things
  • Allowing for bookmarking and sharing

A really interesting new(ish) concept you can take a peek at right now is View Transitions. For example, this video shows a seemingly in-app experience, but if you look at the URL bar you see that it is moving from document to document, thus allowing for history navigation, bookmarking and sharing. What the browser stopped doing is wiping the slate clean every time you go to another page.
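There is also a JavaScript entry point for same-document updates. A minimal, feature-detected sketch – render() stands in for whatever function swaps your content:

function updateContent(next) {
  // enhancement only: browsers without the API just update as before
  if (!document.startViewTransition) {
    render(next);
    return;
  }
  document.startViewTransition(() => render(next));
}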

Only deliver what is needed…

This is a big one. As we work on fast computers on fat connections, we tend to get overly excited about using packages and resources and bundle them in our products. We might need them later anyway, so why not have them delivered and cached? Well, because they clog up the internet and can cause a massive amount of unnecessary traffic.

Take is-number for example. This npm package had almost 68 million downloads this week and 2711 other packages depend on it. If you look at the source, it is this:

module.exports = function(num) {
  if (typeof num === 'number') {
    return num - num === 0;
  }
  if (typeof num === 'string' && num.trim() !== '') {
    return Number.isFinite ? Number.isFinite(+num) : isFinite(+num);
  }
  return false;
};

One of the dependents is is-odd which returns if a number is odd.

This one requires is-number, checks things and throws errors if they fail, but eventually boils down to returning whether the number modulo 2 is 1.

const isNumber = require('is-number');
 
module.exports = function isOdd(value) {
  const n = Math.abs(value);
  if (!isNumber(n)) {
    throw new TypeError('expected a number');
  }
  if (!Number.isInteger(n)) {
    throw new Error('expected an integer');
  }
  if (!Number.isSafeInteger(n)) {
    throw new Error('value exceeds maximum safe integer');
  }
  return (n % 2) === 1;
};

The `is-odd` package has 290k weekly downloads and 120 dependent packages. One of those is `is-even`:

var isOdd = require('is-odd');
 
module.exports = function isEven(i) {
  return !isOdd(i);
};

From a package user point of view, this makes sense. But over time, dependencies can add up to a lot of data on the web for simple problems.

Other packages are even empty, like the `-` package. This one has 48k weekly downloads and 382 dependent packages though. The reason is most likely typos: people typing `npm i - g package` instead of `npm i -g package` install a package with the name `-` instead.

Luckily, this package does nothing, but it would be a great one for malware creators to take over, considering how many people unwittingly use it.

Andrey Akinshin showed the impact of the `package first` thinking on Twitter. By removing the `is-number` package and replacing it with its code, a product he worked on now saves 440GB of traffic every week.

One line fix resulting in saving 440GB of traffic
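Replacing such a dependency is often a handful of lines. A sketch of what inlining the check shown above could look like:

const isNumber = (num) =>
  (typeof num === 'number' && num - num === 0) ||
  (typeof num === 'string' && num.trim() !== '' && Number.isFinite(+num));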

Let’s think about this when we build our products. For example, by using Media Queries in our link elements we only load the colour scheme CSS that is needed:

<link rel="stylesheet" href="/dearconsole/assets/light-theme.css" 
media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)">
<link rel="stylesheet" href="/dearconsole/assets/dark-theme.css"
media="(prefers-color-scheme: dark)">

Users with a dark scheme never load CSS they don’t need. Same applies for those who have not set a preference or use a light theme. This is not hard to implement, but can have a huge impact.

Another important thing to remember is not to use resources like images in classes that get re-used. We are currently investigating why our site is transferring tons of data, and we found that a photo of mine is used as a background image in a CSS class that isn’t only used on the page where the photo is shown, but all over the product. This causes the 400K image to be loaded 235k times, which means 92GB of traffic per month.

A background image in a well used CSS class causing 92GB of traffic, although it is only shown on one page.

Mind you, this doesn’t mean that people see it; it is simply requested every time the CSS class is applied, as background images are not loaded on demand. Most likely someone in the low-code environment thought at the time that using it as a background image gives them more flexibility, but with object-fit, images inside our HTML are just as flexible.

Even better, using the loading HTML attribute set to `lazy`, we can ensure that browsers only load images when and if they can be displayed and not cause unnecessary traffic beforehand.

The same applies to scripts: by using the defer attribute, we can make sure our scripts are executed only after the document has been parsed, thus not delaying the page showing up.

Keep up to date

The interesting thing about recent innovations on the web platform is that they are all about giving up control and moving away from pixel perfect layouts. Instead, CSS especially is embracing the concept of the web being an international platform and development not only happening on a page, but also on a component level. One could say that CSS is giving up control:

  • From fixed container sizes to flexbox/grid growing and shrinking with the content
  • From pixels to percentages to em/rem to viewport sizing to container sizing
  • From setting width and height to auto to aspect ratio
  • From physical box-model definitions to logical properties independent of writing mode

CSS and JavaScript also intersect, which means you can get some information via scripting and hand it over to the CSS engine for display.

  • You can read and write CSS custom properties (“variables”) in JS – and thus hand, for example, the mouse position over to CSS (see the sketch after this list).
  • CSS animations and transitions fire events for JS to use – and are part of the browser rendering.
  • Media Queries can be used in both languages to detect if a user has dark mode, doesn’t want to see animations, uses a touch device…
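The first of these points, for example, fits in a few lines – a small sketch that hands the pointer position to CSS and reads a user preference back in JS:

// write the pointer position into custom properties that CSS can use
document.addEventListener('pointermove', (event) => {
  document.documentElement.style.setProperty('--x', `${event.clientX}px`);
  document.documentElement.style.setProperty('--y', `${event.clientY}px`);
});

// read a Media Query result in JS
const prefersDark = matchMedia('(prefers-color-scheme: dark)').matches;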

On the HTML front, we also have a lot of great features that got promoted from a “nice in the future” to “safe to use now”.

Just consider how much you can achieve with form element attributes. In this form demo codepen you can try out the effects of the following HTML:

<form action="#">
  <label for="cms">Your CMS choice</label>
  <input id="cms" 
         autocomplete="off" 
         required 
         pattern="\w+\d">
  <input type="submit" value="go">
</form>
<div popover id="msg">Excellent, isn't it?</div>

You cannot send the form without entering a CMS name. You cannot send a CMS name that isn’t a word followed by a number. And when you submit the form with a valid value, you get an overlay message telling you that it is excellent. This is using native HTML form validation and the Popover API. Another thing to check out is the new(ish) Dialog element, which offers even more overlay options.

Another useful element group is details/summary. This allows you to create parts of the page that are hidden and can be shown by activating an arrow. All without JavaScript. On Chromium browsers you can even use Ctrl|CMD + F to search in the page and the browser will automatically expand sections that match. I’ve used this lately to build a browsing interface for the video metadata of all the talks at our conference and you can see it in action in the following video:

One thing that seems to become fashionable is that people add their own dark/light switches to web apps. This feels like back in the days when we built font resizing widgets for Internet Explorer users. As people use their devices in dark or light mode it makes a lot more sense to read out this setting and automatically change your design accordingly. You can do that with media queries, but there is even a handy new(ish) light-dark colour function in CSS to achieve the same functionality. Check out this Codepen to see it in action.

Dark light switching in action using only CSS and one button

Forget browser delays having an impact

In general, there used to be a time when “browser X is still around” was a valid argument for not embracing the web platform and – more importantly – for not taking on new functionality or taking part in the discussions around it. Browser makers are accessible to us and desperate for feedback. New browser versions come out all the time and even those tied to OS updates are picking up the pace. What’s “not supported right now” is often ready by the time your product is shipped. So let’s not hold back.

Spend more time testing, less time trying to invent the web

Browser developer tools do not only give you insights into your JavaScript, Network activity and DOM but are also chock-full of accessibility testing features. Instead of trying to shoe-horn the coolest, newest framework into your product, spending some time using these very early on in the process will pay massive dividends later.

Things we should aim for

In conclusion, here are some things we should aim for.

  • Control over what ends up in our apps – code reviews should include dependencies.
  • Contribution back to the community – let’s propose more web platform features to standards bodies and browser makers.
  • Being diligent in what we use for what purpose – no need to add the kitchen sink every single time
  • Finding joy in keeping things simple and using the platform…

Thanks!