Christian Heilmann


Archive for June, 2014

Flight mode on

Monday, June 9th, 2014

Much like everybody else these days, I use my phone as an alarm clock. In addition to this, however, I also made a conscious decision to turn on flight mode during the night. The reason is the updates coming in that may or may not make it buzz or make a sound. Of course, I could turn that off. Of course I could not care about it. Of course that is nonsense, as we are wired to react to sounds and blinking lights.

men in black flasher

In many applications the option to turn off audio, visual or buzz notifications is well hidden, as their sole business model is to keep you interacting with them. And we do. All the time. 24/7. Because we might miss something important. That can so not wait. And we need to know about it now, now, now…

I also started turning off my phone when I am on the go – on my bicycle or on the train and bus. There is no point in keeping it on, as there is no connectivity in trains in London and I get car sick trying to interact with my phone on a bus. Furthermore, so many apps are built with woefully bad support for offline use and intermittent connections. I am just tired of seeing spinners.

museum of loading

So what? Why am I telling you this? The reason is that I am starting to get bored and annoyed with social media. I sense a strong feeling of being part of a never-ending current of mediocrity, quick wins and pointless data consumption. Yes, I know the “irony” of me saying this, seeing how active I am on Twitter and how much “pointless” fluffy animal material I intersperse with technical updates.

The point for me is that I miss the old times of slow connections and scarcity of technical information. Well, not really miss them, but I think we are losing a lot by constantly chasing the newest and most amazing and being the first to break the “news” of some cool new script or solution.
birthday without wifi

When I started in web development I had a modem. I also paid for my connection by the minute. I didn’t have a laptop. At work I wasn’t allowed to read personal mails or surf the web – I was there to attend meetings, slice up photoshop files, add copy to pages and code.

At home I had a desktop. I connected to the internet, downloaded all my emails and newsgroup items (most of the time the headers only), surfed the web a bit, disconnected and started answering my emails. I subscribed to email forums like webdesign-l, CSS Discuss and many others. In these forums I found articles from A List Apart, Webmonkey, Digital Web and many others worth reading.

Sounds inconvenient and terrible by today’s standards, when we are annoyed that TV series don’t stream without buffering while we are on planes. It was, but it also meant one thing I think we have lost: I cherished every email and every article much more than I do now. I appreciated the work that went into them, as they were more scarce. To get someone’s full attention these days you need to be either outrageous or overpromising. The wisdom of the crowds gets very dubious when limited to social media updates. Not the best bubbles up, but the most impressive.

Meeting Point

I also forged close relationships with the people subscribed to these lists and forums by interacting more closely than in 140 characters. A List Apart, for example, was not only about the articles – the more interesting and amazing solutions came from the discussions in the comments. I made my name by taking part in these discussions, agreeing and disagreeing with people. Many people I know now who speak, coach, run companies and hold high positions in the mover and shaker companies of the web came from this crowd.

I took my time to digest things, I played with technology and tried it out and gave feedback. We took our time to whittle away the rough edges and come up with something more rounded.

We call this web of now “social”. We have amazing connections and collaboration tools. We have feedback channels beyond our dreams. But we rush through them. Instead of commenting and giving feedback, we like, share and +1. Instead of writing a thought-out response, we post a reaction GIF. Instead of communicating, we play catch-up.

The sheer mass of tech articles, videos, software betas, updates and posts released every hour makes it almost impossible to catch up. Far too many great ideas, solutions and approaches fall through the cracks, because ending up on Hacker News and getting lots of likes is the goal. This means you need to be talking about the newest thing, not the thing that interests you the most.

Maybe this makes me sound like an old fart. So be it. I think we deserve some downtime from time to time. And the content other people create and publish deserves more attention than a fly-by, glancing over it and sharing, hoping to be seen as the person with the cool news.

Be a great presenter: deliver on and off-stage

Thursday, June 5th, 2014

As a presenter at a conference, your job is to educate, entertain and explain. This means that the few minutes on stage are the most stressful, but should also be only a small part of your overall work.

Christian Heilmann listening to translations at TEDxThessaloniki

A great technical presentation needs a few things:

  • Research – make sure your information is up-to-date and don’t sell things that don’t work as working
  • Sensible demonstrations – by all means show what some code does before you talk about it. Make sure your demo is clean and sensible and easy to understand.
  • Engagement materials – images, videos, animations, flowcharts, infographics. Make sure you have the right to use those and you don’t just use them for the sake of using them.
  • Handover materials – where should people go after your talk to learn more and get their hands dirty?
  • An appropriate slide deck – your slides are wallpaper for your presentation. Make them supportive of your talk and informative enough. Your slides don’t have to make sense without your presentation, but they should also not be distracting. Consider each slide an emphasis of what you are telling people.
  • A good narration – it is not enough to show cool technical things. Tell a story: what are the main points you want to make, and why should people remember your talk?
  • An engaging presentation – own the stage, own the talk, focus the audience on you.

All of this needs a lot of work: collecting on the web, converting, coding, rehearsing and learning to become better at conveying information. All of it results in materials – some you use in your talk, others you may never get to use even though they are very valuable.

It is not about you, it is about what you deliver

A great presenter could carry a talking slot just with presence and the right stage manner. Most technical presentations should be more. They should leave the audience with an “oh wow, I want to try this and I feel confident that I can do this now” feeling. It is very easy to come across as “awesome” and show great things but leave the audience frustrated and confused just after they’ve been delighted by the cool things you are able to do.

Small audience, huge viewer numbers

Great stuff happens at conferences, great demos are being shown, great solutions explained and explanations given. The problem is that all of this only applies to a small audience, and those on the outside lack context.

This is why parts of your presentation often get quoted out of context, and demos you showed to make a point get presented as endorsed by you, missing the original point.

In essence: conferences are cliquey by design. That’s OK, after all people pay to go to be part of that select group and deserve to get the best out of it. You managed to score a ticket – you get to be the first to hear and the first to talk about it with the others there.

It gets frustrating when parts of the conference get disseminated over social media. Many tweets talking about the “most amazing talk ever” or “I can’t believe the cool thing $x just showed” are not only noise to the outside world, they also can make people feel bad about missing out.

This gets exacerbated when you release your slides and they don’t make any sense, as they lack notes. Why should I get excited about 50MB of animated GIFs, memes and hints of awesome stuff? Don’t make me feel bad – I already feel I am missing out as I got no ticket or couldn’t travel to the amazing conference.

misleading infographic

If you release your talk materials, make them count. They are for people on the outside. Whilst everybody at an event will ask about the slides, the number of people really looking at them afterwards is much smaller than the number who couldn’t go to see you live.

Waiting for recordings is frustrating

The boilerplate answer to people feeling bad about not getting what the whole Twitter hype is about is “Oh, the videos will be released, just wait till you see that”. The issue is that in many cases video production takes time, and there is a delay of a few weeks up to months between the conference and the video being available. Which is OK – good video production is hard work. It does, however, water down the argument that the outside world will get the hot, cool information. By the time the video of that amazing talk is out, we’re already talking about another unmissable talk happening at another conference.

Having a video recording of a talk is the best possible way to give an idea of how great the presentation was. It also demands a lot of dedication from the viewer. I watch presentation videos in my downtime – on trains, in the gym and so on. I’ve done this for a while, but right now I find so much being released that it becomes impossible to catch up. I just deleted 20 talks from my iPod unwatched, as their due date had passed: the cool thing the presenter talked about is already outdated. This seems a waste, both for the presenter and for the conference organiser who spent a lot of time and money on getting the video out.

Asynchronous presenting using multiple channels

Here’s something I try to do and wish more presenters did. As a great presenter, you should be aware that you might involuntarily cause discontent and frustration outside the conference: people talk about the cool stuff you did without knowing what you did.

Instead of only delivering the talk, publish a technical post covering the same topic you talked about. Prepare the post using the materials you collected in preparation of your talk. If you want to, add the slides of your talk to the post. Release this post on the day of your conference talk using the hashtag of the conference and explaining where and when the talk happens and everybody wins:

  • People not at the conference get the gist of what you said instead of just soundbites they may quote out of context
  • You validate the message of your talk – a few times I re-wrote my slides after really trying to use the technology I wanted to promote
  • You get the engagement from people following the hashtag of the conference and give them something more than just a hint of what’s to come
  • You support the conference organisers by drumming up interest with real technical information
  • The up-to-date materials you prepared get heard web-wide when you talk about them, not later when the video is available
  • You re-use all the materials that might not have made it into your talk
  • Even when you fail to deliver an amazing talk, you managed to deliver a lot of value to people in and out of the conference

For extra bonus points, write a post right after the event explaining how it went and what other parts about the conference you liked. That way you give back to the organisers and you show people who went there that you were just another geek excited to be there. Who knows, maybe your materials and your enthusiasm might be the kick some people need to start proposing talks themselves.

Write less, achieve meh?

Wednesday, June 4th, 2014

In my keynote at HTML5DevConf in San Francisco I talked about a pattern of repetition those of us who’ve been around for a while will have encountered, too: every few years development becomes “too hard” and “too fragmented” and we need “simpler solutions”.

chris in suit at html5devconf

In the past, these were software packages, WYSIWYG editors and CMS that promised to deliver “to all platforms without any code overhead”. Nowadays we don’t even wait for snake-oil salesmen to promise us the blue sky. Instead we do this ourselves. Almost every week we release new, magical scripts and workflows that solve all the problems we have, for all the new browsers and with great fall-backs for older environments.

Most of these solutions stem from fixing a certain problem and – especially in the mobile space – far too many stem from trying to simulate an interaction pattern of native applications. They do a great job, they are amazing feats of coding skills and on first glance, they are superbly useful.

It gets tricky when problems come up and don’t get fixed. This – sadly enough – is becoming a pattern. If you look around GitHub you find a lot of solutions that promise utterly frictionless development, with many an unanswered issue or un-merged pull request. Even worse, instead of filing bugs there is a pattern of creating yet another solution that fixes all the issues of the original one. People are simply expected to replace the old one with the new one.

Who replaces problematic code?

All of this should not be an issue: as a developer, I am happy to discard and move on when a certain solution doesn’t deliver. I’ve changed my editor of choice a lot of times in my career.

The problem is that completely replacing solutions expects a lot of commitment from the implementer. All they want is something that works and, preferably, something that fixes the current problem. Many requests on Stackoverflow and other help sites don’t ask for the why, but just want a how. What can I use to fix this right now, so that my boss shuts up? A terrible question that developers of every generation seem to repeat, and one that almost always results in unmaintainable code with lots of overhead.

That’s when “use this and it works” solutions become dangerous.

First of all, these tell those developers that there is no need to ever understand what you do. Your job seems to be to get your boss off your back or to make that one thing in the project plan – that you know doesn’t make sense – work.

Secondly, if we find out about issues with a certain solution and consider it dangerous to use (cue all those “XYZ considered dangerous” posts), we should remove it and redirect people to the better solutions.

This, however, doesn’t happen often. Instead we keep them around and just add a README that tells people they can use our old code but we are not responsible for the results. Most likely, the people who got the answer they wanted on the Stackoverflows of this world will never hear that the solution they chose and implemented is broken.

The weakest link?

Another problem is that many solutions rely on yet more abstractions. This sounds like a good plan – after all we shouldn’t re-invent things.

However, it doesn’t really help an implementer on a very tight deadline if our CSS fix requires them to learn all about Bower, node.js, npm, SASS, Ruby or whatever else first. We cannot just assume that everybody who creates things on the web is as involved in its bleeding edge as we are. True, a lot of these tools make us much more efficient and are considered “professional development”, but they are also very much still in flux.

We cannot assume that all of these dependencies will work and make sense in the future. Neither can we expect implementers to remove parts of this magical chain and replace them with newer versions – especially as many of them are not backwards compatible. A chain is only as strong as its weakest link, remember? That also applies to tool chains.

If we promise magical solutions, they’d better be magical and get magically maintained. Otherwise, why do we create these solutions? Is it really about making things easier or is it about impressing one another? Much like entrepreneurs shouldn’t be in love with being an entrepreneur but instead love their product we should love both our code and the people who use it. This takes much more effort than just releasing code, but it means we will create a more robust web.

The old adage of “write less, achieve more” needs a re-vamp to “write less, achieve better”. Otherwise we’ll end up with a world where a few people write small, clever solutions for individual problems and others pack them all together just to make sure that really everything gets fixed.

The overweight web

This already seems to be the case. When you see that the average web site, according to HTTP Archive, is 1.7MB in size (46% cacheable) with 93 resource requests across 16 hosts, then something, somewhere is going terribly wrong. It is as if none of the performance practices we have talked about over the last few years ever reached those who really build things.

A lot of this is baggage of legacy browsers. Many times you see posts and solutions like “This new feature of $newestmobileOS is now possible in JavaScript and CSS – even on IE8!”. This scares me. We shouldn’t block out any user of the web. We also should not take bleeding-edge, computationally heavy and form-factor-dependent code and give it to outdated environments. The web is meant to work for all – not work the same for all, and certainly not be slow and heavy in older environments based on some misunderstanding of what “support” means.

Redundancy denied

If there is one thing that this discouraging statistic shows then it is that future redundancy of solutions is a myth. Anything we create that “fixes problems with current browsers” and “should be removed once browsers get better” is much more likely to clog up the pipes forever than to be deleted. Is it – for example – really still necessary to fix alpha transparency in PNGs for IE5.5 and 6? Maybe, but I am pretty sure that of all these web sites in these statistics only a very small percentage really still have users locked into these browsers.

The reason for this denied redundancy is that we solved the immediate problem with a magical solution – we cannot expect implementers to revisit their solutions later to see if they are no longer needed. Many developers don’t even have the chance to do so – projects in agencies get handed over to the client when they are done, and the next project with a different client starts.

Repeating XHTML mistakes

One of the main things HTML5 was invented for was to create a more robust web by being more lenient with markup. If you remember, XHTML sent as XML (as it should have been, but never was, as IE6 didn’t support that) had the problem that a single HTML syntax error or an un-encoded ampersand would result in an error message and nothing would get rendered.

This was deemed terrible, as our end users get punished for something they can’t control or change. That’s why the HTML parsing algorithm of newer browsers is much more lenient and – for example – closes tags for you.

Nowadays, the yellow screen of death showing an XML error message is hardly ever seen. Good, isn’t it? Well, yes, it would be – if we had learned from that mistake. Instead, we now make a lot of our work reliant on JavaScript, resource loaders and many libraries and frameworks.

This should not be an issue – the “JavaScript not available” use case is a very small one, mostly covering users whose sysadmins turned JavaScript off or people who prefer the web without it.

The “JavaScript caused an error” use case, on the other hand, is very much alive and will probably never go away. So many things can go wrong: resources not being available, network timeouts, mobile providers and proxies messing with your JavaScript, or simple syntax errors caused by wrong HTTP headers. In essence, we are relying on a technology that is much less reliable than XML was, and we feel very clever doing so. The more dependencies we have, the more likely it is that something goes wrong.

None of this would be an issue if we wrote our code in a paranoid fashion. But we don’t. Instead we seem to fall for the siren song of abstractions telling us everything will be more stable, much better performing and cleaner if we rely on a certain framework, build script or packaging solution.
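Writing in a paranoid fashion can be as simple as never assuming an API exists or works. A minimal sketch, under the assumption of plain browser JavaScript – `storeLocally` is a hypothetical helper name, not an established API:

```javascript
// Hypothetical helper that never assumes localStorage is available or
// working – it can be missing in older environments and can throw in
// private browsing mode or when the storage quota is exceeded.
function storeLocally(key, value) {
  try {
    if (typeof localStorage === 'undefined') {
      return false; // feature missing – the caller can fall back
    }
    localStorage.setItem(key, JSON.stringify(value));
    return true;
  } catch (err) {
    return false; // quota exceeded or access denied
  }
}
```

The caller gets a boolean either way and can decide how to degrade, instead of the whole script dying in the first unexpected environment.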

Best of breed with basic flaws

One eye-opener for me was judging the Static Showdown Hackathon. I was very excited about the amazing entries and what people managed to achieve solely with HTML, CSS and JavaScript. What annoyed me though was the lack of any code that deals with possible failures. Now, I understand that this is hackathon code and people wanted to roll things out quickly, but I see a lot of similar basic mistakes in many live products:

  • Dependency on a certain environment – many examples only worked in Chrome, some only in Firefox. I didn’t even dare to test them on a Windows machine. These dependencies were in many cases not based on functional necessity – instead the code just assumed a certain browser-specific feature to be available and tried to access it. This is especially painful when the solution additionally loads lots of libraries that promise cross-browser functionality. Why use those if you’re not planning to support more than one browser?
  • Complete lack of error handling – many things can go wrong in our code. Simply doing nothing when, for example, loading some data fails, and presenting the user with an infinite loading spinner, is not a nice thing to do. Almost every technology we have has a success and an error return case. We seem to spend all our time in the success one, whilst it is much more likely that we’ll lose users and their faith in the error one. If an error case is not even reported, or is reported as the user’s fault, we’re not writing intelligent code. Thinking paranoid is a good idea. Telling users that something went wrong, what went wrong and what they can do to retry is not luxury – it means building a user interface. Any data loading that doesn’t refresh the view should have an error case and a timeout case – connections are the things most likely to fail.
  • A lack of very basic accessibility – many solutions I encountered relied on touch alone, and in doing so provided incredibly small touch targets. Others showed results far away from the original action without changing the original button or link. On a mobile device this was incredibly frustrating.
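The error and timeout cases for data loading can be sketched in a few lines – a hedged example, not from any particular library: `withTimeout`, `renderView` and `showRetryMessage` are hypothetical names, and the millisecond values are arbitrary choices.

```javascript
// Race a promise against a timer so a hanging request can be reported
// to the user instead of showing an infinite loading spinner.
function withTimeout(promise, ms) {
  const timeout = new Promise((resolve, reject) => {
    setTimeout(() => reject(new Error('Request timed out')), ms);
  });
  return Promise.race([promise, timeout]);
}

// Usage sketch: handle success, error and timeout explicitly.
// withTimeout(fetch('/api/data'), 5000)
//   .then((response) => renderView(response))
//   .catch((err) => showRetryMessage(err.message));
```

Both failure paths end in the same `catch`, so the user always gets told what went wrong and how to retry.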

Massive web changes ahead

All of this worries me. Instead of following basic protective measures to make our code more flexible and deliver great results to all users (remember: not the same results to all users; this would limit the web) we became dependent on abstractions and we keep hiding more and more code in loaders and packaging formats. A lot of this code is redundant and fixes problems of the past.

The main reason for this is a lack of control on the web. And this is very much changing now. The flawed solutions we had for offline storage (AppCache) and widgets on the web (many, many libraries creating DOM elements) are getting new, exciting and above all control-driven replacements: ServiceWorker and WebComponents.

Both of these are the missing puzzle pieces to really go to town with creating applications on the web. With ServiceWorker we can not only create apps that work offline, but also deal with a lot of the issues we now solve with dependency loaders. WebComponents allow us to create reusable widgets that are either completely new or inherited from another or existing HTML elements. These widgets run in the rendering flow of the browser instead of trying to make our JavaScript and DOM rendering perform in it.

The danger of WebComponents is that they allow us to hide a lot of functionality in a simple element. Instead of just shifting our DOM widget solutions to the new model, this is a great time to clean up what we do, find the best-of-breed solutions and create components from them.

I am confident that good things are happening there. Discussions sparked by the Edge Conference’s WebComponents and Accessibility panels have already resulted in some interesting guidelines for accessible WebComponents.

Welcome to the “Bring your own solution” platform

The web is and stays the “bring your own solution” platform. There are many solutions to the same problem, each with their own problems and benefits. We can work together to mix and match them and create a better, faster and more stable web. We can only do that, however, when we allow the bricks we build these solutions from to be detachable and reusable. Much like glueing Lego bricks together means using them wrong, we should stop creating “perfect solutions” and create sensible bricks instead.

Welcome to the future – it is in the browsers, not in abstractions. We don’t need to fix the problems for browser makers, but should lead them to give us the platform we deserve.