Christian Heilmann


Archive for October, 2012

Book idea: The Vanilla Web Diet

Tuesday, October 30th, 2012

Right now I feel the itch to write a book again. I see a lot of people buying books and making a living selling them, and I feel there is a space for what I have in mind. I also don’t see how I could cover everything I want to cover in talks or blog posts – it would be presumptuous to assume you’d follow a whole series of them. A book with code, possibly accompanied by a series of screencasts, seems to be the right format.

vanilla cupcakes

I have a few outstanding offers from publishers, but having my first publisher hand over the second edition of my first book to someone else without waiting for my yay or nay makes the traditional publishing route less interesting to me. Whilst I am pondering other distribution offers (and if you have some, talk to me), here is what I am planning to write about:

The web needs a diet

Web development as we know it has come on in leaps and bounds lately. With HTML5 we have a massive opportunity based on a predictable rendering algorithm across browsers. CSS has evolved from removing the underline of links and re-adding it on hover into a language to define layout, animation and transformations. The JavaScript parts of HTML5 give us a much simpler way to access the DOM and manipulate content than traditional DHTML and DOM scripting ever allowed.

Regardless of that, we still clog the web with lots and lots of unnecessary code. The average web site is well beyond a megabyte of data and makes lots of HTTP requests, because our desktop machines and connections allow us to add more and more – just in case we need it later.

On mobile the whole picture looks different: connectivity is flaky and every byte counts. In any case we should be thinking about slimming down our output instead of pushing out more and more code that is not needed, adding to a landfill of quickly dating solutions that will not get updated and fixed for future environments and browsers. We litter the web right now, and the reasons are:

  • A misguided sense of duty to support outdated technology (yes, OLDIE) without really testing whether our support actually helps its users
  • A misguided belief that everything needs to be extensible and architected at an enterprise level, instead of embracing the fleeting nature of web products and taking a YAGNI approach
  • An unwarranted excitement about technology that looks shiny but cannot be trusted to be around for long

Shed those kilobytes with the fat-free vanilla approach

In the book I’d like to outline a pragmatic and backwards compatible way of thinking and developing for the web:

  • We start with standards-compliant code that works without relying on hacks and temporary solutions
  • We improve when and if the environment our code is consumed in supports what we want to do
  • Instead of shoe-horning functionality into outdated environments, we make no promises of functionality where it cannot be applied
  • We write the right amount of code to be understandable and maintainable, instead of abstracting everything to write the least amount of code without knowing what the final outcome will be
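The principles above boil down to testing before trusting. As a minimal sketch of the “improve when and if the environment supports it” idea, here is what that could look like in vanilla JavaScript – note that the function name and the `data-enhanced` flag are made up for this illustration, not part of any API:

```javascript
// A sketch of progressive enhancement: test for the capability
// first instead of promising functionality that cannot be applied.
// The helper name and the flag are hypothetical.
function enhance(el) {
  if (!('dataset' in el)) {
    return false; // older environment: leave the working base alone
  }
  // safe to use the richer API from here on
  el.dataset.enhanced = 'true';
  return true;
}

// In the browser you would call this with a real element, e.g.
// enhance(document.querySelector('#player'));
```

The point is not the flag itself but the shape: the standards-compliant base keeps working everywhere, and the enhancement only happens where the test passes.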

The book will be opinionated and challenge a few ideas that we started to love because of their perceived usefulness for developers. In the end though I want to make people aware of our duty to produce the best products for our end users and to write code for the person who will take over from us when we want to move on to other things.

The book will teach you a few things:

  • How to build with instead of just for the web
  • How to use what browsers can do to build without writing much code
  • How to avoid writing code that will fail in the near future
  • How to not make yourself dependent on code you don’t control
  • How to learn to let go of “best practices” that made a lot of sense in the past but are not applicable any longer
  • How to have fun with what we have as web developers these days without repeating mistakes of the past
  • How to embrace the nature of the web – where everybody is invited, regardless of ability, location or technology

What do you think? Tell me on Facebook or Google+.

Welcome to the New Web – Keynote at Eclipsecon Europe 2012

Thursday, October 25th, 2012

This morning I gave the keynote at the Eclipsecon Europe in Ludwigsburg, Germany. Around 500 Eclipse and Java fans waited for some information about the latest and greatest in the web and here is what I gave them.


The slides are available online and the screencast is up on YouTube.

I will follow up with a more detailed explanation of the messaging on the Mozilla Hacks blog tomorrow.

Don’t call it “open source” unless you mean it

Monday, October 22nd, 2012

In terms of releasing code into the wild we live in terribly exciting times. Products like GitHub, Dropbox and online collaboration tools like JSFiddle, JSBin, Codepen and Dabblet make it very easy to show our code to the outside world. Furthermore, a lot of products are built in a modular manner, which means you can simply participate by writing a plugin or add-on instead of coming up with your own solutions. jQuery and WordPress are living proof of that.

Quick release, moving on fast

One of the biggest dangers of a very simple infrastructure is inflation. When it is easy to release something, a lot gets released. This means it becomes much harder to find good quality content, and we are tempted to release more rather than releasing a few things we really care about and are ready to keep caring for in the future. Much like we write shorter and less thought-out emails than we did letters, we tend to get into a frenzy of releasing smaller, shorter and less documented products.

This is open source – or is it?

This is where we currently run the danger of cheapening the term “open source”. Releasing an open source product is much more than making it available for free. It is a process, an ongoing commitment to nurturing something by sharing it with the world. Open source and its merits can actually be a blueprint of a much more democratic world to come, as Clay Shirky explains in How the Internet will (one day) transform government.

During the Fronteers conference, David DeSandro gave a talk about monetising jQuery plugins and how to keep your sanity at the same time. His main concern was that his plugins (mostly Masonry) cost a lot of his time and didn’t bring in enough to make a living off them. Several times during the talk he explained that he does have a job and that he has no time to answer every email. After all, there should be no need to ask questions or get support, as the code is available on GitHub; people could help each other and be happy that the code was released as open source. How come no magical community appeared out of nowhere to take care of all this?

Open Source is more than releasing code

Well, this wasn’t an open source project. The code was released in the open and is available on a repository, but it is not open source. Open source means – at least to me and a lot of people I talked to – that products are produced and maintained in the open.

This encompasses much more than just putting the code on GitHub. It means:

  • being ready to take on pull requests and patches, and communicating to the people who send them when and if they get released
  • helping people to get involved
  • communicating changes to the outside world
  • reacting to security and performance issues
  • answering feature requests
  • dealing with licensing issues
  • ensuring that the product evolves and changes in reaction to new environments and market needs
  • encouraging people to contribute and help others
  • recognising people for helping others and sharing the fruits of the labour with them

Open source is a lot of work and needs a community

This means a lot of work, and it is the reason why a true open source project is not done by one person but should, from the very beginning, be planned around a group of people. People who are happy to share ownership and responsibilities as much as the benefits and the income of the project. People who are ready to hand over responsibilities should they tire of them, and to train their successors with that in mind. To be truly open source you should think about the maintenance and the future of the project before you release it. It is a group effort, and the very tricky part is finding the group without hiring them.

A lot of what people call open source these days feels like “Ta-da source” or “Pasture source” to me:

  • Ta-da source – products that are released as a final commercial version as-is and get a source release to the world later. Instead of making the world part of the creation process, it becomes the maintenance staff fixing bugs and adding the things it needs to the product. This allows you to say you are open without having to deal with people’s needs, and to stick to your own agenda when it comes to core functionality.
  • Pasture source – this is when products used to be financially viable and interesting but became a nuisance. Instead of maintaining them, they go out to the pasture of open source, where kind shepherds will, without pay, ensure the products live happily ever after. Pasture source happens either because the workload of communication gets too much, or because the business you work for doesn’t see the product as something to pour resources into. A lot of the time this is a PR exercise – “hey, this product didn’t do well, but now that it is open it will be the next cool thing for the open source community”.

Brackets – a positive surprise

When Adobe said they would release their editor Brackets as open source and ship it as Edge Code in their new set of tools for web developers, I was not alone in being wary. It sounded like a closed source company trying to play with the rebel kids. Technically this is not new – Adobe already did a lot of open releases with AIR and even released books under Creative Commons – but it still felt like “well, let’s see what comes of this”.

Seeing Adam Lehman talk about Brackets and their approach at the Adobe event in London, I was very impressed by how the project is run as an open source project. The code, of course, is available on GitHub, but that is not all. The project follows an Agile process using Scrum, pull requests are reviewed every day, it has a 2.5 week release cycle and external contributions take priority. A detailed blog with release notes for each sprint is also available.

The project is managed in the open on Trello, and a very clever way to attract new contributors is to triage simple bugs as “quick wins” for them rather than having more advanced developers spend their time on fixing them.

This is a great example of approaching an open source release of a product with the right tools and the right mindset. Yes, it is a lot of work, but I am quite confident it means Brackets will be around for a long time, even if the Edge suite were to fail.

Don’t stop releasing

I am not saying that we should stop releasing things and making them available on GitHub and elsewhere. I am simply saying that open source means much more than that, and that we shouldn’t be surprised to get fewer contributors than we expect if all we do is throw some code out and wait for magic to happen. Open source means we get people’s time to build with us, not for us.

So when you release things, don’t call it an open source project, unless you are ready to go the full distance. Just put it out there and tell people that it is free and available, and that it is up to them what happens with it.

Data attributes rock – as both CSS and JavaScript know them

Wednesday, October 10th, 2012

Currently my better half Kasia is working on a JavaScript training course and wanted to explain the concepts of JavaScript with a game. So we sat down and did a simple game example whilst she was fretting over the structure of the course. As she wanted to explain how to interact with the DOM in JavaScript rather than using Canvas we had some fun using CSS animation in conjunction with simple keyboard controls. More on the game in due time, but here is a quick thing we found to be extremely useful and not really used enough in the wild – the interplay of data attributes, CSS and changing states.

Defining a player element

We wanted to make the game hackable, so that people playing with HTML could change it. That was more a request by me, as Mozilla has the Webmaker project and there will be a lot of game hacking at Mozfest in November.

In order to define a player element the semantic fan in me would do something like this:

<div id="player">
	<ul>
		<li class="name">Joe</li>
		<li class="score">100</li>
	</ul>
</div>

This makes sense in terms of HTML and is accessible, too. However, accessing this in JavaScript is quite annoying, as you need three element matches. In terms of maintenance it also means keeping three elements in sync. In JS you’d need to do something like:

var player = document.querySelector('#player'),
    name   = document.querySelector('#player .name'),
    score  = document.querySelector('#player .score');

In order to change the score value, you’d need to change the innerHTML of the score reference.

score.innerHTML = 10;

Aside: yes I know there are lots of HTML templating solutions and I am sure dozens of jQuery solutions for that, but let’s stick to vanilla JS as this was about teaching that.

An HTML5 player element

Instead of going through these pains, we found it to be much easier to go with data attributes:

<div id="dataplayer" data-name="Joe" data-score="100"></div>

The clever thing here is that HTML5 already gives us an API to change this data:

var player = document.querySelector('#dataplayer');
// read
alert('Score:' + player.dataset.score);
alert('Name:' +;
// write
player.dataset.score = 10;
// read again
alert('Score:' + player.dataset.score);
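One detail worth knowing about the dataset API: attribute names are converted to camelCased property names, so `data-high-score` becomes `dataset.highScore`. A rough sketch of that mapping rule (the helper name is mine, not part of any browser API):

```javascript
// Sketch of how HTML5 maps data-* attribute names to dataset keys:
// strip the "data-" prefix and camelCase what remains.
function datasetKey(attrName) {
  return attrName
    .replace(/^data-/, '')
    .replace(/-([a-z])/g, function (match, letter) {
      return letter.toUpperCase();
    });
}
```

So `datasetKey('data-high-score')` yields `'highScore'` – handy to keep in mind when your attribute names contain hyphens.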

Re-using attribute values

Another benefit of using data attributes is that CSS gets them, too. Say you want to show the score value in red when it reaches 10. With the first, list-based HTML you’d need to do the testing in JavaScript and add a class to get a different display. You could of course change the colour directly via the style collection, but that is awful in terms of maintenance: it can cause reflows in your rendering and is yet another thing to explain to maintainers.

function changescore(newscore) {
	if (newscore === 10) {
		score.classList.add('low');
	} else {
		score.classList.remove('low');
	}
	score.innerHTML = newscore;
}

#player .low {
	color: #c00;
}
Aside: jQuery has the :contains(‘foo’) selector to match elements by their text content, but it has been deprecated as a CSS standard, so that is not the way to go.

When using data attributes you don’t need that – all you need is an attribute selector in CSS:

#dataplayer[data-score='10'] {
	color: #c00;
}
To display the scores you can use generated content in CSS:

#dataplayer::after {
	content: attr(data-name);
	position: absolute;
	left: -50px;
}
#dataplayer::before {
	opacity: 0;
	content: attr(data-score);
	position: absolute;
	left: 100px;
}

Check out the following fiddle to see all in action:

The only downside I can think of is that only Firefox allows for transitions and animations on generated content. All in all we found data attributes incredibly useful though.

Comments? Here are the threads on Google+ or Facebook.

Fronteers12 – Q&A results, quick reviews and impressions from the stage

Monday, October 8th, 2012

Last week the fifth annual Fronteers conference lured a few hundred developers, designers and managers to Amsterdam, The Netherlands to hear about what’s hot and new in web development. This year I did not speak, but played the MC and interviewer instead.

I have a very soft spot for Fronteers as a conference. I spoke at every one of them and I am always amazed by how much the audience knows. You speak to a group of experts and as such speakers are expected and do deliver sensible, useful talks with lots of technical detail.

Having an audience in the know also makes for a buzzing back channel, and in the past this one was ruthless – flaming speakers who held back or didn’t know 100% what they were talking about.

In order to turn this into a more productive environment I proposed to the organisers last year that I’d volunteer to introduce the speakers and instead of a traditional Q&A do a sit-down interview with them directly after the talk. I’ve done this before at Highland Fling and found it to be a much more efficient way to handle questions.

And that’s what we did. As Fronteers has working wireless, it was simple to convey the procedure to the audience:

  • I introduce the speaker
  • The speaker gives their presentation, during which the audience can tweet questions using the #fqa hashtag (Fronteers Q&A)
  • I sit down with the speaker on the side of the stage and conduct an interview using the questions during which the next speaker can set up

All in all this is an incredibly effective way of running a conference, as you use the time normally wasted between speakers and you get many more questions answered. There is no waiting for roaming microphones and there is no “can you repeat, I can’t understand you”. Having a 120 character limit also means that people think their questions through much more.

Here are all the talks with a quick note by me and links to the collected tweets. I will try to contact all the speakers so they can grab the questions and answer them on their own blogs, which I will link from here should that happen:

Fronteers Day 1

Mark Boulton, Adapting to Responsive Design

I’ve just seen Mark talk at Smashingconf and at Reasons to be Creative and still I am not bored of him. Good insights and a very “story telling” approach to speaking.

Addy Osmani, The New And Improved Developer Toolbelt

Addy works tirelessly to collect great information and to build and connect tools to make our lives easier. This talk covered the need for tooling and build processes and ended by introducing Yeoman. All in all it was a good talk, but for my taste it had far too much content. At times Addy read out his slides, a sentence per bullet, and I found myself basically just wanting the deck, as I was overwhelmed by the offerings.

Peter-Paul Koch, A Pixel is not a Pixel

PPK did a great job explaining why viewports and pixel densities are not an easy matter and showed a lot of examples of how hard it is to build a consistent experience across various browsers on just one device. A good advertisement for PPK’s research into the matter and why we need it.

Alex Graul, Using JS to build bigger, better datavis to enlighten and elate

Alex gave me the first slight heart attack of the event: he had a mix-up with his slides, spoke far too fast and was hard to understand if you are not used to British speakers, and finished after 15 minutes or so. This is where I came in, covering 35 minutes of Q&A until the catering staff was ready for the feeding of the hordes. It was incredible to see, though, how Alex caught himself and calmed down a lot in an interview situation rather than a “where are my slides, what is this” one. I think I got much more out of Alex this way than he’d have covered in his talk, and as the topic was incredibly interesting it was easy to chat for a bit.

Mathias Bynens, Ten things I didn’t know about HTML

Mathias is dangerous. He is very intelligent, charming and does a lot of research into the ins and outs of markup and browser rendering. Based on that, he shows us just how much code you do not need to write for a browser to show a page. Mathias himself very much believes that this code is necessary for people to understand what you do; I just hope that when he says so, people still listen rather than going for the quick “oh good, I need no closing tag”. Talent like Mathias makes me confident about the future of the web, when I will be sitting on my porch, chasing ducks with my cane and grumbling about darn kids eating my cherries.

Stephen Hay, Style guides are the new Photoshop

Stephen, the only other speaker apart from me who spoke at every Fronteers, is an institution, and rightfully so. In this talk, which he also gave at Smashingconf, he showed how to automatically generate style guides from mockups, making our workflow much shorter. A designer who likes the CLI and uses Vim. What more do you want?

Antoine Hegeman, Bor Verkroost, Bram Duvigneau & Chris Heilmann, Accessibility panel

OK, this was the moment in the conference where I was – as one says – shitting bricks. I know my a11y and I have seen live demos of a11y technology fail spectacularly on stage over and over again. It shows just how professional and pragmatic the panelists were that nothing went wrong at all, and I’d say that this was one of the most informative a11y sessions at a conference I’ve ever seen.

Lea Verou, More CSS secrets: Another 10 things you may not know about CSS

Lea once again dazzled with amazing CSS tricks, shown before at Smashingconf and coded live on stage. Great stuff, but sadly she ran quite a bit over time. That said, play with what she showed here – there is lots to learn.

Fronteers Day 2

Marcin Wichary, The biggest devils in the smallest details

Marcin is the master of Google doodles, builds his own slides using two browsers talking to each other via Node, doesn’t get fazed too much when he drops his laptop on stage and in general is a total tinkerer. Great speaker. Lovely, lovely talk.

David DeSandro, Keep it Simple, Smartypants

David changed his talk at the last minute after realising how much in the know the audience was, and instead of his planned session talked about trying to make money with “open source” JavaScript solutions and how it can be done. This was the most animated interview I did, as there seems to be a massive misunderstanding of what open source means. I will blog more about this soon.

Jeroen Wijering, The State of HTML5 Video

Jeroen is the man behind JW Player, the HTML5/Flash video player used on YouTube and seen a lot around the web. He covered the basics of HTML5 video and kept his talk very short, which allowed me to dig a bit deeper into the newer unknowns in open media, like streaming and DRM, during the interview.

Anne van Kesteren, Building the web platform

Anne van Kesteren is scarily smart when it comes to the web, browsers and standards and in this talk he shared some of his thoughts and ideas. Sadly enough, I found the talk very confusing and lacking an overall story arc or goal. It all might become more obvious when I watch the video again, but I for one was more confused than inspired.

Phil Hawksworth, I can smell your CMS

Phil seems to be a clone of Jake Archibald who went to design school. Very funny, very quick, with beautiful slides and examples and tales from the trenches he knows how to engage and to give out good info to boot. To me one of the best talks I’ve seen lately.

Peter Nederlof, Beyond simple transitions, with a pinch of

Peter is a silent star. He does incredible work and participated with solutions in some of the larger breakthroughs in library code in the past without tooting his horn much. The same happened here. Peter had some great examples and code ideas but lacked the oomph needed to get people excited about it. All in all this would have been a kick-ass 15 minute lightning talk but felt stretched as it was. Nevertheless, use what he talked about, there is a lot of good in there.

Rebecca Murphey, JS Minty Fresh: Identifying and Eliminating Smells in Your Code Base

Rebecca is a trainer at heart and gave a very nice overview of how to refactor and clean up stale JavaScript code born of laziness or “quick, get this out of the door” thinking. Good advice, but to me too focused on jQuery. I’d like to see this at the jQuery conference, as it reminded me of my talk there last year, with the main difference that Rebecca knows it inside out.

Alex Russell, What the legacy web is keeping from us

Alex is very smart indeed and delivered a talk that surprised me and made me happy. Instead of damning outdated technologies and pushing us kicking and screaming into a more app-centric web based on current browser technology, Alex started with some thought experiments and built up to a great conclusion: that it is up to us to free ourselves from the shackles of outdated tech. Splendid talk, go see it.


All in all Fronteers delivered again. And this was a massive surprise to me, as I didn’t prepare anything, hadn’t coached the speakers in time and didn’t even know some of them. I also convinced the organisers at the last minute to go for the “interview Q&A” approach and scrounged chairs on the spot to make it happen. As it stands, I am damn proud of having pulled it off and hope more conferences will follow the principle. For anyone who is out to do the MCing and interviewing: rest up, it is a truckload of work and quite exhausting, as you need to be first in, last out and 100% concentrated on the content. Doing ad-hoc interviews with live questions coming in is no simple feat, but when you do it, it is very much worth your while. I had a blast and I hope people got a lot out of Fronteers 2012.