Christian Heilmann

Author Archive

Don’t call it “open source” unless you mean it

Monday, October 22nd, 2012

In terms of releasing code into the wild we live in terribly exciting times. Products like GitHub and Dropbox and online collaboration tools like JSFiddle, JSBin, CodePen and Dabblet make it very easy to show our code to the outside world. Furthermore, a lot of products are built in a modular manner, which means you can participate simply by writing a plugin or add-on instead of coming up with your own solution. jQuery and WordPress are living proof of that.

Quick release, moving on fast

One of the biggest dangers of a very simple infrastructure is inflation. When it is easy to release something, a lot will be released. This makes it much harder to find good quality content, and we are tempted to release more rather than releasing a few things we really care about – and are ready to keep caring for in the future. Much like we write shorter and less thought-out emails than letters, we tend to get into a frenzy of releasing smaller, shorter and less documented products.

This is open source – or is it?

This is where we currently run the danger of cheapening the term “open source”. Releasing an open source product is much more than making it available for free. It is a process, an ongoing commitment to nurturing something by sharing it with the world. Open source and its merits can actually be a blueprint for a much more democratic world to come, as Clay Shirky explains in How the Internet will (one day) transform government.

During the Fronteers conference, David DeSandro gave a talk about monetising jQuery plugins and how to keep your sanity at the same time. His main concern was that his plugins (mostly Masonry) cost a lot of his time and didn’t bring in enough to make a living off them. Several times during the talk he explained that he does have a job and that he has no time to answer every email. After all, there should be no need to ask questions or get support: the code is available on GitHub, and people could help each other and be happy that the code was released as open source. How come no magical community appeared out of nowhere to take care of all this?

Open Source is more than releasing code

Well, this wasn’t an open source project. The code was released in the open and is available in a repository, but it is not open source. Open source means – at least to me and a lot of people I talked to – that products are produced and maintained in the open.

This encompasses much more than just putting the code on GitHub. It means:

  • being ready to take on pull requests and patches and communicating to the people who submit them when and if they are released
  • helping people to get involved
  • communicating changes to the outside world
  • reacting to security and performance issues
  • answering feature requests
  • dealing with licensing issues
  • ensuring that the product evolves and changes in reaction to new environments and market needs
  • encouraging people to contribute and help others
  • recognising people for helping others and sharing the fruits of the labour with them

Open source is a lot of work and needs a community

This means a lot of work, and it is the reason why a true open source project is not done by one person but should be planned around a group of people from the very beginning. People who are happy to share ownership and responsibilities as much as the benefits and the income from the project. People who are ready to hand over responsibilities should they get tired of them and to train their successors with that in mind. To be truly open source you should think about the maintenance and the future of the project before you release it. It is a group effort, and the very tricky part is finding the group without hiring them.

A lot of what people call open source these days feels like “Ta-da source” or “Pasture source” to me:

  • Ta-da source – products that are released as-is as a final commercial version and get a source release to the world later. Instead of making the world part of the creation process, the world becomes maintenance staff, fixing bugs and adding the things it needs to the product. This allows you to say you are open without having to deal with people’s needs, and to stick to your own agenda when it comes to core functionality.
  • Pasture source – this is when products were financially viable and interesting but became a nuisance. Instead of maintaining them, they are sent to the pasture of open source, where kind shepherds will, without pay, ensure the products live happily ever after. Pasture source happens either because the workload of communication becomes too much or because the business you work for doesn’t see the product as something it should pour resources into. A lot of the time this is a PR exercise – “hey, this product didn’t do well, but now that it is open it will be the next cool thing for the open source community”.

Brackets – a positive surprise

When Adobe said they’d release their editor Brackets as open source and ship it as Edge Code in their new set of tools for web developers, I was not alone in being wary. It sounded like a closed source company trying to play with the rebel kids. Technically this is not new, as Adobe had already done a lot of open releases with AIR and even released books under Creative Commons, but it still felt like a “well, let’s see what comes of this” moment.

Seeing Adam Lehman talk about Brackets and their approach at the Adobe event in London, I was very impressed by how the project is run as an open source project. The code, of course, is available on GitHub, but that is not all. The project follows an Agile process using Scrum, pull requests are reviewed every day, it has a 2.5 week release cycle, and external contributions take priority. A detailed blog with release notes for each sprint is also available.

The project is managed in the open on Trello, and a very clever way to attract new contributors is to triage simple bugs as “quick wins” for them rather than having more advanced developers spend their time fixing them.

This is a great example of approaching the open source release of a product with the right tools and the right mindset. Yes, it is a lot of work, but I am quite confident this means Brackets will be around for a long time, even if the Edge suite should fail.

Don’t stop releasing

I am not saying that we should stop releasing things and making them available on GitHub and elsewhere. I am simply saying that open source means much more than that, and that we shouldn’t be surprised to get fewer contributors than we expect if all we do is throw some code out and wait for magic to happen. Open source means we get people’s time to build with us, not for us.

So when you release things, don’t call it an open source project unless you are ready to go the full distance. Just put it out there, tell people that it is free and available, and that it is up to them what happens with it.

Data attributes rock – as both CSS and JavaScript know them

Wednesday, October 10th, 2012

Currently my better half Kasia is working on a JavaScript training course and wanted to explain the concepts of JavaScript with a game. So we sat down and built a simple game example while she was fretting over the structure of the course. As she wanted to explain how to interact with the DOM in JavaScript rather than using Canvas, we had some fun using CSS animation in conjunction with simple keyboard controls. More on the game in due time, but here is a quick thing we found to be extremely useful and not really used enough in the wild – the interplay of data attributes, CSS and changing states.

Defining a player element

We wanted to make the game hackable, so that people playing with the HTML could change it. That was more a request by me, as Mozilla has the Webmaker project and there will be a lot of game hacking at MozFest in November.

In order to define a player element, the semantics fan in me would do something like this:

<div id="player">
	<ul>
		<li class="name">Joe</li>
		<li class="score">100</li>
	</ul>
</div>

This makes sense in terms of HTML and is accessible, too. However, accessing this in JavaScript is quite annoying, as you need three element matches. It also means three elements to maintain. In JS you’d need to do something like:

var player = document.querySelector('#player'),
    name   = document.querySelector('#player .name'),
    score  = document.querySelector('#player .score');

In order to change the score value, you’d need to change the innerHTML of the score reference.

score.innerHTML = 10;

Aside: yes, I know there are lots of HTML templating solutions and I am sure dozens of jQuery plugins for that, but let’s stick to vanilla JS, as this was about teaching that.

An HTML5 player element

Instead of going through these pains, we found it to be much easier to go with data attributes:

<div id="dataplayer" data-name="Joe" data-score="100">
</div>

The clever thing here is that HTML5 already gives us an API to change this data:

var player = document.querySelector('#dataplayer');
 
// read
alert('Score:' + player.dataset.score);
alert('Name:' + player.dataset.name);
 
// write
player.dataset.score = 10;
 
// read again
alert('Score:' + player.dataset.score);
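One gotcha is worth teaching alongside this API: hyphenated attribute names get camelCased on the dataset object, and values always come back as strings (so after `player.dataset.score = 10` you read back the string “10”). As a purely didactic illustration of the name mapping – the browser does this for you, you never need this function yourself:

```javascript
// Sketch of the attribute-to-property mapping the dataset API performs:
// "data-high-score" on the element becomes element.dataset.highScore.
function datasetName(attributeName) {
	return attributeName
		.replace(/^data-/, '')           // drop the "data-" prefix
		.replace(/-([a-z])/g, function (match, letter) {
			return letter.toUpperCase();   // camelCase each "-x" pair
		});
}

// datasetName('data-high-score') -> 'highScore'
```

Remember to run values through parseInt() before doing any arithmetic with them.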

Re-using attribute values

Another benefit of using data attributes is that CSS can access them. Say, for example, you want to show the score value in red when it reaches 10. With the first HTML version using a list, you’d need to do the testing in JavaScript and add a class to get a different display. You could of course also change the colour directly via the style collection, but that is awful in terms of maintenance: it can cause reflows in your rendering and also means another thing to explain to maintainers.

function changescore(newscore) {
	if (newscore === 10) {
		score.classList.add('low');
	} else {
		score.classList.remove('low');
	}
}

#player .low {
	color: #c00;
}

Aside: jQuery’s selector engine has :contains('foo') to match elements by their text content, but it was dropped from the CSS selectors specification, so that is not the way to go.

When using data attributes you don’t need that – all you need is an attribute selector in CSS:

#dataplayer[data-score='10'] {
	color: #c00;
}
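With that rule in place, the JavaScript shrinks to a single assignment – no class juggling needed. A minimal sketch (setScore is a made-up helper name, not part of any API):

```javascript
// Updating the data attribute is all it takes – the attribute selector
// in the CSS reacts to the new value automatically, no classList needed.
function setScore(player, newScore) {
	player.dataset.score = newScore; // also updates the data-score attribute
}

// In the page:
// setScore(document.querySelector('#dataplayer'), 10);
// -> the element now matches #dataplayer[data-score='10'] and turns red
```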

To display the scores you can use generated content in CSS:

#dataplayer::after {
	content: attr(data-name);
	position: absolute; 
	left: -50px;
}
#dataplayer::before {
	opacity: 0;
	content: attr(data-score);
	position: absolute; 
	left: 100px;
}

Check out the following fiddle to see it all in action: http://jsfiddle.net/codepo8/BMY6H/

The only downside I can think of is that only Firefox allows transitions and animations on generated content. All in all, though, we found data attributes incredibly useful.

Comments? Here are the threads on Google+ or Facebook.

Fronteers12 – Q&A results, quick reviews and impressions from the stage

Monday, October 8th, 2012

Last week the fifth annual Fronteers conference lured a few hundred developers, designers and managers to Amsterdam, The Netherlands to hear about what’s hot and new in web development. This year I did not speak, but played the MC and interviewer instead.

I have a very soft spot for Fronteers as a conference. I have spoken at every one of them, and I am always amazed by how much the audience knows. You speak to a group of experts, and as such speakers are expected to deliver – and do deliver – sensible, useful talks with lots of technical detail.

Having an audience in the know also makes for a buzzing back channel at a conference, and in the past this one was ruthless – flaming speakers who held back or didn’t know 100% what they were talking about.

In order to turn this into a more productive environment, I proposed to the organisers last year that I’d volunteer to introduce the speakers and, instead of a traditional Q&A, do a sit-down interview with them directly after each talk. I had done this before at Highland Fling and found it a much more efficient way to handle questions.

And that’s what we did. As Fronteers has working wireless, it was simple to convey the procedure to the audience:

  • I introduce the speaker
  • The speaker gives their presentation, during which the audience can tweet questions using the #fqa hashtag (Fronteers Q&A)
  • I sit down with the speaker at the side of the stage and conduct an interview using those questions, during which the next speaker can set up

All in all this is an incredibly effective way of running a conference, as you use the time normally wasted between speakers and you get many more questions answered. There is no waiting for roaming microphones and there is no “can you repeat that, I can’t understand you”. Having a 120 character limit also means that people put more thought into their questions.

Here are all the talks with a quick note by me and links to the collected tweets. I will try to contact all the speakers so they can grab the questions and answer them on their own blogs, which I will link from here should that happen:

Fronteers Day 1

Mark Boulton, Adapting to Responsive Design

I’ve just seen Mark talk at Smashingconf and at Reasons to be Creative, and still I am not bored of him. Good insights and a very “storytelling” approach to speaking.

Addy Osmani, The New And Improved Developer Toolbelt

Addy works tirelessly to collect great information and to build and connect tools to make our lives easier. This talk covered the need for tooling and build processes and ended by introducing Yeoman. All in all this was a good talk, but for my taste it had far too much content. At times Addy read his slides out to the audience, a sentence per bullet, and I found myself basically wanting the deck instead, as I was overwhelmed with the offerings.

Peter-Paul Koch, A Pixel is not a Pixel

PPK did a great job explaining why viewports and pixel densities are not an easy matter and showed a lot of examples of how hard it is to build a consistent experience across various browsers on just one device. A good advertisement for PPK’s research into the matter and why we need it.

Alex Graul, Using JS to build bigger, better datavis to enlighten and elate

Alex gave me the first slight heart attack of the event: he had a mix-up with his slides, spoke far too fast (and hard to understand if you are not used to British speakers) and finished after 15 minutes or so. This is where I came in to cover 35 minutes of Q&A until the catering staff was ready for the feeding of the hordes. It was incredible to see, though, how Alex caught himself and calmed down a lot once in an interview situation rather than a “where are my slides, what is this?” one. I think all in all I got much more out of Alex this way than he’d have covered in his talk, and as the topic was incredibly interesting it was easy to chat for a bit.

Mathias Bynens, Ten things I didn’t know about HTML

Mathias is dangerous. He is very intelligent, charming, and does a lot of research into the ins and outs of markup and browser rendering. Based on that he shows us just how much code we don’t need to write for a browser to show a page. Mathias himself strongly believes that this code is necessary for people to understand what you do; I just hope that when he says so, people still listen rather than just going for the quick “oh good, I need no closing tag”. Talent like Mathias makes me confident about the future of the web, when I will be sitting on my porch, chasing ducks with my cane and grumbling about darn kids eating my cherries.

Stephen Hay, Style guides are the new Photoshop

Stephen, the only other speaker apart from me who has spoken at every Fronteers, is an institution, and rightfully so. In this talk, which he also gave at Smashingconf, he showed how to automatically generate style guides from mockups, thus making our workflow much shorter. A designer who likes the CLI and uses Vim. What more do you want?

Antoine Hegeman, Bor Verkroost, Bram Duvigneau & Chris Heilmann, Accessibility panel

OK, this was the moment in the conference where I was – as one says – shitting bricks. I know my a11y, and I have seen live demos of a11y technology fail spectacularly on stage over and over again. It shows just how professional and pragmatic the panelists were that nothing went wrong at all, and I’d say this was one of the most informative a11y sessions I’ve ever seen at a conference.

Lea Verou, More CSS secrets: Another 10 things you may not know about CSS

Lea once again dazzled with amazing CSS tricks, some shown before at Smashingconf and coded live on stage. Great stuff, but sadly she ran quite a bit over time. That said, play with what she showed; there is lots to learn.

Fronteers Day 2

Marcin Wichary, The biggest devils in the smallest details

Marcin is the master of Google doodles, builds his own slides using two browsers talking to each other via Node, doesn’t get fazed too much when he drops his laptop on stage, and in general is a total tinkerer. Great speaker. Lovely, lovely talk.

David DeSandro, Keep it Simple, Smartypants

David changed his talk at the last minute after realising how much in the know the audience is, and instead of his planned session talked about trying to make money with “open source” JavaScript solutions and how it can be done. This was the most animated interview I did, as there seems to be a massive misunderstanding of what open source means. I will blog more about this soon.

Jeroen Wijering, The State of HTML5 Video

Jeroen is the man behind JW Player, the HTML5/Flash video player used by YouTube and seen a lot on the web. He covered the basics of HTML5 video and kept his talk very short, which allowed me to dig a bit deeper into the newer unknowns of open media, like streaming and DRM, during the interview.

Anne van Kesteren, Building the web platform

Anne van Kesteren is scarily smart when it comes to the web, browsers and standards, and in this talk he shared some of his thoughts and ideas. Sadly, I found the talk very confusing and lacking an overall story arc or goal. It all might become more obvious when I watch the video again, but I for one was more confused than inspired.

Phil Hawksworth, I can smell your CMS

Phil seems to be a clone of Jake Archibald who went to design school. Very funny, very quick, with beautiful slides and examples and tales from the trenches he knows how to engage and to give out good info to boot. To me one of the best talks I’ve seen lately.

Peter Nederlof, Beyond simple transitions, with a pinch of JavaScript

Peter is a silent star. He does incredible work and has contributed solutions to some of the larger breakthroughs in library code without tooting his own horn much. The same happened here. Peter had some great examples and code ideas but lacked the oomph needed to get people excited about them. All in all this would have been a kick-ass 15 minute lightning talk but felt stretched as it was. Nevertheless, use what he talked about; there is a lot of good in there.

Rebecca Murphey, JS Minty Fresh: Identifying and Eliminating Smells in Your Code Base

Rebecca is a trainer at heart and gave a very nice overview of how to refactor and clean up stale JavaScript code born of laziness or “quick, get this out of the door” thinking. Good advice, but for me too focused on jQuery. I’d like to see this at the jQuery conference, as it reminded me of my talk there last year, with the main difference that Rebecca knows the subject inside out.

Alex Russell, What the legacy web is keeping from us

Alex is very smart indeed and delivered a talk that surprised me and made me happy. Instead of damning outdated technologies and pushing us kicking and screaming into a more app-centric web based on current browser technology, Alex started with some thought experiments and built up to a great conclusion: it is up to us to free ourselves from the shackles of outdated tech. Splendid talk, go see it.

Summary

All in all, Fronteers delivered again. And this was a massive surprise to me, as I didn’t prepare anything, didn’t coach the speakers in time, and didn’t even know some of them. I also convinced the organisers at the last minute to go for the “interview Q&A” approach and scrounged chairs on the spot to make it happen. As it stands, I am damn proud of having pulled it off and hope more conferences will follow the principle. For anyone who is out to do the MCing and interviewing: rest up, it is a truckload of work and quite exhausting, as you need to be first in, last out and 100% concentrated on the content. Doing ad-hoc interviews with live questions coming in is no simple feat, but when you do it, it is very much worth your while. I had a blast and I hope people got a lot out of Fronteers 2012.

Browser benchmarks are gamed – so why not make them a game?

Monday, October 1st, 2012

Tomorrow the Core Mobile Web Platform Community Group is meeting at the Mozilla space in London to discuss the future of browser benchmarking. Sadly I won’t be able to attend, as I am at Create the Web London on the same day and flying to Amsterdam for Fronteers later. However, I think this is a good opportunity to mention some things I have been thinking about, which my colleague Jet Villegas will also present for me tomorrow morning.

Here is what I am worried about: browser benchmarks are very hot right now, but fail to deliver data that will help us make the web the main development platform.

Benchmarks are becoming marketing material

My main concern is that browser benchmarks as a whole are a very academic and “close to the metal” exercise. Creating and wiping an empty canvas or creating and destroying thousands of objects gives you results, but it doesn’t mean that real product needs are met by optimising for these use cases. Jet will talk about some issues that bring up false positives unless you test in a real-world scenario of browsing and using the browser.

Even worse, the press is hungry for browser news and loves it when big corporates shoot at each other. That’s why a lot of benchmarks are flawed from the very start to give a better result for one browser or platform or another. They have become a marketing tool rather than helping us build better products with the web. Above all, though, they are very, very boring.

Making benchmarking fun again

Let’s wind back the clock to 1997. You might not have heard of it, but back then Final Reality was all the rage. It was a product of the demo scene (closely related to the much acclaimed Second Reality) and a very cool thing to watch at the time. You can check the YouTube video to see what it looked like.

It pushed the limits of the video card, the sound system and the CPU, and the amazing thing was that it was a benchmark. After the demo ran all the way through, you got a report on how well your hardware did in the test. These reports were not only bragging rights amongst overclockers but were also used by admins to test in 10 minutes whether hardware worked – something that would have taken ages with conventional test methods.

So why don’t we do something similar now?

The benchmarking game

How about having a game instead of an automated script? This already happens in some games built by browser makers. BananaBread – the 3D shooter compiled from C++ to JavaScript that runs in the browser – has a BananaBench which gives you data on how well the browser performed after you played:

(Screenshot: BananaBread in action)

However, this aims too high, as sadly there is a lot of hardware out there that still chokes on WebGL, and not everybody wants to play a 3D shooter (I can’t be bothered, to be fair).

So how about this: a platformer or 2D shooter that gets incrementally more technically challenging for the platform the longer you play, and that offers extra levels to browsers and environments that support certain technologies and simpler ones to others.

Imagine a game that tests performance and reports it back after each level, running on Facebook and being promoted in the Android (and Apple – yes, a boy can dream) stores. People could play the game without being any the wiser that they are actually helping us get real information from users: on a large variety of devices, on real (and flawed) connections, and on browsers that are not 100% allocated to one task but have other tabs open and junk in their caches and RAM.

Why not?

Reporting everywhere

My other colleague Rob Hawkes is currently testing a lot of HTML5 games and comparing the performance of different browsers on different mobile operating systems with them. This is great, and a lot of work. I found that a lot of demos, including game demos, have a developer mode that shows the FPS and general performance. Wouldn’t it be great to have a database of this data instead of just seeing it on the screen for tweaking while we develop? There are systems like Scoreloop that centralise the scores of games – why not the performance? This could be a whole new market in the HTML5 space.

Apps could of course benefit from that, too. Taking a well-used piece of software and adding performance reporting of, for example, scroll lists would give us a lot of good information from our users rather than data built and reported in a lab environment. We could do Benchpress instead of WordPress.
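As a very rough sketch of what such in-game reporting could look like – the endpoint URL and all the names below are made up for illustration, the only real part is the FPS arithmetic:

```javascript
// Turn a list of frame timestamps (in milliseconds, as handed to
// requestAnimationFrame callbacks) into an average FPS figure.
function fpsFromTimestamps(timestamps) {
	if (timestamps.length < 2) { return 0; }
	var elapsed = timestamps[timestamps.length - 1] - timestamps[0];
	return ((timestamps.length - 1) / elapsed) * 1000;
}

// In the game loop you would collect real frame times and report them
// once a level is done (hypothetical endpoint, not a real service):
// var frames = [];
// function loop(time) {
//   frames.push(time);
//   if (!levelFinished) {
//     requestAnimationFrame(loop);
//   } else {
//     var xhr = new XMLHttpRequest();
//     xhr.open('POST', 'http://example.com/benchmark', true);
//     xhr.send('fps=' + fpsFromTimestamps(frames));
//   }
// }
// requestAnimationFrame(loop);
```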

Got ideas or want to contribute? Check out the thread on Google+ and on Facebook.

Quick one: converting a multi-page PDF to a JPG for each page on OSX

Sunday, September 30th, 2012

I had this great task of converting a PDF to JPGs – a multi-page PDF should become a lot of JPGs. I don’t own Adobe Acrobat Pro and I didn’t want to buy an extra piece of software for it. So I went for the thing that hardly ever lets you down – even if it speaks in tongues – the command line.

So here is how to convert a PDF to JPGs:

Install Homebrew by going to the terminal and copying and pasting the following:

ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"

The script explains what it does while it runs. It is not the Matrix, although it is green and full of text.

Next get Ghostscript – just go to the command line and do a:

brew install ghostscript

This can take a while, so get a cuppa on. Once you are done, here’s the command to convert a PDF to a lot of JPGs:

gs -dNOPAUSE -sDEVICE=jpeg -r144 -sOutputFile=p%03d.jpg file.pdf

The PDF is file.pdf, and this will generate files called “p001.jpg” to “p004.jpg” for a four page document, for example. You can change that with the p%03d.jpg setting above: plonk%04d.jpg, for example, would create plonk0001.jpg, plonk0002.jpg and so on. The -r144 part sets the output resolution in DPI – increase it for larger, crisper images.

Once the conversion is done you are left at the GS> prompt; just press Ctrl+C to get out.