Christian Heilmann

Be a great presenter: deliver on and off-stage

Thursday, June 5th, 2014

As a presenter at a conference, your job is to educate, entertain and explain. This means that the few minutes on stage are the most stressful, but should also be only a small part of your overall work.

Christian Heilmann listening to translations at TEDxThessaloniki

A great technical presentation needs a few things:

  • Research – make sure your information is up-to-date and don’t sell things that don’t work as working
  • Sensible demonstrations – by all means show what some code does before you talk about it. Make sure your demo is clean, sensible and easy to understand.
  • Engagement materials – images, videos, animations, flowcharts, infographics. Make sure you have the right to use those and you don’t just use them for the sake of using them.
  • Handover materials – where should people go after your talk to learn more and get their hands dirty?
  • An appropriate slide deck – your slides are wallpaper for your presentation. Make them supportive of your talk and informative enough. Your slides don’t have to make sense without your presentation, but they should also not be distracting. Consider each slide an emphasis of what you are telling people.
  • A good narration – it is not enough to show cool technical things. Tell a story: what are the main points you want to make, and why should people remember your talk?
  • An engaging presentation – own the stage, own the talk, focus the audience on you.

All of this needs a lot of work: collecting on the web, converting, coding, rehearsing and learning to become better at conveying information. All of it results in materials – some you use in your talk, others you may never get to use even though they are very valuable.

It is not about you, it is about what you deliver

A great presenter could carry a talking slot just with presence and the right stage manner. Most technical presentations should be more. They should leave the audience with an “oh wow, I want to try this and I feel confident that I can do this now” feeling. It is very easy to come across as “awesome” and show great things, but leave the audience frustrated and confused just after they’ve been delighted by the cool things you are able to do.

Small audience, huge viewer numbers

Great stuff happens at conferences, great demos are being shown, great solutions explained and explanations given. The problem is that all of this only applies to a small audience, and those on the outside lack context.

This is why parts of your presentation often get quoted out of context, and demos you showed to make a point get presented as endorsed by you, missing the original point.

In essence: conferences are cliquey by design. That’s OK – after all, people pay to go and be part of that select group, and they deserve to get the best out of it. You managed to score a ticket – you get to be the first to hear things and the first to talk about them with the others there.

It gets frustrating when parts of the conference get disseminated over social media. Many tweets talking about the “most amazing talk ever” or “I can’t believe the cool thing $x just showed” are not only noise to the outside world, they can also make people feel bad about missing out.

This gets exacerbated when you release your slides and they don’t make any sense, as they lack notes. Why should I get excited about 50MB of animated GIFs, memes and hints of awesome stuff? Don’t make me feel bad – I already feel I am missing out as I got no ticket or couldn’t travel to the amazing conference.

misleading infographic

If you release your talk materials, make them count. They are for people on the outside. Whilst everybody at an event will ask about the slides, far fewer of them will really look at them afterwards than the people who couldn’t go to see you live.

Waiting for recordings is frustrating

The boilerplate answer to people feeling bad about not getting what the whole Twitter hype is about is “Oh, the videos will be released, just wait till you see that”. The issue is that in many cases video production takes time and there is a delay of a few weeks up to months between the conference and the video being available. Which is OK – good video production is hard work. It does, however, water down the argument that the outside world will get the hot, cool information. By the time the video of today’s amazing talk is out, we’re already talking about another unmissable talk happening at another conference.

Having a video recording of a talk is the best possible way to give an idea of how great the presentation was. It also expects a lot of dedication from the viewer. I watch presentation videos in my downtime – on trains, in the gym and so on. I’ve done this for a while, but right now I find so much being released that it becomes impossible to catch up. I just deleted 20 talks from my iPod unwatched because their due date had passed: the cool thing the presenter talked about is already outdated. This seems a waste, both for the presenter and for the conference organiser who spent a lot of time and money on getting the video out.

Asynchronous presenting using multiple channels

Here’s something I try to do and wish more presenters did. As a great presenter, you should be aware that you might involuntarily cause discontent and frustration outside the conference: people talk about the cool stuff you did without knowing what you did.

Instead of only delivering the talk, publish a technical post covering the same topic you talked about. Prepare the post using the materials you collected in preparation for your talk. If you want to, add the slides of your talk to the post. Release this post on the day of your conference talk using the hashtag of the conference, explaining where and when the talk happens, and everybody wins:

  • People not at the conference get the gist of what you said instead of just soundbites they may quote out of context
  • You validate the message of your talk – a few times I re-wrote my slides after really trying to use the technology I wanted to promote
  • You get the engagement from people following the hashtag of the conference and give them something more than just a hint of what’s to come
  • You support the conference organisers by drumming up interest with real technical information
  • The up-to-date materials you prepared get heard web-wide when you talk about them, not later when the video is available
  • You re-use all the materials that might not have made it into your talk
  • Even when you fail to deliver an amazing talk, you still managed to deliver a lot of value to people in and out of the conference

For extra bonus points, write a post right after the event explaining how it went and what other parts about the conference you liked. That way you give back to the organisers and you show people who went there that you were just another geek excited to be there. Who knows, maybe your materials and your enthusiasm might be the kick some people need to start proposing talks themselves.

Write less, achieve meh?

Wednesday, June 4th, 2014

In my keynote at HTML5DevConf in San Francisco I talked about a pattern of repetition that those of us who’ve been around for a while will have encountered: every few years development becomes “too hard” and “too fragmented” and we need “simpler solutions”.

chris in suit at html5devconf

In the past, these were software packages, WYSIWYG editors and CMSes that promised to deliver “to all platforms without any code overhead”. Nowadays we don’t even wait for snake-oil salesmen to promise us the blue sky. Instead, we do this ourselves. Almost every week we release new, magical scripts and workflows that solve all the problems we have for all the new browsers and with great fall-backs for older environments.

Most of these solutions stem from fixing a certain problem and – especially in the mobile space – far too many stem from trying to simulate an interaction pattern of native applications. They do a great job, they are amazing feats of coding skills and on first glance, they are superbly useful.

It gets tricky when problems come up and don’t get fixed. This – sadly enough – is becoming a pattern. If you look around GitHub you find a lot of solutions that promise utterly frictionless development but have many an unanswered issue or un-merged pull request. Even worse, instead of filing bugs there is a pattern of creating yet another solution that fixes all the issues of the original one. People should simply replace the old one with the new one.

Who replaces problematic code?

All of this should not be an issue: as a developer, I am happy to discard and move on when a certain solution doesn’t deliver. I’ve changed my editor of choice a lot of times in my career.

The problem is that completely replacing solutions demands a lot of commitment from the implementer. All they want is something that works, and preferably something that fixes the current problem. Many requests on Stackoverflow and other help sites don’t ask for the why, they just want a how. What can I use to fix this right now, so that my boss shuts up? A terrible question that developers of every generation seem to repeat, and one that almost always results in unmaintainable code with lots of overhead.

That’s when “use this and it works” solutions become dangerous.

First of all, these tell those developers that there is no need to ever understand what you do. Your job seems to be to get your boss off your back or to make that one thing in the project plan – that you know doesn’t make sense – work.

Secondly, if we found out about issues with a certain solution and considered it dangerous to use (cue all those “XYZ considered dangerous” posts), we should remove it and redirect people to the better solutions.

This, however, doesn’t happen often. Instead we keep them around and just add a README that tells people they can use our old code and we are not responsible for results. Most likely the people who have gotten the answer they wanted on the Stackoverflows of this world will never hear how the solution they chose and implemented is broken.

The weakest link?

Another problem is that many solutions rely on yet more abstractions. This sounds like a good plan – after all we shouldn’t re-invent things.

However, it doesn’t really help an implementer on a very tight deadline if our CSS fix requires them to learn all about Bower, node.js, npm, SASS, Ruby or whatever else first. We cannot just assume that everybody who creates things on the web is as involved in its bleeding edge as we are. True, a lot of these tools make us much more efficient and are considered “professional development”, but they are also very much still in flux.

We cannot assume that all of these dependencies will work and make sense in the future. Neither can we expect implementers to remove parts of this magical chain and replace them with newer versions – especially as many of them are not backwards compatible. A chain is only as strong as its weakest link, remember? That also applies to tool chains.

If we promise magical solutions, they’d better be magical and get magically maintained. Otherwise, why do we create these solutions? Is it really about making things easier, or is it about impressing one another? Much like entrepreneurs shouldn’t be in love with being an entrepreneur but should instead love their product, we should love both our code and the people who use it. This takes much more effort than just releasing code, but it means we will create a more robust web.

The old adage of “write less, achieve more” needs a re-vamp to “write less, achieve better”. Otherwise we’ll end up with a world where a few people write small, clever solutions for individual problems and others pack them all together just to make sure that really everything gets fixed.

The overweight web

This seems to be already the case. When you see that the average web site, according to HTTP Archive, is 1.7MB in size (46% cacheable) with 93 resource requests on 16 hosts, then something, somewhere is going terribly wrong. It is as if none of the performance practices we talked about in the last few years have ever reached those who really build things.

A lot of this is baggage of legacy browsers. Many times you see posts and solutions like “This new feature of $newestmobileOS is now possible in JavaScript and CSS – even on IE8!”. This scares me. We shouldn’t block out any user of the web. We also should not take bleeding-edge, computationally heavy and form-factor dependent code and give it to outdated environments. The web is meant to work for all, not work the same for all, and it certainly shouldn’t become slow and heavy for older environments based on some misunderstanding of what “support” means.
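
To make that concrete, feature detection before loading any heavy enhancement is one way to deliver “not the same for all” without punishing older environments. A minimal sketch – the tests and the file name enhancements.js are placeholders for this example, not a recommendation of what to check for:

```js
// Only load the heavy, bleeding-edge enhancements in capable browsers.
// Older environments keep the core HTML/CSS experience and stay fast.
if ('querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window) {
  var script = document.createElement('script');
  script.src = 'enhancements.js'; // placeholder for your enhanced code
  script.async = true;
  document.head.appendChild(script);
}
```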

Redundancy denied

If there is one thing that this discouraging statistic shows then it is that future redundancy of solutions is a myth. Anything we create that “fixes problems with current browsers” and “should be removed once browsers get better” is much more likely to clog up the pipes forever than to be deleted. Is it – for example – really still necessary to fix alpha transparency in PNGs for IE5.5 and 6? Maybe, but I am pretty sure that of all these web sites in these statistics only a very small percentage really still have users locked into these browsers.

The reason for denied redundancy is that we solved the immediate problem with a magical solution – we cannot expect implementers to re-visit their solutions later to see if they are no longer needed. Many developers don’t even have the chance to do so – projects in agencies get handed over to the client when they are done and the next project with a different client starts.

Repeating XHTML mistakes

One of the main things that HTML5 was invented for was to create a more robust web by being more lenient with markup. If you remember, XHTML sent as XML (as it should have been, but never was, as IE6 didn’t support it) had the problem that a single HTML syntax error or an un-encoded ampersand would result in an error message and nothing would get rendered.

This was deemed terrible, as our end users get punished for something they can’t control or change. That’s why the HTML parsing algorithm of newer browsers is much more lenient and does – for example – close tags for you.
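
You can see both behaviours from JavaScript with DOMParser, which parses the same string either as XML or with the lenient HTML parser. A small sketch for illustration – the markup is deliberately broken:

```js
var broken = '<p>Fish & chips<p>Unclosed paragraphs';

// As XHTML/XML the unencoded ampersand and the unclosed tags are fatal:
// the returned document contains a parsererror element instead of content.
var asXML = new DOMParser().parseFromString(broken, 'application/xhtml+xml');
console.log(asXML.getElementsByTagName('parsererror').length); // 1 in most browsers

// As HTML the parser recovers, encodes the ampersand and closes the tags for us.
var asHTML = new DOMParser().parseFromString(broken, 'text/html');
console.log(asHTML.getElementsByTagName('p').length); // 2
```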

Nowadays, the yellow screen of death showing an XML error message is hardly ever seen. Good, isn’t it? Well, yes, it would be – if we had learned from that mistake. Instead, we now make a lot of our work reliant on JavaScript, resource loaders and many libraries and frameworks.

This should not be an issue – the “JavaScript not available” use case is a very small one and mostly concerns users who either had JavaScript turned off by their sysadmins or prefer the web without it.

The “JavaScript caused an error” use case, on the other hand, is very much alive and will probably never go away. So many things can go wrong: from resources not being available, to network timeouts, to mobile providers and proxies messing with your JavaScript, up to simple syntax errors because of wrong HTTP headers. In essence, we are relying on a technology that is much less reliable than XML was, and we feel very clever doing so. The more dependencies we have, the more likely it is that something can go wrong.

None of this is an issue if we write our code in a paranoid fashion. But we don’t. Instead we seem to fall for the siren song of abstractions telling us everything will be more stable, much better performing and cleaner if we rely on a certain framework, build script or packaging solution.

Best of breed with basic flaws

One eye-opener for me was judging the Static Showdown Hackathon. I was very excited about the amazing entries and what people managed to achieve solely with HTML, CSS and JavaScript. What annoyed me, though, was the lack of any code that deals with possible failures. Now, I understand that this is hackathon code and people wanted to roll things out quickly, but I see a lot of similar basic mistakes in many live products:

  • Dependency on a certain environment – many examples only worked in Chrome, some only in Firefox. I didn’t even dare to test them on a Windows machine. In many cases these dependencies were not based on functional necessity – instead the code just assumed a certain browser-specific feature to be available and tried to access it. This is especially painful when the solution additionally loads lots of libraries that promise cross-browser functionality. Why use those if you’re not planning to support more than one browser?
  • Complete lack of error handling – many things can go wrong in our code. Simply doing nothing when, for example, loading some data failed, and presenting the user with an infinite loading spinner, is not a nice thing to do. Almost every technology we have has a success and an error return case. We seem to spend all our time in the success one, whilst it is much more likely that we’ll lose users and their faith in the error one. If an error case is not even reported, or is reported as the user’s fault, we’re not writing intelligent code. Thinking paranoid is a good idea. Telling users that something went wrong, what went wrong and what they can do to retry is not a luxury – it means building a user interface. Any data loading that doesn’t refresh the view should have an error case and a timeout case – connections are the things most likely to fail (see the sketch after this list).
  • A lack of very basic accessibility – many solutions I encountered relied on touch alone, and doing so provided incredibly small touch targets. Others showed results far away from the original action without changing the original button or link. On a mobile device this was incredibly frustrating.
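
As an illustration of the error handling point above, here is a minimal sketch of “paranoid” data loading with an error and a timeout case. The url, render and showError parameters are placeholders for whatever your interface provides:

```js
function loadData(url, render, showError) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.timeout = 5000; // connections are the most likely thing to fail

  xhr.onload = function () {
    if (xhr.status < 200 || xhr.status >= 300) {
      return showError('The server said no (' + xhr.status + '). Try again?');
    }
    try {
      render(JSON.parse(xhr.responseText)); // the success case we all write
    } catch (e) {
      showError('Got a response, but could not make sense of it.');
    }
  };
  xhr.onerror = function () {
    showError('Could not reach the server. Check your connection and retry.');
  };
  xhr.ontimeout = function () {
    showError('This is taking too long. Retry?');
  };

  xhr.send();
}
```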

Massive web changes ahead

All of this worries me. Instead of following basic protective measures to make our code more flexible and deliver great results to all users (remember: not the same results to all users – that would limit the web), we have become dependent on abstractions and we keep hiding more and more code in loaders and packaging formats. A lot of this code is redundant and fixes problems of the past.

The main reason for this is a lack of control on the web. And this is very much changing now. The flawed solutions we had for offline storage (AppCache) and widgets on the web (many, many libraries creating DOM elements) are getting new, exciting and above all control-driven replacements: ServiceWorker and WebComponents.

Both of these are the missing puzzle pieces to really go to town with creating applications on the web. With ServiceWorker we can not only create apps that work offline, but also deal with a lot of the issues we now solve with dependency loaders. WebComponents allow us to create reusable widgets that are either completely new or inherit from other, existing HTML elements. These widgets run in the rendering flow of the browser instead of trying to make our JavaScript and DOM rendering perform in it.
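
To give an idea of the ServiceWorker part, here is a minimal “cache a few files and serve them cache-first” sketch. The cache name and file list are placeholders, and a real worker needs more lifecycle handling than shown here:

```js
// sw.js – registered from the page with navigator.serviceWorker.register('/sw.js')
var CACHE = 'demo-cache-v1'; // placeholder cache name

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      return cache.addAll(['/', '/styles.css', '/app.js']); // placeholder files
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request); // fall back to the network
    })
  );
});
```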

The danger of WebComponents is that they allow us to hide a lot of functionality in a simple element. Instead of just shifting our DOM widget solutions to the new model, this is a great time to clean up what we do, find the best-of-breed solutions and create components from them.
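
On the WebComponents side, here is a sketch of what such a widget can look like with today’s Custom Elements syntax (which has changed since this post was written); the element name and its behaviour are made up for illustration:

```js
// A trivial <user-greeting name="..."> element: all its behaviour lives in one tag.
class UserGreeting extends HTMLElement {
  connectedCallback() {
    var name = this.getAttribute('name') || 'anonymous';
    this.textContent = 'Hello, ' + name + '!';
  }
}
customElements.define('user-greeting', UserGreeting);

// Usage in markup: <user-greeting name="Christian"></user-greeting>
```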

I am confident that good things are happening there. Discussions sparked by the Edge Conference’s WebComponents and Accessibility panels have already resulted in some interesting guidelines for accessible WebComponents.

Welcome to the “Bring your own solution” platform

The web is and stays the “bring your own solution” platform. There are many solutions to the same problem, each with their own problems and benefits. We can work together to mix and match them and create a better, faster and more stable web. We can only do that, however, when we allow the bricks we build these solutions from to be detachable and reusable. Much like glueing Lego bricks together means using them wrong, we should stop creating “perfect solutions” and create sensible bricks instead.

Welcome to the future – it is in the browsers, not in abstractions. We don’t need to fix the problems for browser makers, but should lead them to give us the platform we deserve.

Open Web Apps – a talk at State of the Browser in London

Wednesday, May 14th, 2014

state of the browser panel

On my birthday, 26th of April 2014, I was lucky enough to once again be part of the State of the Browser conference. I gave the closing talk. In it I tried to wrap up what had been said before and remind people of what apps are. I ended with an analysis of how web technologies, as we have them now, are either already good enough or on the way there.

The slides are available on Slideshare:

The video recording of the talk features the amazing outfit I wore, as Daniel Appelquist had originally said he’d be the best dressed speaker at the event.

Open web apps – going beyond the desktop from London Web Standards on Vimeo.

In essence, I talked about apps meaning five things:

  • focused: fullscreen with a simple interface
  • mobile: works offline
  • contained: deleting the icon deletes the app
  • integrated: works with the OS and has hardware access
  • responsive and fast: runs smoothly, can be killed without taking down the rest of the OS

The resources I talked about are:

Make sure to also watch the other talks given at State of the Browser – there was some great information given for free. Thanks for having me, London Web Standards team!

Thank you, TEDx Thessaloniki

Tuesday, May 13th, 2014

Last weekend was a milestone for me: I spoke at my first TEDx event. I am a big fan of TED and have learned a lot from watching their talks and using them as teaching materials when coaching other speakers. That’s why this was a big thing for me, and I want to take this opportunity to thank the organisers and point out just how far out of their way they went to make this a great experience for all involved.

thanks tedx thessaloniki

Hey, come and speak at TEDx!

I got introduced to the TEDx Thessaloniki folk by my friend Amalia Agathou, and once I had been contacted and approved, I was amazed at just how quickly everything fell into place:

  • There was no confusion as to what was expected of me – a talk of 18 minutes tops, presented from a central computer, so I needed to create PowerPoint or Keynote slides dealing with the overall topic of the event, “every end is a beginning”
  • I was asked to deliver my talk as a script and had an editor review it to make it shorter, snappier or more catered to a “TED” audience
  • My flights and hotel were booked for me and I got my tickets and hotel voucher as email – no issue getting there and no “I am with the conference” when trying to check into the hotel
  • I had a deadline to deliver my slides and then all that was left was waiting for the big day to come.

A different stage

TEDx talks are different from other conferences as they are much more focused on the presenter. They are more performance than talk. Therefore the setup was different from the stages I am used to:

  • There were a lot of people in a massive theatre expecting me to say something exciting
  • I had a big red dot to stand and move in, with a stage set behind me (lots of white suitcases, some of them with video projection on them)
  • There were three cameramen: two with hand-held cameras and one with a boom-mounted camera that swung all around me
  • I had two screens with my slides and a counter telling me the time
  • I was introduced before my talk and had 7 seconds to walk on stage whilst music was playing and my name was shown on the big screens on stage
  • In addition to the presentations, there were also short plays and bands performing on stage

Rehearsals, really?

Suffice to say, I was mortified. This was too cool to be happening, and hearing all the other speakers and seeing their backgrounds (the Chief Surgeon of the Red Cross, famous journalists, very influential designers, political activists, the architect who designed the sea-side of the city, famous writers, early seed stage VCs, car designers, photo journalists and many, many more) made me feel rather inadequate with my hotch-potch career of putting bytes in order to let people see kittens online.

We had a day of rehearsals before the event, and I very much realised that rehearsals are not for me. Whilst I had to deliver a script, I never stick to one. I put my slides together to remind me what I want to cover and fill the gaps with whatever comes to me. This makes every talk exciting for me, but also a nightmare for translators (so, a huge SORRY and THANK YOU to whoever had to convert my stream of consciousness into Greek this time).

Talking to an empty room doesn’t work for me – I need audience reactions to perform well. Every speaker had a speaking coach to help them out after the rehearsal. They talked to us about what to improve, what to enhance, how to use the stage better, how to stay in our red dot and so on. My main feedback was to make my jokes more obvious, as subtle sarcasm might not get noticed. That’s why I laid it on thicker during the talk. Suffice to say, my coach was thunderstruck after seeing the difference between my rehearsal and the real thing. I told him I need feedback.

Event organisation and other show facts

All in all I was amazed by how well this event was organised:

  • The hotel was within walking distance of the theatre, along a seaside boulevard
  • Food was organised in food trucks outside the building, allowing people to eat it on the lawn whilst having a chat. This avoided long queues.
  • Coffee was available by partnering with a coffee company
  • The speaker travel was covered by partnering with an airline – Aegean
  • The day was organised into four sections with speakers on defined topics with long breaks in between
  • There were Q&A sessions with speakers in breaks (15 minutes each, with a defined overall topic and partnering speakers with the same subject matter but differing viewpoints)
  • All the videos were streamed and will end up on YouTube. They were also shown on screens outside the auditorium for attendees who preferred sitting on sofas and cushions
  • There was an outside afterparty with drinks provided by a drinks company
  • Speaker dinners were at restaurants within walking distance and went on long into the night

Attendees

The best thing for me was that the mix of attendees was incredible. I met a few fellow developers, journalists, doctors, teachers, a professional clown, students and train drivers. Whilst TED has a reputation for being elitist, the ticket price of 40 Euro for this event ensured a healthy cross-section, and the afterparty blended in nicely with other people hanging out at the beach.

I am humbled and amazed that I pulled this off and that I was asked to be part of it. I can’t wait to get my video to see how I did, because right now it all still seems like a dream.

TEDx Thessaloniki – The web is dead?

Saturday, May 10th, 2014

OMG OMG OMG I am speaking at TEDx! Sorry, just had to get this out of the way…

I am currently in sunny Thessaloniki in Greece at TEDx, waiting for things to kick off. My own talk is in the afternoon and I wanted to share my notes and slides here for those who can’t wait for the video.

The slightly cryptic overall theme of the event is “every end is a beginning”, and thus I chose to talk about the perceived end of the web at the hands of native apps and how apps are already collapsing in on themselves. Here are the slides and notes which – as usual – might end up just being a reminder for myself of what I want to cover.

TEDx Thessaloniki – The web is dead? from Christian Heilmann


Hello, I am here today to tell you that the web is dead. Which is unfortunate, as I am a web developer. I remember when the web was the cool new revolution and people flocked to it. It was the future. What killed it?

Typing on a Blackberry Torch

The main factor in the death of the web is the form factor of the smartphone. This is how people consume the web right now. And as typing web addresses on it isn’t fun, people wanted something different.

We got rather desperate in our attempt to make things easier. QR codes were the cool thing to do. Instead of typing in an address in a minute it is much easier to scan them with your phone – and most of the time the camera does focus correctly in a few minutes and only drains 30% of your battery.

This is when the app revolution kicked in. Instead of going to web sites, you can have one app each for all your needs. Apps are great. They perform well, they are beautiful, they are easy to find and easy to install and use.

Apps are also focused. They do one thing and one thing well, and you really use them. You don’t have a browser open with several windows. You keep your attention on the one thing you wanted to do.

So, in order to keep my job, I came up with an idea for an app myself.

In my research, I found that apps are primarily used in moments of leisure. Downtime, so to say.

This goes so far that one could say that most apps are actually used in moments historically used for reflection and silence. Like being in the bathroom. My research showed that there is a direct correlation between apps released and time spent in facilities.

chart: time spent in toilet playing games

And this is where my app idea comes in. Instead of just using a random app in these moments, use WhatsOut!

whatsout logo

WhatsOut is a location-based check-in app much like Foursquare, but focused on public facilities. You can check in, become the mayor, leave reviews and win badges like “3 stall buddies” when checking in with friends.

Marking territory

The app is based on principles of other markets, like the canine one where it’s been very successful for years. There are many opportunities to enhance the app. You can link photos of food on Instagram with the checkin (as an immediate result), and with enough funding and image recognition it could even become a health app.

hype

Seriously though: this is my problem with apps. Whilst technically superior on a mobile device, they are not an innovation.

The reason is their economic model: everything is a numbers game. For app markets to succeed, they need millions of apps. For apps to succeed, they need thousands of users. What the app does is not important – how many eyeballs it gets is.

This is why every app needs to lock you in. It needs you to stay and do things. Add content, buy upgrades, connect to friends and follow people.

tamagotchi

In essence, for apps to succeed they have to be super annoying Tamagotchi. They want you to care for them all the time and be there only for them. And we all know what happened to Tamagotchi – people were super excited about them and now they all collect dust.

The web was software evolved – you get your content and functionality on demand and independent of hardware. Apps, as they are now, are a step back in that regard. We’re back to waiting for software to be delivered to us as a packaged format dependent on hardware.

That’s why the web is far from dead. It is not a consumable product. Its very nature is distributed. And you can’t shut down or replace that. Software should enrich and empower our lives, our lives should not be the content that makes software successful.