Christian Heilmann


Archive for September, 2010

TTMMHTM: Scaling and redesigns, iPad for access, old games, HTML5 polyfills and unicorns

Tuesday, September 28th, 2010

Things that made me happy this morning:

Sharing slides and frustrating readers

Monday, September 27th, 2010

As you know, I give a lot of talks at conferences. Most of my work these days is keeping my eyes open for cool things, trying them out, documenting them and packaging them up in a presentation for a conference. This means not only writing demos and explaining the why – most of the time it also means translating the content into a language and use cases that are meaningful to the audience.

This year is packed with amazing conferences, and I am sorry to have missed a few really cool ones – and I will miss some in the future, too. That comes with being the only full-time evangelist outside the US for an international company.

That is why I love it when speakers make their slides available to the world and when conference organisers release the video and audio recordings of the talks. I do that myself, for the obvious reasons:

Reasons to release your slides

  • People who saw me at the conference can get the slides to refresh their memories when they get back
  • People can re-use the slides for internal presentations after the conference and show their boss what they learnt (if the slides are released as Creative Commons)
  • People can embed my slides in blog posts and in their own conference reviews or as a reminder to make a point
  • People can search for topics on SlideShare and LinkedIn and find my talks
  • People can listen to the talk on the train or in the gym, or send it to people who prefer listening to reading
  • People can take in the information at their own leisure and are not bound to a timed schedule – a whole conference can be information overload
  • You can flip through a slide deck quickly to find the one thing you wanted to remember – this is much harder in a video or audio recording

Now, in my case, I use Keynote to write my slides, export them to PDF and host them on SlideShare. The reasons are:

  • I don’t want to be dependent on technology – if my laptop breaks or gets lost I can still show my slides from another machine
  • If you use an 800×600 resolution it works with even the shoddiest projectors and scales to higher resolutions without me having to do anything
  • People can print the thing if they wanted to – I’ve seen people go through printouts of presentations on the train.
  • SlideShare automatically creates a transcript of my slides in HTML (for SEO reasons) so even my blind friends can read them without me having to create a special version

The thing to remember though is that I write my slides to make sense without me talking over them – I use the slides as the narrative turning points in my talk. Each web site mentioned gets a URL and each code example has a live demo that people can click on and see when they get back.

This is why I can release the slide decks and people have fun with them and use them in trainings.

Not all slides are the same

Other speakers have a different approach which is more focused on the talk itself. Rather than doing a lot of speaking they write one amazing talk and fine-tune and upgrade it. The talk is a performance, and the speaker is the main source of information. That is why in talks like these, the slides become a backdrop to the main performance rather than the content. This can be very effective and needs a great stage persona to do right.

Watch the videos of Aral Balkan, for example: his talk about emotional design was wonderfully timed, had the right amount of passion, emotion and fun, and posed some very good open questions.

Should he release the slides? I don’t think so, as without the talk and Aral’s performance they don’t make much sense. Should the video be released? Yes, please, as this is where the information and the enjoyment lie. A talk like “Emotional Design” is not there to skim quickly for the gist – it is something you should watch from start to end.

The talks at TED don’t come with slides – for a reason. The talk is the performance; without the visuals and the audio you’ll miss most of what makes it an outstanding talk.

What makes a great presentation?

Some people call this the only effective way to present and how a great presenter should deal with the task. That is true to a degree, but there are other things to consider.

What if you speak to an audience that doesn’t have the same language skills as you? I found that talks in Asia and continental Europe worked much better when I had an introductory sentence on my slide and went from there. People who found me hard to understand still got the gist of what I was saying, asked very detailed questions afterwards and had friends translate for them.

What if there are people in the audience who don’t feel like hanging onto your every word? What if there are people who learn more from an image with a short text than from a long aural explanation and complex visual stimuli?

People learn in different ways. I have encountered abysmal presentations with really great content that still stuck as the slides were clear enough. I’ve seen very shy presenters who pulled through by sticking with their slides and getting better with every presentation they gave.

On the flip side, I’ve seen people read their bullet points out to the audience and bore everyone to death more often than I care to remember. It is this kind of boardroom presentation that gave content-heavy slides a bad name. I don’t think that is fair, though.

When sharing slides is not helpful

The whole concept of sharing slides means full disclosure – you give out the thing your talk was based on. If the slides are just images to accentuate what you said, and there are no notes to explain what they mean, you leave your slides open to interpretation by the audience – and that can go very wrong indeed.

I’ve seen people quoted out of context as the slide was meant to be delivered with a very sarcastic voice and body language. On the released deck this was missing as it doesn’t translate.

What annoys me more, though, is when speakers write their own presentation software and expect it to work the same on their audience’s computers.

I just got very excited about the talks at JSConf in Berlin, which I couldn’t attend. A lot of them were written in HTML/JavaScript and some clever NoSQL solution or another. I am sure they worked like a charm on stage and showed the power of the open technologies we have. When these break, though, you are left frustrated – firstly because you want to get to the promised content, and secondly because you want open formats to succeed.

Take Aaron Quint’s presentation based on Swinger for example. Slide 35 shows me some code:

The Frontend Takeover (35 of 58)

That doesn’t look right! OK, no worries – view-source is the key, as this is HTML, right? Well, view-source gives me the main Swinger app and not the current state. So the only way to get to this code is to use the developer tools and copy the node.

If I paste this in my editor I get the whole function:

the full code

Now, I am a geek and can do that, and I was interested enough to. Others would have simply given up or thought Aaron was showing broken code.

Other presentations ran live in a browser and expected you to have the same configuration. Paul Irish had one of those, and when I complained that I couldn’t read the slides he claimed I was simply being negative and love to cry FAIL. This is not true – I was very interested and love to see people use new technology to do good demos in browsers. However, his slides did not display properly in my Chrome even though they advertised Chrome:

Cut off slides

After probing a bit Paul admitted that the slides were for himself and not meant to be released. They were catered to the talk as all the demos are in-line in the HTML. I was happy that at least I could do a view-source and get all the information.

This is a case, though, where releasing the slides in this state (or someone other than Paul releasing them) was annoying, as I was very excited about the cool content just to get stuck. If you can’t ensure that your audience will be able to see what you see, a screencast, a video, or even separating the code examples from the slides with a note about which browser shows the best results is much better than causing confusion.

If something gets online, people expect it to make sense and work. You promise to share your content to get all the benefits mentioned at the beginning. If your sharing means people need to change their environment to see what you did, you need a really dedicated audience. And if what you prepared was meant only for you, was never intended for release, and got out anyway, that is a communication issue with the conference organisers.

In the end, all I want is to be able to follow what people want me to read. If your content only works for you or needs a special setup this is not what the open web stack is about – we want to shy away from PDF and Flash to be more flexible, not to dictate environments.

New Twitter exploit about goats – how it works.

Sunday, September 26th, 2010

OK, in the last few minutes you will have received a few tweets from people declaring that they like to have intercourse with goats through the backdoor. This is a Twitter exploit – probably initiated by someone preparing a security talk (I know some people who would be devious enough).

The exploit is actually easy – the main ingredients are:

  • Twitter allowing updates through the API via IFRAMEs and GET, making it vulnerable to CSRF attacks
  • PasteHTML.com rendering and executing submitted code without a secure site around it
  • Clients or Twitter automatically applying the t.co link shortener

The code to execute the “worm” is hosted at http://pastehtml.com/view/1b7xk3b.html so Twitter should contact them – (I just did):





Nothing magical there – all you do is create two SCRIPT elements that point to the Twitter update API and send a request to post an update. As the user who clicked the malicious link is authenticated with Twitter, you can send tweets on their behalf. It is the same trick that worked for the “Don’t click this button” exploit, or my demo at Paris Web last year showing how to get the updates of a protected Twitter user.
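To make the trick above concrete, here is a hypothetical sketch – the endpoint and parameter names are made up, and the real API has long since been fixed. The point is that a state-changing GET endpoint is trivially forgeable: the attacker only has to make the victim’s browser request a URL like this (via a SCRIPT or IFRAME element), and the browser attaches the victim’s session cookies automatically.

```javascript
// Sketch only: 'http://api.example.com/...' and 'status' are stand-ins,
// not Twitter's real endpoint or parameter.
function csrfUpdateUrl(endpoint, message) {
  // The forged update is just a GET URL with the tweet text as a parameter.
  return endpoint + '?status=' + encodeURIComponent(message);
}

// What the malicious page would point a script element at, conceptually:
var url = csrfUpdateUrl(
  'http://api.example.com/statuses/update.json',
  'Arbitrary text tweeted in the victim\'s name'
);
```

Because the request rides on the victim’s existing session, the server cannot tell it apart from a legitimate one.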

The effects of this are a mixed bag:

  • Bad: People stop trusting the t.co shortener, which was actually introduced to be a trustworthy link shortener. The link shortening service itself is not compromised – this is one thing that can’t be blamed on Twitter
  • Bad: There is a flood of wrong messages on Twitter
  • Good: people talk about the exploit and how it was done
  • Good: people get more conscious about clicking links
  • Good: Twitter have to harden their API against CSRF
  • Bad: this will break some implementations

There is no real defense against CSRF from a user’s point of view other than not clicking links you don’t trust and turning off JavaScript. As that is a wide net to cast, we will see these attacks over and over again unless API providers disallow requests without tokens. On the flip side, this means that implementing one-click solutions to tweet or like will be a lot harder.
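A minimal sketch of the token defence mentioned above (not Twitter’s actual implementation): the server stores a random secret in the user’s session, embeds it in its own pages, and rejects any write request that does not echo it back. A third-party page cannot read the token, so a forged cross-site request fails.

```javascript
// Hypothetical server-side check - the function name and parameters
// are illustrative, not from any real API.
function isValidRequest(sessionToken, requestToken) {
  // A forged cross-site request cannot know the session's secret,
  // so a missing or mismatched token means the request is rejected.
  return typeof requestToken === 'string' &&
         requestToken.length > 0 &&
         requestToken === sessionToken;
}
```

This is exactly why one-click GET endpoints become harder: every legitimate caller now has to obtain and pass the token along.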

I fell for the trick, too – especially as I didn’t expect PasteHTML to render code instead of sanitizing it.

Update: As some clever clogs just pointed out in the comments, JSBin can also be abused to host code that will be executed. One thing I do to check malicious links is to use curl in the terminal:

Terminal — bash — 67×24

If you don’t know JS that doesn’t help you, but if you do, you can warn the world.

Promote better JavaScript documentation with PromoteJS

Sunday, September 26th, 2010

As I heard during my visit to the Valley last week, and just now from the goings-on at JSConf in Berlin, Mozilla is working on ramping up the already amazing MDC documentation for JavaScript – and you can help by providing some guerrilla SEO: simply add the following image, with the right code available at http://www.promotejs.com/, to any article that talks about JavaScript:

JavaScript JS Documentation: JS Array splice, JavaScript Array splice, JS Array .splice, JavaScript Array .splice

The reason is obvious when you search for the term “JavaScript” – what you find are wiki articles or even products that tell you not to bother learning JavaScript if you use them. Because “JavaScript” was a hot SEO term during the DHTML days, and is again with the current resurrection, a lot of sites have washed out the experience of finding something good on the web. When I wrote my JavaScript book Beginning JavaScript with DOM Scripting and Ajax I had a chapter that told you where to find good information about JavaScript, and other than the MDC and SelfHTML (if you are German or Spanish) I drew quite a blank. Lately Sitepoint have started to put together some good docs, but back then all we had was W3Schools.

Let’s shine a flashlight on the elephant in the room: W3Schools. This site is number one for almost every technical search term in CSS, JavaScript and HTML. There are a few reasons for this:

  • Its SEO is spot on – heading structure, amount of links and deep level content (long tail)
  • It leaves you with a good feeling – “oh this is how simple it is! Cool, no need to read more, just copy and paste the example and I am an expert”
  • It is fast and small

The “good feeling” part is also what makes it dangerous. I have ranted in several talks about the siren song of W3Schools – it gives you the what but not the why. You can use it as a reference when you already know your stuff, but it is a dangerous place for people to learn a subject. Programming is not about syntax alone – it is also about the environment you apply it in. W3Schools is like bringing up your kid on fast food – quick, easy and cheap. This is not the fault of the maintainers of W3Schools – it is the fault of people sending new learners there for information.

The PHP docs are a good compromise between fast information and insights from implementers in the field – I learnt more from the comments on the PHP docs than from the entries themselves.

In the “write great code examples” chapter of my Developer Evangelism Handbook I described a blueprint for writing a code example that explains and educates. Sadly this takes much more work than just showing the syntax, and this is why the docs sites that simply show the “how” are winning – they update much faster with less overhead.

It is very cool to see Chris Willison Chris Williams (DOH!) and others take on the challenge, and Mozilla to me is the perfect host for this kind of effort – they are open, independent when it comes to JS, and have a track record of providing good docs.

Interestingly enough, about five years ago I tried a similar approach for all docs:

Replacing old tutorials with new ones - Obsoletely Famous

Other than getting it hacked once, though, I found no support for it, so I shut it down a year ago. Now, with the leading names of the market talking about a similar thing at the leading conference of the language, we have a better chance. Help us!

The annoying thing about YQL’s JSON output and the reason for it

Wednesday, September 22nd, 2010

A few days ago, Simon Willison vented on Twitter about one of the things that annoyed me about YQL, too:

Battling YQL's utterly horrible JSON output, where a response with one result is represented completely differently from a response with two

The interesting thing about this is that it is not a YQL bug – instead it is simply a problem of generic code and data conversion.

The problem with YQL JSON results

Let’s have an example. If you use YQL to get, say, the geo information about “London”, you get several results in XML:

select * from geo.places where text="london":

XML output with multiple places

If you use JSON as the output format this becomes an array:

JSON output with multiple places

Cool, so you can loop over query.results.place, right? No, because when there is only one result we have a different situation: instead of being an array, place becomes an object:

select * from geo.places where text="london,uk":

Single results in JSON become an object instead of an array

This is what ailed Simon, as it means you need to write a function to work around the issue. For example, in the Geoplanet Explorer I use a render() function to show each place result, and the following PHP to work around the JSON output issue:

function renderlist($p) {
    $o = '';
    // several results: $p is a numerically indexed array of places
    if (sizeof($p) > 1) {
        foreach ($p as $pp) {
            $o .= render($pp);
        }
    } else {
        // a single result: $p is the place itself
        $o = render($p);
    }
    return $o;
}

In JavaScript you’d have to check the length and either show the results directly or loop over the results.
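In JavaScript the same workaround can be wrapped in a small helper – a sketch (not part of YQL itself) that normalises the results to an array before looping, so one place and many places are handled the same way:

```javascript
// Normalise YQL's JSON quirk: a single result arrives as an object,
// several results arrive as an array. After this, code can always loop.
function toArray(value) {
  if (value === undefined || value === null) { return []; }
  // Array.isArray did not exist everywhere in 2010; this check is
  // the classic cross-browser idiom.
  return Object.prototype.toString.call(value) === '[object Array]' ?
         value : [value];
}

var single = toArray({ name: 'London' });                  // one result
var several = toArray([{ name: 'A' }, { name: 'B' }]);     // two results
```

Every consumer then loops over `toArray(data.query.results.place)` without caring how many results came back.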

Generic conversion vs. creating a single API

This is a problem whenever you convert XML to JSON. As the resulting XML from the GeoPlanet API doesn’t have a places element with place children, but instead repeats the place element for every result, the JSON converter must make a decision: if there is only one element, it becomes a plain property of the results object; if there is more than one place element, it becomes a property that is an array (as you cannot repeat the same property name).

If you build your own API and know the format of the different results, you can force this to be an array even when there is only one result. If you do not know the XML schema, there is no way to assume what should be an array and what should not. So the generic solution for converting an unknown XML document with 1 to n results to JSON would be to always create arrays. That way you couldn’t simply read place.admin1.code, for example – you’d always have to write place[0].admin1[0].code[0], which is not really a sensible thing to do.
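To illustrate the ambiguity, here is the shape of the two outputs (trimmed and simplified, not verbatim YQL responses):

```javascript
// Two places: the repeated <place> element can only become an array.
var multi = { query: { results: { place: [
  { name: 'London' },
  { name: 'London, Ontario' }
] } } };

// One place: with no schema saying <place> may repeat, the converter
// sees a single child element and emits a plain object.
var solo = { query: { results: { place: { name: 'London' } } } };

// The consuming code therefore needs two different access patterns:
var firstOfMulti = multi.query.results.place[0].name;
var onlyPlace = solo.query.results.place.name;
```

Both values hold a place called “London”, but only one of the access patterns works for each shape – which is precisely what the workaround function has to hide.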

As YQL is an open system and the XML results are provided by anyone in their open table definitions, the only way around this I could think of is to tell YQL in the table that certain elements should always be forced to become arrays.

Do you have a solution?