Christian Heilmann

Author Archive

Check the source before you tweet

Friday, December 16th, 2011

Working for a large entity of the web is an awesome thing. You have access to great people, resources and you don’t have to chase the next paycheck or prepare the next pitch document to boot. The thing that can make it taxing – if you let it get to you – is that everybody and their dog has to say something about your employer. These things mostly fall into a few categories:

  • The “company XYZ should do this to be successful (and/or survive)” post. I like these; they normally come from people as far removed from the entity as possible. In most cases, they aren’t even working for other companies or as business consultants. Which is a shame, really – if they know so many amazingly simple ways to bring success, they should take them to where people could implement them, right?
  • The “I used to use XYZ but I like ABC now” post. Good for you – if you are happy, I am happy.
  • The “OMG did you read this about your company” tweet

The latter is what I want to talk about a bit.

Why repeat the report?

The most annoying blog posts, tweets and other “social media” releases re-hash what a certain tech media outlet has said – in some cases taken out of context and boiled down to the most shocking or “amazing” thing. The fallacy there is that you are not telling the world what you are outraged about or interested in – all you do is bring money to the tech media outlet you got the message from, as visits are clicks and clicks are money.

Mike Butcher of TechCrunch gave a very honest and open talk about this lately which didn’t come as a surprise to me but should be something to take into consideration when you give a certain article your name and stamp of approval by retweeting it:

How to deal with tech media by @mikebutcher


TL;DR: every piece on a tech blog is there to bring readers to the blog. It is not about the content – it is about getting the headline and being the first to talk about it.

I worked as a news journalist and this is really what it boiled down to: you have to be the first to have the info and when your media outlet is dependent on numbers you have to spice it up until it really gets people excited. If that means bending the truth or making wild accusations without backup, so be it. You can always apologise later. You will be washed clean but the original rumour will still bring people to your site. You win. You get paid.

Use the source

As web developers, view-source always has been our friend. It is great for debugging and it is great for looking beyond the shiny. You should apply the same to news reports on the web:

  • If there is a piece of news about a company, check the official press release for comparison. If the thing the hoo-hah is about is real, there will be one.
  • If the article talks about a source, go to that source and tweet about that one. In many cases, it is better quality. A good example just happened: a friend of mine, Dennis Lembree, tweeted about a “best places to work” report on TechCrunch and complained about it being inaccessible (the data was an image with no text alternative). Looking at the news piece I found the source article on glassdoor.com, which is in HTML.
  • Check where the source is coming from. There is no point in debugging the generated HTML when it is assembled somewhere else. If the person making a certain assumption about a company has no clue about the subject matter, why give them the satisfaction of repeating what they said? Asking the wrong person for a comment is never a good plan, much like asking an unfunny car tester to give a quote about union matters can cause controversy.

Don’t mistake SEO for the real thing

A post that really got me lately was 21 Types Of Social Content To Boost Your SEO linked here with the keyword horse manure (to see what that does to their Google rank). Whilst probably well-intended, the tips given there to get eyeballs to your site really annoyed me. The ideas to get more people to your site – regardless of your content – boil down to a lot of techniques; the ones that annoyed me most were the following:

1.) The Manifesto
The Manifesto is the viral equivalent of preaching to the choir. Write a passionate, eloquent, or well-researched argument that your niche will wholeheartedly agree with. Since you’ve already got an army of believers who agree with you, they’re already primed and ready to share your argument.
Example: Why I’m a Vegetarian, Dammit, an essay on a vegetarian recipe blog, received over 14,000 shares on StumbleUpon alone

Yes, that is because a manifesto is something you should believe in – by definition.

2.) The Controversy
The opposite of the Manifesto, the Controversy is all about stirring up some dissent in your niche. Write a well-written rebuttal to another argument, challenge a popular opinion, or spark a controversial discussion and watch the reader comments fly.

Translation: your readers are idiots who need to be led into shouting at each other. Be the puppet master. Sensible discussion is for hippies.

5.) The Epic
Why do a top 10 list when you can do a top 100? Go for gold and craft a mega-list relevant to your industry. Examples of epic titles include “50 Must-Have Firefox Add-ons,” or “101 Tips for Increasing Productivity.”

Yes, because reading 101 tips will totally increase your productivity. And the more add-ons, the better. Then you can also complain when your browser is sluggish.

8.) The Directory
Why make readers sift through mounds of data when you can do it for them? Collect the best links from around the internet and share them with your readers. Gather the best advice for your niche, the top news stories, the leading Twitter accounts in your field, or a simple collection of interesting information.

Remember, kids, this is how Yahoo started and see where they are now! Also, social bookmarking sites do not exist, your blog should do this!

11.) The Expert
In viral content and in life, it’s not what you know, but who you know. Name recognition is a powerful thing. When Mark Zuckerberg talks about Facebook or Mario Batali talks about food, people listen. For even more viral impact, gather a group of experts: “15 Published Authors on Writing,” for example.

My proposal: “10 martial arts tricks Douglas Crockford never gave out before” (who the heck is Mario Batali?)

13.) The Visual Aid
Visual representations of mass amounts of data are easy-to-digest while still containing a lot of “meaty” content. Infographics aren’t the only example of this—think graphs, informational videos, or interactive maps, too.

Because nothing makes lies and pointless comparisons nicer than beautiful colours and shapes. Funnily enough I get spam offering to do infographics for my blog. There is quite a market there.

Recognising the danger signs

There are a few sources I don’t retweet or mention and get very bored when people do. These are:

  • Blogs where every link in the text links to the same blog. This is lame SEO and pure arrogance. “This assumption is totally true as you can see in our article of last month” – and what tells me that one was right?
  • Blogs that don’t link to the source or mention where you can get it.
  • Blogs that re-blog other blogs. These are the ones that didn’t get to be the first to have the piece of news, but are too lazy to do their own research which is a shame as they could make a news piece out of their competition saying nonsense.
  • People using filler phrases like “scientists say” or “in the expert opinion” whilst failing to say who these experts are.
  • Posts spelling utter devastation or total success. It is never that black or white.

YMMV of course, but I’d rather give out some information that is coming from the source than keep an artificial discussion going that was first and foremost invented to get clicks.

TTMMHTM: Singing hedgehogs, light cycles, working for the internet and inspiring people on twitter

Thursday, December 15th, 2011

As a special “Things that made me happy this morning” here is a kick-ass interactive (well, linked) video of singing hedgehogs:

HP has a new logo! And they can do it in CSS

Wednesday, December 14th, 2011

There is a lot of discussion right now about HP’s new logo. I for one like it as they can save an HTTP request by creating it with CSS:

That “JavaScript not available” case

Tuesday, December 6th, 2011

During some interesting discussions on Twitter yesterday I found that there is now more confusion than ever about JavaScript dependence in web applications and web sites. This is a never-ending story, but it seems to me to flare up every time our browsing technology leaps forward.

I encountered this for the first time back in the days of DHTML. We pushed browsers to their limits with our lovely animated menus and 3D logos (something we of course learned not to do again, right?) and we were grumpy when people told us that there are environments out there where JavaScript isn’t available.

Who turns off JavaScript?

The first question we need to ask about this is what these environments are. There are a few options for that:

  • Security systems like NoScript or corporate proxies that filter out JavaScript
  • Feature phones like old Blackberries (I remember switching to Opera Mini on mine to have at least a bearable surfing experience)
  • Mobile environments where carriers proxy images and scripts and sometimes break them
  • People on traffic-limited or very slow connections
  • People who turn off JavaScript for their own reasons
  • People sick of modal pop-ups and other aggressive advertising

As you can see, some of them are done to our end users (proxying by companies or mobile providers), some are probably temporary (feature phones) and some are simply their own choice. So there is no way to say that only people who want to mess with our cool web stuff are affected.

Why do they turn off JavaScript?

As listed above, there are many reasons. When it comes to deliberately turning off JavaScript, I’d wager that the main three are security concerns, advertising fatigue and slow connectivity.

Security is actually very understandable. Almost every attack on a client machine happens using JavaScript (in most cases in conjunction with plugin vulnerabilities). Java of course is the biggest security hole at the moment but there is a lot of evil you can do with JavaScript via a vulnerable web site and unprotected or outdated browser and OS.

Slow connectivity is a very interesting one. Quite ironic – if you think about it – as most of what we use JavaScript for is to speed up the experience of our end users. One of the first use cases for JS was client side validation of forms to avoid unnecessary server roundtrips.
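That first use case still illustrates the point well. Here is a sketch of validation as an enhancement (my own example with a deliberately naive email check, not code from the post): the form submits to the server and is validated there either way; when JavaScript is available it merely saves the roundtrip.

```javascript
// Client-side validation as an enhancement (a sketch, not production
// code): the server remains the authority, JS only saves a roundtrip.
function isValidEmail(value) {
  // deliberately simple check - anything@anything.tld
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// Only wire up the form when we actually run in a browser
if (typeof document !== 'undefined') {
  var form = document.querySelector('form');
  if (form) {
    form.addEventListener('submit', function (evt) {
      var email = form.querySelector('[name=email]');
      if (email && !isValidEmail(email.value)) {
        evt.preventDefault(); // stop the unnecessary server roundtrip
        email.focus();
      }
    }, false);
  }
}
```

If the script never loads or errors out, the form still posts and the server-side validation catches the bad input – nothing breaks.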

Now, when you are on a very flaky connection (say a free wireless, bad 3G connectivity, or any web development conference) and you try to use for example Google Reader or Gmail, you’ll end up with half-broken interfaces. If the flakiness gets caught during first load you actually get offered an “HTML only low version” that is very likely to work better.

The best of both worlds

This is totally fine – it tries to give an end user the best experience depending on environment and connectivity. And this is what progressive enhancement is about, really. And there is nothing evangelical about that – it is plain and pure pragmatism.

It seems just not a good plan under any circumstances to give people an interface that doesn’t work. So to avoid this, let’s generate the interface with the technologies that it is dependent on.

With techniques like event delegation this is incredibly simple. You add click handlers to the parent elements and write out your HTML using innerHTML or other, newer and faster techniques.
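A minimal sketch of that delegation pattern (my own example, not code from any particular library): one listener on the parent element handles clicks on any current or future child, so content you write out with innerHTML gets its behaviour for free.

```javascript
// Event delegation sketch: a single click handler on the parent
// element serves every matching child, including ones added later.
function delegate(parent, className, handler) {
  parent.addEventListener('click', function (evt) {
    var el = evt.target;
    // walk up from the clicked node until we reach the parent
    while (el && el !== parent) {
      if ((' ' + el.className + ' ').indexOf(' ' + className + ' ') !== -1) {
        handler(evt, el);
        return;
      }
      el = el.parentNode;
    }
  }, false);
}
```

Because the handler sits on the parent, you can replace the whole child markup at any time without re-binding anything.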

So why is this such a problem?

Frankly, I really don’t know. Maybe it is because I am old school and like my localhost. Maybe it is because I have been disappointed by browsers and environments over and over again and like to play it safe. I just really don’t get why someone would go for a JS-only solution when the JS is really only needed to provide the enhanced experience on top of something that can work without it.

The mythical edge case application

A big thing that people keep coming up with is the “application that needs JavaScript”. If we are really honest with ourselves, these are very rare. If pushed, I could only think of something like Photoshop in the browser, or any other editor (video, an IDE in the browser, a synth) that would be dependent on JavaScript. All the others can fall back to a solution that requires a reload and a server-side component.

And let’s face it – in the times of Node.js the server-side solution can be written in JavaScript, too. Dav Glass of Yahoo showed two years ago that if a widget library is written to be independent of its environment, you can re-use the same rich widget on the client and on the server.
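The core of that idea can be sketched like this (a hypothetical tab widget of my own, not Dav Glass’s actual YUI code): keep the rendering logic free of any DOM access so the very same function can produce markup in Node on the server and in the browser.

```javascript
// Environment-independent rendering (hypothetical example): no
// document or DOM calls - just data in, an HTML string out.
function renderTabs(items) {
  var html = '<ul class="tabs">';
  for (var i = 0; i < items.length; i++) {
    html += '<li><a href="#' + items[i].id + '">' +
            items[i].label + '</a></li>';
  }
  return html + '</ul>';
}

// On the server: write the string into the response body.
// In the browser: element.innerHTML = renderTabs(data);
```

The environment-specific part (HTTP response vs. innerHTML) stays at the edges, while the widget logic is shared.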

The real reasons for the “app that needs JavaScript” seem to be different, non-technical ones.

The real reasons for “Apps that need JavaScript”

Much like there are reasons for not having JavaScript there are reasons for apps that need JavaScript and deliver broken experiences.

  • You only know JS and think people should upgrade their browsers and stop being pussies. This is fine, but doesn’t make you the visionary you think you are as it is actually a limited view. We called that DHTML and it failed once – it can fail again
  • You are building an app with a team without server side skills and want to get it out cheaply. This can work, but sounds to me like apps that “add accessibility later”, thus quadrupling the time and money needed to make that happen. Plan for that and all is good.
  • You want to get the app out quickly and you know you’ll have to re-write it later. This is actually a pretty common thing, especially when you get highly successful or bought by someone else. Good luck to you, just don’t give people the impression that you are there to stay.
  • Your app will run in a pure JS environment. Of course this means there is no need to make it work without JS. One example of this would be Air applications. Just make sure you bet on tech and environments that will stay on the radar of the company selling it.
  • Your app really needs JS to work. If that is the case, just don’t offer it to people without it. Explain in a nice fashion the whys and hows (and avoid telling people they need to turn it on as they may not be able to and all you do is frustrate even more) and redirect with JS to your app.
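The last point could look like this in practice (a sketch of my own; the `/app/` URL is a placeholder): the page people land on is a plain HTML explanation, and only browsers that prove they can run the app get forwarded.

```javascript
// Redirect capable browsers to the JS-only app. The page this runs
// on is the plain-HTML explanation, so nobody hits a dead interface.
function canRunApp(doc, win) {
  // test for the features the app actually depends on
  return 'querySelector' in doc && 'addEventListener' in win;
}

// Guarded so the sketch is also safe outside a browser environment
if (typeof document !== 'undefined' && canRunApp(document, window)) {
  window.location.replace('/app/'); // placeholder app URL
}
```

Browsers without JavaScript (or without the needed features) simply stay on the explanation page instead of being told to “turn it on”.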

In summary – sort of

All in all, the question of JavaScript dependence reaches much further than just the technical issues. It questions old best practices and has quite an impact on maintainability (I will write about this soon).

Let’s just say that our discussions about it would be much more fruitful if we started asking the “what do we need JS for” question rather than “why do people have no JS”. There is no point in blaming people for holding back the web when our techniques are very adaptive to different needs.

There is also no point in showing people you can break their stuff by turning things in your browser on and off. That is not a representation of what happens when a normal visitor gets stuck in our apps.

Maybe all of this will be moot when Node.js matures and becomes as ubiquitous as the LAMP stack is now. I’d like to see that.

Making vid.ly conversion and embedding easy

Thursday, December 1st, 2011

I am lucky enough to have a vid.ly pro account to convert videos. Lucky because lately the free service started limiting the number of times you can watch a video in a month (as they were hammered by a lot of traffic from Asia abusing the service). In case you still haven’t heard about vid.ly – it is a service that converts a video into a few dozen formats for HTML5 embedding and gives you a single URL that redirects devices to the correct format of the video.

Now, to make it easier for my colleagues to convert and embed videos in HTML5, I built a simple interface for converting and embedding a video on our blogs. For this I am using the API, but I wanted to avoid having to give my key out for colleagues to use.

The interface to convert videos is pretty easy:

<header><h1>Vid.ly conversion and embed</h1></header>
<section>
  <?php echo $message; ?>
 
  <p>Simply add the URL of the video to convert below and you get the embed code. 
An email will inform you about the successful conversion. 
Conversion could take up to an hour.</p>
 
  <form method="post">
    <div><label for="email">Email:</label><input type="text" id="email" name="email"></div>
    <div><label for="url">URL:</label><input type="text" id="url" name="url"></div>
    <div><input type="submit" name="send" value="make it so"></div>
  </form>
</section>

One of the cool features of the API is that it allows you to define an email address other than the one connected with your key as the one that gets notified of the conversion start, errors and success. That made my job a lot easier. All I needed to do was assemble the correct XML and send it to the API. As the result is XML, too, I needed to check what came back and give feedback in the form:

<?php
$key = '{add your key here}';
$message = '';
if(isset($_POST['send'])){
 
  if($_POST['email'] !== '' && $_POST['url'] !== '') {
    // escape the user-provided values so they cannot break the XML
    $email = htmlspecialchars($_POST['email']);
    $vidurl = htmlspecialchars($_POST['url']);
    $query =  '<?xml version="1.0"?>'.
              '<query><action>AddMedia</action><userid>481</userid>'.
              '<userkey>'.$key.'</userkey>'.
              '<notify>'.$email.'</notify>'.
              '<Source><SourceFile>'.$vidurl.'</SourceFile>'.
              '<CDN>AWS</CDN></Source></query>';
    $url = 'http://m.vid.ly/api/';
    $ch = curl_init();
    curl_setopt($ch,CURLOPT_URL,$url);
    curl_setopt($ch,CURLOPT_POST,1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
    curl_setopt($ch,CURLOPT_POSTFIELDS,'xml='.urlencode($query));
    $result = curl_exec($ch);
    curl_close($ch);
 
    // simplexml_load_string() returns false on invalid XML, so guard
    $xml = $result ? simplexml_load_string($result) : false;
 
    if($xml && isset($xml->Success)) {
      $vid = $xml->Success->MediaShortLink->ShortLink;
      $video = '<video controls width="100%" preload="none"'.
               ' poster="http://cf.cdn.vid.ly/'.$vid.'/poster.jpg">'.
               '<source src="http://cf.cdn.vid.ly/'.$vid.'/mp4.mp4" '.
               'type="video/mp4">'.
               '<source src="http://cf.cdn.vid.ly/'.$vid.'/webm.webm" '.
               'type="video/webm">'.
               '<source src="http://cf.cdn.vid.ly/'.$vid.'/ogv.ogv" '.
               'type="video/ogg">'.
               '<a target="_blank" href="http://vid.ly/'.$vid.'">'.
               '<img src="http://cf.cdn.vid.ly/'.$vid.'/poster.jpg" '.
               'width="500"></a>'.
               '</video>';
      $message = '<div class="success"><h1>Conversion started</h1>'.
                 '<p>The video conversion is under way. '.
                 'You should get an email telling you so and an email when '.
                 'the video URL is ready. The code to copy & paste into '.
                 'the blog is:</p>'.
                 '<textarea>'.htmlspecialchars($video).'</textarea></div>';
    } else {
        $message = '<div class="error"><h1>Error</h1>'.
                   '<p>Something went wrong in the conversion,'.
                   'please try again.</p></div>';
    }
 
  } else {
    $message = '<div class="error"><h1>Error</h1>'.
               '<p>Please provide a video URL and email</p></div>';
  }
}
?>

Pretty simple, isn’t it? Now my colleagues can add their email, give the form a URL where the video to convert lives on the web, and get copy-and-paste HTML for the video, for example:

<video controls preload="none" style="width:100%;height:300px;" 
poster="http://cf.cdn.vid.ly/1l5i5m/poster.jpg">
<source src="http://cf.cdn.vid.ly/1l5i5m/mp4.mp4" type="video/mp4">
<source src="http://cf.cdn.vid.ly/1l5i5m/webm.webm" type="video/webm">
<source src="http://cf.cdn.vid.ly/1l5i5m/ogv.ogv" type="video/ogg">
<a target="_blank" href="http://vid.ly/1l5i5m">
<img src="http://cf.cdn.vid.ly/1l5i5m/poster.jpg" width="500"></a>
</video>

Which results in:

Giving HTML5 video to the browsers that support it and a link to vid.ly for those who don’t :) The code is on GitHub as a Gist.