Christian Heilmann

Archive for the ‘General’ Category

How to write an article or tutorial the fast way

Saturday, January 2nd, 2010

As you know if you come here often, I am a very prolific writer and churn out blog posts and articles very quickly. Some people asked me how I do that – especially as they want to take part in Project 52.

Well, here is how I approach writing a new post/article:

Step 1: Find a problem to solve

Your article should solve an issue – either one you encountered yourself and always wanted to find a solution to on the web (this is how I started this blog) or something people ask on mailing lists, forums or Twitter.

Step 2: Research or code (or both)

Start by researching the topic you want to cover. When you write, you don’t want to get side-tracked by looking up web sites. Do your surfing, copy and paste the quotes and URLs, take the screenshots and all that jazz. Put them in a folder on your hard drive.

If your article is a code tutorial, code the whole thing and save it in different steps (plain_html.html, styled.html, script.html, final.html, final_with_docs.html). Do this step well – you will copy and paste parts of the code into your article, and if you find mistakes later you will have to fix them in two places. Make sure this code can be used by others and does not need anything only you can provide (for more tips check the “Write excellent code examples” chapter of the Developer Evangelism Handbook).

Step 3: Build the article outline

The next thing I do is write the outline of the article as weighted headlines (HTML, eh?). This has a few benefits.

  • You know what you will cover, and it allows you to limit yourself to what is really needed.
  • You will know what follows the part you are currently writing, so you already know what you don’t need to mention yet. I tend to get excited and want to say everything in the first few lines. This is bad as it doesn’t take the readers on a journey but overloads them instead.
  • You can estimate the size of the overall article.
  • You can write the different parts independently of one another. If you get stuck with one sub-topic, jump to one you know inside out and get it out of the way.

It would look something like this:

Turning a nested list into a tree navigation

See the demo, download the code

Considering the audience

How do tree navigations work?

Allowing for styling

Accessibility concerns

Start with the minimal markup

Add styling

The dynamic CSS class switch

Add the script

Event delegation vs. Event handling

Adding a configuration file

Other options to consider

See it in action

Contact and comment options

Step 4: Fill in keywords for each section

For each of the sections just put in a list of keywords or topics you want to cover. This will help you to write the full text.

Turning a nested list into a tree navigation

See the demo, download the code

working demo, code on github

Considering the audience

who needs tree navigations? where are they used?

How do tree navigations work?

How does a tree navigation work? What features are common? How to allow expanding a sub-branch and keep a link to a landing page?

Allowing for styling

keep look and feel away from the script, write a clean css with background images.

Accessibility concerns

Consider keyboard access. cursor keys, tabbing not from link to link but section to section and enter to expand.

Start with the minimal markup

clean HTML, simple CSS handles, not a class per item

Add styling

show the style, explain how to alter it - show a few options

The dynamic CSS class switch

the trick to add a class to a parent element. allows for styles for the dynamic and non-dynamic version. Also prevents the need for looping

Add the script

Performance tricks, safe checking for elements, structure of the script

Event delegation vs. Event handling

One event is enough. Explain why - the menu will change as it will be maintained elsewhere.

Adding a configuration file

Take all the strings, colours and parameters and add it to a configuration file - stops people from messing with your code.

Other options to consider

Dynamic loading of child branches.

See it in action

Show again where it is and if it was used in live sites

Contact and comment options

Tell me where and how to fix things

Step 5: Write the full text for each section

As mentioned before, you can do that in succession or part by part. I find myself filling in different sections at different times. Mostly I get out the laptop on the train and fill in a quick section I know very well during a short ride. That gets it out of the way.

Step 6: Add fillers from section to section

I then add a sentence after each section that sums up what we achieved and what we will do next. This is not strictly needed but great for the reading flow.

Step 7: Read the lot and delete what can be deleted

The last step is to read the whole text (preferably printed out, as you find more mistakes that way) and see how it flows. Alter as needed and remove all the things that seemed like a great idea when you first wrote them but seem superfluous now. People are busy.

Step 8: Put it live and wait for the chanting groupies

Find a place to put the article, convert it to the right format, check all the links and images and you are ready to go.

More, please, more!

More tips on the style of the article itself are also listed in the “Write great posts and articles” chapter of the Developer Evangelism Handbook.

Look back at 2009 and resolutions for 2010

Friday, January 1st, 2010

Well it is the beginning of 2010 so time to talk about some resolutions.

  • As I am planning to get a lighter laptop to lug around, my main resolution will go down from 1440×900 to 1280×800 – unless Apple changes that for the new 13”.
  • My place is a mess – the reason is that I am never here. So my resolutions for this year are:
      • De-clutter my flat
      • Throw away (or bring to the charity shop or give to @pekingspring to sell at the Amnesty International boot sales) stuff I don’t need – and that is a lot. I still have unopened moving crates from my flat back in Germany (1999).
      • Use my flat more (I have lived here for 4 years and have yet to use the oven!)
      • Work more from home
  • Check my financial situation and what is working for me – I have money and I have a few insurance policies and private pension plans, but I have no clue what they are any more. Also, I have quite a few shares and I am totally oblivious to what they are worth and what I can do with them.
  • Write another book
  • Organize (or help organize) another small conference
  • Concentrate more on the topics of security and performance.
  • Do something awesome for my parents as they need to get out of the house and have trouble traveling.
  • Pick my battles instead of trying to change things that should have changed years ago but are hindered by complacency.
  • I will go to Iceland this year – I have always wanted to and in 10 years in England I never managed to.
  • I will also try to get to New Zealand and South Africa this year.

Interestingly enough 2009 was a very cool year for me and if I had had any resolutions last year these were the ones I was able to fulfill:

  • I spoke a lot at conferences and released a lot of articles, blog posts, code and tutorials.
  • I’ve reached out to audiences I hadn’t tapped beforehand (domainer conference, design agencies and museum brownbags…)
  • I’ve tech-edited a book and a few chapters and I was a judge at some competitions – I like doing that a lot.
  • I’ve lost a lot of weight by going to the gym a lot – there’s even the outline of a 4-pack visible (yes I know there is something wrong with this).
  • I’ve been to Japan – something I always wanted to do (shame I was sick when I was there). I’ve also been to Australia, another place I had always wanted to visit.

I think this is about it – I am back to bed now.

Writing for Smashingmagazine – what do you think I should cover?

Monday, December 28th, 2009

I guess it is a nice case of the squeaky wheel getting the oil… After complaining on Twitter about Smashing Magazine overdoing the “list posts” – you know, “543 jQuery plugins you really need” and “3214 ways to create drop-shadows” – I have now been asked to become one of the writers for the magazine.

I’ve always had a soft spot for Smashing Magazine as it rose quite quickly in an already full market and showed that dedication works out in the end. I’ve learnt a lot of my trade from online magazines, and later from blogs. Publications like A List Apart, Evolt.org, Digital Web and Sitepoint taught me CSS tricks, the basics of SEO and much more. When Digital Web shut down, A List Apart changed direction and other interesting new magazines like Particletree just didn’t quite get off the ground, I thought the format was over – and to a degree it is. Personal blogs, Twitter and Facebook groups have changed where we go for information, and the old-school editorial approach appears stilted and seems to hold us back.

I disagree though. A good editorial process means we deliver better content. Books are great not because the author is a genius but because technical editors challenge the author to explain things better, copy editors fix spelling and grammar mistakes, and the same subject gets prodded over and over again until it is reduced to the bare minimum and easy to understand.

Where it goes pear-shaped is when your editorial work is not appreciated and the reader numbers (and ad views) do not cover the cost of paying writers and editors. Back when the first mags came out this was not an issue – people were happy to do this for free. Nowadays, however, it is much more of a business and a lot of online writers ask for cash for their articles. Seeing the amount of work that goes into a good article this is totally fine, but what if you cannot find good articles every day?

This is when mags turn to list posts. These are quick to do and mean a new release for the mag – the RSS feed gets a new entry, people can tweet it and so on and so forth.

List posts are a real problem. They are immensely successful, as they are easy to digest, but they are also killing the overall quality of a magazine. As Scrivs on Drawar put it, in not very minced words:

It used to be so much better than this. Every article that you came across wasn’t a tutorial or list. Hell, the majority of them weren’t tutorials or lists. There were articles that actually talked about design. There were articles that made you think how you could become a better designer and encouraged intellectual discussion on design. Those articles still exist here and there, but they are drowned out by the copycats.
The web design community is split into two sides: 1. loves to view every single list article there is 2. hates that list articles were ever invented. I fall into both camps because to me some list articles do serve a purpose, but when we start to see Design Trends of Spa Websites I think we might be going a bit too far.

I was very happy therefore when Smashingmagazine approached me (on Facebook of all things) to write for them as they want to change and release more meaty, in-depth articles that cause a discussion rather than a flurry of comments all saying “awesome” or similar YouTube-isms.

So the first two articles – written while stuck at airports on the way to my parents’ for Christmas – will be released in January and cover the following topics:

  • Basic performance testing using YSlow, PageSpeed and AOL Web Page Speed test
  • A seven step test to find the right JavaScript widget

What else would you like to see covered on Smashing Magazine and thus reach the massive number of readers it has?

Going a little crazy – one HTTP request RSS reader in JavaScript

Monday, December 21st, 2009

Photo: the joker and two face by ♠NiJoKeR♣

Ok, using YQL and playing around with the console can make you go a bit too far.

A few days ago, in response to my 24 ways article on YQL, my friend Jens Grochtdreis asked me how to get the thumbnails and some other data from the Slideshare site in one YQL request. He had tried various XPath filters until I pointed out that there is a perfectly valid RSS feed with thumbnails.

That made me wonder why we should have to care about detecting a feed ourselves at all – why not let the computer do the detection for us and use the feed when it is there? What I wanted to do was to automatically turn a simple HTML list of links to the sites into a list with the feed data as embedded lists.

The ungodly YQL request I came up with was the following:

select
title,link,content.thumbnail,thumbnail,description
from feed where url in (
select href from html where url in (
"http://wait-till-i.com",
"http://flickr.com/photos/codepo8",
"http://slideshare.com/cheilmann",
"http://youtube.com/chrisheilmann"
) and
xpath="//link[contains(@type,'rss')][1]")
|unique(field="link")

What is going on here? I am using the html table to read in each of the resources I want to analyse:

select * from html where url in (
"http://wait-till-i.com",
"http://flickr.com/photos/codepo8",
"http://slideshare.com/cheilmann",
"http://youtube.com/chrisheilmann"
)

Then I use XPath to return the first link element that has a type attribute containing ‘rss’, and in YQL I only take its href attribute.

select href from html where url in (
"http://wait-till-i.com",
"http://flickr.com/photos/codepo8",
"http://slideshare.com/cheilmann",
"http://youtube.com/chrisheilmann"
) and
xpath="//link[contains(@type,'rss')][1]")

Notice the joy that is XPath syntax… [1] means the first element – after all, 0 is the first, as every developer knows! We then use the feed table to get the feed information from each of these hrefs as urls:

select
title,link,content.thumbnail,thumbnail,description
from feed where url in (
select href from html where url in (
"http://wait-till-i.com",
"http://flickr.com/photos/codepo8",
"http://slideshare.com/cheilmann",
"http://youtube.com/chrisheilmann"
) and
xpath="//link[contains(@type,'rss')][1]")

The last problem was that this way Flickr returns each photo item several times, as it has an entry for the URL of the photo and one for the link to the photo’s license. Therefore we needed to use unique() to get only the first of these:

select
title,link,content.thumbnail,thumbnail,description
from feed where url in (
select href from html where url in (
"http://wait-till-i.com",
"http://flickr.com/photos/codepo8",
"http://slideshare.com/cheilmann",
"http://youtube.com/chrisheilmann"
) and
xpath="//link[contains(@type,'rss')][1]")
|unique(field="link")

So, this actually does what we want – we have all the different requests in one HTTP request and then only need some JavaScript to display it. The data coming back is a mess, as it is just an array of items – so we need to loop and check the link of each to know when to go to the next list item.

This is very quick and dirty:

var x = document.getElementById('feeds');
var containers = [];
if(x){
  var links = x.getElementsByTagName('a');
  var urls = [];
  // collect each link's href and keep its parent element as the container
  // the feed items will be written into
  for(var i=0,j=links.length;i<j;i++){
    urls.push('"' + links[i].getAttribute('href') + '"');
    containers.push(links[i].parentNode);
  }
  // assumption: the YQL statement gets assembled from the hrefs and loaded
  // as a JSON-P script element with seed() as the callback, roughly like this
  var yql = 'select title,link,content.thumbnail,thumbnail,description ' +
            'from feed where url in (select href from html where url in (' +
            urls.join(',') + ') and ' +
            'xpath="//link[contains(@type,\'rss\')][1]")|unique(field="link")';
  var s = document.createElement('script');
  s.src = 'http://query.yahooapis.com/v1/public/yql?q=' +
          encodeURIComponent(yql) + '&format=json&callback=seed';
  document.getElementsByTagName('head')[0].appendChild(s);
}
function seed(o){
  var items = o.query.results.item;
  var out = '';
  var c = 0;
  for(var i=0;i<items.length;i++){
    out += '<li><a href="' + items[i].link + '">' + items[i].title + '</a>';
    if(items[i].thumbnail || items[i].content){
      var thumb = items[i].thumbnail || items[i].content.thumbnail;
      out += '<img src="' + thumb + '" alt="">';
    } else {
      if(items[i].description.indexOf('src')!=-1){
        var thumb = items[i].description.split('src="')[1];
        thumb = thumb.split('"')[0];
        out += '<img src="' + thumb + '" alt="">';
      }
    }
    out += '</li>';
    // when the next item links to a different site, write out the list
    // collected so far and move on to the next container
    if((items[i+1] && items[i+1].link.substr(0,20) !=
        items[i].link.substr(0,20))){
      containers[c].innerHTML += '<ul>' + out + '</ul>';
      c++;
      out = '';
    }
  }
  containers[c].innerHTML += '<ul>' + out + '</ul>';
}

However, the bad news about this is that it is pretty pointless as the performance is terrible. Not really surprising if you see what the YQL servers have to do and how much data gets loaded and analysed.

Screenshot: pointless performance

You could of course cache the result locally – for example with a small file cache like the one sketched below – and thus get the load time down to a very small amount. However, if you go this way you might as well go fully server-side.
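Going server-side, a little caching goes a long way. Here is a minimal sketch only – assuming PHP with cURL available, a writable feeds.json file next to the script and the public YQL web service endpoint; the one-hour lifetime is an arbitrary choice:

<?php
// minimal caching sketch – the endpoint, cache file name and lifetime
// are assumptions for illustration, not part of the original script
$yql = 'select title,link,thumbnail from feed where url in '.
       '(select href from html where url="http://wait-till-i.com" '.
       'and xpath="//link[contains(@type,\'rss\')][1]")';
$url = 'http://query.yahooapis.com/v1/public/yql?q='.
       urlencode($yql).'&format=json';
$cache = 'feeds.json';
$maxage = 3600; // one hour
if(!file_exists($cache) || time() - filemtime($cache) > $maxage){
  // cache is missing or stale – call YQL and store the result
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_URL, $url);
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
  $output = curl_exec($ch);
  curl_close($ch);
  file_put_contents($cache, $output);
} else {
  // otherwise serve the stored copy
  $output = file_get_contents($cache);
}
header('content-type: application/json');
echo $output;
?>

This way the YQL servers only get hit once an hour, no matter how many visitors you have.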

I am currently working on making icant.co.uk perform much faster, so watch this space for a generic RSS displayer :)

cURL – your “view source” of the web

Friday, December 18th, 2009

What follows here is a quick introduction to the magic of cURL. This was inspired by the comment of Bruce Lawson on my 24 ways article:

Seems very cool and will help me with a small Xmas project. Unfortunately, you lost me at “Do the curl call”. Care to explain what’s happening there?

What is cURL?

OK, here goes. cURL is your “view source” tool for the web. In essence it is a program that allows you to make HTTP requests from the command line or different language implementations.

The cURL homepage has all the information about it but here is where it gets interesting.

If you are on a Mac or on Linux, you are in luck – for you already have cURL. If you are operating system challenged, you can download cURL in different packages.

On the aforementioned systems you can simply go to the terminal and do your first cURL thing: load a web site and see its source. To do this, simply enter

curl "http://icant.co.uk"

And hit enter – you will get the source of icant.co.uk (that is the rendered source, like a browser would get it – not the PHP source code of course):

Screenshot: showing icant.co.uk with curl

If you want the code in a file you can add a > filename.html at the end:

curl "http://icant.co.uk" > myicantcouk.html

Screenshot: downloading with curl

( The speed will vary of course – this is the Yahoo UK pipe :) )

That is basically what cURL does – it allows you to do any HTTP request from the command line. This includes simple things like loading a document, but also allows for clever stuff like submitting forms, setting cookies, authenticating over HTTP, uploading files, faking the referer and user agent, setting the content type and following redirects. In short, anything you can do with a browser.

I could explain all of that here, but this is tedious as it is well explained (if not nicely presented) on the cURL homepage.

How is that useful for me?

Now, where this becomes really cool is when you use it inside another language that you use to build web sites. PHP is my weapon of choice for a few reasons:

  • It is easy to learn for anybody who knows HTML and JavaScript
  • It comes with almost every web hosting package

The latter is also where the problem is. As a lot of people write terribly shoddy PHP, the web is full of insecure web sites. This is why a lot of hosters disallow some of the useful things PHP comes with. For example, you can load and display a file from the web with readfile():

<?php
  readfile('http://project64.c64.org/misc/assembler.txt');
?>

Actually, as this is a text file, it needs the right header:

<?php
  header('content-type: text/plain');
  readfile('http://project64.c64.org/misc/assembler.txt');
?>

You will find, however, that a lot of hosters will not allow you to read files from other servers with readfile(), fopen() or include(). Mine, for example:

Screenshot: readfile not allowed

And this is where cURL comes in:

<?php
header('content-type:text/plain');
// define the URL to load
$url = 'http://project64.c64.org/misc/assembler.txt';
// start cURL
$ch = curl_init(); 
// tell cURL what the URL is
curl_setopt($ch, CURLOPT_URL, $url); 
// tell cURL that you want the data back from that URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
// run cURL
$output = curl_exec($ch); 
// end the cURL call (this also cleans up memory so it is 
// important)
curl_close($ch);
// display the output
echo $output;
?>

As you can see, the options are where things get interesting – and the ones you can set are legion.
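To give you a taste, here is a hand-picked selection covering the form, cookie, authentication and redirect tricks mentioned above – the URL and values are placeholders, and the PHP manual lists every CURLOPT_ constant there is:

<?php
// a hand-picked selection of cURL options – see the PHP manual for the full list
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://example.com/form');      // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);                    // give me the data back
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);                    // follow redirects
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0');             // fake the user agent
curl_setopt($ch, CURLOPT_REFERER, 'http://example.com/');       // fake the referer
curl_setopt($ch, CURLOPT_POST, 1);                              // send a POST request…
curl_setopt($ch, CURLOPT_POSTFIELDS, 'name=value&other=value'); // …with these form fields
curl_setopt($ch, CURLOPT_COOKIE, 'session=abc123');             // send a cookie along
curl_setopt($ch, CURLOPT_USERPWD, 'user:password');             // HTTP authentication
curl_setopt($ch, CURLOPT_TIMEOUT, 10);                          // give up after ten seconds
$output = curl_exec($ch);
curl_close($ch);
?>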

So, instead of just including or loading a file, you can now alter the output in any way you want. Say you want for example to get some Twitter stuff without using the API. This will get the profile badge from my Twitter homepage:

<?php
$url = 'http://twitter.com/codepo8';
$ch = curl_init(); 
curl_setopt($ch, CURLOPT_URL, $url); 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
$output = curl_exec($ch); 
curl_close($ch);
$output = preg_replace('/.*(<div id="profile"[^>]+>)/msi','$1',$output);
$output = preg_replace('/<hr.>.*/msi','',$output);
echo $output;
?>

Notice that Twitter’s HTML uses a table for the stats, where a list would have done the trick. Let’s rectify that:

<?php
$url = 'http://twitter.com/codepo8';
$ch = curl_init(); 
curl_setopt($ch, CURLOPT_URL, $url); 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
$output = curl_exec($ch); 
curl_close($ch);
$output = preg_replace('/.*(<div id="profile"[^>]+>)/msi','$1',$output);
$output = preg_replace('/<hr.>.*/msi','',$output);
$output = preg_replace('/<\/?table>/','',$output);
$output = preg_replace('/<(\/?)tr>/','<$1ul>',$output);
$output = preg_replace('/<(\/?)td>/','<$1li>',$output);
echo $output;
?>

Scraping stuff off the web is but one thing you can do with cURL. Most of the time what you will be doing is calling web services.

Say you want to search the web for donkeys, you can do that with Yahoo BOSS:

<?php
$search = 'donkeys';
$appid = 'appid=TX6b4XHV34EnPXW0sYEr51hP1pn5O8KAGs'.
         '.LQSXer1Z7RmmVrZouz5SvyXkWsVk-';
$url = 'http://boss.yahooapis.com/ysearch/web/v1/'.
       $search.'?format=xml&'.$appid;
$ch = curl_init(); 
curl_setopt($ch, CURLOPT_URL, $url); 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
$output = curl_exec($ch); 
curl_close($ch);
$data = simplexml_load_string($output);
foreach($data->resultset_web->result as $r){
  echo "<h3><a href=\"{$r->clickurl}\">{$r->title}</a></h3>";
  echo "<p>{$r->abstract} <span>({$r->url})</span></p>";
}
?>

You can also do that for APIs that need POST or other authentication. Say for example to use Placemaker to find locations in a text:

<?php
$content = 'Hey, I live in London, England and on Monday '.
           'I fly to Nuremberg via Zurich,Switzerland (sadly enough).';
$key = 'C8meDB7V34EYPVngbIRigCC5caaIMO2scfS2t'.
       '.HVsLK56BQfuQOopavckAaIjJ8-';
define('POSTURL',  'http://wherein.yahooapis.com/v1/document');
define('POSTVARS', 'appid='.$key.'&documentContent='.
                    urlencode($content).
                   '&documentType=text/plain&outputType=xml');
$ch = curl_init(POSTURL);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, POSTVARS);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);  
$x = curl_exec($ch);
$places = simplexml_load_string($x, 'SimpleXMLElement',
                                LIBXML_NOCDATA);    
echo "<p>$content</p>";
echo "<ul>";
foreach($places->document->placeDetails as $p){
  $now = $p->place;
  echo "<li>{$now->name}, {$now->type} ";
  echo "({$now->centroid->latitude},{$now->centroid->longitude})</li>";
};
echo "</ul>";
?>

Why is all that necessary? I can do that with jQuery and Ajax!

Yes, you can, but can your users? Also, can you afford to have a page that is not indexed by search engines? Can you be sure that none of the other JavaScript on the page will throw an error and take all of your functionality with it?

By sticking to your server to do the hard work, you can rely on things working. If you use web resources in JavaScript you are first of all hoping that the user’s computer and browser understand what you want, and you also open yourself up to all kinds of dangerous injections. JavaScript is not secure – every script executed in your page has the same rights. If you load third-party content with JavaScript and you don’t filter it very cleverly, the maintainers of the third-party code can inject malicious code that allows them to steal information from your server and log in as your users or as you.

And why the C64 thing?

Well, the lads behind cURL actually used to do demos on the C64 (as did I). Just look at the difference:

Screenshot: Horizon, 1990

Screenshot: haxx.se, 2000