Christian Heilmann


Archive for the ‘General’ Category


TTMMHTM: WWW birthday, Morse code, Nano puppets, OS design, FireEagle plugin and really getting things done

Friday, March 13th, 2009

Things that made me happy this morning

  • The world wide web is 20 years old today, thank you Tim!
  • Text message vs. Morse code shows that old school is still the best school
  • The Nano song explains nano technology – in musical style and with puppets
  • iDaft is a Daft Punk soundboard in Flash
  • OS UI design 1981 – 2009 shows most of the operating system interfaces of that period – missing are Atari TOS, C64 GEOS and C128 CP/M
  • Godzillabukkake is something that I leave uncommented (WTF)
  • Accessibility Tips are collected accessibility tips by colleagues at Yahoo in London
  • It is time to break out of twitter discusses the impact of twitter but also shows that some people take it too seriously
  • The cult of done can kiss my ass and then mark it off their list is a rant about people who are too busy “getting things done” to check whether they did them in a sensible way
  • The FireEagle Firefox add-on is out and finally makes it easy for me to update my FireEagle location.

Tags: fireeagle, gettingthingsdone, morse, nanotechnology, twitter
Posted in General | 2 Comments »

A few things the web development community can learn from The Green Movie

Thursday, March 12th, 2009

One of the, oh heck, the only really good thing about flying Delta was their Fly-in-Movie competition. This is a section of their entertainment program where they show short movies of budding movie makers who compete to be shown at the Tribeca Film Festival in New York this coming April.

The green film

One of the movies in there is The Green Film and I loved it (“Cold call” was also very good).

The Green Movie

In this six-minute movie a self-righteous film director proclaims, pompously and full of enthusiasm, that they are producing the greenest movie ever. All the food is organic, everything gets recycled, all the make-up is free of animal testing and there is not a single thing out of order that would cause a frown on the faces of Friends of the Earth.

The wrongdoers and how they should be lectured

When the main actress arrives she rolls up in a stretch limo and asks for her trailer. The director tells her off for not cycling or taking a bus and shows her a deck chair and an umbrella, which are to be her “trailer”. He goes on to explain all the bad things that do not happen on his set and delivers a detailed sermon about the plywood used on other sets, which is actually made from rainforest wood. He is also very insightful about using the right light bulbs on the whole set.

Getting caught out

The actress, on the other hand, starts to wonder about the professionalism of the whole setup – which culminates in her asking whether the movie is shot on film rather than digitally. The director then goes nuts at the mere idea of movies being shot digitally, claiming that digital film is just “TV on big screens”. His rant goes so far as to proclaim that art could never be made with digital cameras. To the actress’s argument that film processing involves toxic chemicals and shipping reels all over the world, the only thing the director comes up with is “but we recycle – a lot!”.

The movie ends with the actress filming herself in the woods using her mobile (cellphone for Americans).

How this applies to us

This is exactly how we get stuck when advocating best practices on the web. One interesting exchange that shows this is Chromatic Sites advocating CSS over table layouts and Mike Davies shining a massive light of truth on the arguments provided.

Another interesting “oh not again” moment was Jeffrey Zeldman doing the inaugural testing of the top 100 sites in a validator causing an avalanche of comments.

You know what? We’re wasting time and energy in these discussions, and we are so immersed in our own “doing the right thing” that we forgot to care about what we wanted to achieve in the first place. We get into meticulous details when explaining certain technologies and invent idea after idea based on the same technologies we tried – and failed – to make people understand by force years and years ago.

Standards and best practices are there for a single reason: to make our work predictable and easy for other developers to work with. This only works if everybody is on board and understands these best practices – in essence, following them needs to make their job easier. If following a “best practice” doesn’t make our lives easier but produces extra overhead, it will not catch on.

Instead of concentrating on showing the benefits of working in a predictable manner we concentrate on ticking all the right boxes and telling everybody who is unfortunate enough to listen about all the details we had to think about to get where we are. We know all about the plywood and the right light bulbs but we forgot to talk in the language of the people we want to reach with our ideas. We are not concentrating on how we deliver the message and that there might be better techniques and technologies available nowadays than the great problem solvers of the past.

Web development is evolving and changing to new channels of distribution and re-use. Widget frameworks allow re-use of the same little application across the web, mobile devices and now even television sets. These things are what we should have our sights on, not whether a certain document passes a dumb validation test. Validation is the beginning of a quality control process, not the end of it. Semantic value cannot be validated by a dumb machine but needs a human to check. Zeldman did point this out in his introduction to the test, but this message always gets forgotten in the uproar of indignation over an unencoded ampersand.

Tags: advocating, evangelism, green, standards, webstandards
Posted in General | 2 Comments »

Building a hack using YQL, Flickr and the web – step by step

Wednesday, March 11th, 2009

As you probably know, I spend a lot of time speaking and mentoring at hack days for Yahoo. I go to open hack days and university hack days, and even organized my own hack day revolving around accessibility last year.

One of the main questions I get is about technologies to use. People are happy to find content on the web, but getting it and mixing it with other sources is still a bit of an enigma.

In the following I will go through a hack I prepared at the Georgia Tech University hack day. I am using PHP to retrieve information from the web, YQL to filter it down to what I need, and YUI to do the CSS layout and add extra functionality.

The main ingredient of a good hack – the idea

I give a lot of presentations and every time I do people ask me where I get the pictures I use from. The answer is Flickr and some other resources on the internet. The next question is how much time I spend finding them and that made me think about building a small tool to make this easier for me.

This is how Slidefodder started; below is a screenshot of the hack in action. If you want to play with it, you can download the Slidefodder source code.

Slide Fodder - find CC licensed photos and funpics for your slides

Step 1: retrieving the data

The next thing I could have done is deep-dive into the Flickr API to get photos that I am allowed to use. Instead, I am happy to say, YQL gives you a wonderful shortcut to do this without brooding over documentation for hours on end.

Using YQL I can get photos from flickr with the right license and easily display them. The YQL statement to search photos with the correct license is the following:


select id from flickr.photos.search(10) where text='donkey' and license=4

Retrieving CC licensed photos from flickr in YQL

You can try the flickr YQL query here and you’ll see that the result (once you’ve chosen JSON as the output format) is a JSON object with photo results:


{
"query": {
"count": "10",
"created": "2009-03-11T01:23:00Z",
"lang": "en-US",
"updated": "2009-03-11T01:23:00Z",
"uri": "http://query.yahooapis.com/v1/yql?q=select+*+from+flickr.photos.search%2810%29+where+text%3D%27donkey%27+and+license%3D4",
"diagnostics": {
"publiclyCallable": "true",
"url": {
"execution-time": "375",
"content": "http://api.flickr.com/services/rest/?method=flickr.photos.search&text=donkey&license=4&page=1&per_page=10"
},
"user-time": "376",
"service-time": "375",
"build-version": "911"
},
"results": {
"photo": [
{
"farm": "4",
"id": "3324618478",
"isfamily": "0",
"isfriend": "0",
"ispublic": "1",
"owner": "25596604@N04",
"secret": "20babbca36",
"server": "3601",
"title": "donkey image"
}
[...]
]
}
}
}
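If you want to issue such a request from your own code, it is just the YQL statement URL-encoded onto the public endpoint shown in the uri field above. A minimal sketch (the helper name is mine, not part of YQL):

```javascript
// Build a request URL for the public YQL endpoint used in this post.
function yqlUrl(statement) {
  return 'http://query.yahooapis.com/v1/public/yql?q=' +
         encodeURIComponent(statement) + '&format=json';
}

// e.g. the flickr search from above:
var url = yqlUrl("select id from flickr.photos.search(10) " +
                 "where text='donkey' and license=4");
```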

The problem with this is that the user name is not provided anywhere, just the owner’s Flickr ID. As I wanted to get the user name, too, I needed to nest a YQL query for that:

select farm,id,secret,server,owner.username,owner.nsid from flickr.photos.info where photo_id in (select id from flickr.photos.search(10) where text='donkey' and license=4)

This gives me only the information I really need (try the nested flickr query here):


{
"query": {
"count": "10",
"created": "2009-03-11T01:24:45Z",
"lang": "en-US",
"updated": "2009-03-11T01:24:45Z",
"uri": "http://query.yahooapis.com/v1/yql?q=select+farm%2Cid%2Csecret%2Cserver%2Cowner.username%2Cowner.nsid+from+flickr.photos.info+where+photo_id+in+%28select+id+from+flickr.photos.search%2810%29+where+text%3D%27donkey%27+and+license%3D4%29",
"diagnostics": {
"publiclyCallable": "true",
"url": [
{
"execution-time": "394",
"content": "http://api.flickr.com/services/rest/?method=flickr.photos.search&text=donkey&license=4&page=1&per_page=10"
},
[...]
],
"user-time": "1245",
"service-time": "4072",
"build-version": "911"
},
"results": {
"photo": [
{
"farm": "4",
"id": "3344117208",
"secret": "a583f1bb04",
"server": "3355",
"owner": {
"nsid": "64749744@N00",
"username": "babasteve"
}
}
[...]
]
}
}
}

The next step was getting the data from the other resources I normally tap into: Fail blog and I can has cheezburger. As neither of them has an API, I need to scrape the HTML data of the page. Luckily this is also possible with YQL: all you need to do is select data from html and give it an XPath. I found the XPath by analysing the page source in Firebug:

Using Firebug to find the right xpath to an image

This gave me the following YQL statement to get images from both blogs. You can list several sources as an array inside the in() statement:


select src from html where url in ('http://icanhascheezburger.com/?s=donkey','http://failblog.org/?s=donkey') and xpath="//div[@class='entry']/div/div/p/img"

Retrieving blog images using YQL

The result of this query is again a JSON object with the src values of photos matching this search:


{
"query": {
"count": "4",
"created": "2009-03-11T01:28:35Z",
"lang": "en-US",
"updated": "2009-03-11T01:28:35Z",
"uri": "http://query.yahooapis.com/v1/yql?q=select+src+from+html+where+url+in+%28%27http%3A%2F%2Ficanhascheezburger.com%2F%3Fs%3Ddonkey%27%2C%27http%3A%2F%2Ffailblog.org%2F%3Fs%3Ddonkey%27%29+and+xpath%3D%22%2F%2Fdiv%5B%40class%3D%27entry%27%5D%2Fdiv%2Fdiv%2Fp%2Fimg%22",
"diagnostics": {
"publiclyCallable": "true",
"url": [
{
"execution-time": "1188",
"content": "http://failblog.org/?s=donkey"
},
{
"execution-time": "1933",
"content": "http://icanhascheezburger.com/?s=donkey"
}
],
"user-time": "1939",
"service-time": "3121",
"build-version": "911"
},
"results": {
"img": [
{
"src": "http://icanhascheezburger.files.wordpress.com/2008/09/funny-pictures-you-are-making-a-care-package-very-correctly.jpg"
},
{
"src": "http://icanhascheezburger.files.wordpress.com/2008/01/funny-pictures-zebra-donkey-family.jpg"
},
{
"src": "http://failblog.files.wordpress.com/2008/11/fail-owned-donkey-head-intimidation-fail.jpg"
},
{
"src": "http://failblog.files.wordpress.com/2008/03/donkey.jpg"
}
]
}
}
}

Writing the data retrieval API

The next thing I wanted to do was to write a small script to get the data and give it back to me as HTML. I could have used the JSON output directly in JavaScript but wanted to be independent of scripting. The script (or API if you will) takes a search term, filters it and executes both of the YQL statements above before returning a list of HTML items with photos in them. You can try it out for yourself: search for the term donkey or search for the term donkey and get it back as a JavaScript call.
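The term filtering is a straightforward whitelist check; here is the same idea as a JavaScript sketch (the function name is mine, the allowed characters mirror the PHP check further down):

```javascript
// Allow only letters, numbers, spaces and a few safe punctuation
// characters in the search term - everything else is rejected.
function validTerm(s) {
  return /^[0-9a-zA-Z +._-]+$/.test(s);
}
```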

I use cURL to get the data, as my server has pulling external data directly via PHP disabled. This approach should work on most servers.

Here’s the full “API” code:


<?php
if(isset($_GET['js']) && $_GET['js'] === 'yes'){
    header('Content-type: text/javascript');
    $out = 'seed({html:"';
} else {
    $out = '';
}
if(isset($_GET['s'])){
    $s = $_GET['s'];
    if(preg_match("/^[0-9|a-z|A-Z|-| |+|.|_]+$/",$s)){
        $flickurl = 'http://query.yahooapis.com/v1/public/yql?q=select'.
        '%20farm%2Cid%2Csecret%2Cserver%2Cowner.username'.
        '%2Cowner.nsid%20from%20flickr.photos.info%20where%20'.
        'photo_id%20in%20(select%20id%20from%20'.
        'flickr.photos.search(10)%20where%20text%3D%27'.
        $s.'%27%20and%20license%3D4)&format=json';
        $failurl = 'http://query.yahooapis.com/v1/public/yql?q=select'.
        '%20*%20from%20html%20where%20url%20in'.
        '%20(%27http%3A%2F%2Ficanhascheezburger.com'.
        '%2F%3Fs%3D'.$s.'%27%2C%27http%3A%2F%2Ffailblog.org'.
        '%2F%3Fs%3D'.$s.'%27)%20and%20xpath%3D%22%2F%2Fdiv'.
        '%5B%40class%3D%27entry%27%5D%2Fdiv%2Fdiv%2Fp%2Fimg%22&'.
        'format=json';
        $out .= '<ul>';
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $flickurl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        curl_close($ch);
        $flickrphotos = json_decode($output);
        foreach($flickrphotos->query->results->photo as $a){
            $o = $a->owner;
            $out .= '<li><img src="http://farm'.$a->farm.'.static.flickr.com/'.
                    $a->server.'/'.$a->id.'_'.$a->secret.'.jpg" alt="">';
            $href = 'http://www.flickr.com/photos/'.$o->nsid.'/'.$a->id;
            $out .= '<a href="'.$href.'">'.$href.'</a> - '.$o->username.'</li>';
        }
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $failurl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        curl_close($ch);
        $failphotos = json_decode($output);
        foreach($failphotos->query->results->img as $a){
            $out .= '<li>';
            // icon image marking the source blog ("ico" class, file names illustrative)
            if(strpos($a->src,'failblog') !== false){
                $out .= '<img class="ico" src="failblog.gif" alt="Failblog">';
            } else {
                $out .= '<img class="ico" src="cheezburger.gif" alt="I can has cheezburger">';
            }
            $out .= '<img src="'.$a->src.'" alt="'.$a->alt.'">'.$a->alt.'</li>';
        }
        $out .= '</ul>';
        if($_GET['js'] === 'yes'){
            $out .= '"})';
        }
        echo $out;
    } else {
        echo ($_GET['js'] !== 'yes') ?
            '<p>Invalid search term.</p>' :
            'seed({html:"Invalid search Term!"})';
    }
}
?>

    Let’s go through it step by step:

    if(isset($_GET['js']) && $_GET['js'] === 'yes'){
        header('Content-type: text/javascript');
        $out = 'seed({html:"';
    } else {
        $out = '';
    }

    I test if the js parameter is set and if it is I send a JavaScript header and start the JS object output.

    
    if(isset($_GET['s'])){
    $s = $_GET['s'];
    if(preg_match("/^[0-9|a-z|A-Z|-| |+|.|_]+$/",$s)){
    

    I get the search term and filter out invalid terms.

    
    $flickurl = 'http://query.yahooapis.com/v1/public/yql?q=select'.
    '%20farm%2Cid%2Csecret%2Cserver%2Cowner.username'.
    '%2Cowner.nsid%20from%20flickr.photos.info%20where%20'.
    'photo_id%20in%20(select%20id%20from%20'.
    'flickr.photos.search(10)%20where%20text%3D%27'.
    $s.'%27%20and%20license%3D4)&format=json';
    $failurl = 'http://query.yahooapis.com/v1/public/yql?q=select'.
    '%20*%20from%20html%20where%20url%20in'.
    '%20(%27http%3A%2F%2Ficanhascheezburger.com'.
    '%2F%3Fs%3D'.$s.'%27%2C%27http%3A%2F%2Ffailblog.org'.
    '%2F%3Fs%3D'.$s.'%27)%20and%20xpath%3D%22%2F%2Fdiv'.
    '%5B%40class%3D%27entry%27%5D%2Fdiv%2Fdiv%2Fp%2Fimg%22&'.
    'format=json';
    

    These are the YQL queries; you get them by clicking the “copy url” button in the YQL console.

    
    $out .= '<ul>';

    I then start the output list of the results.

    
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $flickurl);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $output = curl_exec($ch);
    curl_close($ch);
    $flickrphotos = json_decode($output);
    foreach($flickrphotos->query->results->photo as $a){
    $o = $a->owner;
    $out .= '<li><img src="http://farm'.$a->farm.'.static.flickr.com/'.
            $a->server.'/'.$a->id.'_'.$a->secret.'.jpg" alt="">';
    $href = 'http://www.flickr.com/photos/'.$o->nsid.'/'.$a->id;
    $out .= '<a href="'.$href.'">'.$href.'</a> - '.$o->username.'</li>';
    }

    I call cURL to retrieve the data from the flickr YQL query, do a json_decode and loop over the results. Notice the rather annoying way of having to assemble the flickr URL and image source – I found the pattern by clicking around flickr and checking the src attribute of images rendered on the page. The images with the “ico” class should tell me where the photo was from.
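The URL assembly mentioned above follows flickr’s static image URL pattern; as a JavaScript sketch (the helper name is mine, field names as in the JSON results earlier):

```javascript
// Assemble the image source and photo page URL from a photo record
// as returned by the nested flickr YQL query.
function flickrUrls(photo) {
  var img = 'http://farm' + photo.farm + '.static.flickr.com/' +
            photo.server + '/' + photo.id + '_' + photo.secret + '.jpg';
  var page = 'http://www.flickr.com/photos/' + photo.owner.nsid +
             '/' + photo.id;
  return { img: img, page: page };
}
```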

    
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $failurl);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $output = curl_exec($ch);
    curl_close($ch);
    $failphotos = json_decode($output);
    foreach($failphotos->query->results->img as $a){
    $out .= '<li>';
    // icon image marking the source blog ("ico" class, file names illustrative)
    if(strpos($a->src,'failblog') !== false){
        $out .= '<img class="ico" src="failblog.gif" alt="Failblog">';
    } else {
        $out .= '<img class="ico" src="cheezburger.gif" alt="I can has cheezburger">';
    }
    $out .= '<img src="'.$a->src.'" alt="'.$a->alt.'">'.$a->alt.'</li>';
    }

    Retrieving the blog data works the same way; the only extra step was checking which blog each resulting image came from.

    
    $out .= '</ul>';
    if($_GET['js'] === 'yes'){
        $out .= '"})';
    }
    echo $out;
    

    I close the list and – if JavaScript was desired – the JavaScript object and function call.

    
    } else {
    echo ($_GET['js']!=='yes') ?
    '<p>Invalid search term.</p>' :
    'seed({html:"Invalid search Term!"})';
    }
    }
    ?>

    If there was an invalid term entered I return an error message.

    Setting up the display

    Next I went to the YUI grids builder and created a shell for my hack. Using the generated code, I added a form, my YQL API, an extra stylesheet for some colouring and two IDs (“content” and “basket”) for easy access from my JavaScript.
    Slide Fodder

    Slide Fodder by Christian Heilmann, hacked live at Georgia Tech University Hack day using YUI and YQL.

    Photo sources: Flickr, Failblog and I can has cheezburger.

    Rounding up the hack with a basket

    The last thing I wanted to add was a “basket” functionality which would allow me to do several searches and then copy and paste all the photos in one go once I am happy with the result. For this I’d either have to do a persistent storage somewhere (DB or cookies) or use JavaScript. I opted for the latter.

    The JavaScript uses YUI and is no rocket science whatsoever:

    
    function seed(o){
        YAHOO.util.Dom.get('content').innerHTML = o.html;
    }
    YAHOO.util.Event.on('f','submit',function(e){
        var s = document.createElement('script');
        s.src = 'yql.php?js=yes&s='+ YAHOO.util.Dom.get('s').value;
        document.getElementsByTagName('head')[0].appendChild(s);
        // show a loading indicator while the script call runs (file name illustrative)
        YAHOO.util.Dom.get('content').innerHTML = '<img src="loading.gif" alt="loading...">';
        YAHOO.util.Event.preventDefault(e);
    });
    YAHOO.util.Event.on('content','click',function(e){
        var t = YAHOO.util.Event.getTarget(e);
        if(t.nodeName.toLowerCase()==='img'){
            var str = '<div><img src="'+t.src+'" alt="">';
            if(t.src.indexOf('flickr')!==-1){
                str += '<p>'+t.parentNode.getElementsByTagName('a')[0].innerHTML+'</p>';
            }
            str += '<a href="#">x</a></div>';
            YAHOO.util.Dom.get('basket').innerHTML += str;
        }
        YAHOO.util.Event.preventDefault(e);
    });
    YAHOO.util.Event.on('basket','click',function(e){
        var t = YAHOO.util.Event.getTarget(e);
        if(t.nodeName.toLowerCase()==='a'){
            t.parentNode.parentNode.removeChild(t.parentNode);
        }
        YAHOO.util.Event.preventDefault(e);
    });

    Again, let’s check it bit by bit:

    
    function seed(o){
    YAHOO.util.Dom.get('content').innerHTML = o.html;
    }
    

    This is the method called by the “API” when JavaScript was desired as the output format. All it does is change the HTML content of the DIV with the id “content” to the one returned by the “API”.

    
    YAHOO.util.Event.on('f','submit',function(e){
    var s = document.createElement('script');
    s.src = 'yql.php?js=yes&s='+ YAHOO.util.Dom.get('s').value;
    document.getElementsByTagName('head')[0].appendChild(s);
    YAHOO.util.Dom.get('content').innerHTML = '<img src="loading.gif" alt="loading...">'; // loading indicator (file name illustrative)
    YAHOO.util.Event.preventDefault(e);
    });
    

    When the form (the element with the ID “f”) is submitted, I create a new script element, give it the right src attribute – pointing to the API and carrying the search term – and append it to the head of the document. I add a loading image to the content section and stop the browser from submitting the form.

    
    YAHOO.util.Event.on('content','click',function(e){
    var t = YAHOO.util.Event.getTarget(e);
    if(t.nodeName.toLowerCase()==='img'){
        var str = '<div><img src="'+t.src+'" alt="">';
        if(t.src.indexOf('flickr')!==-1){
            str += '<p>'+t.parentNode.getElementsByTagName('a')[0].innerHTML+'</p>';
        }
        str += '<a href="#">x</a></div>';
        YAHOO.util.Dom.get('basket').innerHTML += str;
    }
    YAHOO.util.Event.preventDefault(e);
    });

    I am using event delegation to check when a user has clicked an image in the content section and create a new DIV with the image to add to the basket. When the image came from flickr (I check the src attribute) I also add the URL of the image source and the user name to use in my slides later on. I add an “x” link to remove the image from the basket and again stop the browser from performing its default behaviour.

    
    YAHOO.util.Event.on('basket','click',function(e){
    var t = YAHOO.util.Event.getTarget(e);
    if(t.nodeName.toLowerCase()==='a'){
    t.parentNode.parentNode.removeChild(t.parentNode);
    }
    YAHOO.util.Event.preventDefault(e);
    });
    

    In the basket I remove the DIV when the user clicks on the “x” link.

    That’s it

    This concludes the hack. It works, it helps me get photo material faster and it took me about half an hour to build, all in all. Yes, it could be improved in terms of accessibility, but it is enough for me – the idea was to show how to quickly use YQL and YUI with a few lines of PHP to deliver something that does a job :)

    Tags: flickr, hack, HTML, javascript, php, scraping, yql
    Posted in General | 5 Comments »

    News Mixer – my first attempt at using the Guardian’s open platform content API

    Tuesday, March 10th, 2009

    I am a very happy bunny at the moment. First of all because there is more yummy data on the web to play with – The Guardian just released a brand new API to access their archives – and secondly because I was invited to play with it before it went public. The announcement of the API was today and I spent a few hours yesterday in my hotel room before checking out to build News Mixer.

    News Mixer - web news and images enhanced by Guardian content

    The API is simple enough to use: once you have your developer key you can search for content and request more detailed data using a content ID. The next problem to tackle was what to build.

    Access of data and tags is easy

    I love that we turned the web from yet another information channel into a read/write web and that user generated content allows us to get information from everybody and not just from dedicated journalists. I also love that you can tag information and make it easier to find that way. Lastly I love that with products like BOSS you can now get access to information of search engines and use that in your own sites.

    Relevancy of tags?

    The tagging bit has me a bit annoyed though. A few years ago, when the idea was still fresh, people tagged like mad and with high-quality keywords. This seems to be on the decline: as faster connections allow us to upload more and more data in bulk, people have stopped tagging sensibly and rely more on automated tags like geolocation or EXIF data in images.

    Mixing user tags and professional categories

    I wanted to build a news site that finds keywords matching your search term that actually make sense, and I used two different APIs for that. BOSS allows you to search for news items and images, and the BOSS web search also offers keyterms for certain web sites. These keyterms are to a degree user-generated, as they are what people entered into Yahoo to find the sites. I then used the new Guardian Data API to pull relevant articles; as these are professionally tagged by journalists, this makes for more relevant keywords. Putting the two together means a good mix of professional and up-to-date information.
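The “putting together” is little more than concatenating the two keyword lists and removing duplicates; a sketch (hypothetical helper, not the actual News Mixer code):

```javascript
// Merge user-generated keyterms (BOSS) with professional tags
// (Guardian) into one de-duplicated, case-insensitive list.
function mixKeywords(keyterms, tags) {
  var seen = {};
  var out = [];
  keyterms.concat(tags).forEach(function (k) {
    var key = k.toLowerCase();
    if (!seen[key]) { seen[key] = true; out.push(k); }
  });
  return out;
}
```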

    The outcome is News Mixer and you can download the source code to play with it yourself.

    It was amazingly straightforward to build; the only snags I hit were the following:

    • Whilst BOSS provides keyterms for web searches, it does not do so for news searches. Therefore I used YQL to get the keyterms of each of the urls returned by news search in a nested loop. This is a bit hacky and I would love for that to change.
    • The Guardian API returns articles by relevancy and not by date. You can specify, though, that you want articles before or after a certain date, so all I had to do was get the current date and go back one month from it.
    • The content body of the Guardian API does not provide any paragraph or list information. This is very annoying as it results in terrible display (a massive chunk of text). I’ve worked around the issue by splitting the content at full stops and injecting paragraph breaks after every third of them, but that is guesswork and not the real structure of the text.
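The paragraph-guessing workaround from the last point can be sketched like this (a rough illustration, not the actual News Mixer code):

```javascript
// Split a flat chunk of article text at full stops and wrap every
// three sentences in a paragraph - guesswork, not real text structure.
function paragraphise(text) {
  var sentences = text.split('. ').filter(function (s) { return s.length; });
  var out = '';
  for (var i = 0; i < sentences.length; i += 3) {
    var chunk = sentences.slice(i, i + 3).join('. ');
    if (chunk.charAt(chunk.length - 1) !== '.') { chunk += '.'; }
    out += '<p>' + chunk + '</p>';
  }
  return out;
}
```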

    In any case I am happy to have such a cool new archive of information to play with and we’re working on open table definitions for YQL to make it easy for you to get to the goodies the Guardian offers us.

    Tags: api, BOSS, guardian, mashup, yql
    Posted in General | Comments Off on News Mixer – my first attempt at using the Guardian’s open platform content API

    TTMMHTM: Dazzle audiences, love audiences, cool data from the guardian and Opera WSC

    Friday, March 6th, 2009

    Things that made me happy this morning

    • OmniDazzle is a tool for the Mac that gives you a visual cue showing where your mouse pointer is. Great for live presentations of systems.
    • Opera released JavaScript best practices, my final article for the Opera Web Standards Curriculum JavaScript section
    • Seth Godin explains the two elements of a great presenter, respect from the audience and love for the audience
    • The Guardian newspaper in the UK has made some cool data available for us to mash up
    • The Mime on twitter – well if you haven’t got anything to say…
    • Dasher is an interesting concept of an interface to quickly enter text without using a keyboard:
    Dasher is an information-efficient text-entry interface, driven by natural continuous pointing gestures. Dasher is a competitive text-entry system wherever a full-size keyboard cannot be used – for example, when operating a computer one-handed, by joystick, touchscreen, trackball, or mouse; when operating a computer with zero hands (i.e., by head-mouse or by eyetracker); on a palmtop computer; on a wearable computer.
    The eyetracking version of Dasher allows an experienced user to write text as fast as normal handwriting – 29 words per minute; using a mouse, experienced users can write at 39 words per minute.
    Dasher can be used to write efficiently in any language.

    Tags: accessibility, dasher, presentations, text entry
    Posted in General | 2 Comments »
