Christian Heilmann


Archive for August, 2009

TTMMHTM: Barcamps, datasets, social mentions and Python in JavaScript

Sunday, August 16th, 2009

Things that made me happy this morning:

Fantastic voyage into the web of data – my talk at the Webmontag in Frankfurt, Germany

Tuesday, August 11th, 2009

Yesterday I gave a talk at the Webmontag in Frankfurt, Germany about using APIs to build web sites from distributed data using YQL. Here are the slides of the talk followed by my notes.

Transcript / Notes

Fantastic voyage into the web of data

Web development has changed drastically in the last few years. Sadly, not all of the options that are open to us are common practice yet.

Developing the web vs. developing for the web

The main issue is that instead of using the web to develop our products, we still develop products for the web. Instead of embracing the fact that there is no "offline" when it comes to web sites, we still build products that keep all the content and media on one server and write mediocre solutions for dealing with images, video content or outbound links, while great products already exist that were built for exactly those use cases.

Instead of concentrating our energies on improving the content of the web – using proper textual structures, providing alternative content, adding semantic meaning and geospatial context and so on – we spend most of our days bickering on forums, mailing lists, blogs and really any other platform about the technologies that drive the web.

There are dozens of solutions for making rounded corners work with any old browser out there, and we keep re-inventing new ways to use custom fonts on web sites, yet documentation on proper localisation and real accessibility of web products – things that benefit everybody – is rare.

Decentralised Data

The biggest mistake in web development to me is building a single point of entry for our users and then hoping that people will come. This is why we spend more time and money on SEO, link-building, newsletters and other ways of promoting our domain and brand instead of embracing the idea of the web.

The web is an interlinked structure of data – media, documents and URL endpoints. By spreading our content all over it we make our own domain less important, but we also weave ourselves into the fabric of the web.

There is a different approach to web development. About two years ago I wrote this book, in which I explained that you can build an easy-to-maintain and successful web site without needing to know much about programming. To me, the book's lack of success is down to its title, which is far too convoluted: "Web Development Solutions: Ajax, APIs, Libraries, and Hosted Services Made Easy" was originally meant to be "No Bullshit Web Design".

The trick that I explained in the book and want to re-iterate here is the following: instead of trying to bring the web to our site we are much better off bringing our site to the web.

At the core of the site should be a CMS, and it doesn't really matter which one. This could be as simple as a blogging system like WordPress or as complex as a full-blown enterprise-level system like Vignette, Tridion or RedDot. The main requirement is that it is modular and lets us write our own extensions to retrieve data from the web – see the sketch below.
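Such an extension could, for instance, be a WordPress shortcode. The following is a hypothetical illustration (the shortcode API did exist in WordPress at the time; the idea of printing the raw JSON answer of a public YQL query is my assumption, not a recipe from the talk):

// Hypothetical WordPress extension: [yql query="..."] prints the raw
// JSON answer of a public YQL query inside a post.
function yql_shortcode($atts) {
  $a = shortcode_atts(array('query' => ''), $atts);
  $url = 'http://query.yahooapis.com/v1/public/yql?q=' .
         urlencode($a['query']) . '&format=json';
  return '<pre>' . esc_html(file_get_contents($url)) . '</pre>';
}
add_shortcode('yql', 'yql_shortcode');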

The next step is to spread our content on the web:

The benefits of this approach are the following:

  • The data is distributed over multiple servers – even if your own web site is offline (for example for maintenance) the data lives on
  • You reach users and tap into communities that would never have ended up on your web site.
  • You get tags and comments about your content from these sites. These can become keywords and guidelines for you to write very relevant copy on your main site in the future. You know what people want to hear about rather than guessing it.
  • Comments on these sites also mean you open a natural channel of communication with web users, instead of sending them to a complex contact form.
  • You don’t need to worry about converting image or video materials into web formats – the sites that were built exactly for that purpose automatically do that for you.
  • You allow other people to embed your content into their products and can thus piggy-back on their success and integrity.

If you want to know more about this approach, check out the Developer Evangelism Handbook, where I cover it in detail in the "Using the (social) web" chapter.

APIs are the key

The main key to this kind of development is the Application Programming Interface, or API for short. Using APIs you get programmatic access to the content of the API provider. There are hundreds of APIs available, and one site that lists them is Programmable Web.

Using an API can be as easy as opening an address like http://search.twitter.com/trends/current.json in a browser. In this case this will get you the currently trending topics on Twitter in JSON format.
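For instance, a few lines of PHP are enough to read that feed. A minimal sketch – it assumes allow_url_fopen is enabled and the response shape Twitter used at the time, with trend lists grouped under a timestamp key:

// Fetch the current Twitter trends as JSON and decode them.
$json = file_get_contents('http://search.twitter.com/trends/current.json');
$data = json_decode($json);
// $data->trends has one property per timestamp, each an array of topics.
foreach ($data->trends as $topics) {
  foreach ($topics as $topic) {
    echo $topic->name . "\n";
  }
}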

API issues

Of course there are also problems with APIs. The main one is inconsistency: each API has its own way of authenticating, needs different input parameters and has different output formats and structures. That means you have to spend a lot of time reading API documentation or – if none exists, which happens a lot – resort to trial-and-error development. The other big problem is that a lot of providers underestimate the performance the API needs and the amount of traffic it will have to deal with. Therefore you will find APIs being unavailable or constantly changing to work around traffic issues.

No need for rock stars

Whilst development using third-party APIs used to be the exclusive skill set of experts, this is very much over. Newer products and meta-APIs allow everyone to simply put together a product using several APIs. There is no longer any need to call in a "rock star developer".

YQL - making it really easy

YQL is a meta API that allows you to mix, match, convert and filter API output in a very simple format:

select {what} from {where} where {condition(s)}
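For example, this query against one of the built-in tables finds photos matching a search term (flickr.photos.search is a standard YQL data table; limit caps the number of results):

select * from flickr.photos.search where text="frankfurt" limit 5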

The easiest way to start playing with YQL is the console. This is a simulation of a call to the YQL web service: you enter your query, define the output format (XML, JSON, JSON-P or JSON-P-X), run it and see the results either as the raw data format or as a tree to drill into. If you're happy with the result you can copy the URL and use it either in a browser or in your script.
You also get a list of your recent queries, some demo queries to get you going and a list of all the available data tables. Data tables are the definitions that point to the third-party APIs; they come with demo queries and a description that tells you which parameters are expected to make the request work.
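The copied URL has this shape – an illustrative example, with the query URL-encoded and the format parameter appended:

http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20flickr.photos.search%20where%20text%3D%22frankfurt%22&format=json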

For example: Frankfurt

As an example, let’s build an interface that shows information about Frankfurt.

The main piece of code that you need is a function that uses cURL to get data from the web:

// Generic helper: retrieve the contents of a URL with cURL.
function getstuff($url){
  $curl_handle = curl_init();
  curl_setopt($curl_handle, CURLOPT_URL, $url);
  // Give up if connecting takes longer than two seconds.
  curl_setopt($curl_handle, CURLOPT_CONNECTTIMEOUT, 2);
  // Return the data instead of printing it.
  curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
  $buffer = curl_exec($curl_handle);
  curl_close($curl_handle);
  if (empty($buffer)){
    return 'Error retrieving data, please try later.';
  } else {
    return $buffer;
  }
}

Then you can get a description of Frankfurt from Wikipedia via YQL and the HTML table:

$root = 'http://query.yahooapis.com/v1/public/yql?q=';
$city = 'Frankfurt';
$loc = 'Frankfurt';

$yql = 'select * from html where url="http://en.wikipedia.org/wiki/' .
       $city . '" and xpath="//div[@id=\'bodyContent\']/p" limit 3';
$url = $root . urlencode($yql) . '&format=xml';
$info = getstuff($url);
// Strip the XML prolog and the query/results wrapper YQL puts around the
// returned HTML, and make relative wiki links absolute. The original
// regular expressions were mangled in publishing, so this is a close
// reconstruction of their intent.
$info = preg_replace('/<\?xml[^>]*\?>/', '', $info);
$info = preg_replace('/<\/?(query|results)[^>]*>/', '', $info);
$info = str_replace('"/wiki/', '"http://en.wikipedia.org/wiki/', $info);

Newest events from upcoming:

$yql = 'select * from upcoming.events.bestinplace(4) where woeid in ' .
       '(select woeid from geo.places where text="' . $loc . '") ' .
       '| unique(field="description")';
$url = $root . urlencode($yql) . '&format=json';
$events = getstuff($url);
$events = json_decode($events);
$evHTML = '';
foreach($events->query->results->event as $e){
  // The list markup inside this loop was lost in publishing; this is a
  // reconstruction that links each event to its Upcoming page.
  $evHTML .= '<li><a href="http://upcoming.yahoo.com/event/' . $e->id . '">' .
             $e->name . '</a></li>';
}

The latest Frankfurt photos from Flickr:

$yql = 'select * from flickr.photos.info where photo_id in ' .
       '(select id from flickr.photos.search where woe_id in ' .
       '(select woeid from geo.places where text="' . $loc . '") ' .
       'and license=6) limit 16';
$url = $root . urlencode($yql) . '&format=json';
$photos = getstuff($url);
$photos = json_decode($photos);
$phHTML = '';
foreach($photos->query->results->photo as $s){
  $src = "http://farm{$s->farm}.static.flickr.com/{$s->server}/" .
         "{$s->id}_{$s->secret}_s.jpg";
  // Image markup reconstructed the same way.
  $phHTML .= '<li><img alt="' . $s->title . '" src="' . $src . '"></li>';
}

And the weather forecast from Yahoo Weather:

$yql = 'select description from rss where ' .
       'url="http://weather.yahooapis.com/forecastrss?p=GMXX0040&u=c"';
$url = $root . urlencode($yql) . '&format=json';
$weather = getstuff($url);
$weather = json_decode($weather);
$weHTML = $weather->query->results->item->description;
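What remains is printing the four HTML fragments into the page. A minimal sketch – the surrounding markup is my illustration, not part of the original talk:

// Hypothetical page assembly: echo the fragments built above.
echo '<h1>' . $city . '</h1>';
echo '<div id="info">' . $info . '</div>';
echo '<h2>Events</h2><ul>' . $evHTML . '</ul>';
echo '<h2>Photos</h2><ul>' . $phHTML . '</ul>';
echo '<h2>Weather</h2>' . $weHTML;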

Kobayashi Maru

Kobayashi Maru is a fictional test that cadets of Starfleet Academy in Star Trek have to pass in order to get their first commission. The interesting part about the test is that it cannot be solved: its purpose is to confront people with the idea of failure and death and see how they cope with it. The only person ever to pass the test is James Tiberius Kirk, because a) he is the definition of awesome and b) he cheated by modifying the computer program.

YQL can be used the same way to create an API where none exists – for example by scraping the headlines of a newspaper web site using the HTML table and an XPath:

select * from html where url="http://faz.de" and xpath="//h2"

See it here or try it in the console.
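Consumed from PHP, that could look like the following sketch, reusing the getstuff() helper from above (the exact shape of each returned h2 node depends on the scraped markup, hence the type check):

$yql = 'select * from html where url="http://faz.de" and xpath="//h2"';
$url = $root . urlencode($yql) . '&format=json';
$headlines = json_decode(getstuff($url));
foreach ($headlines->query->results->h2 as $h) {
  // A headline comes back as a plain string or as an object,
  // depending on the markup inside the h2 element.
  echo (is_string($h) ? $h : $h->content) . "\n";
}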

You can then go even further and translate the headlines using Google's translation API:

select * from google.translate where q in (select a from html where url="http://faz.de" and xpath="//h2") and target="en";

See it here or try it in the console.

You can also use an API to filter cleverly and get information that is normally not readily available – for example all Twitter updates from two different users, but only when they posted a link:

select title from twitter.user.timeline where title like "%@%" and id="codepo8" or id="ydn"

See it here or try it in the console.

Benefits of using YQL

YQL gives you a lot of flexibility when it comes to remixing the web and filtering the results. You can:

• mix and match APIs
• filter results
• simplify authentication
• use it in the console or from code
• get going with minimal documentation research
• benefit from cached results
• have requests proxied on Yahoo's servers

Join the web of data!

Using YQL you can not only read and mix API data but also make your own data available to the world. By defining a simple XML schema as an Open Table you give YQL access to your API endpoint. The really useful part is that YQL limits outside access to 100,000 hits a day and 1,000 hits an hour and caches your data for you. Thus the world can use your data without you having to buy your own server farm.
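A minimal Open Table definition could look like this sketch – the endpoint, parameter name and paths are made-up placeholders, so treat it as an outline of the format rather than a drop-in file:

<?xml version="1.0" encoding="UTF-8"?>
<table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
  <meta>
    <author>Your name</author>
    <description>Example table wrapping a hypothetical search API</description>
    <documentationURL>http://example.com/api/docs</documentationURL>
  </meta>
  <bindings>
    <select itemPath="" produces="XML">
      <urls>
        <!-- {q} gets replaced with the value from the where clause -->
        <url>http://example.com/api/search?query={q}</url>
      </urls>
      <inputs>
        <key id="q" type="xs:string" paramType="query" required="true"/>
      </inputs>
    </select>
  </bindings>
</table>

Once the file is hosted somewhere public you can pull it into a query with the use statement, for example: use "http://example.com/mytable.xml" as example; select * from example where q="test".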

Thanks

I hope I got you interested in YQL – now it is up to you to have a go at using it!

Die Reise zum Mittelpunkt der Daten ("Journey to the centre of the data") – talk at the Webmontag in Frankfurt

Monday, August 10th, 2009

A few weeks ago Darren Cooper asked me whether I could give a short talk about APIs at today's Web Montag in Frankfurt. Here are the slides and the notes for the talk.

Notes

Journey to the centre of the data

Web development has changed drastically in the last few years. Sadly, word of the possibilities we have today has not really got around yet.

Villabajo and Villariba web design

For those who remember: Villabajo and Villariba were the fictional villages in a Fairy washing-up liquid commercial. After a paella party, Villabajo had to spend its time scrubbing pans afterwards while Villariba, having used Fairy, was already celebrating again.

The same thing happens in web development. While many people still (mentally) scrub away at their CSS solutions and can spend hours getting worked up about whether XHTML is dead or not, you can quickly and easily build web solutions that are easy to change and always up to date.

Decentralising the data

The biggest misunderstanding in web design is that we still believe we build something and then the customers will come. The web site is the most important thing, and we spend most of our time creating it, redesigning it and then paying an SEO specialist to please find us some end users.

But there is another way. A few years ago I published this book. It describes how you can use the net and its offerings to build web sites very easily. Sadly the big success never materialised, probably because of the title: "Web Development Solutions: Ajax, APIs, Libraries, and Hosted Services Made Easy" is somewhat longer than "No Bullshit Web Design".

The trick I explained in the book and want to bring up again here is the following: instead of bringing the web to your site, you can also bring your site to the web.

At the centre is a CMS, which can be a simple blog system (WordPress, ExpressionEngine, whatever…).
Then you can distribute the data you need for the site across the web:

The benefits of this approach are the following:

• The data is distributed across servers, which means your own site can be offline every now and then.
• You reach users of those sites who would never have come to your own site.
• You get comments and tags from those sites, which means you start a conversation and gain meaningful keywords for SEO.
• You do not have to convert content (images, videos) into web formats yourself.
• You allow visitors and readers to embed the data into their own sites.

If you want to read up on why it makes sense to take this route, have a look at my Developer Evangelism Handbook, where it is explained again in detail.

APIs are the key

The key to all this data are Application Programming Interfaces, or APIs for short. These are programs offered by companies that allow data access to the content of their web sites.

This can be as simple as opening the address http://search.twitter.com/trends/current.json in a browser and getting the latest Twitter trends as a JSON file.

Problems with APIs

The main problem with APIs, however, is that every company designs its API differently. Different access methods, input parameters, output formats and missing documentation make it hard to use several APIs in one product. Another problem is that an API may simply be unreachable, since not every company has the resources and the capital to keep a dedicated server running.

No need for rock stars

While the use of APIs was still a specialist field a few years ago, nowadays you do not need to be a "rock star" to mix data from the net with ease.

Easier with YQL

YQL is a meta-API that lets you use other APIs in a simple format:

select {what} from {where} where {conditions}

In the YQL console you can enter this query into the input field and choose whether you get XML or JSON as the return format. At the end you get a web address that you can use immediately in a browser or a program. The resulting data can be displayed either as formatted data or as a tree. Your last few queries as well as example queries are shown, and the list of data tables shows all the APIs that can be reached through YQL. Each of these tables comes with a description.

Example: Frankfurt

Let's take as an example a page that is supposed to display information about Frankfurt.

The main trick is to use cURL to fetch data from YQL and process it further:

function getstuff($url){
  $curl_handle = curl_init();
  curl_setopt($curl_handle, CURLOPT_URL, $url);
  curl_setopt($curl_handle, CURLOPT_CONNECTTIMEOUT, 2);
  curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
  $buffer = curl_exec($curl_handle);
  curl_close($curl_handle);
  if (empty($buffer)){
    return 'Error retrieving data, please try later.';
  } else {
    return $buffer;
  }
}

Then you can, for example, pull the description of Frankfurt from Wikipedia:

$root = 'http://query.yahooapis.com/v1/public/yql?q=';
$city = 'Frankfurt';
$loc = 'Frankfurt';

$yql = 'select * from html where url="http://en.wikipedia.org/wiki/' .
       $city . '" and xpath="//div[@id=\'bodyContent\']/p" limit 3';
$url = $root . urlencode($yql) . '&format=xml';
$info = getstuff($url);
// Strip the XML prolog and the query/results wrapper and make relative
// wiki links absolute. The original regular expressions were mangled in
// publishing, so this is a close reconstruction of their intent.
$info = preg_replace('/<\?xml[^>]*\?>/', '', $info);
$info = preg_replace('/<\/?(query|results)[^>]*>/', '', $info);
$info = str_replace('"/wiki/', '"http://en.wikipedia.org/wiki/', $info);

Newest events from Upcoming:

$yql = 'select * from upcoming.events.bestinplace(4) where woeid in ' .
       '(select woeid from geo.places where text="' . $loc . '") ' .
       '| unique(field="description")';
$url = $root . urlencode($yql) . '&format=json';
$events = getstuff($url);
$events = json_decode($events);
$evHTML = '';
foreach($events->query->results->event as $e){
  // The list markup inside this loop was lost in publishing; this is a
  // reconstruction that links each event to its Upcoming page.
  $evHTML .= '<li><a href="http://upcoming.yahoo.com/event/' . $e->id . '">' .
             $e->name . '</a></li>';
}

The latest Frankfurt photos from Flickr:

$yql = 'select * from flickr.photos.info where photo_id in ' .
       '(select id from flickr.photos.search where woe_id in ' .
       '(select woeid from geo.places where text="' . $loc . '") ' .
       'and license=6) limit 16';
$url = $root . urlencode($yql) . '&format=json';
$photos = getstuff($url);
$photos = json_decode($photos);
$phHTML = '';
foreach($photos->query->results->photo as $s){
  $src = "http://farm{$s->farm}.static.flickr.com/{$s->server}/" .
         "{$s->id}_{$s->secret}_s.jpg";
  // Image markup reconstructed the same way.
  $phHTML .= '<li><img alt="' . $s->title . '" src="' . $src . '"></li>';
}

And the weather from Yahoo:

$yql = 'select description from rss where ' .
       'url="http://weather.yahooapis.com/forecastrss?p=GMXX0040&u=c"';
$url = $root . urlencode($yql) . '&format=json';
$weather = getstuff($url);
$weather = json_decode($weather);
$weHTML = $weather->query->results->item->description;

Kobayashi Maru

Kobayashi Maru is a fictional test in Star Trek that is impossible to beat. Captain Kirk, however, managed to pass it by manipulating the test computer. In the same way you can use YQL to create APIs where none exist – for example by reading HTML data from a page:

select * from html where url="http://faz.de" and xpath="//h2"

Try it here or view it in the console.

And then translate the headlines into English:

select * from google.translate where q in (select a from html where url="http://faz.de" and xpath="//h2") and target="en";

Try it here or view it in the console.

You can also mix several data sources to solve problems for which there were previously no solutions:

select title from twitter.user.timeline where title like "%@%" and id="codepo8" or id="ydn"

Give me Twitter updates in which either the user "codepo8" or "ydn" tweeted a link.

Try it here or view it in the console.

Benefits of YQL

YQL makes it very easy to use APIs to build your own solutions:

• mix APIs with ease
• filter results
• simple API authentication
• use it as a console or from code
• documentation included
• Yahoo's servers as proxy and access point

Off into the web of data

YQL not only lets you read data but also lets you offer your own. Using Open Tables you can offer your own data as a table in YQL. YQL thereby makes it easy for end users to reach the data and acts as a firewall, since only 100,000 requests per day and 1,000 requests per hour are allowed.

Thanks

I hope I have sparked your interest in YQL and can only recommend simply giving it a go.

That is all we have time for. I will be around tonight and happy to keep answering questions.

Young rewired state in London on the 22nd of August – hacking for teens!

Friday, August 7th, 2009

I just got pinged by Dan Morris of the BBC on Facebook (yeah, it is up again) about a really cool event in London on the 22nd of August:

YOUNG REWIRED STATE
Hack the Government, 22-23 August @ Google.

THE SKINNY

Young Rewired State is a free two-day hackday where young people work in
small teams to create software hacks using open Government data.

Government will be there, listening to what you have to say and
show. Groups will present at the end of the event to press, web &
business judges, and representatives from Government – there will be
prizes!

WHEN & WHERE

22-23 August at the Google offices in London.

If you want, you can bring a parent along with you; we promise
to keep them busy while you hack. Support is available for travel and
an overnight stay if you need it.

The date is getting close now, so please sign up now if you're
interested in attending!

HOW GEEKY?

We're expecting a mix of technical ability, so chances are you'll fit
in just fine. So, whether you just dabble with HTML or dream in
machine code, this event is definitely for you!

We have the largest set of open Government data going, all available
for you to use. Experienced hacker mentors will be assigned to groups
and can help as much or as little as you need.

INFO & SIGNUP

http://rewiredstate.org/young

Pass it on!

Chris

Sunnyvale, Frankfurt, Toronto…

Thursday, August 6th, 2009

In case you wondered why there is a bit of a lull from me on the internets right now (even before the Twitter DDoS), I am currently in Sunnyvale, California, where I attended the iPhoneDevCamp and have now got bogged down in team meetings and internal training. I feel much less effective in the US because a) I get around by car rather than by public transport, where I could use my laptop, and b) cubicles stop people from talking to each other – I hate those things.

I am flying back across the pond tomorrow to go to Frankfurt, attend a Web Brunch and speak about mashups and YQL at the WebMontag. I then get a day in London to switch suitcases before flying off to Toronto, Canada to speak at Domain Convergence.

There are some more things in the making, so bear with me :)

Chris