Christian Heilmann


Archive for June, 2009

Accessibility and You – my brownbag presentation at ebay/gumtree/paypal

Friday, June 5th, 2009

Today I am going to Richmond in South London to talk to the teams of Gumtree, Paypal, shopping.com and Skype (I think) about accessibility and open web development, followed by a long Q&A session in the afternoon.

Here are the slides of the talk, which they will also record as a video.

Notes

Access

There are an amazing number of misconceptions about accessibility. At its most basic, it means that we don’t shut people out because of conditions they cannot readily change.

Availability

This is different from availability, which is an absolute must for anything we offer customers. We need our web products to be up and responsive, or to have a good call center to fill in when they are down. The whole idea of having a web product came about so that we can be available 24/7 and on the cheap.

Lack of barriers

The German term for accessibility is “Barrierefreiheit”, which means “lack of barriers”. This actually makes more sense. It doesn’t mean, however, that the German market is better or more switched on. It could be, but other things keep it from being really effective.

Law and Orders

The issue is that a lot of accessibility work is done to comply with legal requirements. This is a wonderful “covering our arse” tactic, but it will not result in good, accessible solutions. Fear of lawsuits or prosecution makes you creative in avoiding them, but it also distracts you from the goal of producing something that makes sense for all the users out there. We are not here to comply with laws, we’re here to make things work for people.

People

People is what our work should be about. How can we make people happy with what we provide? How can we make sure that everybody has a very good time using our products, comes back for more and tells all their friends about it?

Myths

The main problem is that instead of keeping our eyes open and fixed on the future, we are very quick to believe in accessibility myths – most of the time because they sound like a quick solution to a large and complex issue.

Disconnect

The next issue is that the world of accessibility and the world of web development are terribly disconnected. I get the feeling that around 1999 the accessibility world stopped seeing the web and its technologies as something that evolves. The web development world (or at least its loudest advocates), on the other hand, is fed up with people not staying up to date and starts yelling for the abandonment of technology that is still very much in use.

Irony

Internet Explorer 6 is the bane of the existence of every web developer out there. The reason is that people do not upgrade it because it does the job. I call this the good enough syndrome. For very outspoken advocates of accessibility, IE6 on Windows is also the only browser which supports assistive technology to the fullest. The reason is that monoculture allows you to build things once and then patch instead of evolving your product. Assistive technology is a very expensive piece of kit and the market is scared of losing that source of income.

Closed Doors

Innovation on the web is driven by being open. Show your software to the world and people help you find and fix bugs. People also tell you about issues they encountered that you hadn’t thought of. Open your data to the world and people show you more effective ways of using it or how mixing it with other data sources can tell stories hidden in your information. The accessibility world doesn’t work like that yet. The reason is once again that most clients want to know about legal compliance rather than really caring about the end result.

Disability

One other problem is that accessibility is always connected with disability. Disability is a topic that makes us feel uneasy to talk about or acknowledge. It is also a tricky subject because of language differences – it is easy to use a non-PC term without meaning to.
I’ve found that people like to deal with it by targeting single instances that are easy to grasp. How does a blind user deal with that? Cool, let’s fix it for him. Disability is much larger than that and comes in numerous levels of severity.

Market Shift

One thing that makes me very happy to see is that the internet user market is shifting. The biggest and fastest-growing group of internet users is one you would not expect. These are the results of a survey by the National Organization on Disability in 2001:

The highest level of discretionary income in the US is held by older Americans, especially those between 64-69, at $6,920.00 per year. The age group with the highest concentration of online buyers is the 50-64 age segment, with over 25% making online purchases.
The fastest growing segment of the U.S. population is the 65 and over group. The U.S. Census Bureau projects that the population of those 65 and over will more than double between now and the year 2050, to 80 million. The result of all this – a large and rapidly expanding market of web users that have significant disposable income and a need for accessible web sites.

Different needs

As we age, most of us experience a decrease in vision, hearing, physical abilities, and cognitive abilities. The percentage of people with disabilities increases significantly with age – 13.6% at age 18-44, 30% at 45-64, 46% at 65-74, and 64% at 75-84. Use of assistive technology also increases with age, with 52% of AT devices used by those 65 and over.

Silver Surfers

With this shift we have to reconsider the approach of our web products. Elderly people have different needs than the young go-getters that we are. Right?

Same Needs

Elderly and disabled people do not have different needs than other people, all they have is a more obvious need for the same things.

Simplicity

The first thing to think about is keeping things simple. Build working solutions with the technologies at your disposal and enhance them iteratively after checking that the enhancement can be applied.
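This test-before-you-enhance approach can be sketched in JavaScript. The `enhance` helper and the widget example below are illustrative inventions, not code from the talk – the point is only that an enhancement ships with a test that checks everything it needs before it gets applied:

```javascript
// Progressive enhancement sketch (hypothetical helper, not from the talk):
// an enhancement carries a test() for everything it needs and is only
// applied when that test passes.
function enhance(element, enhancement) {
  if (typeof enhancement.test === 'function' && enhancement.test()) {
    enhancement.apply(element);
    return true;
  }
  // otherwise the basic, working solution stays untouched
  return false;
}

// example: only upgrade a plain link to an inline widget when
// the DOM APIs the upgrade needs actually exist
var inlineWidget = {
  test: function () {
    return typeof document !== 'undefined' &&
           typeof document.createElement === 'function';
  },
  apply: function (el) {
    el.enhanced = true; // stand-in for the real DOM work
  }
};
```

In a browser the test passes and the link gets upgraded; in anything that fails the test, the basic working version stays as it is.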

Usable interfaces

It is interesting to see how many times we get this wrong. Instead of sending users down a path that leads them to the first sensible result, and thus giving them a positive learning experience, we overload them with information and hope they pick the right piece. Marketing and internal policies dictate what goes into the first thing we show end users. Developers fail in the same way: being power users ourselves, we tend to pack in feature after feature instead of making the interface a journey through sensible pieces of information.

One company had this down like nobody else: Nintendo. With the Wii they broke all the barriers of conventional gaming ideas. Instead of learning a complex system of buttons and levers, all you need to do is move the controller as you would act out the game in real life. This breaks boundaries and barriers. Elderly people have no problem with the Wii – on the contrary, Wiis are in use in rehab centers.

Multi channel access

One thing that people keep forgetting is that the internet is not a single-access-channel medium.
The web can be accessed in various ways: from a computer, a games console, a mobile device, a TV set, a kiosk system and many more channels. This means that you cannot deliver a one-size-fits-all solution – instead you should concentrate on not blocking any of these access channels.

A jolly good time

My main bugbear with web sites and products is that people think that creating accessible products means making them less pretty and cutting back on features. The scary thing is that the expert sites do give that impression.

Making it sexy

You can, however, build something that is both sexy and accessible if you put enough effort into it. Take for example the Yahoo Currency Converter, which makes my life much easier and is highly accessible. It took some effort and dedication to get it that far, though.

Endpoints

If you look at the currency converter you’ll also realize that the URL is something I can send to somebody in a link, and it will make sense to that person immediately. This way I can publish and promote my product to a much larger audience. I can even let people send around different versions of the same product, catered to different needs, via URL parameters.

Bad URLs don’t look like a big problem, but they are extra effort that is not needed.
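The idea of catering one product to different needs via its address is cheap to build. Here is a hypothetical sketch – the converter URL and parameter names are made up for illustration:

```javascript
// Build an address for one specific view of a product so that exact
// variant can be sent around as a link. (Hypothetical endpoint and
// parameter names.)
function buildEndpoint(base, params) {
  var pairs = [];
  for (var key in params) {
    if (Object.prototype.hasOwnProperty.call(params, key)) {
      pairs.push(encodeURIComponent(key) + '=' +
                 encodeURIComponent(params[key]));
    }
  }
  return pairs.length ? base + '?' + pairs.join('&') : base;
}

// the same converter, preconfigured for one particular conversion
var link = buildEndpoint('http://example.com/convert',
                         { amount: 100, from: 'GBP', to: 'USD' });
// → 'http://example.com/convert?amount=100&from=GBP&to=USD'
```

Every variant of the product now has an address that can be bookmarked, mailed around or promoted on its own.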

Pimping

That’s all well and good, but what if you already have a solution in place? In most cases these days we have a massive system already that is hard or impossible to change.

Hackable

As Tristan Nitot of Mozilla puts it – the web is hackable:

I mean “hackable” in the sense that one can decide to experience it in ways that were not exactly what the author decided it would be. In short, the Web is not TV. It’s not PDF either. Nor Flash.

Using systems like Mozilla Jetpack, YQL, Greasemonkey and Pipes I can easily prototype changes that should be made to web sites to make them more accessible. These are simple things like injecting language attributes or labels. A lot of HTML problems are in web sites simply because the maintainers are not aware of the barriers they cause.
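A user-script prototype of the missing-label fix could look like this sketch. The document is passed in, and the field id and label text are made up for illustration:

```javascript
// Inject a missing label for a form field, Greasemonkey-style.
// doc is the page's document; returns true when a label was added.
function labelField(doc, id, text) {
  var field = doc.getElementById(id);
  // nothing to do if the field is missing or already labelled
  if (!field || doc.querySelector('label[for="' + id + '"]')) {
    return false;
  }
  var label = doc.createElement('label');
  label.setAttribute('for', id);
  label.appendChild(doc.createTextNode(text));
  field.parentNode.insertBefore(label, field);
  return true;
}
```

Run over every unlabelled input on a page, this shows the site owner what the fix looks like – the real repair still belongs in the site’s own markup.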

Mashable

If you really want help, open your data to the world and let the millions of developers out there show you how things can be fixed. Praise them, invite them and support them, and we all win. Build your own API or, if you don’t want to go that far, build some open tables for YQL.

Scripting Enabled was the first accessibility hack event and is completely open for you to organize, too.

Styleable

Techies are easy to reach but to build a beautiful and accessible web we also need to reach out more to the designer world. Accessibility is not an enemy of beauty, there is a lot of interesting and creative challenge and gain in making things work across the board.

Educating

I hope that you now have a better idea just how many options you have to make the web a more accessible place. Our training and teaching on the subject should be closer to people’s needs rather than technical implementation of guidelines. Let’s talk to HR about this.

Reward

The reward for building an accessible web is not only monetary. First and foremost we make the media we work in reach a lot more people, all of which can contribute to the web but are stopped before they can even consider it. Check these videos to see just how much more empowered people feel if they get an interface they can work with.

Dedication

To make this work for you, all you need to do is put some dedication into the subject. A company can be very accessible, but everybody in it has to understand what is going on.

Collaborating

Together we are stronger. If more and more companies show that they understand the need for accessibility and want to do the right thing, and the accessibility world gives them the backing they currently lack, we can fix the web.

How I built icant.co.uk – source code

Wednesday, June 3rd, 2009

After my talk at FOWA in Cambridge yesterday I showed off that http://icant.co.uk is fully driven by YUI and YQL and maintained elsewhere. I recorded a screencast about this earlier, which is also available for download as an M4V, but hadn’t released the code yet.

So here goes – this is the PHP source of icant.co.uk (without the HTML, which is more or less done with the YUI grids builder):


// get all the feeds to grab the data
$feeds = array(
  'http://feeds2.feedburner.com/wait-till-i/gwZf',
  'http://feeds.delicious.com/v2/rss/codepo8/myvideos?count=15',
  'http://feeds.delicious.com/v2/rss/codepo8/sandbox',
  'http://feeds.delicious.com/v2/rss/codepo8/icantarticles',
  'http://www.slideshare.net/rss/user/cheilmann'
);

// assemble the YQL statement
$yql = 'select meta.views,content.thumbnail,content.description,title,'.
       'link,description from rss where url in ';
$yql .= "('" . join("','", $feeds) . "')";

// assemble the request url
$root = 'http://query.yahooapis.com/v1/public/yql?q=';
$url = $root . urlencode($yql) . '&format=json';

// get the feeds and populate the data to echo out in the HTML
$feeds = renderFeeds($url);
$blog = $feeds['blog'];
$videos = $feeds['videos'];
$articles = $feeds['articles'];
$presentations = $feeds['slides'];

// this function loads all the feeds and turns them into HTML
function renderFeeds($url){

  // grab the content from YQL via cURL
  $c = getStuff($url);

  // as the content comes back as JSON, turn it into PHP objects
  $x = json_decode($c);

  // reset counters for videos and presentations
  $count = 0;
  $vidcount = 0;

  // start new array to return
  $out = array();

  // loop over YQL results, if they exist
  if($x->query->results->item){
    foreach($x->query->results->item as $i){

      // if the link comes from the blog, add to the blog HTML
      if(strstr($i->link,'wait-till-i')){
        $out['blog'] .= '<li><a href="' . $i->link . '">' .
                        $i->title . '</a> ' .
                        html_entity_decode($i->description) .
                        '</li>';
      }

      // for interviews and articles, add to the articles section
      if(strstr($i->title,'Interview') ||
         strstr($i->title,'Article:')){
        $out['articles'] .= '<li><a href="' . $i->link . '">' .
                            $i->title . '</a></li>';
      }

      // (the videos and presentations get assembled the same way from
      // the delicious and slideshare feeds, with $vidcount and $count
      // limiting how many items get rendered)
    }
  }

  // send the assembled HTML back
  return $out;
}

Notice the diagnostics=false parameter in the next call – it makes sure that YQL doesn’t send a diagnostics part with the returned data.

// grab the books from my blog
$yql = 'select * from html where url='.
       '"http://wait-till-i.com/books/"'.
       ' and xpath="//div[@class=\'entry\']"';
$books = renderHTML($root.urlencode($yql).'&format=xml&diagnostics=false');

// this is a quick and dirty solution for the HTML output
function renderHTML($url){
  // pull the information from YQL
  $c = getStuff($url);
  // check that something came back
  if(strstr($c,'<')){
    // remove everything around the results element
    $c = preg_replace('/.*<results>|<\/results>.*/','',$c);
    $c = preg_replace('/<\?xml version="1.0" encoding="UTF-8"\?>/','',$c);
    // remove all comments
    $c = preg_replace('/<!--.*-->/','',$c);
  }
  // send it back
  return $c;
}

// a simple cURL function to get information
function getStuff($url){
  $curl_handle = curl_init();
  curl_setopt($curl_handle, CURLOPT_URL, $url);
  curl_setopt($curl_handle, CURLOPT_CONNECTTIMEOUT, 2);
  curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
  $buffer = curl_exec($curl_handle);
  curl_close($curl_handle);
  if (empty($buffer)){
    return 'Error retrieving data, please try later.';
  } else {
    return $buffer;
  }
}

Presentation: Remixing and distribution of web content made dead easy

Tuesday, June 2nd, 2009

My talk at the Future of Web Apps Tour 2009 about remixing the web of data with YQL. I’ll turn this into a slidecast once I am back.

Notes

Evolution

Today I will talk a bit about an evolution that we are all part of, although we might not be aware of it yet.

What is the web?

Whenever people asked me what my job is, I told them I am a web developer. Which raises the question of what it is that I actually develop.

Documents

The web as it stands is made up of documents. The technologies that run it – HTTP for transport and HTML for structure – haven’t changed much over the years. Linking documents was a revolution; it made the earth a much smaller place and allowed us to collaborate. However, it got boring quickly.

Things

The web of things has been a running theme for a while. Initially it meant that all kinds of devices can be connected to the web (the self-ordering fridge and suchlike). It also means that with RESTful web services we can point directly at the thing we want to reach – which could be a text, but also an image, a video or other rich content embedded in web sites.

Data

In essence, the web is data. Data can be anything that is available on the web or referred to. Data is what we look for, data is what we get. And there is much more than meets the eye.

Connected

By connecting different data sources we get even more information, and new data emerges. As humans we all learn differently, and having different data sets and various ways of connecting them makes it easier for us to grasp what the data can teach us.

Hunters and Gatherers

The issue is that we overshot the goal. We collect for the sake of collecting, and we spend much more time chasing the next big thing to collect than giving the things we already have some love by tagging and describing them. As humans we are hard-wired to find things and collect them. It also means that we always want to do everything ourselves and not rely on others. In essence, we have collected a solid mass of data and now we don’t know how to plough through it anymore. This is why we try to use technology to clean up the mess for us by injecting landmarks and machine-readable information.

Let’s take this sentence for example. There is much more in there than meets the eye.

My name is Chris. I am a German living in London, England and one of my favourite places to go is Hong Kong. I also enjoyed Brazil a lot.

By using a geolocation service I can analyze the text and add extra information that makes it easy for other systems to understand this sentence. That way I can enrich the information.

My name is Chris. I am a German living in London, England (Name: London,England, GB, Type: Town, Latitude: 51.5063, Longitude: -0.12714) and one of my favourite places to go is Hong Kong. I also enjoyed Brazil (Name: Brazil, Type: Country, Latitude: -14.2429, Longitude: -54.3878) a lot.
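The enrichment step itself can be sketched like this. Here the gazetteer is a hard-coded stub holding only the two places from the sentence above – a real geolocation service would resolve any place name for you:

```javascript
// Stub gazetteer with the details from the example above;
// a real service would look these up for any place name.
var places = {
  'London': { type: 'Town', latitude: 51.5063, longitude: -0.12714 },
  'Brazil': { type: 'Country', latitude: -14.2429, longitude: -54.3878 }
};

// Annotate every known place name with its machine-readable details.
function enrich(text) {
  return text.replace(/London|Brazil/g, function (name) {
    var p = places[name];
    return name + ' (Type: ' + p.type +
           ', Latitude: ' + p.latitude +
           ', Longitude: ' + p.longitude + ')';
  });
}
```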

This makes the data much easier to grasp and gives all of us a richer experience. The question is how we can do this easily.

APIs

APIs are the web data publishers’ way of giving us access to their data. There are hundreds out there, and each of them is different. Which leads to another problem.

Language

Each API uses its own language, way of authenticating, data entry vocabulary and return format. You are lucky if you find good documentation, and many examples are hard to grasp because they are not available in the programming language you would like to work in.

Documentation can be confusing. And in most cases you don’t really want to have to dig that deep into an API just to get some information.

Simplicity

What we need is a simple way to access all these wonderful APIs and mix and match their content.

YQL

The Yahoo Query Language (or YQL for short) is a SQL-style language for the data web. Using the YQL console you can easily build even the most complex queries and get them ready for copy and paste.

You simply enter your statement in the appropriate box and try it out. You can choose to get back XML or JSON, and for JSON you can define a JavaScript callback to use the result as JSON-P.

Very important is the permalink link. Click this every time you run a complex query – if you reload the page by accident, the query will still be available to you.

The REST query is a URL ready to copy and paste into a browser or your own script.
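You can also build that REST URL yourself – it is one line of encoding against the public endpoint used throughout this post. The callback parameter is only needed when you want JSON-P:

```javascript
// Turn a YQL statement into a REST URL for the public endpoint.
function yqlUrl(statement, format, callback) {
  var url = 'http://query.yahooapis.com/v1/public/yql?q=' +
            encodeURIComponent(statement) + '&format=' + format;
  if (callback) {
    // for JSON-P: YQL wraps the JSON in a call to this function
    url += '&callback=' + encodeURIComponent(callback);
  }
  return url;
}

var url = yqlUrl('select title from rss where url="http://example.com/feed"',
                 'json', 'handleData');
```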

The formatted view shows you the XML or JSON; the tree view allows you to drill down into the returned information.

Recent queries are stored, and example queries show you how it is done.

The data tables show all the available data sources. Each table comes with its own description.

What can you do with this?

Say you want to find events in Cambridge. You can query upcoming.org. Sadly enough (and because of people entering bad data) this will not give you anything useful but will return results from London instead!

select * from upcoming.events where location = "cambridge,uk"

By using the geo.places API you can define Cambridge beyond doubt (as a woeid) and then get the events.

select * from upcoming.events where woeid in
(select woeid from geo.places where text="Cambridge,UK")

The diagnostics part of the resulting data set tells you which URLs were called “under the hood” and how long it took to get them.

The results section has all the events, but far too much data for each of them. Say you only want the url, the title, the venue and the description.

You can select only the parts that you want:

select title,url,venue_name,description from upcoming.events
where woeid in (select woeid from geo.places where text="cambridge,uk")

Which cuts down nicely on the resulting data.

You can get my latest updates from Twitter…

select title from twitter.user.timeline where id="codepo8"

Or only those where I replied to somebody…

select title from twitter.user.timeline where id="codepo8" and title like "%@%"

Or check several accounts!

select title from twitter.user.timeline where id="codepo8" or id="ydn" and title like "%@%"

You could also check my tweets for useful keywords:

select * from search.termextract where context in
(select title from twitter.user.timeline where id="codepo8")

You can scrape the BBC’s news site for links:

select * from html where url="http://news.bbc.co.uk" and xpath="//td[2]//a"

Or get all the alternative text of their news images:

select alt from html where url="http://news.bbc.co.uk" and xpath="//td[2]//a/img[@alt]"

And get better photos from flickr…

select * from flickr.photos.search where text in
(select alt from html where url="http://news.bbc.co.uk" and xpath="//td[2]//a/img[@alt]")

Flexibility

• mix and match APIs
• filter results
• simplify authentication
• use in the console or from code
• minimal reading of documentation
• caching of results
• proxied on Yahoo’s servers

YQL gives you a lot of flexibility when it comes to remixing the web and filtering the results. However, there are some things that cannot be done with it that are possible in other systems, for example Yahoo Pipes.

Extending

YQL can be extended with Open Tables. These are simple XML documents that redirect YQL queries to your web service. That way you can be part of the simple YQL interface without needing to change your architecture. The other benefit is that YQL caches queries, thus hitting your servers less, and also limits every user to 1,000 calls to YQL per hour.
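A minimal open table can be sketched roughly like this – the table structure follows the open table schema as I remember it, but the web service URL, the `itemPath` and the `q` key are hypothetical, so treat the details as approximate:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
  <meta>
    <author>Your Name</author>
    <documentationURL>http://example.com/api/docs</documentationURL>
    <sampleQuery>select * from {table} where q="flats"</sampleQuery>
  </meta>
  <bindings>
    <select itemPath="results.listing" produces="XML">
      <urls>
        <!-- q gets appended as ?q=... because of paramType="query" -->
        <url>http://example.com/api/search</url>
      </urls>
      <inputs>
        <key id="q" type="xs:string" paramType="query" required="true"/>
      </inputs>
    </select>
  </bindings>
</table>
```

The where clause of a YQL query against this table gets mapped onto the declared keys, and YQL does the fetching, caching and format conversion for you.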

One of these is, for example, the real estate search engine Nestoria. Using their open table I can look for flats in Cambridge:

use "http://www.datatables.org/nestoria/nestoria.search.xml" as nestoria;
select * from nestoria where place_name="Cambridge"

Open tables can be added to a repository on GitHub, which makes them available to the YQL community.

Clicking the Show Community Tables link in the console adds all these third-party tables to the interface.

Re-Use

Instead of making the YQL language itself more complex, we also allow for YQL execute tables, which take the data from a query and let you write JavaScript with full E4X support to convert it to whatever you want before handing it back to the YQL engine.
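A sketch of what the execute part of such a table can look like – the `feed` key and the `titles` response element are made up for illustration, while `y.query` and `response.object` are the globals the execute environment provides:

```xml
<table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
  <bindings>
    <select itemPath="" produces="XML">
      <inputs>
        <!-- keys declared here arrive as JavaScript variables -->
        <key id="feed" type="xs:string" paramType="variable" required="true"/>
      </inputs>
      <execute><![CDATA[
        // run another YQL query from inside the table...
        var res = y.query('select title from rss where url="' + feed + '"');
        // ...and use an E4X literal to shape what goes back to the engine
        response.object = <titles>{res.results.*}</titles>;
      ]]></execute>
    </select>
  </bindings>
</table>
```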