Christian Heilmann

Author Archive

A winter of discontent in the web design world?

Wednesday, December 21st, 2011

There is a lot of discontent in our ranks lately. Almost every week there is something people get overly excited about and blame others for getting wrong. There are endless blog-comment and Twitter arguments, and they turn personal rather quickly. Or they get labeled as personal attacks when in reality there is not much to them (beware the misuse of “ad hominem” there). All in all it feels uncomfortable, and people stop saying anything for fear of being misunderstood.

And that pains me. It annoys me because I have spent almost my whole career as a web developer. It annoys me because our success so far is based on communication, publication and open discussion. It also annoys me because we are ungrateful for a job that is unique, incredibly liberating and creative.

  • We do what we like to do as a job – this is rare
  • We get away with much more than in any other job I ever had
  • We learn from one another, with lots and lots of free resources
  • Our market does not have enough people to meet the demand
  • Companies come to us to hire us

Instead of being happy and amazed about that, it seems to me that there is a constant craving for things to be unhappy about and complain about. Maybe this is just because we like to do things in the open – the way other jobs are not done. Or maybe it is a kind of guilt syndrome, because if you don’t get just how damn privileged we are as web developers, then you are a very spoilt person.

Change is a constant, not a threat

A lot of this discontent has to do with change. Change is good, but change that repeats mistakes of the past is dangerous. Sometimes it is change for the sake of change. It is the prerogative of a new generation of makers to question the ways of old and move the craft ahead. But it is also a fact that to learn a craft, you should learn from good masters, who have experience and can prevent you from repeating their follies.

“That’s the duty of the old,” said the Librarian, “to be anxious on behalf of the young. And the duty of the young is to scorn the anxiety of the old.”

They sat for a while longer, and then parted, for it was late, and they were old and anxious.

Philip Pullman, Northern Lights

Craving controversy to win arguments

It seems that instead of trying to find a consensus, the speed and rules of internet conversations are much more likely to make people concentrate on “winning an argument”. This is where “expert panels” with added controversy at conferences come in. We stoke the fire of discontent for entertainment purposes and then wonder why we don’t get anywhere when people repeat the truisms uttered there for the sake of a quick laugh or to one-up one another. The repetition of these, and experts being quoted out of context by others, is what causes a lot of quarrels and discontent.

In order to understand better what is going on, I want to explain a few things I found looking back into the history of what we do.

Bumpy beginnings

When web design started we were the joke of the design industry. The market saw the web as something that could be generated by software – FrontPage and the likes.

The battles we fought were not only of a technical nature but mostly against an outdated and limited understanding of the web as a platform for design. People tried to control every pixel and make things look and behave exactly like print.

We had no guidelines or ideas to follow – it was pure trial and error.

One main difference to industries like interactive CD-ROM production is that when we found out how to achieve something, we shared it with others. This is how the web design community started – people sharing ideas and discussing solutions.

Structuring our approach and finding our niche

Later on browsers got better and we started separating the concerns of development. CSS is where the look and feel goes, HTML is there to structure the document, and JavaScript and Flash bring the extra bling that can’t be added server-side.

We thought we were on a good track there. Our jobs were much more defined, we got more respect in the market and we were recognised as a profession. Before we started showing a structured approach and measurable successes with web technologies we were just “designers” or “HTML monkeys”.

A hero who turned villain – IE6

IE6 was the main browser in use. It was also the most capable browser if you wrote code using features only available in it. Tied in with developer tools and environments that were Windows-only, it became a very natural choice for a lot of companies to concentrate on exclusively.

And this is what we still suffer from now. A lot of work and a lot of expensive systems were built for one browser only – a browser that is now outdated and annoying. The problem is that rewriting these systems is much more expensive than writing new ones, and writing new ones is more expensive than just “letting them stay the way they are”. By writing browser-specific code we painted ourselves into a corner of “never change a running system”, and the people who suffer are the visitors who have to use these now terribly dated systems, and the developers maintaining them.

Starting the fight for standards

This is the time where the hardliners on one side of the web development battle come from. We proved we can build amazing things on the web and now we wanted to add style and standardisation.

We wanted to make what we do predictable, teachable and understandable. For this, you need standards and style guides and they’ve served us well.

Many a man hour was no longer wasted trying to understand what other developers had done. As their work adhered to a standard, we knew what was supposed to happen and debugging got much easier.

Standards need to embrace change, too

What we fail to do on the web these days is understand that our standards were defined for a time that is in the past. Instead of recognising that there is a change in what we are doing on the web, we still talk about the old techniques we needed to fight the WYSIWYG editors and browser-specific code that wasn’t based on an agreed standard.

So yes, at times it is totally OK not to close your tags. Some applications are fine to rely on JavaScript. And it is very much pointless to complain about the markup quality of an application that was built using GWT.

On the other hand, not everything needs to be a web app the likes of Google Docs, and the best practices of such apps can violate good ideas that need to be followed when, for example, writing a blogging system.

We need a “pick and mix” best-practice approach – not the “one size doesn’t quite fit all” approach we use now.

Think of the maintainers

All in all this is about maintainability. You are totally welcome to violate any best practices and ideas from the past, but you should not be surprised when the people maintaining your code are slow in doing so. You should also not be surprised when your projects get totally scrapped and rebuilt from scratch when changes need to be made.

Maybe this is a dream we have to give up. Maybe there is no such thing as longevity in web design. Maybe our job is to start from scratch every single time, and maybe that is what keeps our environment fresh and fun to work in.

From craft to commodity

The old standards for “best practices” do not apply to today’s world any longer. We wanted web development to be a craft, but it actually is becoming a commodity. The people who laughed at web developers for “not being engineers” are the ones trying to do our jobs now as we are one of the few markets that are booming.

Not so hidden agenda: getting graduates to hit the ground running

And this is where a lot of the “forget the old ways – here, write less and achieve more” mentality comes from. Companies are hard pushed to hire as many engineers as they need, so they want to make web development interesting for people who have just come out of university. Now, at university we learn next to nothing that is of much use in web development. Re-educating people is a long and arduous process, so let’s bring the things we learn at university into what the market delivers.

This is a good way to hire people and to stop them from writing native code (Android, iOS) instead of thinking about becoming web developers. It washes out the craft, though, which is always the case when things become mainstream and need lots of people.

Where we think we build Chippendale furniture, the market needs more IKEA Billies and it makes sense to teach new people to build those.

The best change is to avoid repeating mistakes

All in all our market is changing and there is no point in taking sides and trying to torpedo each other. It is fun and damn easy to do, but it is a waste of our time.

To me, our energies should be spent on preventing a repeat of the mistakes of the past, like:

  • Relying on a single browser as the one to support – WebKit should not be the next IE6
  • Writing sloppy code that obviously works now but will become a nightmare to maintain in the future – shorter is not better when it needs long explanations of how to change it
  • Sticking to tried and true old ideas and technologies by any means necessary (no, IE6 does not need animation when you can do it in CSS for other browsers)

Furthermore, we should stop thinking in as limited a way as we do right now:

  • Stop complaining about the symptoms and not the causes (we try to educate people far too late about the merits of “old school” standards-based web development)
  • Stop cutting corners by relying on certain environments and calling it “best practice”
  • Embrace the idea that a semantically marked-up blog is as much web development as an app like Google Docs or an interactive video like Wilderness Downtown
  • Ask for the reason why something was done the way it was instead of flat out shooting it down in flames
  • Fix things for people instead of telling them off for not doing it – most of our code is open source, so if you are unhappy with an implementation, fix it!

I don’t think that we are split in two as a community. I think we all want the same things, we just fail to be aware of the reasons, pressures and end goals that drive certain people to do things in a certain way.

Check the source before you tweet

Friday, December 16th, 2011

Working for a large entity of the web is an awesome thing. You have access to great people and resources, and you don’t have to chase the next paycheck or prepare the next pitch document to boot. The thing that can make it taxing – if you let it get to you – is that everybody and their dog has something to say about your employer. These things mostly fall into a few categories:

  • The “company XYZ should do this to be successful (and/or survive)” post. I like these; they normally come from people as far removed from the entity as possible. In most cases, they aren’t even working for other companies or as business consultants. Which is a shame, really, as if they know so many amazingly simple ways to bring success, they should take them to where people can implement them, right?
  • The “I used to use XYZ but I like ABC now” post. Good for you, if you are happy, I am happy
  • The “OMG did you read this about your company” tweet

The latter is what I want to talk about a bit.

Why repeat the report?

The most annoying blog posts, tweets and other “social media” releases are those re-hashing what a certain tech media outlet has said. In some cases it is even taken out of context and boiled down to the most shocking or “amazing” thing. The fallacy there is that you are not telling the world what you are outraged about or interested in – all you do is bring money to the tech media outlet you got the message from, as visits are clicks and clicks are money.

Mike Butcher of TechCrunch gave a very honest and open talk about this lately which didn’t come as a surprise to me but should be something to take into consideration when you give a certain article your name and stamp of approval by retweeting it:

How to deal with tech media by @mikebutcher


TL;DR: every piece on a tech blog is there to bring readers to the blog. It is not about the content – it is about getting the headline and being the first to talk about it.

I worked as a news journalist and this is really what it boiled down to: you have to be the first to have the info and when your media outlet is dependent on numbers you have to spice it up until it really gets people excited. If that means bending the truth or making wild accusations without backup, so be it. You can always apologise later. You will be washed clean but the original rumour will still bring people to your site. You win. You get paid.

Use the source

As web developers, view-source always has been our friend. It is great for debugging and it is great for looking beyond the shiny. You should apply the same to news reports on the web:

  • If there is certain news about a company, check the official press release for comparison. If the thing the hoo-hah is about is real, there will be one.
  • If the article talks about a source, go to that source and tweet about that one. In many cases it is of better quality. A good example just happened: a friend of mine, Dennis Lembree, tweeted about a “best places to work” report on TechCrunch and complained about it being inaccessible (as the data was an image with no text alternative). Looking at the news piece I found the source article on glassdoor.com, which is in HTML
  • Check where the source is coming from. There is no point in debugging the generated HTML when it is assembled somewhere else. If the person making a certain assumption about a company has no clue about the subject matter, why give them the satisfaction of repeating what they said? Asking the wrong person for a comment is never a good plan, much like asking an unfunny car tester for a quote about union matters can cause controversy

Don’t mistake SEO for the real thing

A post that really got me lately was 21 Types Of Social Content To Boost Your SEO, linked here with the keyword horse manure (to see what that does to their Google rank). Whilst probably well-intended, the tips given there to get eyeballs to your site really annoyed me. The ideas for getting more people to your site – regardless of your content – use a lot of techniques; the ones that annoyed me were the following:

1.) The Manifesto
The Manifesto is the viral equivalent of preaching to the choir. Write a passionate, eloquent, or well-researched argument that your niche will wholeheartedly agree with. Since you’ve already got an army of believers who agree with you, they’re already primed and ready to share your argument.
Example: Why I’m a Vegetarian, Dammit, an essay on a vegetarian recipe blog, received over 14,000 shares on StumbleUpon alone

Yes, that is because a manifesto is something you should believe in – by definition.

2.) The Controversy
The opposite of the Manifesto, the Controversy is all about stirring up some dissent in your niche. Write a well-written rebuttal to another argument, challenge a popular opinion, or spark a controversial discussion and watch the reader comments fly.

Translation: your readers are idiots who need to be led into shouting at each other. Be the puppet master. Sensible discussion is for hippies.

5.) The Epic
Why do a top 10 list when you can do a top 100? Go for gold and craft a mega-list relevant to your industry. Examples of epic titles include “50 Must-Have Firefox Add-ons,” or “101 Tips for Increasing Productivity.”

Yes, because reading 101 tips will totally increase your productivity. And the more add-ons, the better. Then you can also complain when your browser is sluggish.

8.) The Directory
Why make readers sift through mounds of data when you can do it for them? Collect the best links from around the internet and share them with your readers. Gather the best advice for your niche, the top news stories, the leading Twitter accounts in your field, or a simple collection of interesting information.

Remember, kids, this is how Yahoo started and see where they are now! Also, social bookmarking sites do not exist, your blog should do this!

11.) The Expert
In viral content and in life, it’s not what you know, but who you know. Name recognition is a powerful thing. When Mark Zuckerberg talks about Facebook or Mario Batali talks about food, people listen. For even more viral impact, gather a group of experts: “15 Published Authors on Writing,” for example.

My proposal “10 martial arts tricks Douglas Crockford never gave out before” (who the heck is Mario Batali?)

13.) The Visual Aid
Visual representations of mass amounts of data are easy-to-digest while still containing a lot of “meaty” content. Infographics aren’t the only example of this—think graphs, informational videos, or interactive maps, too.

Because nothing makes lies and pointless comparisons nicer than beautiful colours and shapes. Funnily enough, I get spam offering to do infographics for my blog. There is quite a market there.

Recognising the danger signs

There are a few sources I don’t retweet or mention and get very bored when people do. These are:

  • Blogs where every link in the text links to the same blog. This is lame SEO and pure arrogance. “This assumption is totally true as you can see in our article of last month” – and what tells me that one was right?
  • Blogs that don’t link to the source or mention where you can get it.
  • Blogs that re-blog other blogs. These are the ones that didn’t get to be the first to have the piece of news, but are too lazy to do their own research which is a shame as they could make a news piece out of their competition saying nonsense.
  • People using filler phrases like “scientists say” or “in the opinion of experts” whilst failing to say who these experts are.
  • Posts spelling utter devastation or total success. It is never that black or white.

YMMV of course, but I’d rather give out some information that is coming from the source than keep an artificial discussion going that was first and foremost invented to get clicks.

TTMMHTM: Singing hedgehogs, light cycles, working for the internet and inspiring people on twitter

Thursday, December 15th, 2011

As a special “Things that made me happy this morning” here is a kick-ass interactive (well linked) video of singing hedgehogs:

HP has a new logo! And they can do it in CSS

Wednesday, December 14th, 2011

There is a lot of discussion right now about HP’s new logo. I for one like it as they can save an HTTP request by creating it with CSS:

That “JavaScript not available” case

Tuesday, December 6th, 2011

During some interesting discussions on Twitter yesterday I found that there is now, more than ever, confusion about JavaScript dependence in web applications and web sites. This is a never-ending story, but it seems to me to flare up every time our browsing technology leaps forward.

I encountered this for the first time back in the days of DHTML. We pushed browsers to their limits with our lovely animated menus and 3D logos (something we of course learned not to do again, right?) and we were grumpy when people told us that there are environments out there where JavaScript isn’t available.

Who turns off JavaScript?

The first question we need to ask about this is what these environments are. There are a few options for that:

  • Security tools like NoScript or corporate proxies that filter out JavaScript
  • Feature phones like old Blackberries (I remember switching to Opera Mini on mine to have at least a bearable surfing experience)
  • Mobile environments where carriers proxy images and scripts and sometimes break them
  • People on traffic-limited or very slow connections
  • People who turn off JavaScript for their own reasons
  • People sick of modal pop-ups and other aggressive advertising

As you can see, some of these things are done to our end users (proxying by companies or mobile providers), some are probably temporary (feature phones) and some are simply their own choice. So there is no way to say that only people who want to mess with our cool web stuff are affected.

Why do they turn off JavaScript?

As listed above, there are many reasons. When it comes to deliberately turning off JavaScript, I’d wager that the main three are security concerns, advertising fatigue and slow connectivity.

Security is actually very understandable. Almost every attack on a client machine happens using JavaScript (in most cases in conjunction with plugin vulnerabilities). Java of course is the biggest security hole at the moment but there is a lot of evil you can do with JavaScript via a vulnerable web site and unprotected or outdated browser and OS.

Slow connectivity is a very interesting one. Quite ironic – if you think about it – as most of what we use JavaScript for is to speed up the experience of our end users. One of the first use cases for JS was client side validation of forms to avoid unnecessary server roundtrips.
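To illustrate that original use case, here is a minimal sketch of client-side validation as an enhancement. The function and field names are made up for illustration; the point is that the check lives on top of a form that still works without it:

```javascript
// Client-side validation as an enhancement: the server must validate
// anyway, but checking in the browser saves an unnecessary round trip.
// Field names here are hypothetical.
function requiredFieldErrors(values) {
  var errors = [];
  for (var name in values) {
    // treat whitespace-only values as empty
    if (!String(values[name]).replace(/^\s+|\s+$/g, '')) {
      errors.push(name + ' is required');
    }
  }
  return errors;
}

// In a browser you would wire this to the form's submit event and only
// stop submission when there are errors - without JS the form still
// posts to the server, which validates again:
//
// form.onsubmit = function () {
//   return requiredFieldErrors(collectValues(form)).length === 0;
// };
```

The crucial design choice is that the form posts to the server either way; the JavaScript only short-circuits the obvious failures.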

Now, when you are on a very flaky connection (say free wireless, bad 3G connectivity, or any web development conference) and you try to use for example Google Reader or Gmail, you’ll end up with half-broken interfaces. If the flakiness gets caught during the first load you actually get offered an “HTML-only low version” that is very likely to work better.

The best of both worlds

This is totally fine – it tries to give an end user the best experience depending on environment and connectivity. And this is what progressive enhancement is about, really. And there is nothing evangelical about that – it is plain and pure pragmatism.

It just seems not to be a good plan, under any circumstances, to give people an interface that doesn’t work. So to avoid this, let’s generate the interface with the technologies it is dependent on.

With techniques like event delegation this is incredibly simple. You add click handlers to the parent elements and write out your HTML using innerHTML or other, newer and faster techniques.
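A rough sketch of what such a delegated handler can look like – the class name and callback here are made up for illustration, and the traversal is kept deliberately simple:

```javascript
// Event delegation: one listener on a container handles clicks on any
// descendant carrying a given class name. Illustrative sketch; real
// code might use Element.closest where available.
function delegate(className, handler) {
  return function (event) {
    var node = event.target;
    // walk up from the clicked element to the delegating container
    while (node && node !== event.currentTarget) {
      if (node.className &&
          (' ' + node.className + ' ').indexOf(' ' + className + ' ') !== -1) {
        return handler(node, event);
      }
      node = node.parentNode;
    }
  };
}

// In a browser, after writing out the list markup with innerHTML:
// list.addEventListener('click', delegate('item', function (el) {
//   el.innerHTML = 'clicked!';
// }));
```

Because the single listener sits on the parent, the child elements can be generated, replaced or removed at any time without re-attaching handlers – which is exactly why the technique pairs so well with writing out the interface via innerHTML.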

So why is this such a problem?

Frankly, I really don’t know. Maybe it is because I am old school and like my localhost. Maybe it is because I have been disappointed by browsers and environments over and over again and like to play it safe. I just really don’t get why someone would go for a JS-only solution when the JS is really only needed to provide the enhanced experience on top of something that can work without it.

The mythical edge case application

A big thing that people keep coming up with is the “application that needs JavaScript”. If we are really honest with ourselves, these are very rare. If pushed, I could only think of something like Photoshop in the browser, or any other editor (video, an IDE in the browser, a synth) that would be dependent on JavaScript. All the others can fall back to a solution that requires a reload and a server-side component.

And let’s face it – in the times of Node.js the server-side solution can be written in JavaScript, too. Dav Glass of Yahoo showed two years ago that if a widget library is written to be independent of its environment, you can re-use the same rich widget client- and server-side.

The real reasons for the “app that needs JavaScript” seem to be different, non-technical ones.

The real reasons for “Apps that need JavaScript”

Much like there are reasons for not having JavaScript there are reasons for apps that need JavaScript and deliver broken experiences.

  • You only know JS and think people should upgrade their browsers and stop being pussies. This is fine, but it doesn’t make you the visionary you think you are, as it is actually a limited view. We called that DHTML, and it failed once – it can fail again
  • You are building an app with a team without server side skills and want to get it out cheaply. This can work, but sounds to me like apps that “add accessibility later”, thus quadrupling the time and money needed to make that happen. Plan for that and all is good.
  • You want to get the app out quickly and you know you’ll have to re-write it later. This is actually a pretty common thing, especially when you get highly successful or bought by someone else. Good luck to you, just don’t give people the impression that you are there to stay.
  • Your app will run in a pure JS environment. Of course this means there is no need to make it work without JS. One example of this would be Adobe AIR applications. Just make sure you bet on tech and environments that will stay on the radar of the company selling them.
  • Your app really needs JS to work. If that is the case, just don’t offer it to people without it. Explain in a nice fashion the whys and hows (and avoid telling people they need to turn it on as they may not be able to and all you do is frustrate even more) and redirect with JS to your app.
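That last point can be sketched very simply: serve a plain HTML page that explains the requirement, and let capable browsers redirect themselves. The capability checks and the URL below are assumptions for illustration, not a definitive list:

```javascript
// Redirect to the JS-only app only when the APIs it depends on are
// really there; everyone else keeps the friendly explanatory page.
// The feature checks and the app URL are placeholders.
function redirectWhenCapable(win, appUrl) {
  var d = win.document;
  if (d && d.querySelector && win.JSON && win.XMLHttpRequest) {
    win.location = appUrl;
    return true;
  }
  return false;
}

// In the plain HTML page:
// <p>This editor needs JavaScript to work. Here is why ...</p>
// <script>redirectWhenCapable(window, '/app');</script>
```

Testing for the features the app actually uses, rather than for “JavaScript on”, also catches the proxy and flaky-connection cases listed earlier, where a script may load only partially.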

In summary – sort of

All in all, the question of JavaScript dependence reaches much further than just the technical issues. It questions old best practices and has quite an impact on maintainability (I will write about this soon).

Let’s just say that our discussions about it would be much more fruitful if we started asking “what do we need JS for?” rather than “why do people have no JS?”. There is no point in blaming people for holding back the web when our techniques are very adaptable to different needs.

There is also no point in showing people you can break their stuff by turning things on and off in your browser. That is not representative of what happens when a normal visitor gets stuck in our apps.

Maybe all of this will be moot when Node.js matures and becomes as ubiquitous as the LAMP stack is now. I’d like to see that.