Christian Heilmann


Archive for February, 2013

Helping or hurting? (keynote talk at jQuery Europe)

Wednesday, February 20th, 2013

This was my closing keynote talk of the first day of jQuery Europe 2013 in Vienna, Austria. The slide deck is here. Sadly, due to technical issues there is no audio or video recording.

As developers, we seem to be hardwired to try to fix things. And once we've mastered something, we get bored with it and want to automate it so that in the future we don't have to think about it any longer. We are already preoccupied with solving the next puzzle. We delude ourselves into thinking that everything can get a process that makes it easier. After all, this is what programming was meant for – doing boring repetitive tasks for us so we can use our brains for things that need more lateral thinking and analysis.

Going back in time

When I started as a web developer, there was no such thing as CSS, and JavaScript's interaction with the browser was a total mess. Yet we were asked to build web sites that looked exactly like the Photoshop files our designers sent us. That's why, in order to display a site correctly, we used tables for layout:

<table width="1" border="1" cellpadding="1" 
       cellspacing="1">
  <tr>
    <td width="1"><img src="dot_clear.gif" 
                       width="1" height="1" 
                       alt=""></td>
    <td width="100"><img src="dot_clear.gif" 
                         width="100" height="1" 
                         alt=""></td>
  </tr>
  <tr>
    <td align="center" valign="middle" bgcolor="green">
      <font face="arial" size="2" color="black">
        <div class="text">
          1_1
        </div>
      </font>
    </td>
    <td align="center" valign="middle" bgcolor="green">
      <font face="arial" size="2" color="black">
        <div class="text">
          1_2
        </div>
      </font>
    </td>
  </tr>
</table>

Everything in there was needed to render properly across different browsers, including the repeated width definitions in the cells and the filler GIF images. The DIVs with class "text" inside the font elements were needed as Netscape 4 didn't apply any styling without its own element and class inside the FONT element.

Horrible hacks, but once we had tested and discovered the tricks that were needed, we made sure we didn't have to do it from scratch every time. Instead we used snippets in our editors (Homesite in my case back then) and created generators. My "table-o-matic" is still available on the Wayback Machine:

[Image: table-o-matic]

Moving into hard-core production, we shifted these generators and tricks into the development environment. Instead of having a web site, I created toolbars for Homesite for different repetitive tasks we had to do. I couldn't find the original table toolbar any longer, but I found the WAP-o-matic, which was the same kind of thing, creating the outline of a document for the – then – technology of the future called WML:

[Image: WAP-o-matic]

eToys, the company I worked for back then, was as hard-core as they come. Our site was an ecommerce site that had millions of busy users on awful connections. Omitting quotes in our HTML was not a matter of being hipster cool about HTML5, it was a necessity to make things work over a 56k dial-up connection when browsers didn't have any gzip deflating built in.

Using the table tools, it was great to build a massively complex page and see it work the same across all the browsers of the day and the older ones still in use (think AOL users).

[Image: eToys]

But there was an issue. It turned out that browsers didn't render any of the page until the whole document was loaded when the content was one big table. Netscape also didn't render the table at all when there was a syntax error in your HTML. In order to speed up rendering you had to cut your page up into several tables, which would render one after another. With a defined header in our company it was still possible to automate that, both in the editor and in templates. But it limited us. We had a race between progressive rendering of the page and the amount of HTML we sent to the browser. Add to this that caching was much more hit and miss back then, and you were in a tough spot.

[Image: progressive rendering]

The rise of search engines was the next stumbling block for table layouts. Instead of having lots of HTML that defined the main navigation and header first-up (remember, source order defined visual display), it became more and more important to get the text content of the page higher up in the source. The trick was to use a colspan of 2 on the main content and add an empty cell first.

Helper tools need to evolve with market needs

The fact remains that back then this was both state of the art and it made our lives much, much easier to automate the hacks once we knew them. The problem though is that when you deliver tools like these to developers, you need to keep up with changes in the requirements, too. Company-internal tools suffer less from that problem as they are crucial to delivery. But tools we threw out for fun and to help other developers are more likely to go stale once we move on to solve other problems.

No browser in use these days needs table layouts, and they are far too rigid to deliver to the different environments we want to deliver HTML in. Luckily enough, I don't know anyone who uses these things any longer, and whilst they were amazingly successful at the time, I cut the cord and made them unavailable. No sense in tempting people to do pointless things that seem like the simplest thing to do.

When we look at the generated code now it makes us feel uneasy. At least it should – if it doesn't, you've been working in enterprise too long. There was some backlash when I deleted my tools, because once engineers like something and find it useful we love to defend it tooth and nail:

Arguing with an engineer is a lot like wrestling in the mud with a pig. After a few hours, you realize that he likes it.

What I found over and over again in my career as a developer is that our tools are there to make our jobs easier, but they also have to continuously evolve with the market. If they don't, then we stifle the market's evolution, as developers will stick to outdated techniques because change means having to put effort in.

Never knowing the why stops us from learning

Even worse, tools that make things much easier for developers can ensure that new developers coming into the market have no clue what they are doing. They just use what the experts use and what makes their lives easier. In software that works, as there is no physical part that could fail. You can't be as good as a star athlete by buying the same pair of shoes. But you can use a few plugins, scripts and libraries and put together something very impressive.

This is good, and I am sure it helped the web become as mainstream as it is now. We made it much easier to build things for the web. But the price we paid is that we have a workforce that is dependent on tools and libraries and has forgotten about the basics. For every job offer for a "JavaScript developer" you'll get ten applicants who have only ever used libraries and fail to write a simple loop using only what browsers give you.

Sure, there is a reason for that: legacy browsers. It can seem like wasted time to explain event handling to someone who is just learning JavaScript when you have to tell them the standard way and then how old Internet Explorer does it. It seems boring and repetitive. We lived with that pain long enough not to want to pass it on to the next generation of developers.
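
To be clear what that boring part looks like, here is a minimal sketch (my own example, not from the talk) of the kind of wrapper we all wrote – and libraries now hide – because old Internet Explorer used attachEvent instead of the standard addEventListener. The "menu" element and showMenu in the usage line are made up for illustration:

function on(element, type, handler) {
  if (element.addEventListener) {
    // the standard way
    element.addEventListener(type, handler, false);
  } else if (element.attachEvent) {
    // oldIE: prefix the type with "on" and normalise "this" and the event object
    element.attachEvent('on' + type, function () {
      handler.call(element, window.event);
    });
  }
}

// hypothetical usage:
on(document.getElementById('menu'), 'click', showMenu);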

The solution though is not to abstract this pain away. We are in a world that changes at an incredible pace and the current and future technologies don’t allow us to carry the burden of abstraction with us forever. We need the next generation of developers to face new challenges and we can’t do that when we tell them that everything works if you use technology X, script Y or process Z. Software breaks, environments are never as good as your local machine and the experiences of our end users are always different to what we have.

A constant stream of perfect solutions?

Right now we are obsessed with building helper tools. Libraries, preprocessors, build scripts, automated packaging scripts. All these things exist and there is a new one each week. This is how software works, this is what we learn in university and this is how big systems are built. And everything we do on the web is now expected to scale. If your web site doesn't use the same things Google, Facebook, Twitter and others use to scale, you will fail when the millions of users come. Will they come for all of us? Not likely, and if they do, it will also mean changing your infrastructure.

What we do with this is add more and more complexity. Of course these tools help us and allow us to build things very quickly that easily scale to infinity and beyond. But how easy is it for people to learn these skills? How sure are we that these tools won't bite us in a month's time? How much do they really save us?

And here comes the kicker: how much damage do they do to the evolution of browsers and the competitiveness of the web as a platform?

Want to build to last? Use standards!

The code I wrote 13 years ago is terrible by today's idea of semantic and valuable HTML. And yet, it works. Even more interesting – although eToys has been bankrupt and closed in the UK for 12 years, archive.org still has a large chunk of it for me to show you now, as it is standards-based web content with HTML and working URLs.

In five years from now, how much of what we do right now will still be archived or working? Right now I get flashbacks to the DHTML days, where we forked code for IE and Netscape and tried to tell users which browser to use as that was the future.

There seems to be a general belief in the developer community that WebKit is the best engine, and seeing how many browsers use it, that others are not needed any longer. This, to a degree, is the doing of abstraction, as no browser engine is perfect:

jQuery Core has more lines of fixes and patches for WebKit than any other browser. In general these are not recent regressions, but long-standing problems that have yet to be addressed.
It’s starting to feel like oldIE all over again, but with a different set of excuses for why nothing can be fixed.
– Dave Methvin, jQuery core team; President of the jQuery Foundation

We repeat the mistakes of the DHTML days. Many web sites expect a WebKit browser or flawlessly executed JavaScript. I don’t really have a problem with JavaScript dependency for some functionality (as the “JavaScript turned off” use case is very much a niche phenomenon), but we rely far too much on it for incredibly simple tasks.

[Image: Chrome download page broken]

Google experienced that the other day: because of a JavaScript error the download page of Chrome was down for a whole day. Clicking the button to show a terms and conditions page only resulted in a JavaScript error and that was that. No downloads for Chrome that day.
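
The fix is old-school progressive enhancement: keep the download button a real link and only enhance it with the terms and conditions overlay when the script actually works. Here is a minimal sketch of that idea (my own, not Google's actual code) – it assumes markup like <a id="download" href="..."> with a working href, and showTermsOverlay() is a made-up stand-in for whatever renders the overlay:

var link = document.getElementById('download');
if (link) {
  link.addEventListener('click', function (event) {
    try {
      // hypothetical helper that shows the terms and conditions overlay
      showTermsOverlay(link.href);
      // only cancel the plain link behaviour once the overlay worked
      event.preventDefault();
    } catch (error) {
      // any script failure falls through to the real link – the download still happens
    }
  });
}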

As Jake Archibald once put it:

“All your users have JavaScript turned off until the first script is loaded and runs”

This applies to all dependencies we have, be they libraries, build scripts, server configuration or syntactical sugar that makes a programming task easier and saves keystrokes. We tell ourselves that we make it easier for people to build things, but we are adding to a landfill of dependencies that are not going to go away any time soon.

Dev: “It works on my machine, just not on the server.” Me: “Ok, back up your mail. We’re putting your laptop into production.” – Oisin Grehan on Twitter

There is no perfect plugin, there is no flawless code. In a lot of cases we end up having to debug the library or the plugin itself. Which is good if we file pull requests that get implemented, but looking at libraries and the open issues of plugins, a lot of the time this is not the case. Debugging a plugin is much harder than debugging a script that directly interacts with the browser, as developer tools need to navigate around the abstractions.

Web obesity

The other unintended effect is that we add to web obesity. By reducing functionality to a simple line of code and two JavaScript includes, we didn’t only make it easier for people to achieve a certain effect – we also fostered a way of working that adds more and more stuff to create a final solution without thinking or knowing about the parts we use. It is not uncommon to see web sites that are huge and use various libraries because the developers liked one widget built on one and another built on another.

This leads to ridiculously large pages like Grolsch.com, which clocks in at 388 HTTP requests and 24.29MB of traffic:

[Image: massive page]

What we have there is a 1:1 conversion of a Flash site to an HTML5/CSS3 site without any understanding that the benefit of Flash for brochureware sites like these, full of video and large imagery, was that it could stream all of this. Instead of benefiting from offline storage, sites like this one just add plugin after plugin to achieve effects. The visual outcome is more important than the structure or maintainability or – even worse – the final experience for the end user. You can predict that any of these sites starts with the best of intentions, but when deadlines loom and the budget runs out, the cutting happens in QA and by defining a “baseline of supported browsers and platforms”.

Time for a web diet

In other words: this will break soon, and all the work was for a short period of time. Because our market doesn’t stagnate, it moves on. What is now a baseline will be an anachronism in a year’s time. We are going into a market of thin clients on mobile devices, set-top boxes and TV sets.

In essence, we are in quite a pickle: we promised a whole new generation of developers that everything works for the past and the now and suddenly the future is upon us. The “very small helper library” on desktop is slow on mobiles and loading it over a 3G connection is not fun. To me, this means we need to change our pace and re-focus on what is important.

Our libraries, tools and scripts should help developers to build things faster and not worry about differences in browsers. This means we should be one step ahead of the market with our tools and not patch for the past.

What does that mean? A few things:

  • Stop building for the past – using a library should not be an excuse for a company to use outdated browsers. This hurts the web and is a security issue above everything else. No, IE6 doesn’t get any smooth animations – you cannot promise that unless you spend most of your time testing in it.
  • Let browsers do what they do best – animations and transitions are now available in CSS, hardware-accelerated and rendering-optimised. Every time you use animate() you simulate that – badly (see the sketch after this list).
  • Componentise libraries – catch-all libraries that allow people to do anything lead people to think they need to use everything. It also means that the libraries themselves get bloated and hard to debug.
  • Build solid bases with optional add-ons – we have more than enough image carousel libraries out there. Most of them don’t get used as they do too many things at once. Instead of covering each and every possible visual outcome, build plugins that do one thing well and can be extended, then collaborate.
  • Fix and apply fixes – both the pull request and issue queues of libraries and plugins are full of unloved feedback and demands. These need to go away. Of course it is boring to fix issues, but it beats adding another erroneous solution to the already existing pile.
  • Know the impact, don’t focus on the outcome – far too many of our solutions show how to achieve a certain visual effect and damn the consequences. It is time we stop chasing the shiny and give people tools that don’t hurt overall performance.
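
As an example of the second point, here is a minimal sketch (mine, not from the talk) of letting the browser do the animating: the movement is described as a CSS transition and triggered with a single class change instead of a stream of animate() style updates. The element id "box" and the class name "moved" are made up, and older engines would need vendor prefixes:

// define the movement in CSS – in a real project this lives in a stylesheet
var style = document.createElement('style');
style.textContent =
  '#box { transition: transform 0.4s ease-out; }' +
  '#box.moved { transform: translateX(200px); }';
document.getElementsByTagName('head')[0].appendChild(style);

// the whole "animation" is now one class change – the browser schedules and
// hardware-accelerates the rest
document.getElementById('box').classList.add('moved');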

Here’s the good news: by writing tools over the last few years that allowed people to quickly build things, we got quite a following – especially in the jQuery world. This means that we can fix things at the source and go with two parallel approaches: improving what is currently used and adding new functionality that works fine on mobile devices in our next generation of plugins and solutions. Performance is the main key – what we build should not slow down computers or needlessly use up battery and memory.

There are a few simple things to do to achieve that:

  • Use CSS when you can – for transitions and animations. Generate them dynamically if needed. Zepto.js does a good job at that, and cssanimations.js shows how it is done without the rest of the library.
  • requestAnimationFrame beats setInterval/setTimeout – if we use it, animations happen when the browser is ready to show them, not when we force it to apply them with no outcome other than keeping the processor busy (see the sketch after this list).
  • Transforms and translate beat absolute positioning – they are hardware-accelerated and you can easily add other effects like rotation and scaling.
  • Fingers off the DOM – by far the biggest drain on performance is DOM access. This is unfortunate, as jQuery was more or less made to make access to the DOM easier. Make sure to cache your DOM access results and batch DOM changes that cause reflows. Re-use DOM elements instead of creating new ones and adding them to a large DOM tree.
  • Think touch before click – touch events on mobile devices happen much quicker than click events, as click needs to be delayed to allow for double-tap to zoom. Check for touch support and add touch handlers if needed.
  • Web components are coming – with them we can get rid of lots of bespoke widgets we built in jQuery and other libraries. Web components have the benefit of being native browser controls with much better performance. Think of all the things you can see in the player shown for a video element. Now imagine you could control all of those.
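
To pull a few of these together, here is a minimal sketch (mine, not from the talk): the DOM lookup is cached once, the movement uses requestAnimationFrame and a transform, and touch is preferred over click where it exists. The element id "panel" and the numbers are made up for illustration:

var panel = document.getElementById('panel'); // look it up once, re-use it
var distance = 300; // pixels to travel
var duration = 600; // milliseconds
var start = null;

function step(timestamp) {
  if (start === null) { start = timestamp; }
  var progress = Math.min((timestamp - start) / duration, 1);
  // a transform instead of style.left: no reflow, hardware-accelerated where possible
  panel.style.transform = 'translateX(' + (progress * distance) + 'px)';
  if (progress < 1) {
    requestAnimationFrame(step); // the browser decides when the next frame is due
  }
}

// touchstart fires without the click delay most mobile browsers add
var startEvent = 'ontouchstart' in window ? 'touchstart' : 'click';
panel.addEventListener(startEvent, function () {
  start = null;
  requestAnimationFrame(step);
});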

Right now the mobile web looks to me a lot like Temple Run – there is a lot of gold lying around to be taken but there are lots of stumbling blocks, barriers and holes in the road. Our tools and solutions should patch these holes and make the barriers easy to jump over, not become stumbling blocks in and of themselves.

These apples don’t taste like oranges – let’s burn down the orchard

Tuesday, February 19th, 2013

When I see comparisons of HTML5 to native apps, I get the feeling that the way we measure failure and success could give statisticians a heart attack. Take Andrea Giammarchi’s The difficult road to vine via web as an example. In this piece Andrea, who really knows his stuff, tries to re-create the hot app of the month, Vine, using “HTML5 technologies” and comes to the conclusion – once again – that HTML5 is not ready to take on native technologies head-on. I agree. But I also want to point out that a 1:1 comparison is pretty much useless. Vine is only available on iOS. Vine was also purposely built to work on the iPhone. In order to prove that HTML5 is ready, all we’d need to do is find one single browser on one single OS, nay, even on only one piece of hardware, that matches the functionality of Vine.

[Image: sobbing mathematically]

Instead we set the bar impossibly high for ourselves. Whenever we talk about HTML5 we praise its universality. We talk about “build once and run everywhere”, and we make it impossible for ourselves to deliver on that promise if we don’t also embrace the flexible nature of HTML5 and the web. In other words: HTML5 can and does already deliver much more than any native app. It doesn’t limit you to one environment or piece of hardware, and you can deliver your app on an existing delivery platform – the web – that doesn’t lock you into Terms and Conditions that could change from under you at any time. Nowhere is it written, though, that the app needs to work and look the same everywhere. This would actually limit its reach, as many platforms just don’t allow HTML5 apps to reach deep into the hardware or even to perform properly.

What needs to change is our stupid promise that HTML5 apps work the same everywhere and match all the features of native apps. That cannot happen, as we cannot deliver the same experience regardless of connectivity, hardware access or how busy the hardware already is. HTML5 apps, unless packaged, will always have to compete with other running processes on the hardware and thus need to be much cleverer about resources than native apps.

Instead of trying to copy what’s cool in native and boring and forgotten a month later (remember Path?), if we really want HTML5 as our platform of choice we should play to its strengths. This means that our apps will not look and feel the same on every platform. It means that they use what the platform offers and allow less able environments to at least consume and add data. And if we want to show off what HTML5 can do, then maybe showcasing on iOS is the last thing we want to do. You don’t put a runner onto a track full of quicksand, stumbling blocks and a massive wind pushing in the opposite direction either, do you?

HTML5 needs to be allowed to convince people that it is a great opportunity because of its flexibility, not by being a pale carbon copy of native apps. That approach leads to companies considering native the simpler choice, controlling everything and forcing users to download an app where one really is not needed. Tom Morris’ “No I’m not going to download your bullshit app” and the less sweary “I’d like to use the web my way thank you very much, Quora” by Scott Hanselman show that this is already becoming an anti-pattern.

Personally I think the ball is firmly in Android’s court to kill the outdated and limiting stock browser and get an evergreen Chrome out for all devices. I also look to BlackBerry 10 to make a splash and to Windows phones and tablets to allow us HTML5 enthusiasts to kick arse. And then there is of course Firefox OS, but this goes without saying, as the OS itself is written in HTML5.

Do something crazy – it is immensely rewarding

Monday, February 18th, 2013

I am not a big fan of the cold. I like spring weather with sensible temperatures. I spent the last three days in Kiruna, Sweden, which is very far up north indeed and right now is running at around -20 degrees centigrade. That means that the inside of your nose freezes so that you think you always have a stray booger hanging from it, and it also means that any water in your hair turns into icicles:

[Photos: icicles in the hair]

The trip was organised by some people at Spotify and cost me a few hundred pounds. What it gave me, though, was an amazing experience: seeing the Northern Lights, riding a snowmobile and – probably the most amazing bit – steering a dog sled over a frozen river, a frozen lake and through some woods. I also had to rough it, as there was no space in the larger huts any more, so our castle was the size of my bedroom in London, with no running water and the bathroom in an even smaller hut about 50 metres away. But that is by the by, let me tell you about the dog sledding.


We got out in the early morning wearing three pairs of trousers, four pairs of socks and five layers of pullovers, long-sleeves and jackets. I bought a down jacket for the trip, but it turned out that in order to survive safely and not have the smell of dog on you for the rest of the week, it is a good idea to rent another thick overall, snow boots, a furry hat and two pairs of gloves.


Feeling like the Michelin Man, we drove to the dog kennel where we got introduced to our dogs and were shown how to steer a dog sled.

Now, as a trainer who has worked in corporate environments for a long time, I am used to explaining everything and getting every little detail explained to me to avoid people doing stupid things or suing me or my company. I thought that getting dragged through the woods by five dogs with a one-track mind of running as fast as they can would warrant quite some introduction, too. But what we got was this:

These are your dogs, the first two are brothers, one of them is shy and the other is quite lively. They are nice though, you can pet them. Make sure to pet their sides so they feel safe or they may run towards the other dogs and get entangled with them. When the dogs get entangled, they might break a leg, be careful. Also when you stand the passenger should keep them on the leash to make sure they don’t go where you don’t want them to. These are the guiding dogs. The other three are the engines. The couple in the back is a female and a male, make sure the male one doesn’t go near other males as he will bite them and doesn’t stop until they are dead. He doesn’t like other males, but he likes females fine.

So much for the dogs – and these are sled dogs. They are nice enough but more wolf than domesticated wagging tail types. They bark and howl when they don’t run – a lot. As my partner put it, they are like arrows – they just want to go fast in one direction as soon as possible. The whole introduction to the sledge was this:

As the passenger, keep your feet on the inside of the skis or you might get stuck on a root and break your foot. Bend slightly with the curves to make the sledge go easier. For the driver: this is the brake, step on it to go slow and keep both feet on it when you want to stand in a place as the dogs might run off and you’ll fall off. This is the anchor, stomp it into the ground when you want to stand and the passenger should put the lead in the front on a pole or a tree. Keep the anchor safe as it might end up in your side, leg or the head of the passenger otherwise. Always keep both hands on the handle and steer with your weight. The dogs are nice, don’t worry. Just make sure not to run into the other sledges.

So there I was in the freezing cold, with my glasses fogging over, being dragged by five semi-wild dogs who poop while they run and on every stop jump headfirst into the snow to eat it after giving you a “why did we stop, I want to run!” look. I was not in control, and I was not really sure how this worked or why.


But the longer it went on and the more I saw the joy the dogs had when running, the more comfortable and secure I felt in what I was doing, and the dogs reacted to my steering and my braking without a hitch. We switched on the way back, and as a passenger I got to see the snowy landscape from a totally new point of view.

I did it. Thinking back on all the things that could have gone wrong, I am amazed at the nonchalance of the guides when it came to getting us into this. But it works. They trusted us to find our way, to cope, and to get more secure and assured in what we were doing. And that made it extremely rewarding for me.

Now it is your turn. Don’t wait for the perfect introduction, don’t wait until you are considered an expert before opening your mouth or going out in public with your ideas. You don’t have to go into the cold and get dragged along by dogs. How about starting simpler and publishing some of your thoughts, or sending out a proposal to speak at a conference? How about organising a talk in your company? How about learning a new skill you always considered yourself to have no talent for? Do it, do something “crazy”.

I will miss the “Douglas Crockford of browsers”

Wednesday, February 13th, 2013

[Image: Opera as a pony]

Opera today announced that they are ditching their own Presto rendering engine for WebKit and V8. More details as to what that means for developers are on the ODIN blog. The reasons are the ones you expect a commercial company to give:

To provide a leading browser on Android and iOS, this year Opera will make a gradual transition to the WebKit engine, as well as Chromium, for most of its upcoming versions of browsers for smartphones and computers.

Two things led to this: Apple not allowing any other engine on iOS (which means that Opera for iOS, or ICE, will be the same as Chrome on iOS – not really another browser but a shell with the iOS engine under it) and developers building for WebKit only, with sites breaking in Opera. As Peter-Paul Koch put it:

Note carefully what this means: we web developers haven’t been doing our jobs properly. We didn’t bother to test our mobile sites on Opera Mini, even though it’s roughly as large as Safari iOS and Android.

I see this as a personal fail. I evidently haven’t been outspoken enough on the topic. I should have yelled in everybody’s ear until they did the proper thing.

It’s our own fault.

Content not showing up or showing up broken in your product is terrible for a commercial company – the web is never wrong, if your browser shows it wrongly it is your fault, right?

Wrong. I always called Opera the Douglas Crockford of browsers, as it was ruthless in its implementation of standards. If something didn’t work in Opera, there was a good chance that you had done something wrong. Even better – fixing it in Opera in most cases meant looking at how the W3C standard meant things to work and writing your code accordingly, which in most cases meant no change in other browsers, but cleaner code overall. Opera was my linting tool.

Big whoop, so what? Everybody uses WebKit, it is open source, and it is the best engine as everything just works, right? Again, I don’t feel good about this. As my colleague Robert O’Callahan put it:

Some people are wondering whether engine diversity really matters. “Webkit is open source so if everyone worked together on it and shipped it, would that be so bad?” Yes. Web standards would lose all significance and standards processes would be superseded by Webkit project decisions and politics. Webkit bugs would become the standard: there would be no way for developers to test on multiple engines to determine whether an unexpected behavior is a bug or intended.

Ex-Netscape employee and CSS working group chair Daniel Glazman agrees:

For the CSS Working Group, that’s an earthquake. One less testing environment, one less opportunity to discover bugs and issues.

Jake Archibald of the Chrome devrel team shares my view of Opera as a great testing platform, so much so that when they were wrong, he just assumed it was his fault:

I develop in Chrome, then check stuff in Safari & Firefox. Usually, this would be painless, everything would be as expected (usually). Testing in IE and Opera was often less fun. But here’s the difference, things would be wrong in IE because of bugs, whereas things would be wrong in Opera because they were adhering to the spec (I’m generalising, of course). When Opera did the wrong thing with appcache FALLBACK entries I pored over the spec for a couple of hours on the assumption they were doing it right and the others were doing it wrong. Turns out Opera had a bug, but if any other browser was behaving so differently I’d have instantly assumed it was that browser getting it wrong.

As developers (well, let’s say as developers new to the web) we always complain about the diversity of browsers and how hard it is to support them all. What we fail to remember is that standards only work when they are tested and verified in many different environments. Otherwise, they aren’t standards and may just be happy accidents that are not necessarily repeatable. All browser engines have their good and bad sides, and a good standard should define what is best in all of them and help implement that across the browsers in use. As Jake found out, Presto was ahead of many others in terms of the UI performance of JavaScript – a massive point on mobile:

Presto is full of surprises, and I’m only saying that half-sarcastically. In 2009 I was preparing a talk on JS performance and discovered that, in Opera, pages would continue to be responsive (scrolling, text selection) while JavaScript was stuck in a loop. No other browser did this, JavaScript blocks the UI thread.

I understand Opera’s motivation for this move, and I wish them all the luck they can have. Even more, I wish that the engineering talent that comes to WebKit with this move gets a lot of power and is listened to. Opera was always a very loud voice advocating standards over what is easy and seems like a great idea at a certain time. It would be a shame if that voice got drowned out by others using the same engine and having different ideas or a corporate agenda to follow. Standards aren’t dead, and there is no “one WebKit” as much as there was no “one Internet Explorer”. I find it very disappointing that a company feels forced to make a move like that to stay commercially interesting.

Maybe I am a dreamer, but I always prefer choice over what is easy and promises me that everything just works. Because, when you are honest, nothing ever just works, and the only way to stay sane in this is to have a standard to compare against. We don’t only need “this works”, we also need “why does this work, and how can we ensure it is ready for the changes that are coming up”.

Hello, it is me on Twitter!

Monday, February 11th, 2013

Hello and welcome. You might have come here from my Twitter profile or because of a tweet I sent you. Here I will quickly explain – and keep for re-use – what my Twitter usage is about and how both you and I can enjoy what I do there. You could call it my Twitter manifesto, but that sounds too hoity-toity. So here goes:


What I do on Twitter

  1. I use Twitter as a channel out. I find something, I send a link/picture to share.
  2. This is me, so it is unfiltered. About 70% is technical web stuff (great resources, talks, videos, conference coverage), 25% is fluffy or awesome things on the web (hedgehogs, kittens, puppies…) and 5% is me doing stuff (trying restaurants, telling people I am meeting IRL where I am, wondering about things). I use naughty words; I find it hypocritical to add a * where an i or a u should be. I try to use them less, but there might be things that make you blush.
  3. I tweet a lot – I know quite a few people who keep unfollowing and following me because of that reason.
  4. If you do not want the noise and just want the meat, there is a way – I linked my Twitter to Pinboard, so all the links I send out are here.
  5. I monitor Twitter for great things and see how you come across on it, too. In the past this has resulted in people I liked becoming my colleagues or starting to write for blogs I am an editor at. It also resulted in people speaking at events. I like introducing people to each other. If you come across as too aggressive, demanding or simply out of line, I will also take note of that and answer accordingly when people ask me about you.
  6. If I post something in quotes followed by a link, this is a quote, not my view. Don’t tell me your problems, tell the author, please.

What I don’t do on Twitter

  1. Advertise. I work for Microsoft, but I am not the marketing channel for Microsoft, there are other places for that. When I tweet about Microsoft stuff then it is because I think it is great, same way I tweet about Google, Mozilla, Adobe, Twitter, Facebook and many many more.
  2. I will not fix your problems. If you have an issue with a Microsoft product, there are official channels. If you have an issue with a Mozilla product (where I was for quite some time), the only – and let me repeat this – the only, best and fastest way to get something fixed is to file a bug in bugzilla about it. I don’t have a magic power over engineers to fix things faster or force them to do things. If your problem is a real, fixable issue and you are explaining the issue and what needs fixing, things happen. If you shout “this sucks, no wonder your competition is winning” then it is no wonder when busy engineers don’t really listen to you. You want your problem fixed, talk to the fixer. I will not fight your fights for you as I don’t feel your pain and can only guess the details.
  3. Plan and automate my tweets. This is all raw, nothing here is automated and yes it is only me. So when I am not in, I will not answer. Mostly this means I am on a plane.
  4. I will not retweet things you beg me to retweet*. I have quite some reach and I will retweet things I like and consider useful. If you tell me about something I might retweet it, I might not. This could mean I don’t like it, but in many cases it just means I am too busy to do so. Nudge me again, reminding me why something is cool. Begging, or threatening to call me stuck-up and unwilling to help struggling new people on the web, will not get you anything though. If you look at what I do, you know that I am not the kind of guy not to support a great new thing or cause.
  5. Spread personal things. I have a real life and I will never share all the boring or sordid details about it. Both you and me are busy.
  6. Follow much and favourite. Both of these things are random in my case. My faves do not mean much – I found that people favourite things to read later. I never do that. I keep the tab open, read and then tweet about it. I have a full inbox, no need to also have a full faves list. Following is also not a sign of how much I like you, nor is not following a sign that I don’t appreciate you. I use Twitter mostly as a channel out. I get my information from RSS – I am old-school like that.

Shit that can happen

  1. If you tell me once about something, I might miss it – this is a fast-paced medium with terrible search functionality. So email me about important things, too.
  2. I can be out of line – if you feel annoyed about something, please tell me. I am happy to follow you so you can DM me – I am always happy to improve.