• You are currently browsing the archives for the General category.

  • Archive for the ‘General’ Category

    Yahoo login issue on mobile – don’t fix the line length of your emails

    Monday, July 7th, 2014

    Yesterday I got a link to an image on Flickr in a tweet. Splendid. I love Flickr. It has played a massive role in the mashup web, I love the people who work there and it used to be a superb place to store and share photos without pestering people to sign up for something. Flickr has also been best-of-breed when it comes to “hackable” URLs: I could get different sizes of images and different parts of people’s pages simply by modifying the URL in a meaningful way. All in all, a kick-ass product that I loved, adored, contributed to and gave to people as a present.
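    To give an idea of what “hackable” URLs mean in practice, here is a minimal sketch in Python. The size suffixes follow Flickr’s documented photo source URL scheme; the server, photo id and secret below are made-up placeholders, not a real photo.

```python
# Sketch of Flickr's "hackable" photo source URLs. The size suffixes
# (_s, _t, _m, _b) are Flickr's documented scheme; the server, id and
# secret here are made-up placeholders.
BASE = "https://farm1.staticflickr.com/123/4567890_abc123"

SIZES = {
    "square": "_s",      # 75x75 square
    "thumbnail": "_t",   # 100px on the longest side
    "small": "_m",       # 240px on the longest side
    "large": "_b",       # 1024px on the longest side
}

def photo_url(size: str) -> str:
    """Build the image URL for a size simply by appending its suffix."""
    return BASE + SIZES[size] + ".jpg"

print(photo_url("thumbnail"))
```

    No API call needed: changing one suffix in the address bar got you a different rendition of the same photo, which is exactly the kind of URL design the paragraph above praises.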

    Until I started using a mobile device.

    Well, I tapped on the link and got redirected to Chrome on my Nexus 5. Instead of seeing an image as I expected I got a message that I should please download the epic Flickr app. No thanks, I just want to see this picture, thank you very much. I refused to download the app and went to the “web version” instead.

    This one redirected me to the Yahoo login. I entered my user name and password and was asked “for security reasons” to enter an animated captcha. I am not kidding, here it is:

    animated captcha with bouncing letters over a letter storm or something

    I entered this and was asked to verify once more that I am totally me, all to see a picture that was actually not private or anything that would warrant logging in to start with.

    I got the option to do an email verification or answer one of my security questions. Fine, let’s do the email verification.

    An email arrived and it looked like this:

    verification email with cut off text

    As you can see (and if not, I am telling you now) the text seems cut off and there is no code in the email. Touching the text of the mail allows me to scroll to the right and see the full stop after “account.” I thought at first the code was embedded as an image and Google had filtered it out, but there was no message of that sort.

    Well, that didn’t help. So I went back in the verification process and answered one of my questions instead. The photo wasn’t worth it.

    What happened?

    By mere chance I found the solution. You can double-tap the email in GMail for Android and it expands it to the full text. Then you can scroll around. For some reason only the longest line gets displayed and the rest is collapsed.

    The lesson from that: do not fix the line width of your emails (in this case it seems to be a 550px layout table) if they display important information.
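    To illustrate, here is a sketch of the two approaches. This is assumed markup for demonstration, not Yahoo’s actual template, and the verification code shown is a made-up placeholder:

```html
<!-- Fragile: a fixed-width layout table. If a long line of text doesn't
     wrap, some mobile mail clients clip it instead of reflowing it. -->
<table width="550" cellpadding="0" cellspacing="0">
  <tr>
    <td>Your verification code is 1234567 (placeholder). Use it to verify your account.</td>
  </tr>
</table>

<!-- More robust: a fluid width with an upper cap, so the client can
     reflow the important text to fit the screen. -->
<table width="100%" style="max-width: 550px;" cellpadding="0" cellspacing="0">
  <tr>
    <td>Your verification code is 1234567 (placeholder). Use it to verify your account.</td>
  </tr>
</table>
```

    The second variant still looks the same on a 550px-wide desktop viewport, but degrades to wrapped, readable text on a narrow phone screen instead of hiding the one piece of information the email exists to deliver.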

    I am not sure if that is a bug or annoyance in GMail, but in any case, this is not a good user experience. I reported this to Yahoo and hopefully they’ll fix this in the login verification mail.

    Google IOU – where was the web?

    Tuesday, July 1st, 2014

    I’ve always been a fan of Google IO. It is a huge event, full of great announcements. Google goes all in organising a great show and it is tricky to get tickets. You always walked home with the newest gadgets and were the first to learn about new products coming out. In the first years I got invites as an expert. This year I got a VIP ticket, which meant I paid for it but didn’t have to wait or be part of a lottery or find an easter egg or whatever else people had to do. I was off to the races and excited to go.

    IOU in a piggy bank

    IO, a kick-ass mobile and web show

    I liked the two day keynote format of the last years: on the first day you learned all about Android and new phones and tablets and on the second it was all about Chrome and Google Web Services like Google+. This wasn’t the case this year; the second day keynote didn’t happen. Sadly enough the content format seems to have stayed the same.

    I liked Google IO as it meant lots of great announcements for the web. As someone working for a browser maker I had a sense of dread each year to see what amazing things Chrome would get and what web based product would draw more people to using Chrome as their main browser. Google did well winning the hearts and minds of the web (and web developer) community.

    This is good: competition keeps us strong and the Chrome team has always been fair announcing standards support in their browser as a shared effort between browser makers instead of pretending to have invented it all. The Chrome Summit last year was a great example of how that can work. This hasn’t changed. I have quite a few friends in the Chrome team and can rely on them moving the web forward for all instead of building bespoke APIs.


    Google, a web company

    Google is a company that grew on the web. Google is a company that innovated with simplicity where others overwhelmed their users. We used their search engine because it was a simple search box, not a search box in a huge web site full of news, weather, chat systems, sports news and all kind of other media provided by partners. We used GMail because it made us independent of the computer we read our mails on and it had amazingly simple keyboard shortcuts. Google understood the web and its needs – where others only allowed you to make money by plastering huge banners all over your blog, Google was OK with simple text links.

    With this in mind I was super happy to go and get my fix of Google IO for this year.

    Where is the web?

    Suffice it to say, in this respect I was disappointed by this year’s Google IO. Not the whole event, but the keynote. My main worry is that there was hardly any mention of the web. It was all about the redesign of Android. It was about wearables Google doesn’t create or have much of a say in. It was about “introducing” Android TV (which, it seems, has been done before and then scrapped). It was about Android Auto. It was about Android’s new design and the ideas behind it. Mentions of Chrome were scarce and misguided.

    What do I mean by that? I was sure that Google would announce HTML5 Chrome Apps to run on Android and be available in the Play store. This would’ve been a huge win for the web and HTML5 developers. Instead we got an announcement that Android Apps will now run on Chromebooks. There was no detailed technical explanation of how that would happen, but I learned later that this means the Android Apps run in Native Client. This is as backwards as it can get from a web perspective. Chromebooks were meant to make the web the platform to work on and HTML5 the technology. Heck, Firefox on Android creates dynamic APKs from Open Web Apps. I’d have expected Google to pull the same trick.

    There was no announcement about Chromebooks either (other than them being the best-seller on Amazon for laptops), and if you followed Google+ in the last months, there were some massive advancements in their APIs and standards support.

    The other mention was that the new look and feel of Android will also be implemented for the web using Polymer.

    Android’s new face looks beautiful and I love the concept of drop shadows and lighting effects being handled by the OS for you. It felt like Google finally made a stand about their design guidelines. It also felt very much like Google trying to beat Apple at their own game. The copious use of “delightful experiences” in every mention of the new “material design” refresh became tiring really fast.

    Chrome and the web didn’t get any love in the IO keynote. The promise that the new design is implemented “running at 60FPS” using Polymer in Chrome was an aside, yet it can’t have been a simple feat. It must have meant a lot of great work by the Android team together with the Polymer and Chrome teams. Lots of good stories and learnings could have been shown and explained. Other work by the Chrome and Polymer teams that must have taken a lot of effort, like the search result page integration of app links or the Chromebook demos, was reduced to side notes, easy to miss. Most of the focus was on what messages you could get on your watch, or that there might be an amazing feature like in-car navigation coming later this year, if you can afford a brand new car.

    Chromecast was mentioned to get a few updates including working without sharing a wireless network. This could be huge, but again this was mostly about sending streaming content from Google Play to a TV, not about the web.

    Google+ apparently doesn’t exist, and the fact that Hangouts no longer need any plugin and instead use WebRTC wasn’t deemed noteworthy. Google Glass was also suspiciously missing from the keynote.

    Props to the Chrome Devrel team

    I got my web fix from the Google Devrel team and their talks. That is, when I arrived early enough and didn’t get stuck in a queue outside the room as the speaker beforehand went 10 minutes over.


    I have lots of respect for the team and a few of the talks are really worth watching:

    Others I have yet to see but look very promising:

    Another very good idea was the catered Chrome lunch allowing invited folk to chat with Chrome engineers and developers. These could become a thing they can run in offices, too.

    Other things that made me happy

    It wasn’t all doom and gloom though. I enjoyed quite a few things about Google IO this year:

    • It was great to see all the attendees from different GDG chapters. Watching very excited developers from all over the globe waving their respective flags and taking pictures of one another in front of the stage was fun. Communities rock, and people who give their free time to educate others on a company’s technologies deserve a lot of respect and mention. It is refreshing to see this connected with a large Silicon Valley company, and a very enjoyable contrast to the entitlement, arrogance and mansplaining you got at the parties around IO.
    • Android One – a very affordable Android phone aimed at emerging markets like India and Africa. It seems Google did a great job partnering with the right people and companies to create a roll-out that means a lot. FirefoxOS does exactly the same and all I personally want is people to be able to access the web from wherever. In many countries this means they need a mobile device that is affordable and stays up-to-date. Getting your first mobile is a massive investment. I am looking forward to seeing more about this and hope that the Android One will have a great web experience and not lock you into Play Store services.
    • The partner booths were interesting. Lots of good info on what working with Android TV was like. I loved the mention of the app students created for their blind friend to find classes.
    • The internet of things section was very exciting – especially a prototype of a $6 beacon that is a URL and not just an identifier. This was hidden behind the cars, in case you wondered what I am talking about.
    • The many mentions of diversity issues, and of Google working hard to solve them, were encouraging to see.
    • Google Cardboard was a superb little project. Funny and a nice poke at Oculus Rift. Well done marketing, that one.
    • Project Tango is pretty nuts. I am looking forward to seeing more about this.
    • Accessibility had its own booth on the main floor and it seems that people like Alice Boxhall get good support in their efforts to make Web Components available to all.
    • The food and catering were good but must have cost Google an arm and a leg. I saw some of the bills that Moscone catering charges and they make me dizzy.
    • The party was not inside the building with a band nobody really appreciated (the pilot of Silicon Valley comes to mind, and the memory of the Jane’s Addiction “performance”) but outside in Yerba Buena Gardens. It had a lovely vibe and felt less stilted than the parties at the other IOs.

    IO left me worried about Google

    Overall, as a web enthusiast I am not convinced the experience was worth getting a flight, hotel in the valley (or SF) and the $900 for the ticket.

    The crowd logistics at Google IO were abysmal. Queues twice around the building before the keynote meant a lot of people missed it. Getting into sessions and workshops was very hard and in many cases it made more sense to watch them on your laptop – if you were outside, as the wireless of the conference wasn’t up to the task either. One amazingly cool feature was that the name tags of the event were NFC enabled. All you had to do was touch your phone to one and the people you met were added to your Google+ as “people I met”. The quite startling thing was that I had to explain that to a lot of people. If you do something this useful, promoting one of your products and making it very worthwhile for your audience, wouldn’t it make sense to mention that at the beginning?

    The great thing is that with YouTube, IO has the means to bring out all the talks for me to watch later. I am very grateful for that and do take advantage of it. All the Google IO 2014 videos are now available and you can see what you missed.

    I always saw Google as one of the companies that get it and drive the web forward. The Chrome team sure does it. The Web Starter Kit (kind of Google’s Bootstrap) shows this and so does all the other outreach work.

    This keynote, however, made Google appear like a hardware or service vendor who likes to get developers excited about their products. This wasn’t about technology, it was about features of products and hardware ideas and plans. At times I felt like I was at a Samsung or HTC conference. The cloud part of the keynote claimed to be superior to all competitors without proving this with numbers. It also told the audience that all the technical info heralded at previous Google IOs was wrong. As someone who speaks a lot on stage and coaches people on presenting I was at times shocked by how transparent the intention of the script was in some parts of the keynote. Do sentences like “I use this all the time to chat with my friends” when announcing a new product that isn’t out yet really work? The pace was all wrong. Most of the meat of the announcements for developers was delivered in a rather packed 8 minutes by Ellie Powers at 2:16:00 to 2:24:00.

    To me, this was an IOU by Google to the Web and Developer community. Give us a better message to see that you are still the “don’t be evil” company that makes the web more reachable and understandable, faster and more secure. Times may change, but Google became what it is by not playing by the rules. Google isn’t Apple and shouldn’t try to be. Google is also not Facebook. It is Google. Can we feel lucky?

    Comment on Google+

    Flight mode on

    Monday, June 9th, 2014

    Much like everybody else these days, I use my phone as an alarm clock. In addition to this, however, I also made a conscious decision to turn flight mode on during the night. The reason is updates coming in that may or may not make it buzz or make a sound. Of course, I could turn that off. Of course I could just not care about it. Of course that is nonsense, as we are wired to react to sounds and blinking lights.

    men in black flasher

    In many applications the option to turn audio or visual or buzz notifications off is hidden well as their sole business model is to keep you interacting with them. And we do. All the time. 24/7. Because we might miss something important. That can so not wait. And we need to know about it now, now, now…

    I also started turning off my phone when I am on the go – on my bicycle or on the train and bus. There is no point in keeping it on, as there is no connectivity in trains in London and I get car sick trying to interact with my phone on a bus. Furthermore, so many apps are built with woefully bad offline and intermittent connection support. I am just tired of seeing spinners.

    museum of loading

    So what? Why am I telling you this? The reason is that I am starting to get bored and annoyed with social media. I sense a strong feeling of being part of a never-ending current of mediocrity, quick wins and pointless data consumption. Yes, I know the “irony” of me saying this, seeing how active I am on Twitter and how much “pointless” fluffy animal material I intersperse with technical updates.

    The point for me is that I miss the old times of slow connections and scarcity of technical information. Well, not really miss, but I think we are losing a lot by constantly chasing the newest and most amazing thing and being the first to break the “news” of some cool new script or solution.
    birthday without wifi

    When I started in web development I had a modem. I also paid for my connection by the minute. I didn’t have a laptop. At work I wasn’t allowed to read personal mails or surf the web – I was there to attend meetings, slice up photoshop files, add copy to pages and code.

    At home I had a desktop. I connected to the internet, downloaded all my emails and newsgroup items (most of the time the headers only), surfed the web a bit, disconnected and started answering my emails. I subscribed to email forums like webdesign-l, evolt.org, CSS Discuss and many others. In these forums I found articles of A List Apart, Webmonkey, Digital Web and many others worth reading.

    Sounds inconvenient and terrible by today’s standards, when we are annoyed that TV series don’t stream without buffering while we are on planes. It was, but it also meant one thing I think we have lost: I cherished every email and every article much more than I do now. I appreciated the work that went into them as they were scarcer. To get someone’s full attention these days you need to be either outrageous or overpromising. The wisdom of the crowds gets very dubious when limited to social media updates. Not the best bubbles up, but the most impressive.

    Meeting Point

    I also forged close relationships with the people subscribed to these lists and forums by interacting more closely than in 140 characters. A List Apart, for example, was not only about the articles – the more interesting and amazing solutions came from the discussions in the comments. I made my name by taking part in these discussions, agreeing and disagreeing with people. Many people I know now who speak, coach, run companies and have high positions in the mover-and-shaker companies of the web came from this crowd.

    I took my time to digest things, I played with technology and tried it out and gave feedback. We took our time to whittle away the rough edges and come up with something more rounded.

    We call this web of now social. We have amazing connections and collaboration tools. We have feedback channels beyond our dreams. But we rush through them. Instead of commenting and giving feedback we like, share and +1. Instead of writing a thought out response, we post a reaction GIF. Instead of communicating, we play catch up.

    The sheer mass of tech articles, videos, software betas, updates and posts released every hour makes it almost impossible to catch up. Far too many great ideas, solutions and approaches fall through the cracks because ending up on Hackernews and getting lots of likes is the goal. This means you need to be talking about the newest thing, not the thing that interests you the most.

    Maybe this makes me sound like an old fart. So be it. I think we deserve some downtime from time to time. And the content other people create and publish deserves more attention than a fly-by, glancing over it and sharing, hoping to be seen as the person with the cool news.

    Be a great presenter: deliver on and off-stage

    Thursday, June 5th, 2014

    As a presenter at a conference, your job is to educate, entertain and explain. This means that the few minutes on stage are the most stressful, but should also be only a small part of your overall work.

    Christian Heilmann listening to translations at TEDxThessaloniki

    A great technical presentation needs a few things:

    • Research – make sure your information is up-to-date and don’t sell things that don’t work as working
    • Sensible demonstrations – by all means show what some code does before you talk about it. Make sure your demo is clean and sensible and easy to understand.
    • Engagement materials – images, videos, animations, flowcharts, infographics. Make sure you have the right to use those and you don’t just use them for the sake of using them.
    • Handover materials – where should people go after your talk to learn more and get their hands dirty?
    • An appropriate slide deck – your slides are wallpaper for your presentation. Make them supportive of your talk and informative enough. Your slides don’t have to make sense without your presentation, but they should also not be distracting. Consider each slide an emphasis of what you are telling people.
    • A good narration – it is not enough to show cool technical things. Tell a story, what are the main points you want to make, why should people remember your talk?
    • An engaging presentation – own the stage, own the talk, focus the audience on you.

    All of this needs a lot of work: collecting on the web, converting, coding, rehearsing and learning to become better at conveying information. All of it results in materials, some of which you use in your talk and some of which you may not get to use even though they are very valuable.

    It is not about you, it is about what you deliver

    A great presenter could carry a talking slot just with presence and the right stage manner. Most technical presentations should be more. They should leave the audience with an “oh wow, I want to try this and I feel confident that I can do this now” feeling. It is very easy to come across as “awesome” and show great things, but leave the audience frustrated and confused just after they’ve been delighted by the cool things you are able to do.

    Small audience, huge viewer numbers

    Great stuff happens at conferences, great demos are being shown, great solutions explained and explanations given. The problem is that all of this only applies to a small audience, and those on the outside lack context.

    This is why parts of your presentation often get quoted out of context, and demos you showed to make a point get presented as endorsed by you, missing the original point.

    In essence: conferences are cliquey by design. That’s OK, after all people pay to go to be part of that select group and deserve to get the best out of it. You managed to score a ticket – you get to be the first to hear and the first to talk about it with the others there.

    It gets frustrating when parts of the conference get disseminated over social media. Many tweets talking about the “most amazing talk ever” or “I can’t believe the cool thing $x just showed” are not only noise to the outside world, they also can make people feel bad about missing out.

    This gets exacerbated when you release your slides and they don’t make any sense, as they lack notes. Why should I get excited about 50MB of animated GIFs, memes and hints of awesome stuff? Don’t make me feel bad – I already feel I am missing out as I got no ticket or couldn’t travel to the amazing conference.

    misleading infographic

    If you release your talk materials, make them count. These are for people on the outside. Whilst everybody at an event will ask about the slides, the number of people really looking at them afterwards is much smaller than the ones who couldn’t go to see you live.

    Waiting for recordings is frustrating

    The boilerplate answer to people feeling bad about not getting what the whole Twitter hype is about is “Oh, the videos will be released, just wait till you see that”. The issue with that is that in many cases video production takes time and there is a delay of a few weeks up to months between the conference and the video being available. Which is OK, good video production is hard work. It does, however, water down the argument that the outside world will get the hot, cool information. By the time the video of the amazing talk is out we’re already talking about another unmissable talk happening at another conference.

    Having a video recording of a talk is the best possible way to give an idea of how great the presentation was. It also expects a lot of dedication from the viewer. I watch presentation videos in my downtime – on trains, in the gym and so on. I’ve done this for a while but right now I find so much being released that it becomes impossible to catch up. I just deleted 20 talks from my iPod unwatched as their due date had passed: the cool thing the presenter talked about is already outdated. This seems a waste, both for the presenter and the conference organiser who spent a lot of time and money on getting the video out.

    Asynchronous presenting using multiple channels

    Here’s something I try to do and I wish more presenters did. As a great presenter you should be aware that you might involuntarily cause discontent and frustration outside the conference: people talk about the cool stuff you did without knowing what you did.

    Instead of only delivering the talk, publish a technical post covering the same topic you talked about. Prepare the post using the materials you collected in preparation of your talk. If you want to, add the slides of your talk to the post. Release this post on the day of your conference talk using the hashtag of the conference and explaining where and when the talk happens and everybody wins:

    • People not at the conference get the gist of what you said instead of just soundbites they may quote out of context
    • You validate the message of your talk – a few times I re-wrote my slides after really trying to use the technology I wanted to promote
    • You get the engagement from people following the hashtag of the conference and give them something more than just a hint of what’s to come
    • You support the conference organisers by drumming up interest with real technical information
    • The up-to-date materials you prepared get heard web-wide when you talk about them, not later when the video is available
    • You re-use all the materials that might not have made it into your talk
    • Even when you fail to deliver an amazing talk, you managed to deliver a lot of value to people in and out of the conference

    For extra bonus points, write a post right after the event explaining how it went and what other parts about the conference you liked. That way you give back to the organisers and you show people who went there that you were just another geek excited to be there. Who knows, maybe your materials and your enthusiasm might be the kick some people need to start proposing talks themselves.

    Write less, achieve meh?

    Wednesday, June 4th, 2014

    In my keynote at HTML5DevConf in San Francisco I talked about a pattern of repetition those of us who’ve been around for a while will have encountered, too: every few years development becomes “too hard” and “too fragmented” and we need “simpler solutions”.

    chris in suit at html5devconf

    In the past, these were software packages, WYSIWYG editors and CMS that promised to deliver “to all platforms without any code overhead”. Nowadays we don’t even wait for snake-oil salesmen to promise us the blue sky. Instead we do this ourselves. Almost every week we release new, magical scripts and workflows that solve all the problems we have, for all the new browsers and with great fall-backs for older environments.

    Most of these solutions stem from fixing a certain problem and – especially in the mobile space – far too many stem from trying to simulate an interaction pattern of native applications. They do a great job, they are amazing feats of coding skills and on first glance, they are superbly useful.

    It gets tricky when problems come up and don’t get fixed. This – sadly enough – is becoming a pattern. If you look around GitHub you find a lot of solutions that promise utterly frictionless development, with many an unanswered issue or un-merged pull request. Even worse, instead of filing bugs there is a pattern of creating yet another solution that fixes all the issues of the original one. People simply should replace the old one with the new one.

    Who replaces problematic code?

    All of this should not be an issue: as a developer, I am happy to discard and move on when a certain solution doesn’t deliver. I’ve changed my editor of choice a lot of times in my career.

    The problem is that completely replacing solutions expects a lot of commitment from the implementer. All they want is something that works, preferably something that fixes the current problem. Many requests on Stackoverflow and other help sites don’t ask for the why, but just want a how: what can I use to fix this right now, so that my boss shuts up? A terrible question that developers of every generation seem to repeat, and one that almost always results in unmaintainable code with lots of overhead.

    That’s when “use this and it works” solutions become dangerous.

    First of all, these tell those developers that there is no need to ever understand what you do. Your job seems to be to get your boss off your back or to make that one thing in the project plan – that you know doesn’t make sense – work.

    Secondly, if we found out about issues of a certain solution and considered it dangerous to use (cue all those “XYZ considered dangerous” posts) we should remove and redirect them to the better solutions.

    This, however, doesn’t happen often. Instead we keep them around and just add a README that tells people they can use our old code and we are not responsible for results. Most likely the people who have gotten the answer they wanted on the Stackoverflows of this world will never hear how the solution they chose and implemented is broken.

    The weakest link?

    Another problem is that many solutions rely on yet more abstractions. This sounds like a good plan – after all we shouldn’t re-invent things.

    However, it doesn’t really help an implementer on a very tight deadline if our CSS fix requires them to learn all about Bower, node.js, npm, SASS, Ruby or whatever else first. We can not just assume that everybody who creates things on the web is as involved in its bleeding edge as we are. True, a lot of these tools make us much more efficient and are considered “professional development”, but they are also very much still in flux.

    We can not assume that all of these dependencies work and make sense in the future. Neither can we expect implementers to remove parts of this magical chain and replace them with their newer versions – especially as many of them are not backwards compatible. A chain is as strong as its weakest link, remember? That also applies to tool chains.

    If we promise magical solutions, they’d better be magical and get magically maintained. Otherwise, why do we create these solutions? Is it really about making things easier or is it about impressing one another? Much like entrepreneurs shouldn’t be in love with being an entrepreneur but instead love their product we should love both our code and the people who use it. This takes much more effort than just releasing code, but it means we will create a more robust web.

    The old adage of “write less, achieve more” needs a re-vamp to “write less, achieve better”. Otherwise we’ll end up with a world where a few people write small, clever solutions for individual problems and others pack them all together just to make sure that really everything gets fixed.

    The overweight web

    This seems to be already the case. When you see that the average web site according to HTTParchive is 1.7MB in size (46% cacheable) with 93 resource requests on 16 hosts then something, somewhere is going terribly wrong. It is as if none of the performance practices we talked about in the last few years have ever reached those who really build things.

    A lot of this is legacy-browser baggage. Many times you see posts and solutions like “This new feature of $newestmobileOS is now possible in JavaScript and CSS – even on IE8!”. This scares me. We shouldn’t block out any user of the web. But neither should we take bleeding edge, computationally heavy and form-factor dependent code and give it to outdated environments. The web is meant to work for all – not to work the same for all, and certainly not to become slow and heavy in older environments because of some misunderstanding of what “support” means.
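    One cheap way to avoid shipping heavy code to environments that cannot use it is to test for a capability before enhancing, instead of sending the same polyfill-laden bundle to everyone. A minimal sketch – the helper name and the fake objects are mine, not from any library:

```javascript
// Hypothetical helper: only apply an enhancement when the
// environment really offers the capability it depends on.
function enhanceIfSupported(scope, featureName, enhance) {
  if (scope && featureName in scope) {
    enhance();
    return true;
  }
  // Older environments simply keep the working baseline experience.
  return false;
}

// Fake "window" objects standing in for an old and a new browser:
var oldBrowser = {};
var newBrowser = { requestAnimationFrame: function () {} };

enhanceIfSupported(oldBrowser, 'requestAnimationFrame', function () {});
// returns false: no enhancement, no wasted bytes or CPU
enhanceIfSupported(newBrowser, 'requestAnimationFrame', function () {});
// returns true: the enhancement runs
```

    The baseline keeps working everywhere; the extra weight only reaches browsers that can actually do something with it.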

    Redundancy denied

    If there is one thing this discouraging statistic shows, it is that the future redundancy of our solutions is a myth. Anything we create that “fixes problems with current browsers” and “should be removed once browsers get better” is much more likely to clog up the pipes forever than to be deleted. Is it – for example – really still necessary to fix alpha transparency in PNGs for IE5.5 and 6? Maybe, but I am pretty sure that only a very small percentage of the web sites in these statistics really still have users locked into these browsers.

    The reason redundancy gets denied is that we solved the immediate problem with a magical solution – we cannot expect implementers to revisit their solutions later to see whether they are still needed. Many developers don’t even have the chance to do so – projects in agencies get handed over to the client when they are done and the next project with a different client starts.

    Repeating XHTML mistakes

    One of the main reasons HTML5 was invented was to create a more robust web by being more lenient with markup. If you remember, XHTML sent as XML (as it should have been, but hardly ever was, since IE6 didn’t support it) had the problem that a single HTML syntax error or an un-encoded ampersand would result in an error message and nothing would get rendered.

    This was deemed terrible, as our end users get punished for something they can’t control or change. That’s why the HTML5 parsing algorithm in newer browsers is much more lenient and – for example – closes tags for you.

    Nowadays, the yellow screen of death showing an XML error message is hardly ever seen. Good, isn’t it? Well, yes, it would be – if we had learned from that mistake. Instead, we now make a lot of our work reliant on JavaScript, resource loaders and many libraries and frameworks.

    This should not be an issue – the “JavaScript not available” use case is a very small one, consisting mostly of users whose sysadmins turned JavaScript off and people who prefer the web without it.

    The “JavaScript caused an error” use case, on the other hand, is very much alive and will probably never go away. So many things can go wrong: resources not being available, network timeouts, mobile providers and proxies messing with your JavaScript, or plain parsing errors because a resource was served with the wrong HTTP headers. In essence, we are relying on a technology that is much less reliable than XML ever was – and we feel very clever doing so. The more dependencies we have, the more likely it is that something goes wrong.
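    Being paranoid about dependencies can be very cheap. A sketch of what defensive use of a third-party script could look like – the `analytics` global and the wrapper name are hypothetical, stand-ins for whatever script your page pulls in:

```javascript
// Paranoid use of a third-party global: the script may have failed
// to load, been mangled by a proxy, or thrown during parsing.
// Whatever happened to it, it must never break the page itself.
function safeAnalyticsTrack(eventName) {
  try {
    if (typeof analytics !== 'undefined' &&
        typeof analytics.track === 'function') {
      analytics.track(eventName);
      return true;
    }
  } catch (ignored) {
    // A broken tracker is an inconvenience, not a reason to fail.
  }
  return false;
}
```

    Three lines of guarding turn “the whole page is dead” into “one optional feature silently degraded”.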

    None of this is an issue if we write our code in a paranoid fashion. But we don’t. Instead we fall for the siren song of abstractions telling us that everything will be more stable, faster and cleaner if we rely on a certain framework, build script or packaging solution.

    Best of breed with basic flaws

    One eye-opener for me was judging the Static Showdown Hackathon. I was very excited about the amazing entries and what people managed to achieve solely with HTML, CSS and JavaScript. What annoyed me, though, was the lack of any code dealing with possible failures. Now, I understand that this is hackathon code and people wanted to roll things out quickly, but I see a lot of the same basic mistakes in many live products:

    • Dependency on a certain environment – many examples only worked in Chrome, some only in Firefox. I didn’t even dare to test them on a Windows machine. In many cases these dependencies were not based on functional necessity – the code simply assumed a certain browser-specific feature was available and tried to access it. This is especially painful when the solution additionally loads lots of libraries that promise cross-browser functionality. Why use those if you’re not planning to support more than one browser?
    • Complete lack of error handling – many things can go wrong in our code. Doing nothing when, for example, data fails to load, and presenting the user with an infinite loading spinner instead, is not a nice thing to do. Almost every technology we have offers both a success and an error case. We seem to spend all our time on the success case, whilst it is in the error case that we are much more likely to lose users and their faith. If an error is not reported at all, or is reported as the user’s fault, we are not writing intelligent code. Thinking paranoid is a good idea. Telling users that something went wrong, what went wrong and how they can retry is not a luxury – it is what building a user interface means. Any data loading that doesn’t refresh the view should have an error case and a timeout case – connections are the things most likely to fail.
    • A lack of very basic accessibility – many solutions I encountered relied on touch alone, and provided incredibly small touch targets while doing so. Others showed results far away from the original action without changing the original button or link. On a mobile device this was incredibly frustrating.
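    The error-handling point in particular is cheap to get right. A minimal sketch, assuming a promise-returning loader and three user-visible outcomes – all the names here are hypothetical, not from any framework:

```javascript
// Normalise any data load into exactly one of three states the UI
// can render: success, error or timeout. The returned promise never
// rejects, so rendering code only has to switch on `state`.
function loadForUI(startLoad, timeoutMs) {
  return new Promise(function (resolve) {
    var settled = false;
    var timer = setTimeout(function () {
      if (!settled) {
        settled = true;
        resolve({ state: 'timeout' });
      }
    }, timeoutMs);
    startLoad().then(function (data) {
      if (!settled) {
        settled = true;
        clearTimeout(timer);
        resolve({ state: 'success', data: data });
      }
    }, function (error) {
      if (!settled) {
        settled = true;
        clearTimeout(timer);
        resolve({ state: 'error', error: error });
      }
    });
  });
}

// Every state maps to a message telling the user what happened and
// what to do next – never an infinite spinner.
function messageFor(result) {
  if (result.state === 'success') return 'Loaded.';
  if (result.state === 'timeout') return 'The connection timed out. Tap to retry.';
  return 'Loading failed. Tap to retry.';
}
```

    The point is not the code but the contract: every single load ends in a state the user can see and act on.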

    Massive web changes ahead

    All of this worries me. Instead of following basic protective measures to make our code more flexible and deliver great results to all users (remember: not the same results to all users – that would limit the web), we have become dependent on abstractions, and we keep hiding more and more code in loaders and packaging formats. A lot of this code is redundant and fixes problems of the past.

    The main reason for this is a lack of control on the web. And this is very much changing now. The flawed solutions we had for offline storage (AppCache) and widgets on the web (many, many libraries creating DOM elements) are getting new, exciting and above all control-driven replacements: ServiceWorker and WebComponents.

    Both of these are the missing puzzle pieces for really going to town with creating applications on the web. With ServiceWorker we can not only create apps that work offline, but also deal with a lot of the issues we now solve with dependency loaders. WebComponents allow us to create reusable widgets that are either completely new or inherit from existing HTML elements or from one another. These widgets run in the rendering flow of the browser instead of us trying to make our JavaScript and DOM rendering perform inside it.
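    Both technologies are also progressive enhancements by design: you can feature-test for them and fall back gracefully. A hedged sketch of a guarded registration – the wrapper function is mine, while `navigator.serviceWorker.register` is the real API:

```javascript
// Register offline support only where ServiceWorker exists.
// Browsers without it simply keep the normal online behaviour.
function registerOfflineSupport(nav, scriptUrl) {
  if (!nav || !('serviceWorker' in nav)) {
    return Promise.resolve(false);
  }
  return nav.serviceWorker.register(scriptUrl).then(
    function () { return true; },
    function () { return false; } // registration failed – still no breakage
  );
}

// In a page you would call:
// registerOfflineSupport(navigator, '/sw.js');
```

    No library, no loader, no penalty for older environments – the control sits in the browser, where it belongs.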

    The danger of WebComponents is that they allow us to hide a lot of functionality in a simple element. Instead of just shifting our DOM widget solutions over to the new model, this is a great time to clean up what we do, find the best-of-breed solutions and create components from them.

    I am confident that good things are happening there. Discussions sparked by the Edge Conference’s WebComponents and Accessibility panels have already resulted in some interesting guidelines for accessible WebComponents.

    Welcome to the “Bring your own solution” platform

    The web is and will stay the “bring your own solution” platform. There are many solutions to the same problem, each with its own problems and benefits. We can work together to mix and match them and create a better, faster and more stable web. We can only do that, however, when we allow the bricks we build these solutions from to be detachable and reusable. Much like glueing Lego bricks together means using them wrong, we should stop creating “perfect solutions” and create sensible bricks instead.

    Welcome to the future – it is in the browsers, not in abstractions. We don’t need to fix the problems for browser makers, but should lead them to give us the platform we deserve.