
  • Archive for the ‘General’ Category

    7 Reasons why EdgeConf rocks and why you should be part of it

    Monday, July 13th, 2015

    Having just been there and seeing that the coverage is available today, I wanted to use this post to tell you just how amazing EdgeConf is as a conference, a concept and a learning resource. So here are seven reasons why you should care about EdgeConf:

    Reason 1: It is a fully recorded think-tank

    Unlike other conferences, where you hear great presentations and have meetings and chats of high significance but have to wait for weeks for them to come out, EdgeConf is live in its coverage. Everything that’s been discussed has a live comment backchannel (this year it was powered by Slack), there are dedicated note-takers and the video recordings are transcribed and published within a few days. The talks are searchable that way and you don’t need to sift through hours of footage to find the nugget of information you came for.

    Reason 2: It is all about questions and answers, not about the delivery and showing off

    The format of EdgeConf is a Q&A session with experts, moderated by another expert. A few chosen experts are on stage, but everybody in the audience has the right to answer and be part of it. This happens in normal conference Q&A in any case; Edge makes sure it feels natural instead of disruptive. There is no space for pathos and grandstanding at this event; it is all about facts.

    Reason 3: The audience is a gold-mine of knowledge and experts to network with

    Edge attracts the most dedicated people when it comes to the newest technology and ideas on the web. Not blue-sky “I know what will be next” thinkers, but people who want to make the current state work and point towards what’s next. This can be intimidating (it is to me), but for networking and having knowledgeable people to bounce your ideas off, this is pure gold.

    Reason 4: The conference is fully open about the money involved

    Edge is a commercial conference with a very affordable ticket price. At the end of the conference, you see a full disclosure of who paid for what and how much money came in. Whatever is left over gets donated right there and then to a good cause. This year, the conference raised a massive amount of money for Code Club. This means that your sponsorship is obvious and people see how much you put in. This is better than getting a random label like “platinum” or “silver”. People see how much things cost, and get to appreciate it more.

    Reason 5: The location is always an in-the-trenches building

    Instead of being in a hotel or convention centre that looks swanky but has no working WiFi, the organisers partner with tech companies to use their offices. That way you get up close to Google, Facebook, or whoever they manage to partner with, and meet local developers on their own turf. This is refreshingly simple and means you get to meet folks who don’t get time off to go to conferences, but can drop by for a coffee.

    Reason 6: If you can’t be there, you still can be part of this

    All the panels of this conference are live streamed, so even if you can’t make it, you can sit in and watch the action. You can even take part on Slack or Twitter, or have a dedicated screening in your office to watch it. This is a ridiculously expensive and hard-to-pull-off feat that many conferences wouldn’t even want to attempt. I think we should thank the organisers for going that extra step.

    Reason 7: The organisers

    The team behind Edge is extremely dedicated and professional. I rushed my part this year, as I was in between other conferences, and I feel sorry, and like a slacker, in comparison to what the organisers pulled off and how they herded presenters, moderators and audience. My hat is off to them, as they do not make any money from this event. If you get a chance to thank them, do so.

    Just go already

    When the next Edge is announced, don’t hesitate. Try to get your tickets, or at least make sure you have time to watch the live feeds and take part in the conversations. If you are thinking of sponsoring events, this is a great one to get seen at, and there is no confusion as to where the money goes.

    Slimming down the web: Remove code to fix things, don’t add the “clever” thing

    Wednesday, July 8th, 2015

    Today we saw a new, interesting service called Does it work on Edge? which allows you to enter a URL, and get that URL rendered in Microsoft Edge. It also gives you a report in case there are issues with your HTML or CSS that are troublesome for Edge (much like Microsoft’s own service does). In most cases, this will be browser-specific code like prefixed CSS. All in all this is a great service, one of many that make our lives as developers very easy.

    If you release something on the web, you get feedback. When I tweeted enthusiastically about the service, one of the answers was by @jlbruno, who was concerned about the form not being keyboard accessible.

    The reason for this is simple: the form on the site isn’t really one, insofar as there is no submit button of any kind. The button on the page is an anchor pointing nowhere, and the input element itself has a keypress event attached to it (inline, even):

    screenshot of the page source code

    There’s also another anchor that points nowhere: a loading message with a display of none. Once you click the first one, this one gets a display of block and visually replaces the original link. This is great UX, telling you something is going on, but it only really works when I can see it. It also gives me a link element that does nothing.

    Once the complaint got heard, the developers of the site took action, added an autofocus attribute to the input field, and proudly announced that the form is now keyboard accessible.

    Now, I am not having a go here at the developers of the site. I am more concerned that this is pretty much the state of web development we have right now:

    • The visual outcome of our tools is the most important aspect – make it look good across all platforms, no matter how.
    • As developers, we most likely are abled individuals with great computers and fast connections. Our machines execute JavaScript reliably and we use a trackpad or mouse.
    • When something goes wrong, we don’t analyse what the issue is; instead, we look for a tool that solves it for us. The fancier that tool is, the better.

    How can this be keyboard accessible?

    In this case, the whole construct is far too complex for the job at hand. If you want to create something like this and make it accessible to keyboard and mouse users alike, the course of action is simple:

    • Use a form element with an input element and a submit button.
    • Use the REST URL of your service (which I very much assume this product has) as the action, and re-render the page when it is done.

    If you want to get fancy and not reload the page but keep everything in place, assign a submit handler to the form element, call preventDefault() and do all the JS magic you want to do (a sketch follows the list below):

    • You can still have a keypress handler on the input element if you want to interact with the entries as they happen. If you look at the code on the page now, all it does is check for the enter key. Hitting enter in a form with a submit button or a button element submits the form, so this whole chunk of code never has to be written; you simply need to understand how forms work.
    • You can change the value of a submit button when the submit handler kicks in (or the innerHTML of the button) and make it inactive. This way you can show a loading message and prevent duplicate form submissions.
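
    Putting these pieces together, a minimal sketch could look like the following. The action URL, IDs and wording are invented for illustration; the real service would use its own endpoint:

    <form id="check" action="https://example.com/api/check" method="get">
      <label for="url">URL to test</label>
      <input type="url" name="url" id="url" required>
      <button type="submit">Check it</button>
    </form>
    <script>
    var form = document.getElementById('check');
    form.addEventListener('submit', function (event) {
      // Stop the full page reload; without JavaScript, the form
      // still submits normally and the server re-renders the page.
      event.preventDefault();
      var button = form.querySelector('button');
      // Show a loading message and prevent duplicate submissions.
      button.innerHTML = 'Loading…';
      button.disabled = true;
      // … do the JS magic (XHR/fetch to the service) here …
    });
    </script>

    Keyboard users get a real submit button, screenreader users get a labelled input, and the JavaScript is an enhancement rather than a requirement.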

    What’s wrong with autofocus?

    Visually and functionally, on a browser that was lucky enough not to encounter a JavaScript error until now, the autofocus solution does look like it does the job. What it really does, however, is shift the focus of the document to the input field once the page has loaded. A screenreader user would thus never learn what the site is about, as you skip the header and all the information. As the input element also lacks a label, there isn’t even any information as to what the user is supposed to enter here. You send that user into a corner without any means of knowing what’s going on. Furthermore, keyboard users are primed and ready to start navigating around the page as soon as it loads. By hijacking the keyboard navigation and automatically sending it to your field, you confuse people. Imagine pressing the TV listings button on a TV and instead being sent to the poodle grooming channel every time you do it.

    The web is obese enough!

    So here’s my plea in this: let’s break that pattern of working on the web. Our products don’t get better when we use fancier code. They get better when they are easier to use for everybody. The fascinating bit here is that by understanding how HTML works and what it does in browsers, we can avoid writing a lot of code that looks great but breaks very easily.

    There is no shortage of articles lamenting how the web is too slow, too complex and too big on the wire compared to native apps. We can blame tools for that, or we can do something about it. And maybe not reaching for a ready-made solution or the first result on Stack Overflow is the right way to do that.

    Trust me, writing code for the web is much more rewarding when it is your code and you learned something while you implemented it.

    Let’s stop adding more when doing the right thing is enough.

    Over the Edge: Web Components are an endangered species

    Wednesday, July 1st, 2015

    Last week I ran the panel and the web components/modules breakout session at the excellent Edge Conference in London, England, and I think I did quite a terrible job. The reason was that the topic is too large, too fragmented and too broken to be taken on as one bundle.

    If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

    Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating what should be done where to create a well-performing, easy to maintain and all around extensible complex app for the web. Along the way we can throw around lots of tools and ideas like NPM and ES6 imports, or, as Alex Russell said on the panel: “tooling will save you”.

    It does. But that was always the case. When browsers didn’t support CSS, we had Dreamweaver to create horribly nested tables that achieved the same effect. There is always a way to make browsers do what we want them to do. In the past, we did a lot of convoluted things client-side with libraries. With the advent of Node and others, we now have even more environments in which to innovate and release impressive, clever, “not ready for production” solutions.

    When it comes to componentising the web, the rabbit hole is deep and also a maze. Many developers don’t have time to even start digging, so they use libraries like Polymer or React instead, call it a day and declare that the “de facto standard” (a term that makes my toenails crawl up; layout tables were a “de facto standard”, and so was Flash video).

    React did a genius thing: by virtualising the DOM, it avoided a lot of the problems with browsers. But it also means that you forfeit all the good things the DOM gives you in terms of accessibility and semantics/declarative code. It simply is easier to write a <super-button> than to create a fragment for it or write it in JavaScript.
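
    For comparison, defining such an element natively at the time meant using the v0 custom elements API (document.registerElement, which has since been superseded by customElements.define). A rough sketch, with <super-button> being an invented example name:

    // v0 custom elements API sketch; the element name and its
    // behaviour are made up for illustration.
    var proto = Object.create(HTMLElement.prototype);
    proto.createdCallback = function () {
      // Expose some semantics a virtual DOM approach would hide.
      this.setAttribute('role', 'button');
      this.textContent = this.getAttribute('label') || 'Super!';
    };
    document.registerElement('super-button', { prototype: proto });

    In the markup, you would then simply write <super-button label="Click me"></super-button>.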

    Of course, either is easy for us clever and amazing developers, but the fact is that the web is not for developers. It is a publishing platform, and we are moving away from that concept at a ridiculous pace.

    And whilst React gives us all the goodness of Web Components now, it is also a library by a commercial company. That it is open source doesn’t make much of a difference: YUI showed that a truckload of innovation can go into “maintenance mode” very quickly when a company’s direction changes. I have high hopes for React, but I am also worried about depending on a single company.

    Let’s rewind and talk about Web Components

    Let’s do away with modules and imports for now, as I think this is a totally different discussion.

    I always loved the idea of Web Components – allowing me to write widgets in the browser that work with it rather than against it is an incredible idea. Years of widget frameworks trying to get the correct performance out of a browser whilst empowering maintainers would come to a fruitful climax. Yes, please, give me a way to write my own controls, inherit from existing ones and share my independent components with other developers.

    However, in four years, we haven’t got much to show. When we asked the very captive and elite audience of EdgeConf about Web Components, nobody raised a hand to say they were using them in real products. People used either React or Polymer, as there is still no other way to use Web Components in production. When we tried to find examples in the wild, the meagre harvest was GitHub’s time element. I do hope that this was not all we wrote, and that many a company is ready to go with Web Components. But most discussions I had ended the same way: people are interested, tried them out once and had to bail out because of the lack of browser support.

    Web Components are a chicken and egg problem where we are currently trying to define the chicken and have many a different idea what an egg could be. Meanwhile, people go to chicken-meat based fast food places to get quick results. And others increasingly mention that we should hide the chicken and just give people the eggs leaving the chicken farming to those who also know how to build a hen-house. OK, I might have taken that metaphor a bit far.

    We all agreed that XHTML2 sucked, was overly complicated, and defined without the input of web developers. I get the weird feeling that Web Components and modules are going in the same direction.

    In 2012 I wrote a longer post as an immediate response to Google’s big announcement of the foundation of the web platform following Alex Russell’s presentation at Fronteers 11 showing off what Web Components could do. In it I kind of lamented the lack of clean web code and the focus on developer convenience over clarity. Last year, I listed a few dangers of web components. Today, I am not too proud to admit that I lost sight of what is going on. And I am not alone. As Wilson’s post on Mozilla Hacks shows, the current state is messy to say the least.

    We need to enable web developers to use “vanilla” web components

    What we need is a base to start from: in the browser, and in a browser that users already have, one that doesn’t ask them to turn on a flag. Without that, Web Components are doomed to become a “too complex” standard that nobody implements, relying on libraries instead.

    During the breakout session, one of the interesting proposals was to turn Bootstrap components into web components and start with that. Tread the cowpath of what people use and make it available to see how it performs.

    Of course, this is a big gamble, and it means consensus across browser makers. But we had that with HTML5. Maybe there is a chance for harmony amongst competitors for the sake of an extensible and modularised web that is not dependent on ES6 availability across browsers. We’re probably better off implementing one sci-fi idea at a time.

    I wish I could be more excited or positive about this, but it left me with a sour taste in my mouth to see that EdgeConf, that hot-house of web innovation and think-tank of many very intelligent people, was as confused as I was.

    I’d love to see a “let’s turn it on and see what happens” instead of “but, wait, this could happen”. Of course, it isn’t that simple – and the Mozilla Hacks post explains this well – but a boy can dream, right? Remember when using HTML5 video was just a dream?

    That stream of tweets at conferences…

    Sunday, June 21st, 2015

    A few weeks ago, I wrote the That One Tweet post, explaining how one tweet managed to puncture my balloon of happy and make me question whether my work is appreciated. All of this is caused by the gremlin of self-doubt living in all of us, and the post was mostly a reminder to tell it to mind its own business.

    As I am currently writing event reports, I think it would be terrible of me not to mention the other side of this. I want to take this opportunity to deeply and thoroughly thank the people who use Twitter to report on, comment on and encourage the presenters and organisers of events. You rock, so here’s a hedgehog wizard to tell you as much:

    hedgehog dressed as a wizard

    I was very humbled and lucky to be at a few events in the last weeks where the audience used Twitter not only to post selfies and tell the world where they are, but also to report and keep a running commentary on talks. Others delivered beyond expectation by doing sketchnotes and posting those. I am humbled by and jealous of your creativity and dedication. Having good Twitter feedback has numerous effects that inflate my happy balloon:

    • It is superbly rewarding to see people deeply care about what you do.
    • It is insightful to see the tidbits of information people extract from your talks and what they consider quote-worthy. Yes, that can also be scary, but it is a good reminder to explain some bits in better detail next time.
    • It makes my professional life so much easier as I can collect feedback and show it to my managers and outreach departments.
    • It allows me to show people that a personal touch and a presenter showing his or her views is much more beneficial to a company than a very polished slide deck people have to present.
    • It shows me that I reach people with what I do. Feedback is scarce, and whilst immediate feedback tends to be highly polarised, it gives me something to ponder.
    • It gives me a fuzzy feeling when people find the need to align themselves with an event and tell the world how much of a good time they have. We have no lack of soulless events that people go to because they get a random ticket or to drop off as many business cards as they can. It feels great to see attendees go all in and praise an event for being different.
    • ROI of events is tough to measure. By being able to quote tweets and show people’s blog posts and photos, I have ammunition to show why my time there and our money in the support pot of events is worth it.

    So, here’s to the people who give feedback on talks and events on Twitter. You make me happy like this puppy:

    Incredibly happy puppy

    Keep up the great work; you can be sure that it is very much appreciated by presenters and conference organisers alike.

    UA Sniffing issue: Outdated PageSpeed sending WebP images to Microsoft Edge

    Monday, June 8th, 2015

    PageSpeed by Google is a great way to add clever performance enhancements to your site without having to do a lot by hand. Not surprisingly, a lot of people installed it when it came out. Sadly enough, people don’t upgrade it when Google does, which means there are a lot of sub-optimal installations out there.

    This wouldn’t be an issue if the old versions didn’t use user agent sniffing to try to detect the browser, which leads to a lot of false positives.

    Dogs sniffing each others backsides
    Figure 1: User Agent Sniffing in action

    One of these false positives is that Microsoft Edge Mobile is detected as Chrome, which means that PageSpeed converts images to WebP. MS Edge does not support WebP, which is why you’ll have broken images:

    broken images on faz.net

    The fix: upgrade PageSpeed

    The fix is easy: just upgrade your PageSpeed to the latest version, as the team has moved on from UA sniffing, and there should not be any backwards compatibility issues. Upgrading can be done via package managers on Apache; with NGINX, it requires compilation. Version 1.8 was the first version to turn on WebP transcoding by default; version 1.9 fixed the detection by working off the Accept header rather than the UA string.
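
    The idea behind Accept header negotiation is straightforward; here is a minimal Node.js sketch of it (the file names and port are invented for illustration, and in a real setup PageSpeed or the server config would handle this for you):

    // Serve WebP only to clients that advertise support for it in
    // their Accept header; no UA sniffing needed.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
      var accept = req.headers.accept || '';
      // Supporting browsers send something like
      // "image/webp,image/*,*/*;q=0.8".
      if (accept.indexOf('image/webp') !== -1) {
        res.writeHead(200, { 'Content-Type': 'image/webp' });
        fs.createReadStream('hero.webp').pipe(res);
      } else {
        res.writeHead(200, { 'Content-Type': 'image/jpeg' });
        fs.createReadStream('hero.jpg').pipe(res);
      }
    }).listen(8080);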

    How to test if your server does it right

    If you want to test whether a server does the right thing (using Accept headers instead of UA sniffing), use MS Edge.

    A quick and dirty way is also to change your user agent string to one of the following and surf to the site. This is effectively reverse sniffing: it is OK for catching faulty detection scripts, but not a good idea for real capability/interoperability testing.

    Mobile UA String for Edge (please, don’t use for sniffing)

    Mozilla/5.0 (Windows Phone 10; Android 4.2.1; Microsoft; NOKIA) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Mobile Safari/537.36 Edge/12.0

    Desktop UA String for Edge (please, don’t use for sniffing)

    Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0

    You can do this in most developer tools (I use the User Agent Switcher extension in Firefox, which is also available for Chrome). If you are on Windows/IE or MS Edge, you can go to the F12 developer tools and change the browser profile to “phone”.
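
    If you prefer to script the check, a rough Node.js sketch could request an image with the Edge desktop UA string and inspect what comes back. The host name and path here are invented; point it at an image on the site you want to test:

    // Request an image while pretending to be desktop Edge; a
    // Content-Type of image/webp in the answer means the server
    // still sniffs the UA string.
    var https = require('https');

    https.get({
      hostname: 'example.com',
      path: '/images/hero.jpg',
      headers: {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) ' +
          'AppleWebKit/537.36 (KHTML, like Gecko) ' +
          'Chrome/39.0.2171.71 Safari/537.36 Edge/12.0'
      }
    }, function (res) {
      console.log(res.headers['content-type']);
    });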

    Got it fixed? Thanks! Tell us so we can praise you

    If you upgraded and fixed this interop issue, feel free to ping me or @MSEdgeDev and we’ll be happy! Let’s fix the web, one bad sniff at a time.