Christian Heilmann


Archive for July, 2017

How can we make more people watch conference videos?

Wednesday, July 26th, 2017

It is incredible how far we’ve come in the coverage of events. In the past, I recorded my own talks as audio, as not many conferences offered video recordings (and wrote about this as a good idea in the developer evangelism handbook). These days I find that I don’t have to do this any more, as most conferences and even meetups record talks. Faster upload speeds and simple, free hosting on YouTube and other platforms made that possible.

And yet it is expensive and a lot of work to record and publish videos of your conference. And if you don’t make any money with them, it is a bit of advertising for your event with a lot of overhead. That’s why it is a shame to see just how low the viewing numbers of some great conference videos are. When talking to conference organisers, I heard some astonishingly low numbers. The only thing that seems to boost them is releasing the videos one after the other with dedicated social media promotion. Which, again, is a lot of extra effort.

In order to see what would make people watch more conference talks, I took a quick Twitter poll:

Here are the quick results of 344 votes:

  • 29% chose shorter videos without Q&A
  • 37% would like transcripts with timestamps
  • 22% would like to have videos offline
  • 12% wanted captions.

I love watching conference talk videos. I watch them offline, on an iPod in the gym or on planes and trains. Basically, when I am not able to do anything else, they are a great way to spend the time and learn something. There are a few things to consider to make this worth my while though:

  • The talk needs to make sense on a small screen. Lots of live code in a terminal with a 12px font doesn’t. That is not to say these aren’t good talks. They just don’t work as a video.
  • The talk needs to be available offline (I use YouTube DL to download YouTube videos, some publishers on Vimeo offer downloads, Channel 9 always has the videos to download)
  • The talk needs to be self-contained – it is frustrating to hear references to things I should know or to bits that happened at the same event that I wasn’t part of. It is even more annoying to sit through a Q&A session where the mic takes ages to arrive or the presenter answers without me knowing what the question was.

I’ve written about the Q&A part of videos in detail before and I strongly believe that cutting a standard Q&A will result in more viewers and happier presenters. For starters, the videos will be shorter and it feels like less of an effort to watch the talk when it is 25 minutes instead of 45-50.

At technical events I am OK with some of these annoyances. After all, it is more important to entertain the audience at the event. And it is amazing when presenters take the time and effort to see other talks and reference them. However, there is a lot of benefit in considering the quality and consumption of the recording, too.

Having recordings of conference talks is an amazing gift to the community. People who can’t afford to go to events or even to travel can still stay up-to-date and learn about topics to deep-dive into by watching videos. Easy-to-consume, short and to-the-point videos can be a great way to increase the diversity of our market.

“You are here to talk to the online audience”

When I spoke at some TEDx events, this was the advice of the speaking coaches and organisers. TED is a brand known for high-quality online content. And it is almost unaffordable to go to the main TED events. Which makes this advice kind of odd, but their success online shows that they are on to something.

TED talks are much shorter than the average conference talk. They are more performance than presentation. And they come with transcripts and are downloadable.

Now, we can’t have only these kinds of talks at events. But maybe it is a good plan to do some editing on the recordings and turn them into more of an experience than a record of what happened on stage, delivered as soon as possible. This means extra work and some overhead, for sure. But I wonder how much of it could be automated already.

In addition to the poll results there were some other good points in the comments on Twitter and Facebook.

Less of the speaker upper body and more of the slides. Or slides to download. Also, speakers who pace themselves to sound good at 1.5x speed.

It seems to be pretty common for people who spend time watching talks to speed them up. This is an interesting concept. Good editing between slides and presenter was a wish a lot of people had. It shouldn’t be hard to publish the slides along with the video, and it is something presenters should consider doing more.

Not on the list but “editing” plus a solid couple of paragraphs of what the talk covers and why I should or shouldn’t watch it.

This is another easy thing for presenters to do. We’re always asked to offer a description and title of the talk that should zing and get people excited. Providing a second one that is more descriptive to use with the video isn’t that much overhead.

For English spokers, most of the conferences, no problem. But for non English spokers, massive failure. Reading is really more easy trying to listen and understand. Some guys speaks really fast. So I can’t understand talks.

This is a common problem and a presenter skill to work on. Being understandable by non-native speakers is a huge opportunity. So, some pacing and avoiding slang references are always a good idea.

The possibility to download the videos on tablets, smartphones and laptops so I can see them during commuting time

Offering videos to download should not be too hard. If you’re not planning to sell them anyway, why not?

Also offline availabilty with chapter marks/timestamps.

I’ll vote for transcripts for skim reading to get to the gold nuggets. But sometimes a good speaker is an enjoyable 50min experience.

I’d rather read transcripts. I never get blocks of quiet.

Links to slides to follow along, or (even better) closed captions so I can play them muted.

If they were shorter and had a PowerPoint with main points to download after.

This, of course, is the big one. A lot of people asked for transcripts, chapters and time stamps. Either for accessibility reasons or just because it is easier to skim and jump to what is important to you. This costs time and effort.

And here we have a Catch-22: if not many people watch the recordings of an event, conference organisers and companies don’t want to spend that time and effort. Manual transcription, editing and captioning isn’t cheap.

The good news is that automated transcription has come on leaps and bounds in the last few years. With the need for voice commands on mobiles and home appliances, a lot of companies have concentrated on making it much better than it used to be.

YouTube has automated captions with editing functionality for free. Most cloud providers offer video insights.

One service that blew me away is VideoIndexer.

Video Indexer Interface

(Yes, this is by the company I work for, but it came as a surprise to me that this offering brings together many machine learning APIs in a simple interface.)

Using VideoIndexer, you not only generate an editable, time-stamped transcript, but you also get emotion recognition, image-to-text conversion of video content, speaker recognition and keyword extraction. That way you can offer an interface that allows people to jump to where they want to be without having to scrub through the video. I’d love to see more offerings like these, and I am sure there are quite a few out there already in use by big TV companies and sports broadcasters.

Summary

All in all, I am grateful to have the opportunity to watch talks from events I couldn’t be at, and I’m making an effort to be a better online citizen by providing better descriptions and being more aware of how what I am saying can be consumed as a video afterwards.

My favourite quote in the comments was from Tessa Mero:

Would be fun watching it with someone so we can discuss the content during/after the video. Need social engagement to make learning more fun.

Videos of talks are a great opportunity to learn something and have fun with your colleagues in the office. Pick a room, set up a machine connected to the beamer, get some snacks in, watch the talk and discuss how it applies to your work afterwards. Conference organisers spend a lot of effort recording talks, and presenters put a lot in to make the talk exciting and educational. And you can benefit from all of that for free.

Also published on Medium

Debugging JavaScript – console.loggerheads?

Saturday, July 8th, 2017

Over the last two days I ran a poll on Twitter asking people what they use to debug JavaScript. The options were:

  • console.log() which means you debug in your editor and add and remove debugging steps there
  • watches which means you instruct the (browser) developer tools to log automatically when changes happen
  • debugger; which means you debug in your editor but jump into the (browser) developer tools (sketched in code right after this list)
  • breakpoints which means you debug in your (browser) developer tools
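
To make the first and third options concrete, here is a minimal sketch; the function and variable names are made up for illustration, and watches and breakpoints are set in the developer tools rather than in the code:

    function applyDiscount(cart, voucher) {
      // console.log(): write values out and read them in the console
      console.log('applying voucher', voucher);

      // debugger;: if the developer tools are open, execution pauses here and you
      // can inspect cart, voucher and the call stack; otherwise it is ignored
      debugger;

      return cart.total - voucher.amount;
    }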

The reason was that, having worked with editors and developer tools in browsers, I was curious how much each of them is used. I also wanted to challenge my own impression of being a terrible developer for not using the great tools we have to the fullest. Frankly, I feel overwhelmed by the offerings and choices we have, and I felt that I am pretty much set in my ways of developing.

Developer tools for the web have come on leaps and bounds in the last few years, and browser makers put a lot of effort into them. They are seen as a sign of how important the browser is. The overall impression is that when you get the inner circle of technically savvy people excited about your product, the others will follow. Furthermore, making it easier to build for your browser and giving developers insights into what is going on should lead to better products running in your browser.

I love the offerings we have in browser developer tools these days, but I don’t quite find myself using all the functionality. Turns out, I am not alone:

The results of 3970 votes in my survey were overwhelmingly in favour of console.log() as a debugging mechanism.

Poll results: 67% console.log(), 2% watches, 15% debugger and 16% breakpoints.

Both the Twitter poll and its correlating Facebook post had some interesting reactions.

  • As with any overly simple poll about programming, a lot of people argued with the questions and rightfully pointed out that they use a combination of all of these.
  • There was also a lot of questioning why alert() wasn’t an option, as this is even easier than console.log().
  • There was quite some confusion about debugger; – it seems it isn’t that commonly known.
  • There was only a small amount of trolling – thanks.
  • There were also quite a few mentions of how tests and test-driven development make debugging unimportant.

There is no doubt that TDD and tests make for fewer surprises and are good development practice, but this wasn’t quite the question here. I also happily discarded the numerous mentions of “I don’t make mistakes”. I was pretty happy to have had only one mention of document.write(), although you do still see it a lot in JavaScript introduction courses.

What this shows me is a few things I’ve encountered myself doing:

  • Developers who’ve been developing in a browser world have largely been conditioned to use simple editors, not IDEs. We’ve been conditioned to use a simple alert() or console.log() in our code to find out that something went wrong. In a lot of cases, this is “good enough”.
  • With browser developer tools becoming more sophisticated, we use breakpoints and step-by-step debugging when there are more baffling things to figure out. After all, console.log() doesn’t scale when you need to track various changes. It is, however, not our first go-to. This is still adding something to our code, rather than moving away from the editor to the debugger.
  • I sincerely hope that most of the demands for alert() were made in a joking fashion. Alert had its merits, as it halted the execution of JavaScript in a browser. But all it gives you is a string, and a display of “[object Object]” is not the most helpful (see the sketch after this list).
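
To illustrate that last point, here is a tiny sketch (user is just a made-up example object):

    const user = { name: 'Ada', role: 'admin' };

    alert(user);       // shows the string "[object Object]" - not much to go on
    console.log(user); // logs the object itself; you can expand and inspect it in the console

    // if you really want a readable string, you have to serialise it yourself:
    alert(JSON.stringify(user, null, 2));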

Why aren’t we using breakpoint debugging?

There should not be any question that breakpoint debugging is vastly superior to simply writing values into the console from our code:

  • You get proper inspection of the whole state and environment instead of one value
  • You get all the other insights proper debuggers give you like memory consumption, performance and so on
  • It is a cleaner way of development. All that goes in your code is what is needed for execution. You don’t mix debugging and functionality. A stray console.log() can give out information that could be used as an attack vector. A forgotten alert() is a terrible experience for our end users. A forgotten debugger; or breakpoint is a lot less likely to happen, as it pauses execution of our code. I also remember that in the past, console.log() in loops had quite a performance impact on our code (compare the sketch after this list).
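
As a small illustration of these points, compare logging every iteration with pausing only on the case you care about (orders and handleOrder() are made-up placeholders):

    // logging every iteration means scrolling through hundreds of console lines
    for (const order of orders) {
      console.log(order.id, order.total);
      handleOrder(order);
    }

    // a conditional debugger statement pauses only on the case you care about, and it is
    // harder to forget because it stops execution whenever the developer tools are open
    for (const order of orders) {
      if (order.total < 0) {
        debugger;
      }
      handleOrder(order);
    }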

Developers who are used to an IDE to create their work are much more likely to know their way around breakpoint debugging and to use it instead of extra code. Since I started working at Microsoft, I’ve encountered a lot of people in my job who would never touch a console.log() or an alert(). As one response to the poll rightfully pointed out, it is simpler:

So why do we keep using console logging in our code rather than the far superior way of debugging that our browser tooling gives us?

I think it boils down to a few things:

  • Convenience and conditioning – we’ve been doing this for years, and it is easy. We don’t need to change and we feel familiar with this kind of back and forth between editor and browser
  • Staying in one context – we write our code in our editors, and we’ve spent a lot of time customising and understanding that one. We don’t want to spend the same amount of work on learning debuggers when logging is good enough
  • Inconvenience of differences in implementation – whilst most debuggers work the same there are differences in their interfaces. It feels taxing to start finding your way around these.
  • Simplicity and low barrier of entry – the web became the big success it is by being independent of platform and development environment. It is simple to show a person how to use a text editor and debug by putting console.log() statements in their JavaScript. We don’t want to overwhelm new developers by overloading them with debugger information or tell them that they need a certain debugging environment to start developing for the web.

The latter is the big one that stops people embracing the concept of more sophisticated debugging workflows. Developers who are used to starting with IDEs are much more used to breakpoint debugging. The reason is that it is built into their development tools rather than requiring a switch of context. The downside of IDEs is that they have a high barrier to entry. They are much more complex tools than text editors, many are expensive and, above all, they are huge. It is not fun to download a few gigabytes for each update, and frankly for some developers it is not even possible.

How I started embracing breakpoint debugging

One thing that made it much easier for me to embrace breakpoint debugging was switching to Visual Studio Code as my main editor. It is still a lightweight editor and not a full IDE (Visual Studio, Android Studio and Xcode are also on my machine, but I dread using them as my main development tool), but it has built-in breakpoint debugging. That way I have the convenience of staying in my editor and I get the insights right where I code.

For a node.js environment, you can see this in action in this video:
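
The video itself is not reproduced here, but the gist is that VS Code reads a launch configuration and runs Node under its own debugger, so breakpoints set in the editor are hit directly. A minimal .vscode/launch.json looks roughly like this (app.js stands in for your entry file):

    {
      "version": "0.2.0",
      "configurations": [
        {
          "type": "node",
          "request": "launch",
          "name": "Launch app",
          "program": "${workspaceFolder}/app.js"
        }
      ]
    }

With that in place, breakpoints or a debugger; statement in the code pause execution right inside the editor rather than in the browser tools.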

Are hackable editors, linters and headless browsers the answer?

I get the feeling that this is the future, and it is great that we have tools like Electron that allow us to write lightweight, hackable and extensible editors in TypeScript or just plain JavaScript. Whilst in the past the editors we used were a black art for web developers, we can now actively take part in adding features to them.

I’m even more of a fan of linters in editors. I like that Word tells me I wrote terrible grammar by showing me squiggly green or red underlines. I like that an editor flags up problems with my code whilst I write it. It seems a better way to teach than having people make mistakes, load the results in a browser and then see what went wrong in the browser tools. It is true that this is a good way to get accustomed to using those tools and – let’s be honest – our work is much more debugging than coding. But by teaching new developers about environments that tell them things are wrong before they even save, we might turn this ratio around.
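
As a concrete example of what a linter can flag in this space, ESLint – not mentioned above, just one common choice – ships rules for exactly the leftovers discussed earlier. A minimal .eslintrc.js sketch:

    module.exports = {
      extends: 'eslint:recommended',
      rules: {
        'no-console': 'warn',  // flag stray console.log() calls
        'no-alert': 'error',   // flag alert(), confirm() and prompt()
        'no-debugger': 'warn'  // flag forgotten debugger; statements
      }
    };

With an ESLint integration in the editor, these warnings show up as you type, long before anything reaches the browser.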

I’m looking forward to more movement in the editor space, and I love that we are able to run code in a browser and get results back without having to switch the user context to that browser. There are a lot of good things happening, and I want to embrace them more.

We build more complex products these days – for better or worse. It may be time to reconsider our development practices and – more importantly – how we condition newcomers when we tell them to work the way we learned it.