Christian Heilmann

Author Archive

WebMCP – a much-needed way to make agents play with rather than against the web

Monday, February 16th, 2026

WebMCP is an exciting W3C proposal that just landed in Chrome Canary for you to try out. The idea is that you can use some HTML attributes on a form or register JavaScript tool methods to give agents direct access to content. This gives us as content providers and web developers an active way to point agents to what they came for, rather than dealing with tons of traffic from scripts that haphazardly and clumsily try to emulate a real visitor.
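To make the shape of this concrete, here is a minimal sketch of the imperative side. A caveat: this is based on early drafts of the proposal, so treat `navigator.modelContext.registerTool` and the option names as assumptions that may well change before the standard settles. The tool logic itself is plain page JavaScript an agent can call directly instead of driving your search form:

```javascript
// Hypothetical article data standing in for a site's real content.
const articles = [
    { title: 'WebMCP explained', url: '/webmcp' },
    { title: 'Semantic HTML', url: '/semantics' }
];

// The tool is ordinary JavaScript: an agent calls it directly
// instead of filling out the search form like a pretend human.
const searchTool = {
    name: 'search-articles',
    description: 'Search the articles on this site by keyword',
    inputSchema: {
        type: 'object',
        properties: { query: { type: 'string' } },
        required: ['query']
    },
    async execute({ query }) {
        return articles.filter(article =>
            article.title.toLowerCase().includes(query.toLowerCase())
        );
    }
};

// Only register when the browser actually supports the proposal;
// the API name here is an assumption from early drafts.
if (typeof navigator !== 'undefined' && navigator.modelContext?.registerTool) {
    navigator.modelContext.registerTool(searchTool);
}
```

The same search that would otherwise cost an agent several page loads and a scrape becomes one structured call.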

Agents vs. the web

The current relationship between agentic AI and the web is predatory, wasteful and fraught with error. Agents scrape web sites, take screenshots and scan those, or keep trying to fill out form fields and click on buttons to get to content that was meant for real, human visitors. Under the hood, agents use the browser automation we created for testing, both of browsers and of web apps. But instead of going through a defined test suite with knowledge of the structure of the web app, agents brute-force their way in. This is exactly what we’ve been hardening the web against because of malware, spammers and content thieves, and companies like Cloudflare make a good living providing the tools for that.

Publishing on the web is full of hazards. Just publish a form that stores free-text entries in a database and boy, will you have to deal with a mess within seconds. Spam and malware bots are at the ready to find any vulnerability to post their content to your site, and XSS protection is the biggest game of whack-a-mole I hate having to play.

Agents vs. user wallets

For the users of agents, this means that they burn through tokens much more quickly, as the agent grabs web content that is bloated, slow to parse and often gated behind several authentication steps. WebMCP can improve this, as it allows content providers to show agents where the content to index is and what to put into form fields to reach the content they came for. Or – even better – it gives agents programmatic access to trigger functionality and get content, instead of trying to fill out a form and perform a site-wide search that needs filtering afterwards.

Agents are now first-class citizens

In essence, this standard and its implementation in Chrome mean that agents have become first-class citizens of the world wide web – a future we as publishers have to deal with. The good thing is that the web is pretty much ready for this, as we’ve done it before for search engine bots, syndication services and many other automated travelers of the information superhighway.

The web was designed to be machine readable!

The thing that annoys me about this is that we are re-inventing the wheel over and over again. When the web came around, it was an incredibly simple and beautiful publishing mechanism. HTML described the content and all you needed to do was to put a file on a server:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>…</title>  
    <meta name="description" content="…">
    <meta name="keywords" content="…">
    <meta name="author" content="…">  
</head>
<body>
   <header>…</header>
   <main>
      <article>…</article>
    </main>
    <footer>…</footer>
</body>
</html>

The header, main and footer elements not only help assistive technologies to understand the structure of the page, they also help search engines and agents find the content they are looking for. A clear description and keywords help search engines understand what the page is about and index it correctly. A good title makes it easy for users to see what the app is about. Together, the meta tags and the structure of the page make it easy for agents to find the content they came for and to understand how to interact with it.

The web was designed to be machine readable and to allow for easy indexing and syndication. We had meta tags for search engines, we had sitemaps, we had RSS feeds and APIs. We had all the tools we needed to make our content discoverable and accessible to machines. But instead of using those tools, we have been building more and more complex web pages that are designed for human consumption and then trying to scrape them with agents. This is not only inefficient but also disrespectful to the web and its creators.

Semantic HTML would be a great thing for agents!

But the agent creators aren’t to blame for this, they are just trying to get the content they need to provide their users with the best experience possible. The problem is that we as content providers have given up on semantic HTML and machine readability in favour of flashy designs and complex interactions that are meant to impress human visitors but are a nightmare for agents to parse and understand. And are often a nightmare for human visitors as well, but that’s a different topic. I’ve been advocating for semantic HTML for decades, as I just love that it means that my content comes with a description and interaction for free. But for decades now we have been fighting a new breed of developers that see the web as a compilation target for their JavaScript and not as a publishing platform. Why bother with semantic HTML when you can just throw a div on the page and style it to look like anything you want? Why bother with meta tags when you can just stuff your content with keywords and hope for the best?

Meta content like description, keywords and author is still there, but it is often ignored or misused. The same goes for sitemaps and RSS feeds. We have been so focused on making our content look good for humans (and act and look like native apps) that we have neglected the machine readability of our content. And we have been focusing hard on making our content look good for search engines, which are a different kind of machine than agents. The meta description, title and keywords had a short lifespan of usefulness, as search engines quickly learned to ignore them and rely on the actual content of the page, because the meta content was often misleading or stuffed with keywords. Instead of using these built-in mechanisms of the web, we added tons of extra information to the HTML head for Twitter, Facebook and many other services, some of which are dead by now and just add to the overall landfill of forgotten bespoke HTML solutions. Maybe this is a good time to read up on meta tags and on the alternative display modes of your content you can connect via LINK elements, beyond 34 CSS files and 20 fonts.
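Declaring those alternative representations takes only a couple of LINK elements in the document head. A sketch, with placeholder URLs (`application/feed+json` is the MIME type the JSON Feed spec uses):

```html
<link rel="alternate" type="application/rss+xml" title="Blog posts (RSS)" href="/feed/">
<link rel="alternate" type="application/feed+json" title="Blog posts (JSON Feed)" href="/feed.json">
```

Feed readers, syndication services and – potentially – agents can discover these without ever parsing the rendered body of the page.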

Will WebMCP get adoption or will we take another loop around the conversion tree?

The question is: will we use this opportunity to make the web better for everyone, or will we continue to build bloated and inefficient web pages that are designed for human consumption or – worse – optimised for developer convenience? Will providers of agent services embrace this standard, or discard it as a nice-to-have and keep brute-forcing their way through the web? Or will they find other ways to make the web cheaper for agents to read? Cloudflare just introduced Markdown for Agents – a service that turns your already rendered HTML with thousands of DIVs and unreadable class names into structured Markdown. Markdown, a non-standardised format that just caused a scary security issue in Windows Notepad.

Alternative content has been a staple of Web 2.0

We have had the tools for quite a while; many content providers offer feeds and APIs you and your agent can play with. Did you know, for example, that WordPress has a built-in REST API that gives you access to all the content of a WordPress site? You can use that to get the content you need without having to scrape the web page. Terence Eden wrote a great article about how to use the WordPress REST API to get content, with the lovely title Stop crawling my HTML you dickheads – use the API!.
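As a sketch of how little it takes: WordPress core exposes posts under the `wp/v2/posts` route, with `search` and `per_page` query parameters. The site URL below is a placeholder:

```javascript
// Build a WordPress REST API endpoint URL; the wp/v2/posts route and
// its `search` / `per_page` parameters are part of WordPress core.
function postsEndpoint(site, { search = '', perPage = 5 } = {}) {
    const url = new URL('/wp-json/wp/v2/posts', site);
    if (search) url.searchParams.set('search', search);
    url.searchParams.set('per_page', perPage);
    return url.toString();
}

// Usage (needs network access to a real WordPress site):
// const posts = await (await fetch(postsEndpoint('https://example.com', { search: 'webmcp' }))).json();
// posts.forEach(post => console.log(post.title.rendered, post.link));
```

One request returns structured JSON with titles, links and content – no scraping, no screenshots, no guessing at class names.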

Findability has always been the issue with this. Remember the incredibly simple and powerful idea of Microformats? They were a way to add semantic meaning to your content using a few CSS classes, making it more machine readable and accessible without changing the way it looked for human visitors. But they never really took off, because they were not widely adopted, not supported by search engines, and never surfaced to end users in browsers. They were a great idea, ahead of their time.
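For anyone who never saw one in the wild: a classic hCard was nothing but agreed-upon class names on markup you already had, invisible to human visitors (the name and URL here are just examples):

```html
<div class="vcard">
    <a class="url fn" href="https://christianheilmann.com">Christian Heilmann</a>
</div>
```

The `vcard`, `fn` and `url` classes told a parser "this is a person, this is their name, this is their homepage" without a single extra element.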

I am on team WebMCP, are you?

With WebMCP, we have the opportunity to go back to the roots of the web and make our content truly machine readable and accessible. We can use the new attributes and methods to point agents to the content we want them to index and to provide them with the information they need to understand it. This is a chance to make the web a better place for both humans and machines, and to create a more symbiotic relationship between the two: a more efficient web where agents can easily find and index the content they need, and where publishers have more control over how their content is accessed and used. This is excellent news for the future of the web and of AI, and I can’t wait to see how it evolves.

When being Hitler’s guard was a literal drag…

Monday, February 2nd, 2026

Quick segue here, but this story is too good. In 1942, Die Grosse Liebe came out, Goebbels’ magnum opus other than Triumph of the Will. The Nazi propaganda minister was really into this movie and wanted it to be a huge success, swaying the emotions of the German people back to believing in winning the all-out war. Movies back then had a double release: they came to the cinemas, and the songs in them came out on record at the same time. Songs were specifically made to be positive and easy to get into, and some to be epic. This movie features the lighter “Davon geht die Welt nicht unter” (the world doesn’t collapse because of that) and the epic “Ich weiß, es wird einmal ein Wunder gescheh’n” (I know, one day a wonder will happen). The latter got a bombastic scene in the movie, where the then superstar Zarah Leander sings it in front of a wall of splendid ballet dancers:

The issue here was that Miss Leander was curvy and the dancers in comparison incredibly lithe and petite. This didn’t give the scene enough gravitas and took some of the limelight away from her. The solution Goebbels offered on the spot at the shoot was to replace the dancers with members of Hitler’s personal guard. So what you see in the scene is burly men in drag. Well, you would, but the editors made very sure that there are no closeups of the chorus, only of Zarah Leander.

The chorus of ballet dancers to the song in the movie clearly being men in drag

It is a shame that there aren’t more behind-the-scenes shots of that, as the pissed-off facial expressions of some of them are excellent.

Zoomed in closeups of the men in dresses looking not happy at all.

My favourite is the last one that looks eerily like Eric Idle in his Monty Python days in drag.

You can watch the full movie on archive.org with English subtitles. It is a piece of propaganda trash, but also very well made.

Monky Business: Creating a Cistercian Numerals Generator

Tuesday, January 13th, 2026

In the 13th century Cistercian monks came up with a way to show the numbers from 1 to 9999 as a single character.

The cistercian numerals showing numbers 1 - 9 and 10x multiples of those as different characters

The way it works is to add the lines of different characters to each other until the number is reached. So, if you want to show 161, you take the 1, the 60 and the 100 and add them together:

Showing the correct numeral for 161 by showing the ones for 1, 60 and 100 and adding them to the same image

Same with 1312 as 1000 + 300 + 10 + 2:

Showing the correct numeral for 1312 by showing the ones for 2, 10, 300 and 1000 and adding them to the same image

Which is pretty much incredible, so I thought it would be fun to create a generator for those characters. And here it is:

Screen recording of the generator in action

And while we’re at it, why not have a Cistercian Clock?

How to use the generator

Open it in your browser and enter the numbers you want to generate. You can also get the source code, download it and use it offline. You can generate numerals as PNG or as SVG, click them to download the images and click the X buttons to remove them.

How to use the code in your own products

The generator is based on a script I wrote to generate the numerals, all available on the GitHub Repo. There are two flavours: a simple Node-based one that returns SVG strings, and a more advanced one that allows for in-browser PNG and SVG generation and customisation.

toCistercian.js – Node or browser number to Cistercian numeral converter in SVG

You can use this on the command line using:

node toCistercian.js {number}

For example `node toCistercian.js 161` results in the following SVG:

<svg width="120" height="180" xmlns="http://www.w3.org/2000/svg">
    <title>Cistercian numeral for 161</title>
    <line x1="60" y1="20" x2="60" y2="160" stroke="#000" stroke-linecap="square" stroke-width="4"/>
    <line x1="60" y1="20" x2="100" y2="20" stroke="#000" stroke-linecap="square" stroke-width="4"/>
    <line x1="60" y1="20" x2="60" y2="160" stroke="#000" stroke-linecap="square" stroke-width="4"/>
    <line x1="20" y1="20" x2="20" y2="60" stroke="#000" stroke-linecap="square" stroke-width="4"/>
    <line x1="60" y1="20" x2="60" y2="160" stroke="#000" stroke-linecap="square" stroke-width="4"/>
    <line x1="100" y1="160" x2="60" y2="160" stroke="#000" stroke-linecap="square" stroke-width="4"/>
</svg>

You can also use this in a browser as shown in the simple example:

<output></output>
<script src="toCistercian.js"></script>
<script>
    const svg = toCistercian(1312);
    document.querySelector('output').innerHTML = svg;
</script>

Cistercian.js – convert to svg/png/canvas with customisation

The generator uses the more detailed cistercian.js version, which allows you to generate numerals in various versions and formats.

Usage is in JavaScript in a browser environment.

const converter = new Cistercian();
converter.rendernumber(1312);

This would add an `output` element to the body and render the numeral with a text representation and a button to remove it again.
You can configure it to change the look and feel and what gets rendered by calling the `configure` method. See the advanced example for that.

If you want, for example, to render the numeral inside the element with the ID `mycanvas` as SVG with a `width` of `400`, lines 10 pixels thick and in the colour `peachpuff` and without any text display or button to delete, you can do the following:

<div id="mycanvas"></div>

const myConverter = new Cistercian();
myConverter.configure({
    renderer: 'svg',
    canvas: { width: 400 },
    stroke: { colour: 'peachpuff', width: 10 },
    addtext: false,
    addinteraction: false,
    outputcontainer: document.getElementById('mycanvas')
});
myConverter.rendernumber(1312);

How I built the thing

As with many things I code for fun, this started offline, with me thinking how to approach this issue. In essence, all I had was an image of the numerals. When I got home, I thought I should give this to Copilot to vibe code like all the cool kids do. I asked it to take this image of numerals and create SVG versions for each of them (so I could link to them). The result was fast, immediate, confident and utter garbage.

Generated SVG for 1-9 of the numerals, all wrong

So I went back to analysing the numerals and instead of creating them as SVGs, I created them as a dataset. In essence, these are characters on a 3 by 5 grid. I numbered the points and wrote them down as coordinates:

my glyph cheatsheet

this.points = [
    [10,10],[30,10],[50,10],
    [10,30],[30,30],[50,30],
    [10,50],[30,50],[50,50],
    [10,60],[30,60],[50,60],
    [10,80],[30,80],[50,80]
];

The middle row of points is never used in the real numerals, but hey, why not?

Then I looked at the numerals and noted down which points are connected for each of them. Points 1 and 13 are always connected, as this is the vertical stem in the middle. This gave me the dataset to use with Canvas or to generate SVG from. Here are the indices of the points array that describe all the glyphs:

this.glyphs = {
    0: [[1,13]],
    1: [[1,2]], 10: [[0,1]], 100: [[14,13]], 1000: [[12,13]],
    2: [[4,5]], 20: [[3,4]], 200: [[10,11]], 2000: [[9,10]],
    3: [[1,5]], 30: [[1,3]], 300: [[13,11]], 3000: [[13,9]],
    4: [[4,2]], 40: [[4,0]], 400: [[10,14]], 4000: [[10,12]], 
    5: [[1,2],[2,4]], 50: [[0,1],[0,4]], 500: [[13,14],[14,10]], 5000: [[13,12],[12,10]],
    6: [[2,5]], 60: [[0,3]], 600: [[14,11]], 6000: [[12,9]],
    7: [[1,2],[2,5]], 70: [[0,1],[0,3]], 700: [[13,14],[14,11]], 7000: [[13,12],[12,9]],
    8: [[4,5],[5,2]], 80: [[4,3],[3,0]], 800: [[10,11],[11,14]], 8000: [[12,9],[9,10]],
    9: [[1,2],[2,5],[5,4]], 90: [[0,1],[0,3],[3,4]], 900: [[13,14],[14,11],[11,10]], 9000: [[13,12],[12,9],[9,10]]
};

The rest was just comparing and looping over this array.

The logic of adding up the final numeral was not too taxing either. When the number isn’t defined in the glyphs array, I turn it into a string and loop over it from the end to the start. Each digit then gets zeroes appended to allow for the glyph lookup:

let chunks = number.toString().split('').reverse();
chunks.forEach((chunk, index) => {
    // pad each digit with zeroes to build the lookup key
    let value = chunk + '0'.repeat(index);
    // … look up value in this.glyphs and collect its line segments …
});

So, for 1312, the reversed string is “2131”, and on each loop iteration I get the data:

  • 2
  • 10
  • 300
  • 1000
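Put together, the approach can be sketched like this. Note that this is a condensed illustration, not the full script from the repo: only the glyph entries needed for 161 are included here.

```javascript
// Condensed sketch of the lookup-and-loop approach: resolve each
// glyph's point indices against the grid and emit SVG line elements.
const points = [
    [10,10],[30,10],[50,10],
    [10,30],[30,30],[50,30],
    [10,50],[30,50],[50,50],
    [10,60],[30,60],[50,60],
    [10,80],[30,80],[50,80]
];
const glyphs = {
    1: [[1,2]], 60: [[0,3]], 100: [[14,13]]
    // … the full dataset covers 1–9000, as listed above …
};

function toLines(number) {
    const segments = [[1,13]]; // the vertical stem is always drawn
    number.toString().split('').reverse().forEach((digit, index) => {
        const key = digit + '0'.repeat(index); // '1', '60', '100' for 161
        if (glyphs[key]) segments.push(...glyphs[key]);
    });
    return segments.map(([a, b]) => {
        const [x1, y1] = points[a];
        const [x2, y2] = points[b];
        return `<line x1="${x1}" y1="${y1}" x2="${x2}" y2="${y2}" stroke="#000" stroke-width="4"/>`;
    });
}
```

`toLines(161)` returns four line elements – the stem plus the strokes for 1, 60 and 100 – ready to wrap in an svg element.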

Feel free to check the source of the script for some more fun bits. And yes, I did use Copilot to help with some of the cruft code I didn’t feel like writing by hand, especially turning functions into methods and such.

I had fun, I hope you find it interesting, too.

You are already behind by not having read this post.

Friday, January 2nd, 2026

A bunch of people running frantically with their laptops and a clock in the background

Lately, I have found an incredibly annoying pattern in social media—especially LinkedIn posts: the “you are already behind” posts, claiming that by not using product $XYZ you have already been beaten by the competition. These incendiary headlines are often followed up by a testimonial that the author used $XYZ to deliver 10-23x the amount of work he (yes, most of the time “he”) used to deliver. And, of course, the post ends with a special discount to try out product $XYZ.

I utterly despise this narrative; it is insincere, plays on people’s worries, and doesn’t actually deliver any solution. It’s plain and simply an ad for a product or a crowbar approach to paint the original poster as a thought leader who knows what the future holds.

Fun fact: in my whole career, I’ve always been told that I am behind the pack for not embracing certain products or technologies, and many passed me by without affecting my career or the products I built and sold at all.

  • I didn’t replace web development with Flash/Silverlight.
  • I didn’t go from the web into Second Life/Metaverse.
  • I didn’t bet exclusively on Android or iOS.
  • I didn’t build apps for Facebook/Myspace/WeChat exclusively.
  • I didn’t bet on Crypto or Blockchain.

And yet, here I am, having had a good career and some money. I didn’t burn out and don’t really feel like I should have been one of the super early adopters failing to make an impact in the long run. I celebrate the successes of other people and fast implementers, but I also spent far too much time fixing what others innovated to work in production. There’s always a “but $PERSON uses $XYZ and delivers much faster and simpler than you do.” Well, let them. Remember that you are a professional, and you want to deliver great work, which takes time, effort, and quality thinking. So, demand that from your customers and especially from yourself.

So here’s my advice: don’t interact with posts like these, avoid people who post these, and instead find your own pace and peace of mind. You will be most effective when you are happy in achieving what you want to, not what a made-up market or competition demands from you. If I am already behind the people running against a wall, that’s actually a good thing.

Building my faux lego advent calendar feels like current software development

Friday, December 26th, 2025

I’ve stated on several occasions that Lego made me a developer. I was the youngest of four kids who inherited a huge box of bricks with no instruction booklets. So I took lots of smaller bits to build bigger things and re-used skills and ways to connect things. I came up with my own models just to dismantle them and re-arrange things.

Much like you write software:

  • You write functionality
  • You make it re-usable as functions
  • You componentise them as objects with methods and properties
  • You collate them as classes
  • You pack them up as libraries for people to ignore to go back to the first step

Now, this December my partner got me a Blue Brixx advent calendar with Peanuts characters that can be Christmas ornaments. It taught me that Blue Brixx is much more like current software development.

The advent calendar box, individual boxes and some of the models I already assembled with a plastic bag full of leftover bricks.

Lego has some unspoken rules and good structure

Lego is great to assemble and sometimes tricky to detach. But it is always possible.

Don’t tell me you are a child of the 80s if you haven’t at least chipped one tooth trying to separate some stubborn 4×2 Lego bricks.

With Lego you get instructions that show you each step of the way which parts are necessary. It’s a bit like following a tutorial on how to develop a software solution.

With Lego, you have all the necessary bricks and none should be left over. Much like with IKEA, any time you have to use force, you’re doing something wrong and it will hurt you further down the track.

Blue Brixx is different

Blue Brixx, because of its size, make and price, is different. The models are adorable and fun to build, but you need to prepare a different approach.

  • There are no notches on the underside which means the bricks don’t mesh as nicely as Lego does. You will sometimes have to use force to keep the half done model together or make a brick fit.
  • Every model so far had missing bricks. Some had bricks in colours that aren’t in the model and the further I got into the calendar, the more I collected bricks to use later on. Interestingly I often found bricks that were missing in one model as leftovers in the other, so I assume there is a packing issue.
  • Some models have glue-on faces for the characters. These stickers are the worst quality I have ever seen and an exercise in frustration. They also mean that you can’t detach the model again.
  • The instruction booklets do not list the bricks needed for each step. You need to guess that from the 3D illustration.
  • As there is low contrast at times, you will use the wrong bricks and then miss them in a future step. That means detaching the model, which is tough with one this size.

The instruction booklet and zoomed in showing that you need to guess the bricks in use at each step.

Current software development feels similar

Which is a bit like software development these days. We use libraries, frameworks, packages and custom-made, reusable solutions. Often we find ourselves assembling a Frankenstein solution that is hard to maintain, tough to debug, has horrible performance and gobbles up memory.

Just because we re-used bricks we’re not quite sure if we put them together the right way. And we sometimes have to use force to make them work together in the form of converters and optimisers. We add tons of bricks upfront that are loosely connected and lack structural integrity, so we add even more tools to then analyse what’s shipped but isn’t needed and remove it again. We don’t have a manual to follow and we look at the shiny end results built with a certain library and want to take a shortcut there.

I’ve seen far too many products that used several libraries because one component of each was great, resulting in a bloated mess nobody understands.

This is exacerbated by vibe coding. The idea is never to care about the code, but only about the solution, and that will always result in starting from scratch rather than maintaining and changing a product. Think of this as Lego models you glued together.

My workflow: tooling up and structuring

OK, the first thing I realised is that I need new glasses. I already have varifocals, but my eyesight must have declined – spoiler: it did, in three years. I can either read the instruction booklet with its surprise brick illustrations or find the correct small brick on the table, but not both without switching glasses. This is frustrating, not to mention the ergonomics of the situation resulting in a hurting back.

Until my new glasses arrive, I am using an LED panel lamp I normally use for my podcasts to give the bricks more contrast and see much more detail.

If that is not enough I use my mobile phone as a magnifier to analyse the booklet.

And last but not least, I started to pre-sort the bricks of each model before assembling it. This gets me weird looks from my partner of the “what a nerd” variety, but it really helps.

A model instruction booklet with sorted bricks around it.

All the bricks of the current model sorted and collated into 2×something, 1×something, angles and connectors, diagonal bricks and non-standard ones, and 2×2 or 1×1

This is also how I build software and try to find my way in this “modern” world of cutting straight to the final product:

  • Find an editor environment you are comfortable with – I for one still don’t feel comfortable paying to develop, even if it is in tokens
  • Structure the solution you want to build and plan it – then find the helper tools to make it easy for you to reach that goal
  • Always keep things understandable and documented to make it easy to change parts deep inside the product later without having to dismantle it completely.
  • Leave behind documentation that has all the necessary details and steps to make what you did repeatable.

Building these things is work, but it also gives me joy to have assembled them by hand. I also learn a lot about how certain parts are always achieved in the same way (hair, arms, legs, parcels…) and it gets easier the more I do it.

I doubt that I would feel the same fulfilment if I asked ChatGPT to build me a 3D model and print the thing.