
Metaphors for AI: Networks, holes and loops

I have been observing metaphors for generative AI for some time. This does not mean that I understand what’s going on in AI, but the metaphors provide me with an illusion of knowledge. They throw a net or mesh over the topic that serves as something of an epistemological safety net. But sometimes that net or mesh has a hole in it, exposing the holes in my knowledge.

This insight came to me when I was listening in on a discussion on Bluesky started by Ted Underwood, who uses machine learning to study literary imagination and vice versa. He wondered whether people who don’t code would like to know more about the new approach offered by coding agents. Ha, I thought, I certainly would; and I read some of the replies.

Erik Möller, a freelance journalist and software developer, said something that made me think about AI and about metaphors used for and in AI. “IMO one core intuition to develop is the power of the LLM’s context window to shape its output. Agents optimize the use of that context for a given purpose — turning the model from much-maligned spicy autocomplete into a problem-solver.”

I knew the metaphor of ‘spicy autocomplete’ and had vaguely heard about ‘context windows’, but I wasn’t sure whether I really understood the latter metaphor well enough to grasp the whole argument, apart from the fact that the dismissive spicy autocomplete metaphor was perhaps no longer as valid as critics might think.

This made me think about how metaphors build on each other, especially in technical discourse, and how knowledge of technology can only emerge from these metaphors if we know not only the language but also the technology, as far as that’s possible.

Metaphors of critique and metaphors of technique

Let’s unpack Erik Möller’s comment in more detail (with a little help from Claude):

The ‘spicy autocomplete’ metaphor suggests that LLMs are just fancy versions of a smartphone’s predictive text. They predict the next word based on what came before, with some randomness (‘spice’) thrown in. This framing is meant to be dismissive; it implies the model is just pattern-matching without real understanding or problem-solving ability.

I thought the ‘spicy autocomplete’ metaphor had a specific author, but it seems to be a community-driven meme that gained traction as a way of simplifying the concept of ‘probabilistic next-token prediction’. By contrast, the sister metaphor ‘autocomplete on steroids’ seems to have been coined by Gary Marcus in 2021. They are both what I would like to call ‘metaphors of critique’, alongside ‘stochastic parrot’, ‘blurry JPEG of the web’, ‘mirror’, ‘snake oil’ and other such metaphors.

The metaphor of ‘context window’ refers to the amount of text an LLM can ‘see’ in one go – everything it can consider when generating its response. For current models, this might be tens or hundreds of thousands of words. The crucial insight is that what you put in this window shapes what comes out.
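For readers who, like me, learn by tinkering, here is a minimal sketch in Python of that idea. It is a toy simplification of my own making, not any real model’s machinery: real systems count sub-word ‘tokens’ rather than whole words, and the numbers are invented. The point is simply that anything outside the window is invisible to the model.

# Toy illustration of a context window: the model can only 'see' the last
# max_tokens pieces of text; anything earlier has fallen out of the window.
def fit_to_context_window(text, max_tokens=8000):
    tokens = text.split()                    # crude: treat whole words as 'tokens'
    if len(tokens) <= max_tokens:
        return text
    return " ".join(tokens[-max_tokens:])    # keep only the most recent part

long_conversation = "word " * 20_000
visible = fit_to_context_window(long_conversation)
print(len(visible.split()))                  # 8000 -- the earlier 'words' are simply gone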

Coding agents strategically fill that context window with relevant information: error messages, file contents, documentation, previous attempts, step-by-step plans. By carefully orchestrating what goes into the context window, agents can, it seems, make the LLM behave less like autocomplete and more like a problem-solver that iterates, debugs, and reasons through tasks.
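To make this orchestration a little more concrete, here is a hypothetical sketch. The names, such as call_llm and build_context, are placeholders of my own invention rather than any real agent’s code; the point is only that the agent, not the human, decides what the model gets to ‘see’.

# Hypothetical sketch of a coding agent assembling a context window.
# call_llm stands in for whatever model API a real agent would use.
def call_llm(prompt):
    return "(model reply would appear here)"   # placeholder, no real model call

def build_context(task, plan, file_contents, previous_attempt, error_message):
    # Strategically order the information the model will 'see' in its window.
    return "\n\n".join([
        "Task: " + task,
        "Plan so far:\n" + plan,
        "Relevant file:\n" + file_contents,
        "Previous attempt:\n" + previous_attempt,
        "Error it produced:\n" + error_message,
        "Suggest a corrected version of the code.",
    ])

prompt = build_context(
    task="Fix the failing test in parser.py",
    plan="1. Read the error. 2. Inspect parse(). 3. Propose a fix.",
    file_contents="def parse(line): return line.split(',')",
    previous_attempt="parse('a,b')[2]",
    error_message="IndexError: list index out of range",
)
print(call_llm(prompt))

Run in a loop, with each new error message fed back into the next prompt, this is what turns prediction into something that looks like iteration and debugging.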

Some have described a ‘context window’ as a model’s ‘short-term memory’ – another, more anthropomorphic, metaphor to describe a metaphor. This is not uncommon. ‘Context window’ is one of many technical metaphors, such as ‘tokens’ (the ‘building blocks’ of text – whole words or parts of words – that a model processes) or ‘gradient descent’ (visualised as walking downhill in thick fog, where the AI iteratively feels the slope to find the lowest point, minimising error in training), where metaphors nestle within metaphors. In contrast with the metaphors of critique, I call all these ‘metaphors of technique’. They are part of and inherent to core AI discourse.
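The fog metaphor can even be turned into a few lines of toy code. This is only an illustrative sketch of gradient descent on a one-dimensional ‘landscape’ (nothing like training a real model), but it captures the ‘feel the slope, step downhill’ idea.

# Toy gradient descent: walking downhill in fog on a one-dimensional landscape.
def loss(w):
    return (w - 3) ** 2            # a simple 'valley' whose lowest point is at w = 3

def slope(w):
    return 2 * (w - 3)             # the local gradient: which way is downhill?

w = 10.0                           # start somewhere up in the fog
for step in range(25):
    w -= 0.1 * slope(w)            # feel the slope, take a small step downhill
print(round(w, 3))                 # ends up close to 3, the bottom of the valley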

Metaphors, networks and holes

Someone like me, who understands ‘spicy autocomplete’ as a metaphor but does not have much insight into the meaning of ‘context window’, might miss why Erik Möller’s comment is a rebuttal. The argument depends on understanding that context windows are a real architectural feature with specific properties, that they can be manipulated strategically, and that this manipulation can transform the system’s behaviour. Without understanding the ‘context window’ metaphor, people lack the conceptual vocabulary to understand how the system could be more than spicy autocomplete.

This shows that AI metaphors (like all metaphors) form an interconnected web of understanding. BUT, and that’s a big but, this superficial web or net also has holes in it that, on occasion, need filling before deeper understanding can be achieved. This can be done by actively and critically engaging with the technology itself and with the people who use it, so that you are not captured by the metaphors of critique or the metaphors of technique alone. In doing so you can start filling the holes in your metaphorical network. You can also spin the network out by learning new metaphors. But if you are not careful you might end up filling your holes with ever more metaphors, in a bit of an endless loop.

It is not enough to know individual metaphors; you need to know enough of the metaphorical network to see what’s missing and then set about filling the holes through engagement with the technology and dialogue with the people who use it. It’s not easy to weave understanding between metaphors of critique and metaphors of technique. So, a post on coding agents would be welcome. Science communication has a vital role to play in this context.

Metaphorical networks enable understanding. They are our epistemological safety net, but we should not use them as a hammock to lull ourselves into a false sense of epistemological security. Every day is a metaphorical learning day.

Image: Heap of multicoloured fishing nets lying round the old harbour in Swanage

