In my previous two posts I have made the case for an AI metaphor observatory and surveyed the recent academic landscape of studies dealing with metaphors for AI in the sense of GenAI and LLMs. In this post, the third and last in my ‘trilogy’, I’ll attempt to review recent trends and shifts in metaphor usage.
To pick up my seashell-collecting metaphor: in the first post of this trilogy, I said that we needed to watch the shoreline. In the second post I displayed what I found on the beach, and in this third post I examine how the patterns are shifting with the tides.
In August 2024, I worked with Claude to create a taxonomy of AI metaphors emerging at that time. We catalogued metaphors like Crystal Ball, Oracle, Knowledge Engine, and Digital Assistant, organising them into tidy categories: Tool-based, Knowledge and Information, Cognitive and Mental, Collaborative and Social, and so on.
That taxonomy captured a moment in time – the tail end of the initial wave of public excitement about generative AI. Most metaphors emphasised wonder, utility, and potential, but risks also began to be discussed.
Now, barely over a year later, in 2025, the academic literature on metaphors for AI has exploded. Dozens of papers, multiple conference collections, systematic studies across media, policy, and education have been published. When I reviewed this new body of work, I realised that the metaphor landscape has shifted over time.
This shift is not just about new metaphors appearing. It’s about who is making metaphors, for what purposes, and what work those metaphors are being asked to do. I teamed up with Claude Sonnet 4.5 again; we created a taxonomy of novel metaphors that were discussed in 2025 and compared the 2024 and 2025 taxonomies (and one should stress that the boundaries around and between these taxonomies are very fuzzy and overlapping).
This comparison, together with the new emergent metaphors still washing up on the beach, showed how our collective understanding of AI has shifted from aspirational optimism to critical engagement, from a landscape dominated by hype to a landscape dominated by critique.
Early metaphors for AI: A landscape of hype and emerging criticism
Our 2024 taxonomy reflected metaphors circulating in public discourse about a novel form of AI now generally called GenAI and involving machine learning and large language models. It focused mainly on metaphors appearing after OpenAI’s launch of ChatGPT and the subsequent spread of other LLMs from November 2022 onwards.
These chatbots appeared in the context of tech marketing and early adoption. Here is what dominated – GenAI-powered chatbots were described as:
The magical and mystical
- crystal ball, oracle, time traveller (predictive powers)
- language wizard, storytelling djinn (creative powers)
- knowledge ocean, infinite encyclopaedia (vast knowledge)
- superintelligence/AGI/ASI (an entity whose capabilities would far exceed human understanding and control)
These metaphors emphasised AI’s capabilities in quasi-supernatural terms. They invited awe and positioned AI as something that transcends ordinary tools.
The helpful and collaborative
- digital assistant, virtual assistant
- digital colleague, collaborator, companion
- creative collaborator, conversational companion
These metaphors framed AI as fundamentally supportive – something that works with humans, not against them. The emphasis was on partnership and augmentation.
The powerful but controllable
- Swiss army knife, toolbox (versatile utility)
- force multiplier (amplification)
- cognitive extension, cognitive prosthesis (enhancement)
Even such ‘powerful’ metaphors suggested control – tools you pick up and put down, prosthetics you can remove.
However, as GenAI and LLMs were increasingly promoted to and used by the general public, critical voices appeared on the horizon, highlighting risks and dangers.
The cautiously abstract
Some initial risk metaphors we found were mostly generic:
- black box (opacity)
- hallucination (unreliable, confabulating)
- double-edged sword (ambivalent outcomes)
- Pandora’s Box (unspecified dangers)
- nuclear bomb (dramatic but vague)
Some were more specific to GenAI:
- pollution, contamination, collapse, enshittification, slop (the danger that human knowledge may be polluted by AI nonsense; these metaphors have become less vague over time!)*
- asbestos (initially regarded as safe, later discovered to be dangerous and toxic) (see also here)
The risky and dangerous
Some scholars and commentators, like Emily Bender and Gary Marcus, were sounding early alarm bells.** Chatbots were described as:
- stochastic parrot (LLMs string together linguistic forms haphazardly according to probabilistic information, without any internal model of the world or the intent behind the communication – this metaphor had first been introduced in 2021 but still works as a focal point for critique now)
- autocompletion on steroids (powerful, but still limited to predicting the next likely word) (see here)
- blurry JPEG (lossy compression, degraded information)
- mechanical medium/séance for text (a tool channelling the collective voice of the internet)
- Frankenstein’s assistant (creations patched together from human labour, data, and language, capable of immense productivity and harm) (the monster metaphor was also discussed in an early paper on machine learning)
In terms of critical metaphors, there is a continuum from the early 2020s to now, but such metaphors are gaining momentum and becoming more specific.
At the same time, some of these older metaphors are themselves being critiqued, extended or ridiculed. So, alongside ‘autocompletion on steroids’, we now also have ‘Clippy on steroids’. AI slop is surrounded by words like deluge, flood, cesspool, wade through, etc. (which might be different in other languages that focus more on ‘junk’ or ‘garbage’). And the metaphor of ‘hallucination’ has just been replaced by a completely novel one, ‘multiverse’: “It [the LLM] is not hallucinatory but multiversal. When generative AI presents fabricated information, it opens a path to another reality for the user; it multiverses rather than hallucinates.”
Current AI metaphors: A landscape of critique and emerging optimism
In contrast to the 2024 taxonomy, the 2025 taxonomy, based on metaphors extracted from the academic literature, presented a different metaphorical terrain, which I surveyed in post 2. In the following, I will list some of the metaphors discussed in that literature; this means there is some overlap with the previous period (and of course there are many more critical metaphors out there – I have included some that I collected on the fly while writing this post).
The critical and grounded
Various critics, such as Shannon Vallor, Cory Doctorow, Naomi Klein, Olivia Guest and many more, are continuing and deepening critiques that emerged in the early 2020s by focusing on various metaphors and assessing remaining issues around hype. Chatbots are described as:
- funhouse mirror (distorting rather than reflecting)
- AI mirror (AI systems are not autonomous, intelligent entities, but rather reflective surfaces that expose human biases, values, and power structures embedded in their design and training data)
- Ouija board (the potential for pervasive, hidden, and long-term societal harms that are difficult to remove once integrated)
- snake oil (promoting something fraudulent as a miracle solution – but see here for a nuanced assessment of the metaphor)
- theft-tech (technology built on stealing people’s knowledge)
- critical washing (researching the harms of AI without actively reducing them)
- doomsday machine (an AI system that, once set in motion, cannot be stopped or reasoned with)
Such metaphors are specific about limitations. They don’t just say ‘be careful’ – they identify particular failure modes.
The visceral and embodied
- eating plastic for your cognition (toxic consumption)
- synthetic text extruding machine (text extruded rather than communicated; potentially harmful)
- high-heeled shoes (“makes my writing look noble and elegant, although I occasionally fall flat on my face”)
- opium (addictive substance)
- fast food (convenient but unhealthy)
These metaphors engage the body – they make you feel something about AI’s effects on human capacities.
The political and colonial
- colonising loudspeaker (amplifying dominant cultures, silencing others)
- technology of power (enforces norms, controls discourse)
- Western museum (displaying other cultures through a colonial lens)
- registry of power (tool of surveillance and control)
These metaphors explicitly name power dynamics. They ask: whose knowledge, whose language, whose values are being centred and amplified?
The reflexive and bidirectional
- “I am a stochastic parrot, and so r u” (humans as computational)
- humanising machines and dehumanising us (see also work by Olivia Guest)
- mirror (Shannon Vallor) (reflects us back to ourselves)
- autotune for knowledge (smoothing human thought)
These metaphors explore how AI metaphors reshape our self-understanding. If we call AI intelligent, do we start thinking of ourselves as computational?
The pedagogically sophisticated
Rather than individual metaphors, we now have entire frameworks discussing higher-level aspects of metaphors (see post 2):
- functions / roles / qualities / agency
- tool / transformer / threat
- technical support / text development / transformative potential / threat (4T Pyramid)
- functional / critical / rhetorical (multiliteracies approach)
Metaphors that fall into these categories are not just ways of describing AI – they are teaching tools for critical AI literacy.
The cautiously optimistic
Such metaphors are still thin on the ground. When they occur, they are not as dizzy as in the first phase but informed by adoption and use. Ted Underwood recently posted on Bluesky: “my actual motive for being interested in language models is that ‘interactive, tunable libraries’ is a dizzyingly attractive idea”. The library metaphor is expanding from the ‘Library of Babel’ to the ‘interactive and tuneable library’… Andrew Maynard, co-author of AI and the Art of Being Human, has put forward a metaphor of ‘intellectual craft’ that is worth exploring…
People are beginning to point out that some oft-repeated critiques of AI, such as the hallucination of references, are no longer pertinent for modern models. This has to be kept in mind when reading this blog!
These emerging metaphors and their critiques suggest that the metaphor landscape continues to evolve. Looking back across the full arc of change, seven key shifts stand out.
Key shifts: What changed and what it means
From benefits to harms (and how harms happen)
Early metaphors acknowledged emerging risks but were sometimes still abstract. For example, ‘double-edged sword’ tells you there are dangers but not what they are. In later metaphors harms are increasingly specified and multiplied. This shift from vague to specific risk metaphors matters, as vague metaphors allow everyone to project their own concerns, while specific harm metaphors enable concrete discussion of problems and solutions.
From individual use to social systems
Early metaphors focused on personal utility – what AI does for you (assistant, prosthesis, toolbox). Later metaphors focus on collective impact – what AI does to us (registry of power, colonising loudspeaker, technology of power). This is important because individual-focused metaphors make AI ethics a personal responsibility (‘use it wisely’), while system-focused metaphors demand institutional and policy responses.
From magical to material
Early metaphors highlighted the wondrous, magical and remarkable – crystal ball, oracle, wizard, djinn – while later metaphors emphasise the mundane and the messy – blurry JPEG, fast food, auto-complete, plastic. While early metaphors invited passive wonder, material and messy metaphors invite investigation – one can examine a JPEG, test fast food’s nutrition, study plastic’s composition.
From one-way to bidirectional
Early metaphors described AI in terms of something else – one direction only. Later metaphors begin to pay explicit attention to reciprocal influences: if AI is like humans (thinking, reasoning), do humans become more like AI (computational, statistical)? Such bidirectional and reflexive metaphors (here the mirror metaphor becomes important) reveal how AI discourse reshapes our understanding of human cognition, creativity, and agency.
From categorical clarity to contextual complexity
Early metaphors could be sorted neatly into benefit/risk categories: ‘digital assistant’ was clearly positive, ‘pollution’ clearly negative. Later metaphors (or rather their creators!) began to recognise that the same metaphor does different work in different contexts. The seemingly neutral ‘tool’ metaphor is now called “perhaps the most insidious metaphor” because it hides its values behind claims of neutrality (see also here).
Simple categorisation suggests we can just ‘choose better metaphors’, while contextual analysis reveals metaphors as sites of ongoing contestation.
From description to intervention
Our early taxonomy described metaphors people (or AIs themselves) were using, while our later taxonomy focused on how researchers use metaphor analysis in various ways: as research methodology (studying perceptions through metaphor generation), pedagogical tool (teaching critical AI literacy via metaphor reflection), or diagnostic instrument (revealing blind spots and biases).
Metaphors have shifted from being objects of study to tools of critical engagement. Researchers have shifted their focus from organising metaphors by what AI does (tool, knowledge, prediction) to organising metaphors by what metaphors do (how they position AI socially, politically, pedagogically, and so on).
The anthropomorphisation gradient
In the early days of chatbot mania, we found human-like metaphors (assistant, colleague) but no systematic framework for degrees of human-likeness. We can now find explicit hierarchies from minimal to maximum anthropomorphism:
- Minimal: calculator, map
- Low: database, library
- Medium: garden, neural network
- Medium-High: stochastic parrot, octopus
- High: intern, assistant, tutor
- Very High: genius, teacher
- Maximum: sorcerer, golem, overlord
This gradient shows that metaphors are not just used to anthropomorphise randomly; they are used to negotiate fundamental questions about AI agency, autonomy, and moral status.
What disappeared or diminished?
Some early metaphors have largely vanished (but not gone altogether) from current academic discourse, such as:
- knowledge repositories (ocean, encyclopaedia, infinite library)
- mystical prediction (crystal ball, oracle, time traveller)
- pure enhancement (cognitive prosthesis, augmented reality)
- neutral professionalism (data doctor, financial advisor)
These metaphors served the adoption phase – convincing people to try AI. Once adoption happened, attention shifted to consequences.
Who changed the metaphors?
The shift from early to current metaphors is not just about which metaphors but about who is making them. Early metaphors came primarily from tech companies and marketing, early adopters and enthusiasts, and AI systems ‘themselves’, with some early critical commentators sounding warnings. Later metaphors come primarily from critical scholars in the fields of STS, media studies, linguistics, and CAIL studies, as well as from educators grappling with classroom impacts, students experiencing AI’s effects on their learning, researchers studying diverse populations (EFL learners, Vietnamese teachers, Chinese students, Australian publics), and so on.
The diversification of metaphor-makers has diversified the metaphors. Alongside this, there is also a diversification of AI metaphor commentators and critics – different stakeholders, different stakes.
From wonder to reckoning
The transformation of the early, more optimistic/hype-driven landscape into a later, more critical one reveals something important about how societies make sense of disruptive technologies. What does this evolution tell us?
In a first phase of wonder, magical metaphors tried to domesticate something genuinely new and strange, to familiarise people with the unfamiliar. In a second phase, more practical metaphors integrated the technology into daily life. In the third and current phase, an increasing flood of critical metaphors highlights what has been lost, distorted, or damaged. We are now in the reckoning phase, and the metaphors reflect it.
One thing is striking: unlike previous technological waves, the reckoning arrived fast. It took decades for critical perspectives on social media to gain traction. With AI, the critical metaphors emerged within months of mass adoption. There might be many reasons for this. It might be that we learned from the harms of social media; or that AI’s impacts on knowledge work and education are immediate and visible, so that educators and students encountered problems in real time; or that the technology itself is less opaque than previous black boxes (we can see training data, we can test outputs).
What comes next for the observatory?
I hope to have demonstrated in these three posts that we need an AI metaphor observatory – not just to collect metaphors for AI, but to track their evolution and understand what that evolution reveals about our changing relationship with AI.
Comparing early and current metaphor discourse reveals, yet again, that metaphors are not just decorations on our thinking about AI – they are social sensors, registering shifts in understanding, anxiety, hopes, and critique. Metaphors also change over time and differ across demographic groups.
The magical metaphors of the early days captured a moment of possibility and uncertainty. The critical metaphors of our time capture a moment of reckoning and resistance. Both are necessary; both tell us something important.
The observatory’s work is to keep watching – to notice not just which metaphors wash up on the shore, but how the patterns change with each tide. Because those patterns reveal where we have been, where we are, and possibly where we are heading in our relationship with AI. Perhaps most importantly: they reveal who gets to name that relationship, whose metaphors circulate, and whose framings shape policy, practice, and public understanding.
The beach is getting more crowded. The shells keep coming. The observatory work continues and it would be great if it could become a collective and participatory effort.
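What might such a collective effort look like in practice? Purely as a hypothetical sketch (nothing like this exists yet, and all the field names below are my own invention), a metaphor ‘sighting’ could be logged as a small structured record, so that shifts like the ones traced above become queryable rather than anecdotal:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: one logged 'sighting' of an AI metaphor.
# All field names are invented for illustration; this is not an existing schema.
@dataclass
class MetaphorSighting:
    metaphor: str      # e.g. "stochastic parrot"
    source_type: str   # e.g. "academic paper", "news media", "marketing"
    stance: str        # e.g. "hype", "critical", "ambivalent"
    first_seen: date   # when the observatory first logged this usage
    notes: str = ""    # free-text context: who used it, to what end

# The 'observatory' is then just a growing collection of sightings that
# can be filtered by stance or date to trace shifts like those above.
# (Toy entries; dates other than the 2021 stochastic parrot are illustrative.)
observatory = [
    MetaphorSighting("crystal ball", "marketing", "hype", date(2023, 1, 15)),
    MetaphorSighting("stochastic parrot", "academic paper", "critical", date(2021, 3, 1)),
    MetaphorSighting("fast food", "blog", "critical", date(2025, 6, 10)),
]

critical_metaphors = [s.metaphor for s in observatory if s.stance == "critical"]
print(critical_metaphors)  # ['stochastic parrot', 'fast food']
```

Whether anything this formal is desirable – or whether the beach is better combed by hand – is of course an open question; the point is only that a participatory observatory would need some shared way of recording who said what, when, and in which register.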
Epilogue
There is, of course, also research being carried out into whether, how and why LLMs can generate novel metaphors and analogies – but this is a topic for another day, or, indeed, for somebody else! One should also keep an eye on changes in visual metaphors, such as the Ouroboros “feeding on itself” (see here and here) – something that is fortunately being done already at the ‘Better images for AI’ project!
Footnotes
*SLOP: On 25 November 2025 the Australian Macquarie Dictionary announced ‘AI slop’ as its word of the year (see article in The Guardian). Here is a nice quote from the dictionary’s announcement: “While in recent years we’ve learnt to become search engineers to find meaningful information, we now need to become prompt engineers in order to wade through the AI slop. Slop in this sense will be a robust addition to English for years to come. The question is, are the people ingesting and regurgitating this content soon to be called AI sloppers?”
Now The Economist has voted ‘slop’ word of the year – and so has the Merriam-Webster Dictionary! There is also an article on the ‘Slopocalypse’, and here is a discussion about the meaning/definition of AI slop by AI insiders… One could write a whole post about the life and work of the word slop in the context of AI…
And there are now (December 2025) even ‘slop collectors’ (which reminds me of the ‘night soil men’ of the olden days). And there is also ‘slopwashing’… and ‘slopsquatting’. And somebody just (26 December) coined ‘Slopper Barons’!!
I have now written a whole blog post on ‘slop slang’!
**Paperclip maximiser: As early as 2003 Nick Bostrom introduced the metaphor of ‘the paperclip maximiser’ – an AI that, tasked with making paperclips, runs amok and converts everything into paperclips. This metaphor gained prominence around the time chatbots crept into public consciousness (and it was one of the first metaphors somebody pointed out to me at the end of 2022, post-ChatGPT). Around 2018 it “motivated Stephen Hawking and Elon Musk to express concern about the existential threat of AI”, but it is now coming in for criticism, from Emily Bender for example.
Image: The Observatory at Delhi, by Thomas and William Daniell (1808), Wikimedia Commons
