
Making and unmaking AI metaphors and magic

Twenty years ago, Noel and Amanda Sharkey, seminal contributors to early AI debates, wrote an article on artificial intelligence and natural magic which deserves to be read again today.

They focus on robots and could probably not have foreseen the advances made in the last three years in generative AI, but what they say holds true. Be it robots or chatbots, AI is “part of a long tradition that has run from ancient times that treated the precursors of robots, the automata, as part of Natural Magic or conjury.” And now comes an important sentence: “Deception is an integral part of AI and robotics; in some ways they form a science of illusion.”

This illusion is especially one of “animacy and thought” and feeds on “innate human predispositions such as zoomorphism, the willing suspension of disbelief and a tendency to interpret AI devices as part of the social world.”

They urge researchers not to “allow themselves to be deceived by their own illusions”. This warning should have been heeded in 2022/2023, before almost everybody began to fall for the increasing charm not of robots but of chatbots. Strangely, just like anthropomorphising chatbots, seeing AI as magical seems, anthropologically speaking, almost inevitable. But perhaps not quite, as we shall see….

Between AI magic and math

As soon as chatbots aka LLMs aka GenAI appeared on the horizon at the end of 2022, their magical aura began to fascinate me. First I saw visual symbols of magic like wands and sparkles sprinkled all over AI and found that this is a quite intentional strategy reflecting “corporate magic tricks”. Then I saw GenAI chatbots describing themselves as magical and found that this reflects an enchanted discourse promoted by developers. Then I saw prompts and prompting advice which reminded me of incantations, spells and magic words, and for good reason: developers themselves use words like ‘spellcasting’, ‘alchemy’, or ‘invoking’. There is a lot of magic talk out there.

Not surprisingly, AI is now infiltrating magic itself “where AI acts as a ‘digital genie’ creating, automating, and enhancing experiences that feel impossible” – a doubling of deception, if you like.

There are, of course, also people who try to steer researchers away from being deceived by their own delusions of magic. They point out that GenAI is not magic but math or calculus; not magic, but manual labour or ghostwork (also here); not magic, but stochastic parrots…. (a move away from magic that I have tried to chart here).

In this post I’ll focus on AI magic and metaphors again, but I’ll try to trace how some people have creatively explored and questioned that relation in some recent work and opened it up for scrutiny and engagement.

Between AI enchantment and disenchantment

In a post from 2023 Leon Furze has explored AI metaphors and “the ‘mythologising’ of AI narratives; the conflation of Artificial Intelligence with magic, religion, and the sublime; and the ways we anthropomorphise the technology”. As he rightly points out, AI certainly deserved Arthur C. Clarke’s 1973 adage that “Any sufficiently advanced technology is indistinguishable from magic.” To this I would add that any sufficiently advanced metaphor is, of course, also indistinguishable from magic.

As we have seen, this enchantment of GenAI was not altogether unintentional. The visual and verbal imagery chosen by corporations and the language used by their executives combined references to alchemy with algorithms – what Peter Nagy and Gina Neff called a “conjuration of algorithms”. This contributed to what Alexander Campolo and Kate Crawford called in 2020 “enchanted determinism”, which in turn was reflected in what chatbots said when asked what metaphors they would use for themselves.

Taking inspiration from Max Weber’s theory of disenchantment and applying it to deep learning, a type of machine learning fundamental to how GenAI operates, Campolo and Crawford pointed out: “Deep learning occupies an ambiguous position in this framework. On one hand, it represents a complex form of technological calculation and prediction, phenomena Weber associated with disenchantment. On the other hand, both deep learning experts and observers deploy enchanted, magical discourses to describe these systems’ uninterpretable mechanisms and counter-intuitive behavior.”

Going beyond AI enchantment and disenchantment

Taking these insights as a springboard, Maria Luce Lupetti and Dave Murray-Rust set about ‘(un)making AI magic’ from a designer perspective (CHI 2024). 

Their central argument is that AI products are commonly sold/positioned as being (inherently) enchanting. Their opacity and complexity create a ‘magical aura’ that designers can, however, consciously choose to amplify or diminish. Rather than dismissing enchantment as purely problematic, the paper asks designers to understand and navigate it skilfully.

They take the perils of AI enchantment seriously. As we have seen, the tech industry deliberately uses magical language (spellcasting, alchemy, invoking) to obscure how AI works, presenting AI as both magically inscrutable and yet infallibly deterministic. This can lead to miscalibrated trust, concealed risks, masked exploitative labour and hidden environmental costs.

Against this backdrop, Lupetti and Murray-Rust develop a taxonomy of design approaches that increase or decrease the perception of magic and enchantment. This taxonomy builds on Murray-Rust’s 2022 work with Nicenboim and Lockton on AI metaphors for designers (summarised in one of my AI metaphor posts) and extends it into the realm of enchantment dynamics and design practice.

Their taxonomy identifies seven design principles arranged along an enchantment/disenchantment spectrum derived from analysing 52 student design projects at TU Delft:

  • Enchanting approaches:
      – Apply stage magic principles (seamless, invisible technology that “feels like magic”)
      – Apply magic metaphors (explicitly borrowing supernatural language or imagery)
      – Summon AI as supernatural entity (treating AI as a being with superhuman powers – the most common and often least critically grounded approach)
  • Disenchanting approaches:
      – Manifest mechanisms (making AI’s inner workings visible, e.g. showing data flows or surveillance logic)
      – Materialize beliefs (using provocative design to expose assumptions people hold about AI)
  • Ambiguous:
      – Play with AI (curiosity-driven exploration that cycles between enchantment and disenchantment)
      – Presume AI (treating AI as invisible background infrastructure without directly engaging with it)

A striking empirical finding is that most student projects fell on the enchanting side, and those that summoned AI as a supernatural entity tended to engage less with the actual technology, relying on Wizard of Oz techniques and storytelling rather than technical development. Conversely, designers who engaged deeply with how AI works were better able to develop a conscious, articulate posture, whether critical or affirmative.

A practical toolkit then translates the seven design principles/taxonomy into “What if?” questions for design exploration and “Why?” questions for reflexivity, helping designers examine the values and assumptions embedded in their choices.

For example: What if a heater would summon AI as a supernatural entity? This asks what kind of superior heating ability would feel ‘supernatural’ – perhaps a heater that could see through walls and predict when you’ll feel cold. Why would a heater need supernatural AI? This question forces reflection on control, agency, and whether we want products making autonomous decisions about our comfort. These ‘what if’ and ‘why’ questions turn metaphor analysis from passive observation into active design practice, making explicit the values and assumptions that would otherwise remain invisible.

Instead of letting researchers be charmed by AI magic, this paper breaks the spell and shows how designers and others can actively produce or dismantle magical perceptions of AI. This goes beyond merely describing an anthropomorphisation gradient, as I once did: that gradient captured the tendency to treat AI as godlike or superhuman, but did not show how to overcome that framing.

What about designing AIs or chatbots themselves? I am not a specialist here, but I have certainly observed that over time Claude has become less ‘magical’ and perhaps moved into a phase of ‘magical realism’ if not realism itself. Are designers of AI moving from enchanting to disenchanting the mysterious black box that is AI? I don’t know, but I can tell an anecdote…

In one of my posts on AI and metaphors, I mentioned that when I asked Claude to describe itself, I got the rather deflating “I’m just text in a box” response. This contrasted with my earlier findings from a chatbot elicitation task, when most of the metaphors were magical.

That move from magical self-description to flat, mechanical self-description could be read, through Lupetti and Murray-Rust’s design taxonomy, as a deliberate choice to disenchant – to apply something like the ‘manifest mechanisms’ principle to the AI’s own self-presentation. Whether this shift reflects design choices by Anthropic, iterative refinement of safety guidelines, or simply different prompting contexts is difficult to determine from the outside, but it is striking that the change maps so neatly onto the enchantment-disenchantment spectrum that their taxonomy describes.

Conclusion

In my AI metaphor trilogy, I have mapped the academic landscape (part 1), surveyed emerging themes and taxonomies (part 2), and tracked shifts in metaphor usage over time (part 3), for example from magical to material. In this companion post, I’ve tried to show how one can go beyond description and how metaphors and magic can be actively made and unmade through conscious design choices.

The Lupetti and Murray-Rust taxonomy offers not just another way to classify AI discourse, but a practical toolkit for designers, developers, and critics to navigate – and potentially reshape – the enchanted spaces around AI. It shows a way out of the black box.

Image: Pixabay

