Metaphors for AI: An overview of recent studies

In my previous post (part 1 of a trilogy) I called for an AI metaphor observatory to watch how people make sense (and sometimes nonsense) of generative artificial intelligence, or GenAI, through metaphors. I was pleased to see that many scholars are now collecting AI metaphors and studying them systematically, and I provided a rough map of the emerging academic landscape.

In this post (part 2 of the trilogy) I try to chart the academic terrain a bit more (before surveying shifts in metaphor usage in part 3). I first wrote a long literature review of the papers I listed at the end of my previous post (based on a Google Scholar and Scopus search for papers published mainly in 2025) and then condensed what I found there into over-arching themes, categories and trends, similar to those I sketched in the first post but more detailed (with some help from Claude).

Across dozens of new papers, blog posts and conference collections, I observed thematic patterns emerging which I’ll summarise here. Overall, academic AI metaphor studies seem to be evolving from descriptive catalogues of metaphors to critical analyses of what these metaphors do in society, to reflexive and pedagogical uses of metaphor itself. Let’s look at this evolution more closely.

Conceptual metaphor analysis

The articles I surveyed used various types of analysis tools to study GenAI metaphors. One of the most widely used methods was conceptual metaphor analysis, derived from the work of George Lakoff and Mark Johnson. Authors who use this method aim to reveal systematic, conventional mappings between conceptual source domains (e.g. war) and conceptual target domains (e.g. argument) that shape our thought and language. We might, for example, say that somebody ‘attacked a position’ or ‘defended an argument’ and so on, based on the conceptual metaphor ARGUMENT IS WAR.

In the case of GenAI we might map characteristics of human cognition (source domain) onto LLMs or chatbots (target domain) and say ‘Claude came up with a really great idea’ relying on the underlying conceptual metaphor CHATBOTS ARE HUMANS.

Collections and categorisations

Many of the academic studies I have surveyed try to sort metaphors that have been collected or elicited (and especially their source domains) into categories (continuing work started in blog posts like those by Furze, Trott and myself after the emergence of chatbots/LLMs like ChatGPT, Claude and so on at the end of 2022).

Several works (e.g., “Swiss Army Knives, Stochastic Parrots, Drunk Interns, and Overlords” by Oster, McCaleb and Mishra) map AI metaphors along a continuum from machine to deity: tool → system → organism → human → superhuman → mythical.

This pattern reveals an ongoing anthropomorphisation gradient showing how metaphors reflect degrees of agency and autonomy attributed to AI (especially through personification and agency metaphors). The same gradient often implies increasing moral and social stakes: as metaphors move toward ‘overlord’ and ‘sorcerer’ they also convey anxiety about control, ethics, and accountability.

See footnote 1 for a full hierarchy along this anthropomorphism gradient that maps metaphors I collected in my literature review from ‘calculator’ at the minimal end through to ‘golems’ at the maximum.

One of the most comprehensive taxonomies of AI metaphors has just been published by Michelangelo Conoscenti (based on conversations with various LLMs) and I can only give a flavour of one of the taxonomies in footnote 2.

Another comprehensive framework or taxonomy was established by Matthijs Maas in his 2023 article “AI is Like… A Literature Review of AI Metaphors and Why They Matter for Policy”, which explicitly examines how metaphors shape AI governance across the entire policy cycle. Maas organises 55 AI analogies into five distinct categories based on what aspect of AI they emphasise – as you can see in footnote 3.

Societal sense-making and imaginaries

As always, and so also with AI, metaphors serve sense-making functions across all societal domains. They are used to domesticate novelty and familiarise the uncanny. Studies rooted in social representation theory show that metaphors stabilise public discourse by turning ambiguity into conversation (“talking AI into existence”).

This aligns with a broader trend: AI metaphors are tools of social imagination, shaping collective ‘AI imaginaries’ (of hope, control, creativity, fear). AI is imagined in various ways: ‘as practical tools or existential threat’, as technical support (high heels) or as a danger (drug). (See also the taxonomies and scales of anthropomorphism in footnotes 1 and 2.)

Metaphors can, of course, also be tools of what one may call, metaphorically, computational imagination (itself based on incorporating into training data what’s out there in terms of social imagination). I have tried to show this in an old post on what metaphors ChatGPT would use for itself, and Jim Shimabukuro has highlighted this in a more recent blog post. Michelangelo Conoscenti has now analysed AI self-generated metaphors more extensively.

Bidirectional and reflexive metaphors

There is interesting research emerging on AI-as-human and human-as-AI metaphors (“I am a stochastic parrot, and so r u” – a title that is itself indicative of the pervasiveness of Bender’s parrot metaphor). This reciprocal metaphorical framing blurs boundaries between cognition and computation. The article urges us to examine this blurring between humans and machines critically. The ‘mirror’ metaphor (Shannon Vallor’s The AI Mirror) and similar work show a reflexive trend: using AI metaphors to think about us and our relationship with AI.

Scholars increasingly explore how humans use AI metaphors to redefine human thought, creativity, and agency. A critical article by Olivia Guest and others argues that some anthropomorphising metaphors distort how we perceive these machines: “humanising them while dehumanising us”.

Core metaphors and their social functions

Certain metaphors have become conceptual anchors in AI discourse: Stochastic parrot (Emily Bender), hallucination, black box, genie, assistant, mirror (Shannon Vallor). Each metaphor carries out specific normative work — often used to justify, critique, or absolve responsibility (e.g., ‘hallucination’ shifting blame from developers to models).

In an older post I called some of these metaphors celebrity metaphors, that is, metaphors receiving sustained academic attention and becoming canonical in public discourse. Some of these core metaphors, particularly the stochastic parrot, become sites of contestation, shaping public understanding and policy.

Metaphors across contexts and discourses

Research into AI metaphors spans media, policy, education, and public perception, as I have hinted at in my previous post:

Media studies: Here researchers found recurring human, animal and natural-force metaphors, with AI development framed as a war, a race or a dance. Conceptual metaphors such as AI IS A HUMAN BEING, AI IS AN ANIMAL and AI IS A NATURAL FORCE appear consistently across different studies.

Policy: Researchers studying the EU AI Act found high-level, bland metaphors (journey, war, tool), indicating bureaucratic flattening. Those examining UNESCO’s Guidance for Generative AI in Education and Research found that the policy document uses personification metaphors to describe artificial intelligence, and examined how these linguistic choices shape public understanding of AI’s educational role. The article breaks UNESCO’s metaphors down into three categories: biology (ingests data, hallucinates), reasoning (thinks, reasons), and leadership (coach, advisor, partner). (See also footnote 3.)

Public discourse (crowdsourced metaphors): Here findings show metaphors arranged along a tool-versus-thief polarity, reflecting underlying fears and distrust. These metaphors change over time, differ across demographic groups, and link to trust, emotion and ethics. In terms of public perceptions there is a tension between seeing GenAI as a practical tool vs an existential threat; a job facilitator vs a risk to creativity; and a helper vs a competitor. (For a great summary, see here.)

Education: In this context, researchers are discovering and using the richest metaphorical diversity, from AI as tutor to AI as threat, mirror and autotune for knowledge, and they are also trying to organise this proliferation.

Metaphors in teaching, learning and education

Metaphors in education deserve special attention, as this field is expanding fast, from single studies to a whole collection of articles discussing histories and discourses of metaphor use in education. It was indeed in the context of education that AI was first discussed as what one might call, metaphorically, and borrowing from John Locke, a ‘perfect cheat’…

A qualitative study of Chinese EFL learners discovered, for example, the use of conceptual metaphors such as HUMANS, TOOL/MACHINE, BRAIN, RESOURCES, FOOD/DRINK and MEDICINE, and revealed that “while some language learners perceived GenAI as supportive, helpful, and intelligent, others expressed concerns about over-reliance and potential loss of critical thinking skills.” Another paper revealed that Vietnamese teachers describe GenAI as a ‘magic key’ or a ‘diligent assistant’, but also as a ‘double-edged sword’.

In education research one can observe what one might call a metaphorical turn. Here metaphors have become research methodologies, and metaphor analysis is used to study stakeholder perceptions of and attitudes to AI. Metaphors have been turned into pedagogical tools used in teaching critical AI literacy through metaphor selection and analysis. They have become diagnostic instruments revealing blind spots, biases and power dynamics. Researchers found, for example, “that anthropomorphic and social metaphors contribute to creating overtrust” in the technology.

They also found some constants. Perceptions and attitudes, captured by metaphors, consistently waver between: Practical tool ↔ Existential threat; Job facilitator ↔ Risk to creativity; Helper ↔ Competitor.

Metaphors as frameworks and critical literacy tools

Linked to AI metaphors in education, there is a strong new direction in research and practice: using metaphor analysis for critical AI literacy (CAIL), a trend that itself needs to be observed.

Educators use metaphors to teach how metaphors shape perceptions and attitudes. An article entitled “Assistant, parrot or colonizing loudspeaker” established a framework along the lines of “functional / critical / rhetorical” (with lots of illustrative examples). Another article used metaphor as a tool through which to make sense of generative AI in educational contexts and found that perceptions wavered between practical tool and existential threat.

This reflects a meta-turn: from describing metaphors to using them critically as part of ethical AI education, showing how educators are systematising metaphor analysis for teaching purposes. (For an overview of some functional frameworks, see footnote 4, and for some CAIL metaphors see footnote 5.)

For those interested in CAIL there is now a whole survey of the field available here, assembled by Olivia Guest – which is worth studying in itself!

Tania Duarte, the founder of We and AI, a UK non-profit focused on facilitating critical thinking and more inclusive decision-making about AI through critical AI literacy, has been leading a project on ‘Better images of AI’, which is worth following. She was part of a conference on “Exploring metaphors of AI: visualisations, narratives and perception”.

Meta-trend: From description to reflexive critique

Across the field, there is a clear move from cataloguing metaphors to analysing their epistemic and ethical consequences to using metaphor as intervention (education, policy, literacy). Scholars now treat metaphors as infrastructures of understanding, not just linguistic flourishes.

A group of researchers around Olivia Guest are now critiquing anthropomorphic metaphors as harmful tools of normalisation rather than neutral descriptive choices. In a recent article, which is central to CAIL studies, they stress that anthropomorphic metaphors (train, learn, reason, hallucinate) aren’t just descriptive choices; they’re actively harmful because they distort how we perceive these machines: “humanising them while dehumanising us” (see also my old blog post on this). Metaphors, in this view, are part of the problem, not just useful analytical tools. (For a quick summary, see footnote 6.)

Overall, the field of GenAI metaphors studies seems to have been evolving in three overlapping waves:

  • Descriptive/taxonomic (what metaphors exist?)
  • Analytic/critical (what do these metaphors do socially, politically?)
  • Reflexive/pedagogical (how can we use metaphor awareness to shape better AI literacies and imaginaries?)

An AI metaphor observatory should, however, not just track trends; it should also observe what is missing and think about what might be coming next.

What is missing and where are the gaps?

Despite the explosion of AI metaphors and of critical metaphor studies, some areas remain under-explored in the academic literature:

Pollution metaphors: Despite widespread concern about AI ‘slop’, data pollution or contamination, training data poisoning, and model collapse, there is surprisingly little academic analysis of these metaphors’ cultural work. How does framing AI problems as ‘pollution’ shape our sense of solutions?

Economic metaphors: The phrase ‘AI bubble’ circulates widely in tech and financial discourse but hasn’t received sustained metaphor analysis. What does it mean to frame AI as a speculative bubble rather than, say, an infrastructure or a utility?

Agency metaphors: There is no detailed analysis of the ‘metaphor’ of AI agents. What is the difference between genuine AI ‘agents’ working for you and bots that are just metaphorical ‘agents’?

AI safety and security metaphors: There is no academic metaphor analysis homing in on the AI safety and security discourse around ‘guardrails’ or ‘alignment’. What are the main metaphors for threats to safety and security and what do they entail? (Just after writing these lines I saw Jack Stilgoe responding to a headline saying “Anthropic CEO warns that without guardrails, AI could be on a dangerous path” with this quip: “It’s not academic hair-splitting to suggest we should be careful about metaphors. If we’re on a dangerous path, will guardrails help? Shouldn’t we move to a different path? Or is it that our path is through a dangerous place and we mustn’t fall off, which is why we need companies to protect us?”)

AI and public values: What public values are highlighted or hidden by metaphors for AI? Some answers to this question can be found in the articles surveyed here that implicitly or explicitly deal with biases and values (transparency, accountability, prevalence of Western values) inherent in AI metaphors (e.g. ‘black box’, ‘colonizing loudspeaker’), but there is, it seems, no overall systematic study yet linking research into public values to metaphor research.

Expert and lay metaphors: We should also ask whether there is a gap between expert metaphors (stochastic parrot, alignment) and public metaphors (magic key, genie), and we need to trace more clearly whether discourses and AI metaphors are moving from simpler to more complex metaphors as both understanding of and controversies around AI deepen and uses proliferate.

History of metaphors for AI: And finally, it would be great if somebody could trace a history of metaphors for AI from 1955, when the term ‘artificial intelligence’ was coined, to now (or even before that, from, say, the Antikythera mechanism onwards). What changed after the advent of LLMs etc.? (This book on AI narratives might be a good starting point.)

What comes next?

Looking ahead, here are some patterns worth watching:

Will the critical phase persist or plateau? Will metaphors continue getting more critical and specific? Or will we reach a point of ‘metaphor fatigue’ where critical framings become clichéd and lose their force?

Are counter-metaphors emerging? As critical metaphors gain traction, will we see deliberate counter-metaphors from AI companies and advocates? Are they already trying to reclaim ‘assistant’ or introduce new positive framings?

How do metaphors travel across contexts? How do metaphors move from academic papers to policy documents to classrooms to public discourse? Which ones stick and which fade? The ‘stochastic parrot’ made this journey successfully – why?

What role do AI systems ‘themselves’ play? When asking current AI systems for metaphors (as I did in a footnote to my first post, when Claude called itself “text in a box”), we get strikingly humble, anti-magical responses. Is this trained caution? Genuine limitation? Strategic positioning?

Where are the new frontiers? What aspects of AI still lack good metaphors? The infrastructure behind AI (data centres, water use, carbon costs)? The labour of AI training (data annotation, content moderation)? The economics of AI development?

Concluding thoughts

When I first started picking up AI metaphors, they felt like seashells — small curiosities washed up by the waves of hype. Some were shiny, some broken, some I didn’t quite know what to do with. But the more I looked, the more I noticed patterns in their shapes and colours: how they clustered, how certain ones kept coming back, how new ones kept appearing with every tide. This is really what this little observatory is about: keeping an eye on the shoreline, sorting through what washes up, and noticing what these shells tell us about how people are making sense of AI and of themselves.

Not only are there still metaphors washing up on the beach; there are also more beachcombers around to collect them – and each one of them sorts them in different ways, as the examples collected here in the footnotes demonstrate. We need to pay collective attention to metaphors as social, cultural and educational instruments and social representations, and continue collecting and observing both the metaphors and the collections.

What metaphors are you using for AI? This observatory is a collective project – share your observations in the comments below or tag me on Bluesky using the hashtag: #AIMetaphorObservatory

Image: Duke Humfrey’s Library Interior 3, Bodleian Library, Oxford, UK, Wikimedia Commons

Footnotes – Examples of taxonomies

Footnote 1: The anthropomorphisation gradient (organising metaphors by degree of human-likeness) (based on this article)

CHATBOTS/LLMs/AIs ARE…

Minimal (pure tool/technology)

  • Maps
  • Calculator for words
  • Auto-complete
  • Swiss Army Knife

Low (infrastructure/system)

  • Libraries
  • Databases

Medium (natural/biological)

  • Gardens
  • Neural networks
  • Brain

Medium-High (animal/basic intelligence)

  • Stochastic parrot
  • Octopus
  • Venus Fly Trap
  • Wolf in sheep’s clothing

High (human-like)

  • Smart drunk biased supremely confident intern
  • Clueless intern
  • Helper/Assistant
  • Study buddy
  • Genius in a room
  • AI-tutor, AI-coach, AI-mentor
  • Teammate
  • Mansplainer

Very High (human+)

  • Teachers
  • Geniuses
  • Coach
  • Advisor
  • Socratic opponent
  • Partner in learning

Maximum (godlike/mythical)

  • Sorcerers
  • Golems
  • Genie (in “invoking a genie to grant a wish”)
  • Terminator
  • Shoggoth

Footnote 2: Conoscenti’s framework (organised by function, process, impact, and communication)

Michelangelo Conoscenti’s recent article “Portrait of the AI as a Young Metaphorist” presents a comprehensive taxonomy derived from conversations with various LLMs. His framework organises metaphors into four major categories (which I have condensed here; if you want to read the article in full, please contact Michelangelo by email: michelangelo.conoscenti@unito.it):

Metaphors of function and purpose

These describe what AI does or how it serves users:

CHATBOTS/LLMs/AIs ARE…..

  • Tool/Instruments (Swiss army knife, assistant, butler, workhorse)
  • Partner/Companion (co-pilot, creative partner, advisor, mentor, teacher, colleague)
  • Engine/Factory (emphasising processing, productivity, efficiency)
  • Brain/Mind (neural networks, thinking machines, electronic brains)
  • Library/Repository (vast source of accessible information)
  • Mirror/Reflection (reflecting human behaviour, biases, or thinking)
  • Lens/Prism (sharpening or refracting information for new insights)
  • Map/Cartographer (guide through informational landscapes)
  • Translator/Interpreter (bridging communication gaps)
  • Guide/Tracker (educational role, facilitating learning)
  • Oracle/Predictor (predictive capacity, future insights)
  • Time Traveller/Timekeeper (using historical data, modelling trends)

Metaphors of processes and development

These focus on how AI evolves and is created:

  • Learning/Growth (dynamic, adaptive, improving)
  • Garden/Ecosystem (organic complexity, nurturing, interdependence)
  • Construction/Architecture (deliberate design and engineering)
  • Navigation/Exploration (journey of discovery into unknown domains)

Metaphors of potential and impact

These address AI’s effects and implications:

CHATBOTS/LLMs/AIs ARE….

  • Double-Edged Sword/Pandora’s Box (promise and peril)
  • Saviour/God (utopian hope, superhuman capabilities)
  • Monster/Virus/Time Bomb (dangers, loss of control, malicious misuse)
  • Wave/Tsunami/Storm/Earthquake (disruptive transformational force)
  • Frontier/Race (competitive endeavour, innovation boundaries)
  • Black Box (opacity, difficulty in understanding decision-making)
  • Neural Network/Algorithm (specific technological structures)
  • Embodied Cognition (linking AI to bodily interaction theories)

Metaphors of communication and language

These highlight AI’s linguistic capabilities:

  • Conversation/Dialogue (human-like communication exchange)
  • Linguistic Kaleidoscope/Ecosystem (diversity and complexity of language processing)
  • Symphony of Sentences/Weaver of Words (creative capacity in language composition)

Conoscenti’s taxonomy is particularly notable for explicitly including embodied cognition and for organising metaphors around functional dimensions (what AI does), developmental processes (how it evolves), impact assessment (what effects it has), and communicative capabilities (how it interacts). This multidimensional approach complements other taxonomies by stressing the processual and communicative aspects of AI alongside its instrumental functions.

Footnote 3: Policy-oriented taxonomy (Maas 2023)

Maas organises 55 AI metaphors/analogies into five distinct categories based on what aspect of AI they emphasise:

1. Essence (what AI is): field of science, IT technology, robots, software, black box, organism, brain, mind, alien, supernatural entity, intelligence technology, trick

2. Operation (how AI works): autonomous system, complex adaptive system, evolutionary process, optimization process, generative system, foundation model, agent, pattern-matcher, hidden human labor

3. Relation (how we relate to AI as subject): tool, animal, moral patient/agent, slave, legal entity, culturally revealing object (mirror), frontier, our creation, evolutionary successor

4. Function (how AI is used): companion, advisor, malicious actor tool, misinformation amplifier, weapon, critical strategic asset, labor enhancer/substitute, enabling technology, tool of power or empowerment

5. Impact (unintended consequences): source of unanticipated risks, environmental/societal pollutant, usurper of authority, generator of legal uncertainty, driver of value shifts, revolutionary technology, existential risk driver

What distinguishes Maas’s taxonomy is its explicit focus on the policy implications of each metaphor—showing how different framings foreground different regulatory responses, coalition-building opportunities, and governance challenges. This makes it particularly valuable for understanding how metaphors don’t just describe AI but actively shape how it gets governed.

Footnote 4: Educational frameworks (organised for pedagogical purposes)

Functional frameworks

The 4T pyramid model (from student survey)

  1. Technical Support: high-heeled shoes, everyday tools
  2. Text Development: compass, roadmap
  3. Transformative Potential: Spider-Man, bridges, spaceships
  4. Threat: drug, fast food, addictive substances

Functions, roles, qualities, agency framework

Functions (tasks/capabilities):

  • Swiss army knife
  • Bricks and mortar
  • Ideas generator

Roles (human-like relationships):

  • Helper
  • Study buddy
  • Frenemy

Qualities (characteristics):

  • Black box
  • Outer planet
  • Slippery slope
  • Black box magician

Agency (volition/autonomy):

  • Competitor
  • Invader
  • Sinister robot

Footnote 5: Critical AI literacy (CAIL) metaphors (organised for pedagogical purposes)

Four key CAIL metaphors

  • GenAI as echo chamber
  • GenAI as funhouse mirror
  • GenAI as black box magician
  • GenAI as map

Three types of literacies

Functional View:

  • Human: help/assistant, genius, neural network, tutor, coach, mentor, teammate, student, life saver
  • Nonhuman/inanimate: cake-making, blood transfusion, calculator for words, auto-complete, tool, simulator

Critical View:

  • Human: mansplainer, Fahlawi [friend]
  • Animal/plant: stochastic parrot, octopus, Shoggoth, wolf in sheep’s clothing, Terminator, Venus Fly Trap
  • Nonhuman/inanimate: Blurry JPEG, fast food, opium, plastic surgery, Western Museum, colonizing loudspeaker

Rhetorical View:

  • Human: clueless intern
  • Animal/plant/humanoid: double-edged sword, cute hapless robot
  • Nonhuman/inanimate: registry of power, atlas, levelling the playing field

Footnote 6: A critique of metaphors for AI by Olivia Guest and colleagues

The article “Against the Uncritical Adoption of ‘AI’ Technologies in Academia” by Guest et al. is a recent contribution to critical AI studies. I will only quickly extract some issues it touches upon regarding metaphors.

1. “Genie back in the bottle”: The paper opens with this metaphor, comparing AI to past technologies (tobacco, combustion engines, social media) where “society struggles to put the genie back in the bottle” (p. 1) – emphasising the difficulty of reversing harmful technological adoption once it’s normalised.

2. Critical discussion of anthropomorphic/jargonistic metaphors: The paper explicitly critiques metaphorical phrases used to describe AI systems – “like train, learn, hallucinate, reason” (p. 11) – arguing these “result in distorting how we perceive these machines: humanising them while dehumanising us” (p. 11).

3. The Mirror metaphor critique: Guest et al. state: “A mirror — not even an AI mirror — is not what it reflects” – directly engaging with Vallor’s mirror metaphor and warning against over-identification (p. 11 and quoting from a post by Guest and Martin 2025, p. 8)

4. Imperial/Kingdom metaphor (historical quote): The paper starts with a striking 1985 quote: “The culture of AI is imperialist and seeks to expand the kingdom of the machine. The AI community is well organised and well funded, and its culture fits its dreams: it has high priests, its greedy businessmen, its canny politicians” (p. 2).

5. Historical comparisons as implicit metaphors: The framing of AI alongside tobacco and combustion engines works metaphorically – suggesting AI is another harmful product being uncritically adopted “under the banner of progress” (p. 1).

