A reader of my blog (yes, there is one!) recently asked me whether I had heard of “Habsburg AI”. To my shame I had not. So, I looked it up and what I found made me think, both about AI and about metaphors for AI.
I have been blogging about metaphors for AI for a while. I pick them up, I look at them, I sort them. But I now realise I haven’t really sorted them as much as one could. There are single metaphors like ‘stochastic parrot’ or ‘blurry JPEG of the web’, but there are also families of metaphors, such as ‘teacher’, ‘coach’, ‘assistant’ and so on. As it turns out, ‘Habsburg AI’ belongs to a family (and is of course based on seeing similarities between a family, indeed a dynasty, and AI). Other family members are ‘Bullshit AI’ or ‘AI slop’, but also ‘model collapse’, ‘digital mad cow disease’, ‘model autophagy disorder’ and so on.
How does ‘Habsburg AI’ fit into that extended family of metaphors? I’ll first look at the origins of the phrase, then explore some of the family members and how they are related, and finally come to a design project that ‘enacts’ the concept of ‘Habsburg AI’ in a visceral and visual way.
Origins
The term was coined, it seems, by academic Jathan Sadowski in February 2023 when he wrote in a tweet: “I coined a term on @machinekillspod that I feel like needs its own essay: Habsburg AI – a system that is so heavily trained on the outputs of other generative AI’s that it becomes an inbred mutant, likely with exaggerated, grotesque features. It joins the lineage of Potemkin AI”.
Sadowski expanded on the ‘Habsburg AI’ analogy or metaphor in 2024 in an interview with France 24 entitled “Inbred, gibberish or just MAD? Warnings rise about AI models”. He describes how AI systems decay when they are repeatedly fed their own data, thus introducing yet another metaphor into the mix, that of ‘autophagy’ or perhaps ‘cannibalism’ rather than dynastic ‘inbreeding’. The two metaphors are related, and we’ll explore that when examining the family of metaphors to which they belong.
Ed Zitron also wrote about ‘Habsburg AI’ in a post in 2024 and said: “As more internet content is created, either partially or entirely through generative AI, the models themselves will find themselves increasingly inbred, training themselves on content written by their own models.” Quoting Sadowski, he goes on to say: “a Habsburg AI will be one that is increasingly more generic and empty, normalized into a slop of anodyne business-speak as its models are trained on increasingly-identical content.” So, what about the Habsburgs made them so suitable as a source domain to map onto AI as a metaphorical target domain?
The Habsburgs
The Habsburg dynasty, particularly the Spanish branch, practiced intense inbreeding for centuries to consolidate power and territory, resulting in the dynasty’s extinction in 1700. Intense inbreeding, including frequent uncle-niece and cousin marriages, caused a high incidence of infertility, infant mortality, and physical deformities, most famously the “Habsburg jaw” or “Habsburg chin”. The Habsburgs wanted to keep their bloodline pure but in doing so they amplified their flaws and the dynasty collapsed.
Family resemblances
As I indicated at the beginning of this post, the concept or metaphor of ‘Habsburg AI’ is related to other similar concepts with which it shares what one might call family resemblances. I mentioned ‘Bullshit AI’ and ‘AI slop’, but also ‘model collapse’ and ‘autophagy’. Let’s explore these concepts a bit more – all related, in the end, to ‘knowledge pollution’, ‘epistemic collapse’ and also ‘enshittification’ – the results of the process metaphorically labelled by this family of concepts.
This family of metaphors has, it seems, two central members, namely ‘model collapse’ and ‘model autophagy disorder’ or ‘MAD’.
Let’s start with ‘model collapse’, which has its own Wikipedia entry and is well-documented. When generative AI models are repeatedly trained on their own outputs (including AI slop), the richness and diversity of generated content erodes or decays over time. Models lose the ability to accurately represent rare or unusual events, causing crucial but uncommon details to vanish. They lose sight of minority patterns, which are exactly the edge cases that matter most when AI is used, for example, to “triage patients, screen loans, or detect fraud”.
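To make the mechanism a little more concrete, here is a toy sketch of my own (not taken from any of the research mentioned here): a deliberately crude ‘model’ that learns nothing but word frequencies and generates new text by sampling from them. All words and numbers are invented for illustration. When each generation is trained only on the previous generation’s output, a rare word can randomly drop out of the sample, and once its count hits zero it can never come back:

```python
import random
from collections import Counter

# A deliberately crude 'model': it learns only word frequencies and generates
# new text by sampling from them. Every word, number and size here is invented
# purely for illustration.
random.seed(0)
corpus = ["the"] * 880 + ["cat"] * 100 + ["axolotl"] * 20  # 'axolotl' is the rare event

def train_and_generate(corpus, k):
    """Fit word frequencies to the corpus, then sample k new words from them."""
    counts = Counter(corpus)
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=k)

# Each generation is trained only on the previous generation's output.
# A word whose count happens to hit zero is gone for good.
for generation in range(1, 21):
    corpus = train_and_generate(corpus, k=200)
    print(f"gen {generation:2d}: vocabulary = {sorted(set(corpus))}")
```

Depending on the random seed, the rare word may survive for a while or vanish early; but once it is gone, no later generation can recover it – the toy version of losing exactly those edge cases.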
Training AI on AI content causes a flattening cycle, similar to inbreeding. As Nick Farrell wrote in 2024, “AI could be wiped out by the same illness that killed off the Hapsburgs”. And as Jaspreet Bindra noted: “The famous ‘Habsburg Jaw’ was a physical manifestation of a gene pool that had become a closed loop.”
Let’s now turn to a related but slightly different framing proposed by researchers at Rice and Stanford, who labelled model collapse “Model Autophagy Disorder” (MAD) and compared it to mad cow disease, a fatal illness caused by feeding the remnants of dead cows to other cows. In the case of AI, the model is fed ‘AI slop’ in the form of unlabelled synthetic data. As pointed out in the France 24 article: “These researchers worry that AI-generated text, images and video are clearing the web of usable human-made data.” Again, this metaphor highlights that what is lost is the “diversity of the entire internet”.
With ‘model collapse’, the metaphorical focus is on ‘poisoning’ the ‘bloodline’ of AI through ‘inbreeding’, while with ‘MAD’ the focus is on ‘poisoning the well’ of human knowledge through what one could call ‘infeeding’. Generative AI becomes degenerative AI.
Myths and metaphors
This ‘infeeding’ or “self cannibalism of training data” is often represented visually by the image of the ouroboros, the mythical serpent consuming its own tail. In his blog post on ‘Habsburg AI’, Jaspreet Bindra also mentions another myth, the Indian one of Bhasmasura, “the demon who was granted the power to turn anyone to ash by touching their head. In his hubris, he was tricked into touching his own head, consumed by the very power he sought to wield. AI, in its hunger for infinite data, risks becoming Bhasmasura, burning itself out by consuming its own tail.”
It seems that the metaphor of ‘Habsburg AI’ is surrounded by a host of myths and metaphors all exploring the danger of knowledge pollution through the ‘inbreeding’ or ‘infeeding’ of AI data.
Portraying the decaying
While researching the ancestry and family tree of ‘Habsburg AI’ I came across a really interesting design/PhD project by Martin Disley at the University of Edinburgh, entitled “Habsburg AI Portrait Studies”. As Martin Disley has pointed out in a forthcoming paper: “Working with the aesthetics of the grotesque portraiture, the image series visualises both literally and methodically Jathan Sadowski’s coinage of ‘Habsburg AI’”. What does this entail?
Habsburg portraits famously show what happened when you keep selecting for the same features; over time you get a caricature of the dynasty characterised by a “protruding chin, a thick lower lip, and a large drooping nose.” The project shows that AI models have their own equivalent of a ‘Habsburg chin’, a default aesthetic tendency that only becomes visible when it’s pathologically amplified.
To visually represent this change over time, the project rather ingeniously created cloth-printed generative portraits that appear to be ‘slowly collapsing’ or degenerating when installed. The cloth portraits echo portraiture as a genre of power but caricaturise it. Grand portraits typically hang in grand halls to project dynastic grandeur and legitimacy. Printing them on sagging cloth neatly inverts that.
To achieve this, Disley developed a bespoke diffusion model through a process he calls “Aesthetic distillation”, which recursively retrains an AI model on its own output. “This autophagous training causes the model to collapse on itself, constricting the model’s distribution around the mean, forcing it to produce images in an amplified version of the model’s default style.” This reframes model collapse not just as degradation but as a kind of involuntary self-portrait. It demonstrates model collapse by actually performing recursive retraining. It also reveals something about MidJourneyV5’s default aesthetic by exaggerating it to the point of it becoming grotesque.
The collapsed model reveals what is really at stake when it is stripped of diversity. We can see, before our eyes, what’s happening to AI and human knowledge through the eyes of the Habsburgs, so to speak. In this way, the project makes concrete something rather abstract variously called ‘model collapse’, ‘MAD’, or ‘autophagy’. These concepts in turn are linked to other metaphors, such as ‘AI slop’ and ‘AI bullshit’ (what a model feeds on), and the ‘pollution of knowledge’ and ‘epistemic collapse’ (what a polluted model results in), and they are surrounded by the myths of the ouroboros and Bhasmasura, and of course the story of the Habsburg dynasty itself.
Overall, the metaphors and the design project paint a rather bleak picture of AI’s future. They frame the problem as inevitable decay rather than a solvable engineering problem. There are some voices that are not quite as pessimistic as the ‘Habsburg AI’ metaphor.
Stopping the decay
As Liv Skeete pointed out on Medium: “The historical analogy with the Habsburg dynasty powerfully illustrates the dangers posed by recursive synthetic training methods. As AI-generated content increasingly saturates the internet, the risk of model collapse grows ever more pressing. Responsible AI development demands ongoing research, transparency in data sourcing, and balanced, innovative training approaches.” Liv hints at what can potentially be done to avoid ‘Habsburg AI’. What has happened in this space since 2024 when the metaphor emerged and spread?
In 2025 Gary Kim wrote about how to avoid or reduce model collapse and stressed that “The most crucial strategy is to retain a significant portion of original, human-generated, ‘real’ data. By some estimates, even a small percentage of real data (10 percent) can significantly slow down or prevent model collapse.” He also recommends that we “ensure that the model always has access to a reliable source of ‘ground truth’ information, whether through direct inclusion of real data, or through careful curation and validation of synthetic data.” I am not an expert on this topic and would love to hear from readers whether that is realistic advice or not.
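I am no more able to vouch for the following than for Kim’s advice, but here is a toy sketch of my own (not Kim’s code, and nothing like a real training pipeline) of why even a small anchor of real data helps. The crude ‘model’ below just learns word frequencies; the words and numbers are invented. The one change from pure ‘infeeding’ is that every generation’s training mix re-includes the full human-written corpus as a fixed 10% share:

```python
import random
from collections import Counter

# A crude frequency 'model', but now every generation's training mix
# re-includes the full human-written corpus as a fixed 10% share.
# All words and numbers are invented for illustration.
random.seed(0)
human = ["the"] * 880 + ["cat"] * 100 + ["axolotl"] * 20  # the rare word makes up 2%

def sample_from(counts, k):
    """Sample k words in proportion to their counts."""
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=k)

corpus = human[:]
for generation in range(1, 21):
    synthetic = sample_from(Counter(corpus), k=9 * len(human))  # 90% synthetic
    corpus = synthetic + human                                  # 10% real 'anchor'
    share = Counter(corpus)["axolotl"] / len(corpus)
    print(f"gen {generation:2d}: 'axolotl' training share = {share:.4f}")

# Because the human corpus is re-included every round, the rare word's share
# of the training mix can never fall below 20 / 10000 = 0.2%.
```

This is only a caricature, of course, but it mirrors the logic of the advice: the retained real data acts as a floor under the distribution, so rare items can be thinned out but never entirely erased.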
Image: Habsburg ‘portraits’ designed by Martin Disley, reproduced with permission: Habsburg AI Portrait Studies is a series of cloth-printed generative portrait images produced using a bespoke diffusion model recursively retrained on its own output.
