Metaphor, Explanation & Anthropomorphism Analysis
Obscurum per obscurius, ignotum per ignotius... ("the obscure by the more obscure, the unknown by the more unknown")

Outputs dated before December 14, 2025 use the earlier v2.x schema framework, which does not include the accountability analysis fields (`accountabilityAnalysis`, `restoreHumanAgency`, `accountabilitySynthesis`). The core metaphor audit, source-target mapping, explanation audit, and reframing tasks remain consistent across versions.
Latest Additions
- Added an `ExplanationSelector.jsx` component to the Audit Dashboard: a React UI for browsing the `explanationAudit` array in the JSON output. It renders a passage dropdown, displays the selected quote, and shows color-coded explanation-type badges (mechanistic vs. agential) determined by an internal category map; it also computes and highlights the dominant framing (agential/mechanistic/mixed) and presents three rich-text analysis sections: Analysis, Epistemic Claim Analysis, and Rhetorical Impact. (A sketch of this selection logic appears after this list.)
- NEW set of System Instructions with a core question: Does anything survive without the metaphor? Marked outputs have a corresponding report.
- Updated Audit Dashboard: A React-based dashboard visualizing anthropomorphic language patterns in AI discourse with responsive charts, 2,200+ words of deep analysis, and markdown-formatted insights. Marked outputs have a corresponding report.
- Added a Reframing Library which extracts and consolidates reframing language experiments from each analysis.
- System Instructions and the ACRL Framework: How the Metaphor, Anthropomorphism and Explanation Audit aligns with the Framework for Information Literacy for Higher Education
- Hybrid Analysis! - AI Worldview Olympics
- Australia vs US - Comparative Analysis of AI Plans: "The question these documents leave us with is not which is right, but whether we can achieve the democratic capacity to recognize all national AI policy, including our own nation's, as ideological construction."
- EU vs US - Comparative Analysis of AI Plans
- Global AI Governance Action Plan - China (2025)
- About This Prompt
- Updates
- Narrative Flow
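For readers who want to see what that category map and dominant-framing computation might look like, here is a minimal sketch in plain TypeScript (the actual component is JSX). The entry shape and names are assumptions for illustration; only the `explanationAudit` array, the mechanistic/agential split, and the agential/mechanistic/mixed labels come from the description above, and the seven explanation types come from Brown's typology discussed further down the page.

```typescript
// Sketch of the classification logic ExplanationSelector.jsx presumably applies.
// The entry shape and names below are illustrative assumptions, not the component's
// actual internals; the seven explanation types come from Brown's typology (see Framework 2).

type ExplanationType =
  | "Genetic" | "Functional" | "Empirical Generalization" | "Theoretical"  // mechanistic
  | "Intentional" | "Dispositional" | "Reason-Based";                      // agential

// Internal category map: each explanation type resolves to a framing.
const CATEGORY: Record<ExplanationType, "mechanistic" | "agential"> = {
  "Genetic": "mechanistic",
  "Functional": "mechanistic",
  "Empirical Generalization": "mechanistic",
  "Theoretical": "mechanistic",
  "Intentional": "agential",
  "Dispositional": "agential",
  "Reason-Based": "agential",
};

// Assumed shape of one explanationAudit entry rendered by the component.
interface ExplanationAuditEntry {
  quote: string;
  explanationTypes: ExplanationType[];
  analysis: string;
  epistemicClaimAnalysis: string;
  rhetoricalImpact: string;
}

// Dominant framing highlighted for the selected passage: agential, mechanistic, or mixed.
function dominantFraming(entry: ExplanationAuditEntry): "agential" | "mechanistic" | "mixed" {
  const counts = { mechanistic: 0, agential: 0 };
  for (const t of entry.explanationTypes) counts[CATEGORY[t]] += 1;
  if (counts.agential > counts.mechanistic) return "agential";
  if (counts.mechanistic > counts.agential) return "mechanistic";
  return "mixed";
}
```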
What this framework does:
This prompt configures a large language model to function as an analytical instrument for critical discourse analysis. It surfaces how metaphorical language, especially biological and cognitive metaphors, shapes understanding of AI systems. The framework operates as system instructions that guide the LLM to identify moments where technical systems are described using the language of minds, biology, organisms, and intentional agents, then systematically deconstructs those linguistic choices to reveal (or ponder) their rhetorical consequences.
2025-12-14: Version 3.0 Schema & Dashboard v1.1
- Schema & Prompt Updates: Version 3.0 of the Metaphor Audit schema adds accountability analysis to surface who is responsible for AI system behaviors. Task 1 now includes an `accountabilityAnalysis` field that prompts identification of designers, beneficiaries, and obscured actors behind each anthropomorphic framing. Task 4 adds `restoreHumanAgency` alongside the existing mechanistic reframing, explicitly naming the humans whose decisions and labor are hidden by agentless constructions. The `criticalObservations` section gains an `accountabilitySynthesis` field that evaluates patterns of agency attribution across the analyzed text.
- Dashboard Updates: The visualization component now displays a Core Finding card at the top (drawn from `conclusion.patternSummary`) with support for basic markdown formatting. A new Acknowledgment Status donut chart shows the distribution of how metaphors are presented: as direct description, hedged analogy, or explicitly acknowledged metaphor. Each item in the Metaphor Gallery now carries color-coded badges (Acknowledged / Hedged / Direct) with a legend explaining their meaning. These additions repurpose parts of the JSON output to make visible the rhetorical strategies that naturalize anthropomorphic language, supporting the pedagogical goal of helping readers notice when AI systems are framed as autonomous agents rather than human-designed artifacts.
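As a rough illustration of the v3.0 additions, here is how the new fields might sit in the output JSON, sketched as TypeScript shapes. The field names (`accountabilityAnalysis`, `restoreHumanAgency`, `accountabilitySynthesis`, `criticalObservations`) come from the notes above; the surrounding keys and value types are assumptions, not the published schema.

```typescript
// Hypothetical shapes for the v3.0 accountability fields.
// Field names come from the schema notes above; value types are illustrative assumptions.

interface AccountabilityAnalysis {
  designers: string[];      // who built or configured the system being described
  beneficiaries: string[];  // who gains from the anthropomorphic framing
  obscuredActors: string[]; // whose decisions or labor the framing hides
}

interface MetaphorAuditItem {            // Task 1 entry (simplified)
  quote: string;
  frame: string;
  projection: string;
  acknowledgment: "direct" | "hedged" | "acknowledged";
  accountabilityAnalysis: AccountabilityAnalysis;   // new in v3.0
}

interface ReframingItem {                // Task 4 entry (simplified)
  originalQuote: string;
  mechanisticReframing: string;
  restoreHumanAgency: string;            // new in v3.0: names the hidden human actors
}

interface CriticalObservations {
  agencySlippage: string;
  metaphorDrivenTrust: string;
  obscuredMechanics: string;
  accountabilitySynthesis: string;       // new in v3.0: patterns of agency attribution
}
```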
How We Talk About AI: The Flow
The narrative we naturally use
This is how I might describe what happens when I "talk" to an AI model. It's intuitive. It makes sense. And it shapes how we think about and build policy around these systems.
I ask a question
"How do bicycles work?"
Your words enter the system.
The model reads
It processes your question
Taking in the meaning of your words.
The model understands
It grasps what you're asking
Recognizing the intent and context.
The model searches its knowledge
It retrieves relevant information
Drawing from what it has learned.
The model thinks
It reasons through the answer
Constructing a coherent response.
The model writes
It generates words, one by one
Expressing its thoughts in natural language.
This narrative is useful. It's how we naturally think about minds. It lets us interact with AI intuitively.
But what's actually happening beneath this story?
Some discourse analysis bangers, from Gemini's processing:
- "The distribution of metaphor in this text follows a strategic curve... Capabilities are framed agentially. Limitations are framed mechanistically. This shift allows the text to have it both ways: it claims the rigor of a data science paper while delivering the visionary promise of a marketing manifesto."
- "This is a perfect specimen. You have handed me the Magna Carta of Anthropomorphism. The 'System Card' is not just a technical manual; it is a rhetorical fortress."
The Analytical Approach
The outputs here are based on a prompt that configures a large language model to execute a multi-framework discourse analysis. It embeds two core theoretical systems with explicit operational definitions: one focused on metaphor, the other on the nature of explanation.
Framework 1: Structure-Mapping Theory
This framework from cognitive linguistics explains how metaphors transfer relational structures from familiar source domains (minds, organisms, teachers) onto abstract target domains (algorithmic processes, model behavior). The prompt requires the model to identify this basic pattern:
| Source Domain | Target Domain | Concealment |
|---|---|---|
| Human Mind (Intent, Understanding, Frustration) | LLM (Weights, Probabilities, Error Rates) | The "Mechanics" (The fact that it's just math is hidden by the metaphor). |
- Source domain: The concrete, familiar concept being borrowed from
- Target domain: The AI system or process being described
- Structural mapping: What relational structures are being transferred
- Concealment: What dissimilarities or mechanics the metaphor obscures
Broad Examples:
| Concept | Source Domain (The Human) | Target Domain (The AI) | The Concealment |
|---|---|---|---|
| Learning | Acquiring wisdom through experience and sensory grounding | Minimizing error rates in a statistical distribution | Hides the lack of "world" understanding |
| Hallucination | A neurological break from reality (perceiving without input) | Probabilistic error (predicting the wrong token) | Hides the fact that the model never knows what is real |
| Understanding | Internalizing logic and meaning | Identifying correlations between symbols | Hides the absence of intent |
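To make the mapping concrete, here is a hypothetical sketch of the "Learning" row rendered as a single Task 2 mapping entry. The keys mirror the four dimensions listed above (source domain, target domain, structural mapping, concealment); they are illustrative assumptions rather than the actual schema keys.

```typescript
// Illustrative sketch only: one source-target mapping entry for the "learning" metaphor.
// Keys mirror the four dimensions above; they are assumptions, not the published schema.
const learningMapping = {
  metaphor: "The model learns from its training data",
  sourceDomain: "Human learning: acquiring wisdom through experience and sensory grounding",
  targetDomain: "Training: minimizing error rates over a statistical distribution",
  structuralMapping: "Experience -> training data; practice -> parameter updates; wisdom -> lower loss",
  concealment: "Hides the lack of 'world' understanding behind the accumulated statistics",
};
```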
Framework 2: Brown's Typology of Explanation
This part of the prompt gets to another fundamental question under my "AI literacy" tent: What kind of explanation am I leaning on when I describe what the model is doing?
This philosophical framework distinguishes seven types of explanation, each with different rhetorical implications. Brown's work is a reminder that description can often slide into explanation, a slide from how something happens to why it happens. Brown argued that explanation is not one thing. There are many ways to answer a "why" question, and each type of explanation implies different assumptions about agency, causality, and responsibility. This is particularly rich terrain when analyzing discourse about generative AI.
The strength of Brown's framework is that it doesn't reduce everything to one kind of cause. Instead, it treats explanation as a rhetorical practice, a narrative act or a decision. For AI, even saying "it outputs this way because it was trained that way" feels causal, but the type of causality matters. The prompt embeds Brown's complete typology as a reference table:
- Mechanistic explanations (genetic, functional, empirical, theoretical): Frame systems in terms of how they work
- Agential explanations (intentional, dispositional, reason-based): Frame systems in terms of why they act
| Type | Definition | Examples in AI Discourse |
|---|---|---|
| Genetic | Traces the development or origin of behavior or traits | "The model developed this ability during training on owl-related texts" |
| Intentional | Explains actions by referring to goals or desires | "Claude prefers shorter answers in this context" |
| Dispositional | Attributes tendencies or habits to a system | "Claude tends to avoid repetition unless prompted" |
| Functional | Describes a behavior as serving a purpose within a system | "The attention layer helps regulate long-term dependencies" |
| Reason-Based | Explains using rationales or justifications | "Claude chooses this option because it's more helpful" |
| Empirical Generalization | Cites patterns or statistical norms | "In general, the model outputs more hedging language with temperature < 0.5" |
| Theoretical | Embeds behavior in a larger explanatory framework or model | "This reflects transformer architecture principles or learned attention dynamics" |
Just as metaphor scaffolds understanding, every explanation tells a story about:
- Where meaning resides (in the act, the agent, the system, or the culture)
- What is causally important
- Who or what is responsible
- What actions (if any) follow
In AI discourse, the toggle is often between:
- Intentional ("the model knows")
- Dispositional ("GPT tends to exaggerate")
- Empirical ("it hallucinates 20% of the time")
The analytical power comes from identifying when discourse slips between these modes and where, for example, a mechanistic description of token prediction becomes an agential claim about the model "wanting" to be helpful. This slippage, I'm proposing, is one of the prime rhetorical locations in AI discourse for the mechanism of the illusion. It allows the "how" (matrix multiplication) to be smuggled inside the "why" (desire), granting the machine an unearned interiority. Also useful to note that for AI discourse analysis, intentional explanations tend to be deeply narrativizing.
This happens all the time in the true crime documentaries I like to watch: Even when the motive is twisted or opaque, the act of reconstructing it affirms the belief that actions follow from reasons, and that those reasons can be known, judged, and archived.
Since this how/why slippage happens constantly in AI discourse, the analytical prompt advances a claim I'm also trying to make: debates about mind and machine often confuse explanatory types, arguing across categories without realizing they're using different grammars of "why" performing the "how." The question isn't about "right" or "wrong" explanation types but rather what forms of coherence different explanatory types make possible.
- What kind of explanation is presupposed when intelligence is described as an emergent property of computation?
- What kind of explanation is demanded when there's an insistence that understanding requires subjectivity?
- Can these forms ever converge, or do they belong to irreducibly different domains of sense-making?
- If a machine produces outputs indistinguishable from a human, does the internal mechanism really matter?
Framed this way, the debate over AI becomes more like a methodological one: a contest over what it means for an explanation to count as complete. Every "why" about AI conceals a "how" that can still be described, noting that an inability, or profound difficulty, to describe something doesn't necessarily make it mysterious.
Structure of Outputs
- Task 1: Metaphor and Anthropomorphism Audit: Examines the specific language used, the frame through which AI is conceptualized, what human qualities are projected onto the system, whether the metaphor is acknowledged or presented as direct description, and the implications for trust and policy perception. V3 adds accountability analysis identifying designers, beneficiaries, and obscured actors.
- Task 2: Source-Target Mapping: Provides detailed structure-mapping analysis for key metaphors. Examines how relational structures from familiar "source domains" are projected onto AI "target domains," revealing what assumptions the metaphor invites and what it conceals.
- Task 3: Explanation Audit: Audits the text's explanatory strategy, identifying slippage between mechanistic "how" and agential "why." Based on Brown's typology, this analysis exposes how the "illusion of mind" is constructed through rhetorical framing.
- Task 4: Reframing Anthropomorphic Language: Demonstrates applied AI literacy by attempting to rewrite impactful anthropomorphic quotes with some mechanistic accuracy. Includes technical reality checks for consciousness claims. The idea is that without some reframing capacity, there's a risk of misidentifying the nature of any threats: language can cast AI safety as a struggle to manage deceptive, inscrutable agents rather than as the engineering challenge of building reliable systems with predictable failure modes. V3 adds human agency restoration, explicitly naming actors hidden by agentless constructions.
- Critical Observations: Synthesizes findings into macro-patterns of agency slippage, metaphor-driven trust, and obscured mechanics. V3 adds accountability synthesis evaluating attribution patterns.
- Conclusion: Identifies dominant metaphorical patterns, explains how they construct an "illusion of mind," and connects linguistic choices to material stakes: economic, legal, regulatory, and social consequences. Concludes with reflection and a (sometimes) veiled call to action on "AI literacy" as counter-practice.
- Extended Processing Summaries: Computational artifacts of the model's intermediate token generation. The first-person framing ("I will analyze...") is itself a presentation choice this project interrogates. Once again, great marketing, but these are probabilistic artifacts, not cognitive reports.
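Putting the tasks together, a plausible top-level shape for a single output might look like the sketch below. Apart from `explanationAudit`, `criticalObservations`, and `conclusion.patternSummary`, which are named elsewhere on this page, the keys and value types are assumptions.

```typescript
// Hypothetical top-level shape of one analysis output, assembled from the tasks above.
// Only explanationAudit, criticalObservations, and conclusion.patternSummary are named
// in the text; the remaining keys and all value types are illustrative assumptions.
interface AuditOutput {
  metaphorAudit: Array<Record<string, unknown>>;        // Task 1: metaphor & anthropomorphism audit
  sourceTargetMappings: Array<Record<string, unknown>>; // Task 2: structure-mapping analysis
  explanationAudit: Array<Record<string, unknown>>;     // Task 3: how/why slippage audit
  reframedLanguage: Array<Record<string, unknown>>;     // Task 4: mechanistic reframings
  criticalObservations: Record<string, string>;         // synthesis of macro-patterns
  conclusion: {
    patternSummary: string;   // rendered as the dashboard's Core Finding card
    materialStakes?: string;  // economic, legal, regulatory, social consequences
  };
  extendedProcessingSummary?: string;                    // optional diagnostic artifact
}
```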
A Note on Extended Processing Summaries
Some outputs include an "Extended Processing Summary" section representing the model's intermediate token generation before producing the final structured analysis. These are included selectively as diagnostic artifacts; they can help assess how well the prompt constrains behavior and reveal points where instructions may need refinement.
These summaries are computational artifacts of token probability, not evidence of cognition. The first-person presentation is itself a framing choice that this project interrogates. See the How this works page for a fuller discussion.
Alchemy Revisited:
In 1965, Hubert Dreyfus published a paper called Alchemy and Artificial Intelligence. And while there have been countless ways others have tried to dismantle it, for my purposes, it still resonates. In that work, Dreyfus, like me, isn't trying to dunk on the alchemists as charlatans, since, as Dreyfus reminds us, they accomplished and produced important things like distillation, ovens, and crucibles (GPUs, TPUs, Transformers, Semantic Search & Translation). They just had a tight grip on a fatally flawed and incomplete understanding of matter. Drawing on the work of Heidegger, Dreyfus argued the early AI researchers and enthusiasts were operating with the wrong "physics," since true understanding requires a "being in the world" (embodiment), and that scaling up would just create a bigger oven (a really good and convincing simulation of understanding).
When it comes to this moment of generative AI, the ghost of Dreyfus is showing up at all the parties and stockholder meetings. Calling an LLM โintelligentโ requires its own alchemy, an alchemy that plays out in the language we use to talk about that LLM.
In the same way, maybe the relationship with AI in 2025 is about recognizing what hasn't been understood (yet). Large language models do work remarkably well in many ways. They effectively produce outputs that are often useful, sometimes insightful, occasionally dangerous. They confirm that something meaningful can be produced without attributing intention. But the explanations reached for (the metaphors of minds and learning and understanding) may be a contemporary equivalent of phlogiston: coherent within a certain framework, but fundamentally misdescribing the phenomenon.
Sometimes I wonder if I am witnessing some sort of empirical alchemy phase on the road to AGI. Mixing massive datasets with powerful computing resources and watching "emergent behaviors" bubble up and asking, what just happened? LLMs don't and can't understand us. But how we understand them, through what stories we build around that act of understanding, might be the most urgent literacy question of our time. Hyperbole, perhaps.
What is Intelligence?
Of course, this entire debate hinges on the fundamental ambiguity in the very definition of intelligence, but it doesn't help to call a performance of lead that looks like gold the same thing as gold. By describing statistical error as 'hallucination' and pattern matching as 'reasoning,' I perform my own type of linguistic transmutation: turning the lead of calculation into the gold of mind. The barriers to a more intelligent AI aren't going to be overcome by more servers. Again, as Dreyfus points out, the failures aren't technical. They're conceptual. There's always going to be an unprogrammable boundary in the distance (Dreyfus, 1972: 214). A boundary that all the optimistic metaphors, anthropomorphism, and first-step fallacies can try to narrate as a horizon, but not forever.
The Mechanism of Projection
Metaphors are cool. I love them and use them all the time. However, when I say that an AI model "learns," "understands," or "thinks," I'm not just being imprecise about how these systems actually work; I'm also importing a complete relational structure from human cognition onto processes of statistical pattern-matching. The metaphor does conceptual and ideological work, often invisibly.
Perhaps I narrate minds into machines because the alternative, seeing them as mirrors of my own projection, is harder to hold? However, I don't want to dismiss this illusion, nor do I care to admire it (too much). I'm interested in how this illusion is produced and also why it's so compelling. That means paying closer attention to the metaphors I use, the stories I construct around them, and the expectations those stories intentionally or unintentionally create.
When it comes to generative AI models, I'm more interested in trying to understand how my own interpretive habits shape what I think I'm seeing. Metaphor and narrative aren't incidental to that process; they are the means through which coherence is constructed.
On Refusal and Opacity
There's a fair amount of AI refusal talk in my world; I get it. Where do I try to refuse? I refuse to project emergent properties of living organisms onto engineered AI systems. I decline the invitation to speculate about an AI model's "private" experiences, which, for me, only tightens the wrong philosophical knot.
Not all muddy waters are deep. To say "we don't know what's going on in an LLM any more than we know what's going on in a human brain" is using the word "know" in two different ways in the same sentence. The opacity of an LLM is one of scale, more like an elaborate accounting problem. Even the black box metaphor misleads since it suggests an unknowable system rather than one where transparency is technically possible but prohibitively difficult.
This matters right now because the intelligence is being created in the description, not the code. When researchers describe AI systems using language steeped in human dispositions, inscrutable motivations, and straight-up anthropomorphism, they're not just "reporting the facts"; they're constructing the system as a particular kind of thing. And the stories they tell about these systems shape what they think they are, what these systems are trusted to do, and who we hold responsible when they fail.
The Experiment
Can a large language model be configured to expose the metaphorical scaffolding in language about large language models? The recursive irony is productive since it makes the analytical instrument itself an object of analysis. Does this work demonstrate that with sufficiently detailed prompts and structured output schemas, LLMs can function as specialized research instruments for systematic discourse analysis?
Let me know what you think,
TD
License: Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0