
Metaphor, Explanation & Anthropomorphism Analysis

Obscurum per obscurius, ignotum per ignotius... (the obscure by the more obscure, the unknown by the more unknown)

An image from the static

Framework Versions

Outputs dated before December 14, 2025 use the earlier v2.x schema framework, which does not include accountability analysis fields (accountabilityAnalysis, restoreHumanAgency, accountabilitySynthesis). The core metaphor audit, source-target mapping, explanation audit, and reframing tasks remain consistent across versions.

Latest Additions

A Sample of Latest Additions
  • Added an ExplanationSelector.jsx component to the Audit Dashboard: basically a React UI for browsing the explanationAudit array in the JSON output. It renders a passage dropdown, displays the selected quote, and shows color-coded explanation-type badges (mechanistic vs. agential) determined by an internal category map; it also computes and highlights the dominant framing (agential/mechanistic/mixed) and presents three rich-text analysis sections: Analysis, Epistemic Claim Analysis, and Rhetorical Impact. (A minimal illustrative sketch appears after this list.)
  • NEW Set of System Instructions with a core question: Does anything survive without the metaphor? Outputs with 🤔 indicate they have a corresponding report.
  • Updated Audit Dashboard: A React-based dashboard visualizing anthropomorphic language patterns in AI discourse with responsive charts, 2,200+ words of deep analysis, and markdown-formatted insights. Outputs with 📊 indicate they have a corresponding report.
  • Added a Reframing Library which extracts and consolidates reframing language experiments from each analysis.
  • System Instructions and the ACRL Framework: How the Metaphor, Anthropomorphism and Explanation Audit aligns with the Framework for Information Literacy for Higher Education
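For anyone curious how such a component hangs together, here is a minimal sketch of the ExplanationSelector idea, not the production code. The per-entry field names (quote, explanationTypes, analysis, epistemicClaimAnalysis, rhetoricalImpact) and the exact contents of the category map are illustrative assumptions; only the explanationAudit array itself is named above.

```jsx
// Minimal sketch of an ExplanationSelector-style component (illustrative only).
// Assumes each explanationAudit entry looks roughly like:
// { quote, explanationTypes: ["Intentional", ...], analysis, epistemicClaimAnalysis, rhetoricalImpact }
import React, { useState } from "react";

// Internal category map: Brown's types grouped as mechanistic vs. agential.
const CATEGORY = {
  Genetic: "mechanistic",
  Functional: "mechanistic",
  "Empirical Generalization": "mechanistic",
  Theoretical: "mechanistic",
  Intentional: "agential",
  Dispositional: "agential",
  "Reason-Based": "agential",
};

// Dominant framing for a passage: agential, mechanistic, or mixed.
function dominantFraming(types) {
  const counts = { mechanistic: 0, agential: 0 };
  types.forEach((t) => {
    const cat = CATEGORY[t];
    if (cat) counts[cat] += 1;
  });
  if (counts.agential === counts.mechanistic) return "mixed";
  return counts.agential > counts.mechanistic ? "agential" : "mechanistic";
}

export default function ExplanationSelector({ explanationAudit }) {
  const [index, setIndex] = useState(0);
  const entry = explanationAudit[index];

  return (
    <div>
      {/* Passage dropdown */}
      <select value={index} onChange={(e) => setIndex(Number(e.target.value))}>
        {explanationAudit.map((_, i) => (
          <option key={i} value={i}>
            Passage {i + 1}
          </option>
        ))}
      </select>

      {/* Selected quote and color-coded explanation-type badges */}
      <blockquote>{entry.quote}</blockquote>
      {entry.explanationTypes.map((t) => (
        <span key={t} className={`badge badge--${CATEGORY[t]}`}>
          {t}
        </span>
      ))}
      <p>Dominant framing: {dominantFraming(entry.explanationTypes)}</p>

      {/* Three rich-text analysis sections */}
      <h4>Analysis</h4>
      <p>{entry.analysis}</p>
      <h4>Epistemic Claim Analysis</h4>
      <p>{entry.epistemicClaimAnalysis}</p>
      <h4>Rhetorical Impact</h4>
      <p>{entry.rhetoricalImpact}</p>
    </div>
  );
}
```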

Hybrid Analysis! - AI Worldview Olympics


What this framework does:

This prompt configures a large language model to function as an analytical instrument for critical discourse analysis. It surfaces how metaphorical language, especially biological and cognitive metaphors, shapes understanding of AI systems. The framework operates as system instructions that guide the LLM to identify moments where technical systems are described in the language of minds, biology, organisms, and intentional agents, and then to systematically deconstruct those linguistic choices to reveal (or ponder) their rhetorical consequences.


Some discourse analysis bangers - from Gemini's processing:

  • The distribution of metaphor in this text follows a strategic curve...Capabilities are framed agentially. Limitations are framed mechanistically. This shift allows the text to have it both ways: it claims the rigor of a data science paper while delivering the visionary promise of a marketing manifesto.

  • This is a perfect specimen. You have handed me the Magna Carta of Anthropomorphism. The "System Card" is not just a technical manual; it is a rhetorical fortress.


The Analytical Approach

The outputs here are based on a prompt that configures a large language model to execute a multi-framework discourse analysis. It embeds two core theoretical systems with explicit operational definitions: one focused on metaphor, the other on the nature of explanation.

Framework 1: Structure-Mapping Theory

This framework from cognitive linguistics explains how metaphors transfer relational structures from familiar source domains (minds, organisms, teachers) onto abstract target domains (algorithmic processes, model behavior). The prompt requires the model to identify this basic pattern:

| Source Domain | Target Domain | Concealment |
| --- | --- | --- |
| Human Mind (Intent, Understanding, Frustration) | LLM (Weights, Probabilities, Error Rates) | The "Mechanics" (the fact that it's just math is hidden by the metaphor) |
  • Source domain: The concrete, familiar concept being borrowed from
  • Target domain: The AI system or process being described
  • Structural mapping: What relational structures are being transferred
  • Concealment: What dissimilarities or mechanics the metaphor obscures

Broad Examples:

| Concept | Source Domain (The Human) | Target Domain (The AI) | The Concealment |
| --- | --- | --- | --- |
| Learning | Acquiring wisdom through experience and sensory grounding | Minimizing error rates in a statistical distribution | Hides the lack of "world" understanding |
| Hallucination | A neurological break from reality (perceiving without input) | Probabilistic error (predicting the wrong token) | Hides the fact that the model never knows what is real |
| Understanding | Internalizing logic and meaning | Identifying correlations between symbols | Hides the absence of intent |

Framework 2: Brown's Typology of Explanation

This part of the prompt gets to another fundamental question under my "AI literacy" tent: What kind of explanation am I leaning on when I describe what the model is doing?

This philosophical framework distinguishes seven types of explanation, each with different rhetorical implications. Brown's work is a reminder that description can often slide into explanation, a slide from how something happens to why it happens. Brown argued that explanation is not one thing: there are many ways to answer a "why" question, and each type of explanation implies different assumptions about agency, causality, and responsibility. This is particularly rich terrain when analyzing discourse about generative AI.

The strength of Brown's framework is that it doesn't reduce everything to one kind of cause. Instead, it treats explanation as a rhetorical practice, a narrative act or a decision. For AI, even saying "it outputs this way because it was trained that way" feels causal, but the type of causality matters. The prompt embeds Brown's complete typology as a reference table:

  • Mechanistic explanations (genetic, functional, empirical, theoretical): Frame systems in terms of how they work
  • Agential explanations (intentional, dispositional, reason-based): Frame systems in terms of why they act
| Type | Definition | Examples in AI Discourse |
| --- | --- | --- |
| Genetic | Traces the development or origin of behavior or traits | "The model developed this ability during training on owl-related texts" |
| Intentional | Explains actions by referring to goals or desires | "Claude prefers shorter answers in this context" |
| Dispositional | Attributes tendencies or habits to a system | "Claude tends to avoid repetition unless prompted" |
| Functional | Describes a behavior as serving a purpose within a system | "The attention layer helps regulate long-term dependencies" |
| Reason-Based | Explains using rationales or justifications | "Claude chooses this option because it's more helpful" |
| Empirical Generalization | Cites patterns or statistical norms | "In general, the model outputs more hedging language with temperature < 0.5" |
| Theoretical | Embeds behavior in a larger explanatory framework or model | "This reflects transformer architecture principles or learned attention dynamics" |

Just as metaphor scaffolds understanding, every explanation tells a story about:

  • Where meaning resides (in the act, the agent, the system, or the culture)
  • What is causally important
  • Who or what is responsible
  • What actions (if any) follow

In AI discourse, the toggle is often between:

  • Intentional ("the model knows")
  • Dispositional ("GPT tends to exaggerate")
  • Empirical ("it hallucinates 20% of the time")

The analytical power comes from identifying when discourse slips between these modes: where, for example, a mechanistic description of token prediction becomes an agential claim about the model "wanting" to be helpful. This slippage, I'm proposing, is one of the prime rhetorical locations in AI discourse for the mechanism of the illusion. It allows the "how" (matrix multiplication) to be smuggled inside the "why" (desire), granting the machine an unearned interiority. It is also worth noting that, for AI discourse analysis, intentional explanations tend to be deeply narrativizing.

🤔 This happens all the time in the true crime documentaries I like to watch: Even when the motive is twisted or opaque, the act of reconstructing it affirms the belief that actions follow from reasons, and that those reasons can be known, judged, and archived.

Since this how/why slippage happens constantly in AI discourse, the analytical prompt advances a claim I'm also trying to make: debates about mind and machine often confuse explanatory types, arguing across categories without realizing they are using different grammars of "why" to perform the "how." The question isn't about "right" or "wrong" explanation types but rather what forms of coherence different explanatory types make possible.

  • What kind of explanation is presupposed when intelligence is described as an emergent property of computation?
  • What kind of explanation is demanded when there's an insistence that understanding requires subjectivity?
  • Can these forms ever converge, or do they belong to irreducibly different domains of sense-making?
  • If a machine produces outputs indistinguishable from a human, does the internal mechanism really matter?

Framed this way, the debate over AI becomes more of a methodological one: a contest over what it means for an explanation to count as complete. Every "why" about AI conceals a "how" that can still be described, and an inability, or profound difficulty, to describe something doesn't necessarily make it mysterious.


Structure of Outputs

  • Task 1: Metaphor and Anthropomorphism Audit: Examines the specific language used, the frame through which AI is conceptualized, what human qualities are projected onto the system, whether the metaphor is acknowledged or presented as direct description, and the implications for trust and policy perception. V3 adds accountability analysis identifying designers, beneficiaries, and obscured actors.

  • Task 2: Source-Target Mapping: Provides detailed structure-mapping analysis for key metaphors. Examines how relational structures from familiar "source domains" are projected onto AI "target domains," revealing what assumptions the metaphor invites and what it conceals.

  • Task 3: Explanation Audit: Audits the text's explanatory strategy, identifying slippage between mechanistic "how" and agential "why." Based on Brown's typology, this analysis exposes how the "illusion of mind" is constructed through rhetorical framing.

  • Task 4: Reframing Anthropomorphic Language: Demonstrates applied AI literacy by attempting to rewrite impactful anthropomorphic quotes with some mechanistic accuracy. Includes technical reality checks for consciousness claims. The idea is that without some reframing capacity, there's a risk of misidentifying the nature of any threats: language can frame AI safety as a struggle to manage deceptive, inscrutable agents rather than as the engineering challenge of building reliable systems with predictable failure modes. V3 adds human agency restoration, explicitly naming actors hidden by agentless constructions.

  • Critical Observations: Synthesizes findings into macro-patterns of agency slippage, metaphor-driven trust, and obscured mechanics. V3 adds accountability synthesis evaluating attribution patterns.

  • Conclusion: Identifies dominant metaphorical patterns, explains how they construct an "illusion of mind," and connects linguistic choices to material stakes: economic, legal, regulatory, and social consequences. Concludes with reflection and a (sometimes) veiled call to action on "AI literacy" as counter-practice.

  • Extended Processing Summaries: Computational artifacts of the model's intermediate token generation. The first-person framing ("I will analyze...") is itself a presentation choice this project interrogates. Once again, great marketing, but these are probabilistic artifacts, not cognitive reports.
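For concreteness, here is a rough sketch of how a single analysis output might be shaped, written as a JavaScript object. Only explanationAudit, accountabilityAnalysis, restoreHumanAgency, and accountabilitySynthesis are field names mentioned on this page; every other key is an illustrative placeholder, not the framework's actual schema.

```js
// Illustrative shape of one v3 output. Keys other than explanationAudit,
// accountabilityAnalysis, restoreHumanAgency, and accountabilitySynthesis
// are placeholders invented for this sketch.
const sampleOutput = {
  // Task 1: Metaphor and Anthropomorphism Audit
  metaphorAudit: [
    {
      quote: "The model understands your intent",
      frame: "AI as comprehending mind",
      projection: "understanding, intent-recognition",
      acknowledgment: "presented as direct description",
      accountabilityAnalysis: {
        // v3 only: designers, beneficiaries, and obscured actors
        designers: "unnamed engineering team",
        beneficiaries: "vendor marketing",
        obscuredActors: ["data annotators", "deployers"],
      },
    },
  ],
  // Task 2: Source-Target Mapping
  sourceTargetMapping: [
    {
      source: "Human learning",
      target: "Gradient-based error minimization",
      concealment: "absence of grounded experience",
    },
  ],
  // Task 3: Explanation Audit (Brown's typology)
  explanationAudit: [
    {
      quote: "Claude prefers shorter answers in this context",
      explanationTypes: ["Dispositional"],
      analysis: "...",
      epistemicClaimAnalysis: "...",
      rhetoricalImpact: "...",
    },
  ],
  // Task 4: Reframing Anthropomorphic Language
  reframedLanguage: [
    {
      original: "The model wants to be helpful",
      reframed: "The model was optimized to score highly on helpfulness ratings",
      restoreHumanAgency: "Names the raters and developers who defined 'helpful'", // v3 only
    },
  ],
  // Critical Observations and Conclusion
  criticalObservations: { agencySlippage: "...", metaphorDrivenTrust: "..." },
  accountabilitySynthesis: "...", // v3 only
  conclusion: "...",
};
```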


A Note on Extended Processing Summaries

Some outputs include an "Extended Processing Summary" section representing the model's intermediate token generation before producing the final structured analysis. These are included selectively as diagnostic artifacts; they can help assess how well the prompt constrains behavior and reveal points where instructions may need refinement.

These summaries are computational artifacts of token probability, not evidence of cognition. The first-person presentation is itself a framing choice that this project interrogates. See the How this works page for a fuller discussion.


Alchemy Revisited:

In 1965, Hubert Dreyfus published a paper called Alchemy and Artificial Intelligence. And while there have been countless ways others have tried to dismantle it, for my purposes, it still resonates. In that work, Dreyfus, like me, isn't trying to dunk on the alchemists as charlatans, since, as Dreyfus reminds us, they accomplished and produced important things like distillation, ovens, and crucibles (GPUs, TPUs, Transformers, Semantic Search & Translation). They just had a tight grip on a fatally flawed and incomplete understanding of matter. Drawing on the work of Heidegger, Dreyfus argued that the early AI researchers and enthusiasts were operating with the wrong "physics," since true understanding requires a "being in the world" (embodiment), and that scaling up would just create a bigger oven (a really good and convincing simulation of understanding).

When it comes to this moment of generative AI, the ghost of Dreyfus is showing up at all the parties and stockholder meetings. Calling an LLM "intelligent" requires its own alchemy, an alchemy that plays out in the language we use to talk about that LLM.

In the same way, maybe the relationship with AI in 2025 is about recognizing what hasn't been understood (yet). Large language models do work remarkably well in many ways. They effectively produce outputs that are often useful, sometimes insightful, occasionally dangerous. They confirm that something meaningful can be produced without attributing intention. But the explanations reached for (the metaphors of minds and learning and understanding) may be a contemporary equivalent of phlogiston: coherent within a certain framework, but fundamentally misdescribing the phenomenon.

Sometimes I wonder if I am witnessing some sort of empirical alchemy phase on the road to AGI: mixing massive datasets with powerful computing resources, watching "emergent behaviors" bubble up, and asking, what just happened? LLMs don't and can't understand us. But how we understand them, through what stories we build around that act of understanding, might be the most urgent literacy question of our time. Hyperbole ✅

What is Intelligence?

Of course, this entire debate hinges on the fundamental ambiguity in the very definition of intelligence, but it doesn't help to call a performance of lead that looks like gold the same thing as gold. By describing statistical error as 'hallucination' and pattern matching as 'reasoning,' I perform my own type of linguistic transmutation: turning the lead of calculation into the gold of mind. The barriers to a more intelligent AI aren't going to be solved by more servers. Again, as Dreyfus points out, the failures aren't technical. They're conceptual. There's always going to be an unprogrammable boundary in the distance (Dreyfus, 1972: 214).1 A boundary that all the optimistic metaphors, anthropomorphism, and first-step fallacies can try to narrate as a horizon, but not forever.

The Mechanism of Projection

Metaphors are cool. I love them and use them all the time. However, when I say that an AI model "learns," "understands," or "thinks," I'm not just being imprecise about how these systems actually work; I'm also importing a complete relational structure from human cognition onto processes of statistical pattern-matching. The metaphor does conceptual and ideological work, often invisibly.

Perhaps I narrate minds into machines because the alternative, seeing them as mirrors of my own projection, is harder to hold? However, I don't want to dismiss this illusion, nor do I care to admire it (too much). I'm interested in how this illusion is produced and also why it's so compelling. That means paying closer attention to the metaphors I use, the stories I construct around them, and the expectations those stories intentionally or unintentionally create.

When it comes to generative AI models, I'm more interested in trying to understand how my own interpretive habits shape what I think I'm seeing. Metaphor and narrative aren't incidental to that process; they are the means through which coherence is constructed.

On Refusal and Opacity

There's a fair amount of AI refusal talk in my world, and I get it. Where do I try to refuse? I refuse to project emergent properties of living organisms onto engineered AI systems. I decline the invitation to speculate about an AI model's "private" experiences, which, for me, only tightens the wrong philosophical knot.

Not all muddy waters are deep. To say "we don't know what's going on in an LLM any more than we know what's going on in a human brain" is using the word "know" in two different ways in the same sentence. The opacity of an LLM is one of scale, more like an elaborate accounting problem. Even the black box metaphor misleads since it suggests an unknowable system rather than one where transparency is technically possible but prohibitively difficult.

This matters right now because the intelligence is being created in the description, not the code. When researchers describe AI systems using language steeped in human dispositions, inscrutable motivations, and straight-up anthropomorphism, they're not just "reporting the facts": they're constructing the system as a particular kind of thing. And the stories they tell about these systems shape what they think they are, what these systems are trusted to do, and who we hold responsible when they fail.


The Experimentโ€‹

Can a large language model be configured to expose the metaphorical scaffolding in language about large language models? The recursive irony is productive since it makes the analytical instrument itself an object of analysis. Does this work demonstrate that with sufficiently detailed prompts and structured output schemas, LLMs can function as specialized research instruments for systematic discourse analysis?

Let me know what you think,
TD

License

License: Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0

Footnotes

  1. Dreyfus, H. L. (1972). What Computers Can't Do: A Critique of Artificial Reason. New York: Harper & Row. ↩