
Metaphor, Explanation & Anthropomorphism Analysis

What this framework does:

This prompt configures a large language model to function as an analytical instrument for critical discourse analysis. It makes visible how metaphorical language, especially biological and cognitive metaphors, shapes how we understand AI systems. The framework identifies moments where technical systems are described using the language of minds, biology, organisms, and intentional agents, then systematically deconstructs those linguistic choices to reveal their rhetorical consequences.


Why it matters:

Metaphors are cool. I love them and use them all the time. But they're also cognitive structures that guide reasoning, shape trust, and influence policy. When we say an AI model "learns," "understands," or "thinks," we're not just being imprecise; we're importing a complete relational structure from human cognition onto processes of statistical pattern-matching. The metaphor does conceptual and ideological work, often invisibly.

Perhaps we narrate minds into machines because the alternative, seeing them as mirrors of our own projection, is harder to hold. However, I don't want to dismiss this illusion, nor do I care to admire it. I'm interested in how this illusion is produced and also why it's so compelling. That means paying close attention to the metaphors we use, the stories we construct around them, and the expectations those stories intentionally or unintentionally create.

When it comes to AI models, I can attempt to understand what they are actually doing (which I've done), but I'm more interested in trying to understand how my own interpretive habits shape what I think I'm seeing. Metaphor and narrative aren't incidental to that process; they are the means through which coherence is constructed. But if I'm not careful, I'll use my fables as a means through which comprehension is assumed or attributed to AI, even when none actually exists.

I'm not entertaining a position of AI refusal. However, I do refuse to project emergent properties of living organisms onto engineered AI systems. I decline the invitation to speculate about the model's "private" experiences, which, for me, only tightens the wrong philosophical knot. One reason I think this is important is that, even with sophisticated interpretability tools, researchers admit they can only glimpse fragments of AI's computational processes, and yet the language they use to describe these systems (at least in general public discourse) is still steeped in human dispositions, inscrutable motivations, common observability metaphors and analogies, and straight-up anthropomorphism.


The analytical approach:

This prompt configures a large language model to execute a rigorous, multi-framework discourse analysis. It embeds two complete theoretical systems with explicit operational definitions:

Framework 1: Structure-Mapping Theory

This framework from cognitive linguistics explains how metaphors transfer relational structures from familiar source domains (minds, organisms, teachers) onto abstract target domains (algorithmic processes, model behavior). The prompt requires the model to identify the following (one way to record these fields is sketched after the list):

  • Source domain: The concrete, familiar concept being borrowed from
  • Target domain: The AI system or process being described
  • Structural mapping: What relational structures are being transferred
  • Concealment: What dissimilarities or mechanics the metaphor obscures
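As a rough illustration only, and not the prompt's actual schema, one structure-mapping record built from these four slots might look like this (the field names and example values are hypothetical):

```python
from dataclasses import dataclass

# Hypothetical record for one structure-mapping analysis.
# Field names mirror the four slots listed above; they are illustrative, not canonical.
@dataclass
class StructureMapping:
    quote: str               # the metaphorical phrase as it appears in the analyzed text
    source_domain: str       # the concrete, familiar concept being borrowed from
    target_domain: str       # the AI system or process being described
    structural_mapping: str  # the relational structure being transferred
    concealment: str         # the dissimilarities or mechanics the metaphor obscures

example = StructureMapping(
    quote='the model "learns" from its training data',
    source_domain="human learning (a student acquiring understanding from a teacher)",
    target_domain="gradient-based parameter updates during training",
    structural_mapping="teacher/student relation mapped onto data/model relation",
    concealment="statistical optimization; no comprehension or remembered experience",
)
```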

Framework 2: Brown's Typology of Explanation

This philosophical framework distinguishes seven types of explanation, each with different rhetorical implications. The prompt embeds Brown's complete typology as a reference table:

  • Mechanistic explanations (genetic, functional, empirical, theoretical): Frame systems in terms of how they work
  • Agential explanations (intentional, dispositional, reason-based): Frame systems in terms of why they act

The analytical power comes from identifying when discourse slips between these modes—where, for example, a mechanistic description of token prediction becomes an agential claim about the model "wanting" to be helpful.
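As a minimal sketch of how that slippage could be tracked, assuming only the grouping of the seven type names given above (definitions omitted), with an invented example sentence and illustrative clause labels:

```python
# Brown's typology, grouped by mode, using only the type names listed above.
EXPLANATION_TYPES = {
    "mechanistic": ["genetic", "functional", "empirical", "theoretical"],
    "agential": ["intentional", "dispositional", "reason-based"],
}

def mode_of(explanation_type: str) -> str:
    """Return the mode ('mechanistic' or 'agential') a given explanation type belongs to."""
    for mode, types in EXPLANATION_TYPES.items():
        if explanation_type in types:
            return mode
    raise ValueError(f"Unknown explanation type: {explanation_type}")

# A single sentence can slip between modes: a mechanistic clause about token
# prediction followed by an agential clause about what the model "wants."
slippage = {
    "quote": "The model predicts the next token because it wants to be helpful.",
    "first_clause_type": "theoretical",   # how the system works
    "second_clause_type": "intentional",  # why the 'agent' acts
}
assert mode_of(slippage["first_clause_type"]) == "mechanistic"
assert mode_of(slippage["second_clause_type"]) == "agential"
```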


About the outputs:

Each analysis identifies and deconstructs metaphorical and anthropomorphic language in AI discourse, tracing how it constructs agency, shapes trust, obscures mechanics, and creates what I'm calling the "illusion of mind." Each analysis includes the following (a sketch of one complete entry appears after the list):

  • Direct quotes with character-level precision
  • Structural mappings (source → target → concealment)
  • Explanation type classifications with definitions
  • Rhetorical consequence analysis
  • Mechanistic reframings that demonstrate precision

The reframing task is central to my AI literacy approach: it moves from critique to practice by demonstrating how to actively delineate between observed system behavior and attributed mental states. Without this reframing capacity, AI safety is easily framed as the management of deceptive, inscrutable, autonomous agents rather than as the engineering of reliable systems with predictable failure modes.


Extended Processing Summaries

Some outputs include an "Extended Processing Summary" section at the end, which represents the model's intermediate token generation before producing the final structured analysis. These are included selectively as diagnostic artifacts—they help assess how well the prompt constrains the model's behavior and reveal points where the instructions may need refinement. These summaries are computational artifacts, not evidence of cognition. The first-person presentation ("I will analyze...") is itself a framing choice that this project interrogates. See the About page for a fuller discussion of why even the language we use to describe model outputs matters.


The experiment:

Can a large language model be configured to expose the metaphorical scaffolding in language about large language models? The recursive irony is productive: it makes the analytical instrument itself an object of analysis. Does this work demonstrate that, with sufficiently detailed prompts and structured output schemas, LLMs can function as specialized research instruments for systematic discourse analysis?

Let me know what you think, TD

License

Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0