
Metaphor, Explanation & Anthropomorphism Analysis


What this framework does:​

This prompt configures a large language model to function as an analytical instrument for critical discourse analysis. It makes visible how metaphorical language, especially biological and cognitive metaphors, shapes how we understand AI systems. The framework is operationalized as system instructions that guide the LLM to identify moments in a given text where technical systems are described (or narrated) using the language of minds, biology, organisms, and intentional agents, and then to systematically deconstruct those linguistic choices to reveal their rhetorical consequences.
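As a concrete (and deliberately hypothetical) illustration of what "operationalized as system instructions" can look like in practice, here is a minimal sketch in Python; the instruction text, function name, and message format are assumptions for illustration, not the project's actual prompt.

```python
# Hypothetical sketch only: the instruction text and names below are illustrative,
# not the framework's actual system prompt.

SYSTEM_INSTRUCTIONS = """You are an analytical instrument for critical discourse analysis.
For the text supplied by the user:
1. Identify passages where a technical system is described in the language of minds,
   biology, organisms, or intentional agents.
2. For each passage, name the metaphor, its source and target domains, and what it conceals.
3. Classify each explanation as mechanistic or agential and flag slippage between the two.
Return the analysis as structured sections (Tasks 1-4, Critical Observations, Conclusion)."""

def build_messages(text_under_analysis: str) -> list[dict]:
    """Package the system instructions and the text to be analyzed in the
    role/content chat format accepted by most LLM APIs."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": text_under_analysis},
    ]
```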

In the context of generative AI, language often performs the work that the engineering cannot. It bridges the gap between the math and the mind.


Alchemy Revisited:​

In 1965, Hubert Dreyfus published a paper called Alchemy and Artificial Intelligence. And while others have tried to dismantle it in countless ways, for my purposes it still resonates. In that work, Dreyfus, like me, isn't trying to dunk on the alchemists as charlatans; as he reminds us, they accomplished and produced important things like ovens and crucibles (equivalent to today's transformers and GPUs). They just had a tight grip on a fatally flawed and incomplete understanding of matter (intelligence).

When it comes to this moment of generative AI, the ghost of Dreyfus is showing up at all the parties and stockholder meetings. Calling an LLM “intelligent” requires its own alchemy, an alchemy that plays out in the language we use to talk about that LLM.

What is Intelligence?​

Of course, this entire debate hinges on the fundamental ambiguity in the very definition of intelligence, but it doesn't help to call a performance of lead that looks like gold the same thing as gold. By describing statistical error as 'hallucination' and pattern matching as 'reasoning,' we perform our own type of linguistic transmutation: turning the lead of calculation into the gold of mind. The barriers to a more intelligent AI aren't going to be solved by more servers. Again, as Dreyfus points out, the failures aren't technical. They're conceptual. There's always going to be an unprogrammable boundary in the distance (Dreyfus, 1972: 214). A boundary that our optimistic metaphors, anthropomorphism, and first-step fallacies can narrate as a horizon, and keep at bay, but not forever.

The Mechanism of Projection​

Metaphors are cool. I love them and use them all the time. But the research demonstrates that they're also cognitive structures that guide our reasoning, shape our trust, and influence our policies. When we say an AI model "learns," "understands," or "thinks," we're not just being imprecise about how these systems actually work; we're importing a complete relational structure from human cognition onto processes of statistical pattern-matching. The metaphor does conceptual and ideological work, often invisibly.

Perhaps we narrate minds into machines because the alternative, seeing them as mirrors of our own projection, is harder to hold. However, I don't want to dismiss this illusion, nor do I care to admire it. I'm interested in how this illusion is produced and also why it's so compelling. That means paying close attention to the metaphors we use, the stories we construct around them, and the expectations those stories intentionally or unintentionally create.

When it comes to generative AI models, I'm more interested in trying to understand how our own interpretive habits shape what we think we're seeing. Metaphor and narrative aren't incidental to that process; they are the means through which coherence is constructed. But if I'm not careful, I'll use my fables as a means through which comprehension is assumed or attributed to AI, even when none actually exists.

Refusal and Functionalism​

I'm not entertaining a position of AI refusal. However, I do refuse to project emergent properties of living organisms onto engineered AI systems. I decline the invitation to speculate about the model's "private" experiences, which, for me, only tightens the wrong philosophical knot. One reason I think this is important is that, even with sophisticated interpretability tools, researchers admit they can only glimpse fragments of AI's vast, distributed computational processes. And yet the language they use to describe these systems (at least in their public-facing discourse) is still steeped in human dispositions, inscrutable motivations, blurred explanatory genres, common observability metaphors and analogies, and straight-up anthropomorphism.

"Not All Muddy Waters Are Deep"​

The only way to frame this unreadability as a sign of intelligence is through a language that conflates very different types of opacity. The black box of the LLM is an accounting problem, not a depth problem. So how we talk about AI becomes super important at this moment, because the intelligence being created is happening in the description (slip-sliding into explanation), not in the code.

LLMs don't and can’t understand us. But how we understand them, and what stories we build around that act of understanding, might be the most urgent literacy question of our time.


The analytical approach:​

This prompt configures a large language model to execute a rigorous, multi-framework discourse analysis. It embeds two complete theoretical systems with explicit operational definitions:

Framework 1: Structure-Mapping Theory​

This framework from cognitive linguistics explains how metaphors transfer relational structures from familiar source domains (minds, organisms, teachers) onto abstract target domains (algorithmic processes, model behavior). The prompt requires the model to identify:

  • Source domain: The concrete, familiar concept being borrowed from
  • Target domain: The AI system or process being described
  • Structural mapping: What relational structures are being transferred
  • Concealment: What dissimilarities or mechanics the metaphor obscures
| Concept | Source Domain (The Human) | Target Domain (The AI) | The Concealment |
|---|---|---|---|
| Learning | Acquiring wisdom through experience/sensory grounding. | Minimizing error rates in a statistical distribution. | Hides the lack of "world" understanding. |
| Hallucination | A neurological break from reality (perceiving without input). | Probabilistic error (predicting the wrong token). | Hides the fact that the model never knows what is real. |
| Understanding | Internalizing logic and meaning. | Identifying correlations between symbols. | Hides the absence of intent. |
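To make the operational definitions concrete, the same mapping can be expressed as data. A minimal sketch, assuming a Python representation; the class and field names are illustrative, not part of the prompt itself.

```python
from dataclasses import dataclass

@dataclass
class StructureMapping:
    """One structure-mapping entry: a source-to-target projection and what it hides."""
    concept: str        # the metaphorical term as it appears in the analyzed text
    source_domain: str  # the familiar human concept being borrowed from
    target_domain: str  # the AI process actually being described
    concealment: str    # what the mapping obscures about the mechanism

# The "Learning" row from the table above, expressed as data.
learning = StructureMapping(
    concept="learning",
    source_domain="acquiring wisdom through experience and sensory grounding",
    target_domain="minimizing error rates over a statistical distribution",
    concealment="hides the model's lack of grounded 'world' understanding",
)
```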

Framework 2: Brown's Typology of Explanation​

This philosophical framework distinguishes seven types of explanation, each with different rhetorical implications. The prompt embeds Brown's complete typology as a reference table:

  • Mechanistic explanations (genetic, functional, empirical, theoretical): Frame systems in terms of how they work
  • Agential explanations (intentional, dispositional, reason-based): Frame systems in terms of why they act
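Here is a hedged sketch of how such a reference table might be embedded as a lookup the model is asked to apply; the seven labels follow the list above, but the dictionary and the example annotation are my own illustration, not the prompt's actual wording.

```python
# Brown's seven explanation types, grouped by the mode each one frames.
EXPLANATION_TYPES = {
    # Mechanistic: how the system works
    "genetic": "mechanistic",
    "functional": "mechanistic",
    "empirical": "mechanistic",
    "theoretical": "mechanistic",
    # Agential: why the system acts
    "intentional": "agential",
    "dispositional": "agential",
    "reason-based": "agential",
}

# Example of the kind of annotation the audit asks for when it finds agential framing:
annotation = {
    "quote": "the model wants to be helpful",
    "explanation_type": "intentional",
    "mode": EXPLANATION_TYPES["intentional"],  # 'agential'
    "mechanistic_reframe": (
        "the model assigns higher probability to token sequences that "
        "resemble helpful responses in its training data"
    ),
}
```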

The analytical power comes from identifying when discourse slips between these modes—where, for example, a mechanistic description of token prediction becomes an agential claim about the model "wanting" to be helpful. This slippage is the mechanism of the illusion. It allows the 'how' (matrix multiplication) to be smuggled inside the 'why' (desire), granting the machine an unearned interiority.

This slippage happens all the time in our language, but the analytical prompt aims to advance my claim that debates about mind and machine often confuse explanatory types. They argue across categories without realizing they're using different grammars of "why." As with metaphor, the question isn't really about a "right" or "wrong" explanation type, but about what forms of coherence different explanatory types make possible.

The idea is that a lens on explanation deepens the nuance: any disagreement between "hows" and "whys" is not just about what intelligence is, but about what counts as an explanation of it.

One treats explanation as functional. It answers the question “How does this system produce the output?” through mechanism, feedback, and performance metrics. The emphasis lies on operational sufficiency: if the function is achieved, intelligence is explained. The other leans toward intentional and reason-based explanations. It asks “Why is this act meaningful?” or “What understanding underlies it?” Here, explanation involves reconstructing an interior logic, a sense of purpose or comprehension that functional accounts leave unexamined.

So we can ponder:

  • What kind of explanation is presupposed when we describe intelligence as an emergent property of computation?
  • What kind of explanation is demanded when we insist that understanding requires subjectivity?
  • Can these forms ever converge, or do they belong to irreducibly different domains of sense-making?

Framed this way, this particular corner of the debate over AI becomes a bit more than empirical or technological. It is methodological: a contest over what it means for an explanation to count as complete. Every "why" about AI conceals a "how" that can still be described, noting that an inability to describe something doesn't necessarily make it ineffable.


Basic Structure of Outputs​

  • Task 1: Metaphor and Anthropomorphism Audit: For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and—most critically—what implications this framing has for trust, understanding, and policy perception.
  • Task 2: Source-Target Mapping: For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.
  • Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How"): This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.
  • Task 4: AI Literacy in Practice: Reframing Anthropomorphic Language: Moving from critique to constructive practice, this task demonstrates applied AI literacy. It selects the most impactful anthropomorphic quotes identified in the analysis and provides a reframed explanation for each. The goal is to rewrite the concept to be more accurate, focusing on the mechanistic processes (e.g., statistical pattern matching, token prediction) rather than the misleading agential language. Additionally, for quotes with consciousness claims (e.g., "the AI knows"), this section provides a technical reality check that explicitly states what the system actually does at the mechanistic level.
  • Critical Observations: This section synthesizes the findings from the previous tasks into a set of critical observations. It examines the macro-patterns of agency slippage (the shift between treating AI as a tool vs. an agent), metaphor-driven trust (how cognitive metaphors drive trust or fear), and obscured mechanics and context sensitivity (what actual technical processes are obscured by the text's dominant linguistic habits).
  • Conclusion: This final section provides a synthesis of the entire analysis. It identifies the text's dominant metaphorical patterns and explains how they construct an "illusion of mind." Most critically, it connects these linguistic choices to their tangible, material stakes—analyzing the economic, legal, regulatory, and social consequences of this discourse. It concludes by reflecting on AI literacy as a counter-practice and outlining a path toward a more precise and responsible vocabulary for discussing AI.
  • Extended Processing Summaries: Gemini refers to the text below as "thought summaries." This is an overt consciousness projection, because 'thoughts' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why. The concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact: a way of making the model's processing legible. The first-person framing of these "thought summaries" is a presentation choice for the user-facing output, not a window into "real" thoughts. These are computational artifacts, not cognitive reports from a quirky, curious, or conflicted mind.
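Taken together, the tasks above imply a predictable output shape. Here is a minimal sketch of that shape as a Python skeleton; the key names are my own illustration, not the exact schema embedded in the prompt.

```python
# Illustrative skeleton of the structured output described above; field names are assumed.
ANALYSIS_SKELETON = {
    "task_1_metaphor_audit": [
        {"quote": "...", "frame": "...", "projected_quality": "...",
         "acknowledged_as_metaphor": False, "implications": "..."},
    ],
    "task_2_source_target_mapping": [
        {"quote": "...", "source_domain": "...", "target_domain": "...",
         "structural_mapping": "...", "conceals": "..."},
    ],
    "task_3_explanation_audit": [
        {"quote": "...", "explanation_type": "...", "mode": "mechanistic or agential",
         "slippage": "..."},
    ],
    "task_4_reframed_language": [
        {"original": "...", "reframed": "...", "technical_reality_check": "..."},
    ],
    "critical_observations": {
        "agency_slippage": "...",
        "metaphor_driven_trust": "...",
        "obscured_mechanics_and_context": "...",
    },
    "conclusion": "...",
    "extended_processing_summary": "...",  # optional diagnostic artifact
}
```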

A Note on Extended Processing Summaries​

Some outputs include an "Extended Processing Summary" section at the end, which represents the model's intermediate token generation before producing the final structured analysis. These are included selectively as diagnostic artifacts—they help assess how well the prompt constrains the model's behavior and reveal points where the instructions may need refinement. These summaries are computational artifacts of token probability, not evidence of cognition. The first-person presentation ("I will analyze...") is itself a framing choice that this project interrogates. See the About page for a fuller discussion of why even the language we use to describe model outputs matters.


The experiment:​

Can a large language model be configured to expose the metaphorical scaffolding in language about large language models? The recursive irony is productive: it makes the analytical instrument itself an object of analysis. Does this work demonstrate that, with sufficiently detailed prompts and structured output schemas, LLMs can function as specialized research instruments for systematic discourse analysis?

Let me know what you think, TD

Dreyfus, H. L. (1972). What computers can't do: A critique of artificial reason. New York: Harper & Row.

License

License: Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0