Large Language Models as Inadvertent Models of Dementia with Lewy Bodies: How a Disorder of Reality Construction Illuminates AI Hallucination
đ¤ "What survives...?" A rewriting experiment that tests whether anthropomorphic AI discourse can be translated into strictly mechanistic language while preserving the phenomena described.
This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping), the philosophy of social science (Robert Brown's typology of explanation), and accountability analysis.
All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs, not guarantees of factual accuracy or authorial intent.
- Source Title: Large Language Models as Inadvertent Models of Dementia with Lewy Bodies: How a Disorder of Reality Construction Illuminates AI Hallucination
- Source URL: https://doi.org/10.1007/s12124-026-09997-w
- Model: gemini-3.1-pro-preview
- Temperature: 1.05
- Top P: 0.95
- Tokens: input=7281, output=16935, total=24216
- Source Type: article
- Published: 2026-04-11
- Analyzed At: 2026-04-14T07:47:18.795Z
- Framework: metaphor (version 6.5)
- Schema Version: 3.0
- Run ID: 2026-04-14-large-language-models-as-inadvertent-mod-metaphor-plazrs
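For readers unfamiliar with the sampling parameters recorded above, the sketch below illustrates what Temperature and Top P conventionally control during decoding: temperature rescales next-token scores before normalization, and top-p (nucleus) sampling truncates the distribution to the smallest high-probability set. The tokens and scores are invented for illustration; this is a minimal sketch of the standard technique, not the analyzed model's actual decoder.

```python
import math

def truncated_distribution(logits, temperature=1.05, top_p=0.95):
    """Illustrates what temperature and nucleus (top-p) sampling control.

    `logits` maps candidate tokens to raw scores. Temperature rescales
    scores before the softmax; top-p then keeps only the smallest set
    of top tokens whose cumulative probability reaches `top_p`.
    """
    # Temperature scaling followed by a softmax normalization.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}

    # Nucleus filtering: accumulate tokens in descending probability
    # order until the cumulative mass reaches top_p, then cut off.
    kept, cumulative = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        cumulative += p
        if cumulative >= top_p:
            break

    # Renormalize the surviving "nucleus" into a distribution.
    z = sum(kept.values())
    return {tok: p / z for tok, p in kept.items()}

# Invented scores for three candidate next tokens.
print(truncated_distribution({"Paris": 3.0, "Lyon": 1.5, "Berlin": 0.2}))
```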
Metaphor & Illusion Dashboard
Anthropomorphism audit · Explanation framing · Accountability architecture
- How/Why Slippage: 60% of explanations use agential framing (6 of 10 explanations)
- Unacknowledged Metaphors: 50% presented as literal description (no meta-commentary or hedging)
- Hidden Actors: 88% agency obscured by agentless constructions (corporations/engineers unnamed)
- Explanation Types: How vs. Why framing
- Acknowledgment Status: Meta-awareness of metaphor
- Actor Visibility: Accountability architecture
Source → Target Pairs (8)
Human domains mapped onto AI systems
Metaphor Gallery (8)
Reframed Language: the full reframing table (8 rows) appears under Task 4 below.
Task 1: Metaphor and Anthropomorphism Audit
About this task
For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and, most critically, what implications this framing has for trust, understanding, and policy perception.
V3 Enhancement: Each metaphor now includes an accountability analysis.
1. LLM as Psychiatric Subject
Quote: "large language models (LLMs)... already instantiate a structural configuration resembling dementia with Lewy bodies (DLB)."
- Frame: Model as cognitively diseased organism
- Projection: This metaphor projects biological pathology and subjective cognitive degradation onto a mathematical matrix optimization process. By mapping Dementia with Lewy Bodies (DLB), a devastating neurodegenerative disease involving the physical deterioration of human brain tissue and the profound disruption of conscious awareness, onto a Large Language Model, the text implies that the software possesses an underlying cognitive architecture capable of experiencing a 'disorder of reality.' It maps the human capacity for conscious reality-testing and subjective endorsement onto the mechanistic process of token prediction. This fundamentally obscures the reality that the model is entirely devoid of consciousness, subjective experience, or any biological mechanism that could 'deteriorate' or 'fluctuate' in a phenomenological sense.
- Acknowledgment: Hedged/Qualified (The author qualifies the claim by using the term 'structural configuration resembling' and notes elsewhere it is a 'strictly structural sense.' I considered 'Explicitly Acknowledged,' but ruled it out because the text lacks definitive meta-commentary separating the metaphor from reality, instead arguing the models actually function as structural probes.)
- Implications: Framing computational failures as biological or psychiatric diseases profoundly affects public policy and technical evaluation. It inflates the perceived sophistication of AI systems by suggesting they are complex enough to suffer from human-like cognitive disorders, rather than simply recognizing them as statistically unreliable algorithms. This unwarranted biological anthropomorphism shields developers from accountability; if an AI is 'diseased,' its failures seem like tragic inevitabilities of complex cognition rather than deliberate engineering tradeoffs optimizing for conversational fluency over factual precision. This misleads regulators into treating AI alignment as a therapeutic or psychiatric endeavor rather than a strict product safety and consumer protection issue.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The agentless construction 'already instantiate' obscures the human engineers and corporate executives who deliberately designed the transformer architecture and selected the training data. The software does not spontaneously 'instantiate' configurations; OpenAI, Google, and others engineered these systems to maximize predictive fluency without hard-coded verification mechanisms. By treating the architecture as an organically emergent pathology, the text hides the profit-driven corporate choices that prioritize scale over truth. I considered 'Partial (some attribution)' because 'designed' is used abstractly elsewhere, but ruled it out because this specific sentence completely erases human originators, treating the model as a self-contained entity.
2. Statistical Error as Hallucination
Quote: "Hallucinations and fluctuations are thus interpreted as breakdowns in reality endorsement rather than failures of perception or reasoning."
- Frame: Algorithmic mismatch as perceptual illusion
- Projection: The text projects the complex psychological and neurological phenomenon of hallucination, which requires a conscious, perceiving subject who mistakenly experiences internally generated stimuli as external reality, onto the mechanistic generation of text sequences based on probability distributions. It attributes the human capacity for 'perception,' 'reasoning,' and 'reality endorsement' to a system that exclusively processes mathematical correlations. By discussing hallucination as a breakdown in reality endorsement, the metaphor suggests the AI previously possessed or ought to possess a conscious relationship with truth and reality, projecting an epistemic agency that the system fundamentally lacks.
- Acknowledgment: Hedged/Qualified (The text hedges by later calling them 'so-called hallucinations' and framing it as a 'relational phenomenon' evaluated by users. I considered 'Direct (Unacknowledged),' but ruled it out because the author actively attempts to redefine the term's application to AI as a structural mismatch rather than a direct perceptual failure.)
- Implications: Applying the term 'hallucination' to algorithmic outputs grants the system an illusion of mind, suggesting it is a reasoning entity that is merely 'confused' or 'dreaming.' This epistemic inflation builds unwarranted trust by implying the system generally perceives reality correctly and only occasionally suffers from 'breakdowns.' It masks the reality that the system never perceives reality at all; it only calculates probabilities. This framing shifts the regulatory focus away from false advertising and product liability toward the impossible task of 'curing' a machine of its illusions.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: This formulation entirely obscures the engineering teams and corporate actors who deploy systems known to generate false information. By labeling statistical noise as a 'breakdown in reality endorsement,' the text makes the AI the active (though failing) subject, hiding the fact that human developers decided to release a product that lacks verification mechanisms. I considered 'Ambiguous/Insufficient Evidence' but ruled it out because the structural passivity explicitly removes human developers from the etiology of the failure, creating an accountability sink.
3. Machine Tracking and Intention
Quote: "They do not track whether a named entity continues to refer to the same object across contexts, whether a proposition has been asserted before, or whether a claim conflicts with an existing record."
- Frame: Software limitation as epistemic negligence
- Projection: The metaphor projects the human cognitive tasks of 'tracking,' 'referring,' 'asserting,' and conflict resolution onto a large language model. While framed in the negative (what the model does not do), it still imposes an agential, epistemic framework. It implies that a human-like epistemic agent is failing to perform standard conscious operations, projecting the capability of 'knowing' onto a system that only processes. The concept of 'tracking a proposition' requires understanding semantics, objective reality, and logical consistency: traits of conscious awareness that are fundamentally alien to an autoregressive mechanism predicting the next token.
- Acknowledgment: Direct (Unacknowledged) (The author presents this characterization of AI epistemic failure directly, without any qualifying language or scare quotes around the cognitive verbs. I considered 'Hedged/Qualified,' but ruled it out because the surrounding context treats the AI's relationship to propositions as a literal, albeit absent, cognitive function.)
- Implications: Describing AI limitations through negated cognitive capabilities (what it 'does not track') subtly reinforces the illusion that the system is operating within a cognitive paradigm to begin with. This encourages users to treat the AI as a flawed human assistant rather than a complex calculator, leading to misplaced trust and dangerous reliance on the system for factual retrieval. By framing the issue as an epistemic failure of the AI rather than a database architecture limitation of the software, it invites solutions based on 'teaching' or 'aligning' the model rather than integrating basic deterministic software constraints.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The pronoun 'They' refers exclusively to the large language models, positioning the software as the entity responsible for tracking (or failing to track) truth. This completely displaces the agency of the software architects who explicitly chose to build generative systems without integrated database verification or logical consistency checkers. I considered 'Partial (some attribution)' given the mention of architectural absences elsewhere, but ruled it out because this sentence places the active burden of 'tracking' solely on the personified AI system, absolving the designers.
4. Subjective Perspective of the Machine
Quote: "From the model's perspective, there is no enduring proposition, only the current probability distribution over possible continuations."
- Frame: Mathematical operation as subjective viewpoint
- Projection: This profound anthropomorphic projection grants a 'perspective' to an insentient mathematical model. Having a perspective requires consciousness, a subjective locus of experience, and a specific phenomenological vantage point on the world. The text juxtaposes this deeply subjective, conscious framing ('model's perspective') directly against a purely mechanistic reality ('probability distribution'). This creates a cognitive dissonance that maps the feeling of subjective awareness onto the rote execution of matrix multiplications, entirely conflating a computational process with conscious knowing.
- Acknowledgment: Direct (Unacknowledged) (The phrase 'From the model's perspective' is stated as literal fact without any hedging, meta-commentary, or scare quotes. I considered 'Hedged/Qualified' because the second half of the sentence is highly technical, but ruled it out because the attribution of a subjective 'perspective' to begin the clause is entirely unmitigated.)
- Implications: Attributing a 'perspective' to a mathematical model normalizes the treatment of AI as an independent conscious entity. This has severe implications for liability and ethics, as it implicitly grants the machine a form of moral patienthood or quasi-subjectivity. If the model has a 'perspective,' it becomes easier to blame the model for its outputs (it simply saw things differently) rather than blaming the corporation that optimized its weights. This accelerates unwarranted trust by suggesting the machine possesses an internal subjective life akin to human awareness.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: By locating the origin of the output in the 'model's perspective,' the text obscures the human perspective of the AI developers. It is the developers' perspective that prioritized probability distributions over enduring propositions in the system's architecture. I considered 'Ambiguous/Insufficient Evidence,' but ruled it out because the phrase actively works to construct an artificial subjective agent ('the model') to stand in for the human software engineers, making the displacement of agency clear and functional.
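To make the contrast concrete: everything the quoted sentence attributes to a 'perspective' can be written as a short, stateless function. The sketch below is a deliberately toy stand-in (random weights, a five-word vocabulary, conditioning on the last token only), not a transformer, but it shows the kind of object under discussion: a deterministic map from input ids to a probability vector, with no state persisting between calls.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat"]
W = rng.normal(size=(len(VOCAB), len(VOCAB)))  # stand-in for learned weights

def next_token_distribution(context_ids):
    """Everything the quoted sentence calls a 'perspective': a
    deterministic map from the current input to a probability vector.
    No state survives between calls; nothing is held or endorsed."""
    logits = W[context_ids[-1]]          # toy: condition on last token only
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

dist = next_token_distribution([0, 1])   # context: "the cat"
print(dict(zip(VOCAB, dist.round(3))))
```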
5. Violation of Internal Norms
Quote: "When an LLM generates a non-existent citation or confidently asserts an incorrect fact, it is not violating an internal norm of truth. It is generating text without implementing the operations required to treat truth as a constraint."
- Frame: Machine behavior as moral/epistemic conduct
- Projection: The text maps concepts of human epistemic morality ('violating an internal norm of truth,' 'confidently asserts') onto token generation. While the author attempts to clarify that the machine is not violating a norm, using the framework of 'confidence' and 'norms' projects a human-like epistemic agency onto the system. A machine cannot be 'confident'; it only has statistical weights. A machine cannot have 'internal norms of truth'; it only operates on code. Projecting these concepts, even in negation, suggests the software exists in a moral or epistemic landscape where it could hypothetically possess such norms.
- Acknowledgment: Hedged/Qualified (The author uses 'confidently asserts' but immediately hedges by explaining the machine is mechanically 'generating text without implementing the operations.' I considered 'Direct (Unacknowledged),' but ruled it out because the core purpose of the sentence is to dispel the illusion of intentional norm-violation, even while relying on anthropomorphic verbs like 'asserts.')
- Implications: Using emotionally and epistemically loaded words like 'confidently' to describe a high-probability statistical output creates a dangerous semantic inflation. It trains users to read human psychological states into algorithmic behaviors. When a machine is described as 'confident,' users are more likely to bypass their own critical thinking and accept false information, increasing their vulnerability to automated misinformation. It frames the machine as an actor navigating truth, rather than a tool executing commands.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The quote says 'it is generating text without implementing the operations,' blaming the AI ('it') for failing to implement constraints. However, software does not implement its own operations; human developers do. OpenAI and Google failed to implement those operations. I considered 'Partial (some attribution)' because the technical language hints at design, but ruled it out because the syntactic subject of the failure is exclusively the AI, rendering the human engineers invisible.
6. Evolutionary Optimization as Emergence
Quote: "This convergence is especially striking because it was not engineered as a disease simulation; it emerged from the optimization of generative fluency without the concurrent implementation of mechanisms for reality endorsement..."
- Frame: Algorithmic development as natural emergence
- Projection: The metaphor maps the biological and naturalistic concept of 'emergence' onto a highly deliberate, capital-intensive corporate engineering process. It projects the quality of an organic, evolutionary growth process onto mathematical models tuned on server farms. While it does not project consciousness directly, it projects an organic autonomy, suggesting the system developed its structural homology to human disease ('psychopathology') naturally and independently, rather than as the direct result of specific human mathematical optimization choices.
- Acknowledgment: Hedged/Qualified (The phrase notes it was 'not engineered as a disease simulation,' contrasting intentional engineering with what 'emerged.' I considered 'Direct (Unacknowledged),' but ruled it out because the inclusion of the word 'optimization' retains a slight mechanical/mathematical framing that qualifies the organicism of 'emerged.')
- Implications: The 'emergence' narrative is a powerful tool for tech companies to evade regulation. If AI behaviors simply 'emerge' organically like weather patterns or biological evolution, they are treated as natural phenomena to be studied (as the author does here with psychiatry) rather than commercial products to be strictly regulated. It mystifies the underlying mechanics, convincing policymakers that the technology is beyond human control and therefore immune to standard product liability paradigms.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The phrase 'emerged from the optimization' uses passive, agentless language to describe an active, human-directed corporate process. WHO optimized the generative fluency? WHO decided not to implement concurrent mechanisms for reality endorsement? The AI researchers and executives. I considered 'Named (actors identified)' because 'optimization' implies an optimizer, but ruled it out completely because no actual entity is named; the process is treated as an autonomous force of nature.
7. The Machine as an Evaluator
Quote: "They produce explanations, summaries, and arguments that are often well-formed and contextually appropriate."
- Frame: Text generation as rhetorical action
- Projection: The text projects human rhetorical and pedagogical intentions onto the algorithmic output of a transformer model. By stating the system 'produces explanations' and 'arguments,' it attributes the conscious, intentional acts of teaching and persuading to statistical pattern matching. An 'explanation' requires an intent to clarify and a conscious understanding of the listener's knowledge gap; an 'argument' requires the intentional defense of a believed premise. The system merely processes linguistic correlations; it does not 'know' it is explaining anything.
- Acknowledgment: Direct (Unacknowledged) (The author presents the capabilities of LLMs to produce explanations and arguments without any hedging, presenting it as a literal capability. I considered 'Hedged/Qualified,' but ruled it out because there is no modifying language softening the verbs 'produce' or the nouns 'explanations/arguments.')
- Implications: When outputs are characterized as 'explanations' rather than 'text that mimics the structure of an explanation,' audiences naturally extend their relational trust to the machine. We are socially conditioned to trust the intent behind an explanation. This linguistic framing obscures the lack of causal reasoning in the model, tricking users into believing the model understands the physical or logical mechanisms it is describing, which can lead to catastrophic errors when using AI for scientific or medical guidance.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: I considered 'Named (actors identified)' because treating the AI ('They') as the immediate actor producing the text is technically descriptive of the software execution phase, even if anthropomorphic. But the passage names no designers or deployers and ignores the human prompting that elicits the output, so the visibility is ultimately Hidden: the system is presented as the sole autonomous author of the arguments, erasing both the engineers and the creators of the training corpus.
8. Artificial Psychopathology
Quote: "...the convergence between DLB and LLMs marks an unexpected kind of singularity: not the arrival of artificial consciousness, but the emergence of artificial psychopathology as a new probe into how subjectivity and reality are constructed."
- Frame: Software defect as mental illness
- Projection: This extraordinary metaphor projects the complex, suffering-laden, biological reality of human mental illness ('psychopathology') onto the predictable failure modes of mathematical software. While explicitly denying 'artificial consciousness,' the author immediately contradicts this by claiming 'artificial psychopathology,' a profound ontological category error. Psychopathology fundamentally requires a 'psyche' (a conscious mind) to suffer pathology. By claiming the AI exhibits psychopathology, the text projects a vast inner landscape of subjectivity and broken awareness onto an entirely inert matrix of weights and biases.
- Acknowledgment: Direct (Unacknowledged) (The phrase 'emergence of artificial psychopathology' is presented as a literal, theoretical conclusion of the paper's thesis without scare quotes. I considered 'Explicitly Acknowledged' because the author denies 'artificial consciousness' in the same sentence, but ruled it out because 'artificial psychopathology' is advanced as a genuine, unhedged scientific phenomenon.)
- Implications: This framing has massive epistemological and institutional stakes. It attempts to carve out a new field of studyâcomputational psychiatry for machinesâwhich treats AI systems as patients rather than products. This elevates the prestige of the models, positioning them alongside the mystery of the human mind, while simultaneously excusing their severe factual defects as 'illnesses' rather than 'bugs' or 'poor engineering.' It completely obfuscates the mechanistic reality that these systems just correlate text.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The concept of 'artificial psychopathology' acts as the ultimate accountability sink. By defining the system's failures as an emergent illness, it utterly erases the liability and responsibility of the multi-billion-dollar corporations that built and deployed the defective models. I considered 'Partial (some attribution)' but ruled it out entirely; there is no room for human engineers in a framework that views software bugs as naturally emergent mental diseases.
Task 2: Source-Target Mapping
About this task
For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.
Mapping 1: Neurodegenerative human disease and conscious suffering → Mathematical absence of hard-coded verification algorithms
Quote: "large language models (LLMs)... already instantiate a structural configuration resembling dementia with Lewy bodies (DLB)."
- Source Domain: Neurodegenerative human disease and conscious suffering
- Target Domain: Mathematical absence of hard-coded verification algorithms
- Mapping: The structure of a human biological tragedy, where a previously functioning, conscious brain deteriorates, causing a dissociation between sensory input and reality stabilization, is mapped onto an artificial neural network. The mapping assumes that because the AI's linguistic output superficially resembles the confusing speech of a DLB patient, the underlying 'structural configuration' is analogous. It projects the complex interplay of human memory, consciousness, and perceptual validation onto the relationship between generative algorithms and missing database-grounding architectures.
- What Is Concealed: This mapping conceals the fundamental dissimilarity: a DLB patient has a lived, conscious experience of reality that is organically breaking down; an LLM has no lived experience, no reality to break down, and is operating exactly as mathematically intended based on its training. It obscures the proprietary opacity of the models: we cannot even see the true architecture of commercial LLMs, making the assertion of a 'structural configuration' a speculative mapping over a corporate black box.
Mapping 2: Conscious human reality-testing and perceptual failure → Statistical token prediction deviating from factual ground truth
Quote: "Hallucinations and fluctuations are thus interpreted as breakdowns in reality endorsement..."
- Source Domain: Conscious human reality-testing and perceptual failure
- Target Domain: Statistical token prediction deviating from factual ground truth
- Mapping: The relational structure of human perception is projected onto machine computation. In the source domain, a conscious mind continuously checks internal stimuli against external reality (endorsement), and a failure results in hallucination. The target domain maps 'internal stimuli' to text generation, and 'reality endorsement' to the missing programmatic constraints. The mapping invites the assumption that the machine processes 'reality' conceptually and merely suffers a 'breakdown' in an operation it is theoretically capable of performing.
- What Is Concealed: This conceals the absolute absence of 'reality' in the target domain. LLMs do not have an external reality to endorse; they only have a static dataset of text vectors. The mapping hides the fact that mathematical correlations are fundamentally divorced from epistemology. It also obscures the massive, low-wage human labor (RLHF) required to temporarily suppress these statistical deviations, framing the failure as an internal model breakdown rather than the inherent limitation of predicting next words without a world model.
Mapping 3: Human epistemic vigilance and semantic awareness → Absence of persistent memory architecture across context windows
Quote: "They do not track whether a named entity continues to refer to the same object across contexts..."
- Source Domain: Human epistemic vigilance and semantic awareness
- Target Domain: Absence of persistent memory architecture across context windows
- Mapping: The source domain involves a conscious researcher or speaker deliberately holding an entity in mind and verifying its logical consistency across a narrative. This relational structure is mapped onto the computational limits of an LLM's context window and attention mechanisms. The mapping invites the assumption that the machine is an epistemic agent that 'should' be tracking meaning, projecting the conscious act of 'knowing' reference onto the mechanical act of computing attention weights between tokens.
- What Is Concealed: This mapping conceals the entirely mathematical nature of the transformer architecture, which operates on self-attention scores rather than semantic meaning or symbolic logic. It hides the fact that the machine cannot 'refer' to an object because it only accesses tokens, not the physical or conceptual objects those tokens represent. By anthropomorphizing the absence of a feature, it obscures the deliberate corporate choice to prioritize scale and flexibility over the rigid, hard-coded rules required for logical consistency.
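A toy rendering of the 'attention scores' mentioned above makes the point visible: the computation is a similarity-weighted normalization over vectors, with nothing in it that could refer to an entity. The dimensions and random values below are illustrative assumptions, not any production model's.

```python
import numpy as np

def attention_weights(queries, keys):
    """Scaled dot-product attention: a softmax over vector similarities.
    The 'attention' is arithmetic on embeddings; nothing in the
    computation refers to, or is aware of, what any token denotes."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    exp = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
q = rng.normal(size=(2, 4))  # toy query vectors for two positions
k = rng.normal(size=(5, 4))  # toy key vectors for five context tokens
print(attention_weights(q, k).round(2))  # each row sums to 1.0
```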
Mapping 4: Subjective phenomenological consciousness → Mathematical state of a software program during runtime
Quote: "From the model's perspective, there is no enduring proposition, only the current probability distribution..."
- Source Domain: Subjective phenomenological consciousness
- Target Domain: Mathematical state of a software program during runtime
- Mapping: The concept of a conscious 'perspective,' the subjective locus from which a mind experiences the world, is mapped onto the mathematical state of the AI model as it calculates outputs. The relational structure equates human subjective experience with a 'probability distribution.' This radical mapping invites the reader to step into the 'mind' of the machine, explicitly projecting the highest form of conscious knowing (having a perspective) onto the lowest form of mechanistic processing (statistical weights).
- What Is Concealed: This mapping completely conceals the non-existence of an internal subjective state. A machine no more has a 'perspective' than a pocket calculator has a perspective on addition. It obscures the hardware dependency, energy consumption, and raw mathematical nature of the system. Furthermore, it conceals the proprietary nature of the weights; the 'distribution' is not a perspective but a locked corporate asset that is intentionally kept opaque from public scrutiny to protect intellectual property.
Mapping 5: Human moral/epistemic psychology and social communication → High-probability token generation resulting in a false statement
Quote: "When an LLM... confidently asserts an incorrect fact, it is not violating an internal norm of truth."
- Source Domain: Human moral/epistemic psychology and social communication
- Target Domain: High-probability token generation resulting in a false statement
- Mapping: The source domain involves a human making a statement with emotional certainty (confidence) and the ethical frameworks guiding truth-telling (internal norms). This is mapped onto an algorithm generating a sequence of tokens with high statistical probability but low factual accuracy. The mapping assumes that statistical probability (the target) is functionally equivalent to psychological confidence (the source), projecting the conscious experience of belief onto mathematical weights.
- What Is Concealed: The mapping conceals the fact that statistical probability has no relationship to factual truth or psychological confidence. A model can generate a false statement with a 99% probability score simply because that token sequence was highly represented in the unvetted internet training data. It obscures the vast, scraped datasets full of human biases and errors that actually dictate the output, hiding the data labor and copyright infringement behind a veil of machine 'confidence.'
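The decoupling of probability from truth can be demonstrated with even the simplest statistical language model. In this sketch, a toy bigram model is fit to an invented three-sentence corpus in which a false claim happens to be more frequent; the false continuation then receives the higher conditional probability for exactly the reason the mapping analysis describes.

```python
from collections import Counter, defaultdict

# Invented corpus in which a false claim is simply more frequent.
corpus = [
    "the capital of australia is sydney",    # false but common
    "the capital of australia is sydney",
    "the capital of australia is canberra",  # true but rarer here
]

# Fit bigram counts: how often each token follows the previous one.
counts = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

# The conditional distribution after "is" favors the false continuation
# purely on frequency; no term in the model represents truth.
follows = counts["is"]
total = sum(follows.values())
for tok, c in follows.most_common():
    print(f"P({tok!r} | 'is') = {c / total:.2f}")
```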
Mapping 6: Natural evolution and biological emergence → Corporate-directed machine learning and hyperparameter tuning
Quote: "...it emerged from the optimization of generative fluency..."
- Source Domain: Natural evolution and biological emergence
- Target Domain: Corporate-directed machine learning and hyperparameter tuning
- Mapping: The biological concept of emergence, where complex systems self-organize without a central designer, is mapped onto the training phase of large language models. The structure maps natural selection onto the mathematical optimization of a loss function ('generative fluency'). This mapping invites the assumption that AI behavior is an autonomous, natural phenomenon outside of strict human control, projecting the autonomy of nature onto a manufactured artifact.
- What Is Concealed: This mapping radically conceals human agency, capital investment, and engineering choices. It hides the server farms, the energy grids, the executives setting the objectives, and the engineers tuning the hyperparameters. By framing optimization as an organic 'emergence,' it obscures the commercial reality that companies intentionally chose to optimize for conversational fluency because it makes for a highly marketable, engaging product, despite the known epistemic risks.
Mapping 7: Human rhetorical, pedagogical, and logical action → Sequence-to-sequence text synthesis matching prompt structures
Quote: "They produce explanations, summaries, and arguments..."
- Source Domain: Human rhetorical, pedagogical, and logical action
- Target Domain: Sequence-to-sequence text synthesis matching prompt structures
- Mapping: The human acts of synthesizing knowledge, teaching, and defending beliefs are mapped directly onto algorithmic sequence generation. The structure assumes that because the output mimics the linguistic form of an explanation or argument, the generative process must share the intentional, conscious structure of explaining or arguing. It maps the appearance of reasoning onto the mechanics of correlation.
- What Is Concealed: The mapping conceals the absence of a world model, causal understanding, and logical deduction. The machine is not 'arguing'; it is synthesizing linguistic patterns that resemble arguments found in its training data. This conceals the model's total reliance on the human corpus: it is effectively performing an advanced form of statistical plagiarism, remixing the actual explanations and arguments created by human laborers whose contributions remain uncredited and uncompensated.
Mapping 8: Clinical psychiatry and the study of human mental illness → Analysis of systematic errors in artificial neural networks
Quote: "...the emergence of artificial psychopathology as a new probe into how subjectivity and reality are constructed."
- Source Domain: Clinical psychiatry and the study of human mental illness
- Target Domain: Analysis of systematic errors in artificial neural networks
- Mapping: The entire field of clinical psychiatry and the study of conscious suffering is mapped onto the debugging and error analysis of software architectures. The mapping equates the human 'psyche' with artificial 'subjectivity,' and human disease with computational mismatch. It invites the ultimate assumption that machines are so sophisticated they have crossed a threshold into having 'minds' capable of breaking in human ways, fully conflating mechanistic processing with conscious knowing and suffering.
- What Is Concealed: This mapping entirely conceals the ontological chasm between living, conscious, suffering beings and inert, manufactured tools. It obscures the fact that 'artificial subjectivity' does not exist. Furthermore, it conceals the hubris and 'curse of knowledge' of the researcher, who projects their own profound understanding of human psychiatry onto the blank screen of a chatbot. It protects the AI industry from traditional software safety standards by elevating software bugs to the level of profound scientific mysteries.
Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")
About this task
This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.
Explanation 1
Quote: "LLMs are highly effective generators of locally coherent linguistic sequences. They produce explanations, summaries, and arguments that are often well-formed and contextually appropriate."
- Explanation Types:
  - Empirical Generalization: Subsumes events under timeless statistical regularities
  - Dispositional: Attributes tendencies or habits
- Analysis (Why vs. How Slippage): This explanation begins mechanistically by defining LLMs as 'generators of locally coherent linguistic sequences' (Empirical Generalization), focusing on how they typically operate at a structural level. However, it immediately slips into an agential framing (Dispositional/Intentional) by asserting they 'produce explanations, summaries, and arguments.' This shift emphasizes the surface-level utility and linguistic sophistication of the output while obscuring the mathematical reality of token prediction. By labeling the outputs as 'arguments' and 'explanations,' the choice emphasizes human-like cognitive intent and conceals the lack of actual reasoning or understanding behind the sequences. It moves from defining a statistical pattern to attributing rhetorical agency to a machine.
- Consciousness Claims Analysis: The passage exhibits a clear tension between mechanistic and consciousness-implying verbs. 'Generators' is mechanistic, correctly reflecting processing. However, claiming the system produces 'explanations' and 'arguments' attributes a level of conscious intent and justified belief to the system. An argument requires knowing, understanding premises, and holding a belief to be defended; a machine merely processes word frequencies. This demonstrates the 'curse of knowledge,' where the author, reading a coherent string of text, projects their own capacity for understanding onto the system that generated it. The actual mechanistic process (using attention heads to weight contextual embeddings and sample from a probability distribution of vocabulary tokens) is completely replaced by the attribution of high-level human epistemic acts.
- Rhetorical Impact: Framing the output as 'arguments' and 'explanations' drastically shapes audience perception by inflating the perceived autonomy and intelligence of the AI. It encourages relational trust; humans trust explanations because they trust the explainer's intent to convey truth. If audiences believe the AI 'knows' how to argue, they are likely to accept its outputs as reasoned truths rather than statistical likelihoods. This framing masks the severe risk of relying on ungrounded systems for high-stakes decision-making.
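At generation time, the 'production' of an argument reduces, step by step, to a draw from a probability vector. A minimal sketch, with an invented vocabulary and made-up probabilities:

```python
import numpy as np

rng = np.random.default_rng(2)
vocab = ["therefore", "however", "moreover", "banana"]
probs = np.array([0.45, 0.30, 0.20, 0.05])  # made-up output distribution

# What the passage calls 'producing an argument' reduces, at each
# step, to drawing one token from a probability vector like this one.
print(rng.choice(vocab, p=probs))
```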
Explanation 2
Quote: "When an LLM generates a non-existent citation or confidently asserts an incorrect fact, it is not violating an internal norm of truth. It is generating text without implementing the operations required to treat truth as a constraint."
- Explanation Types:
  - Functional: Explains behavior by role in self-regulating system with feedback
  - Reason-Based: Gives agent's rationale, entails intentionality and justification
- Analysis (Why vs. How Slippage): This passage attempts a functional, mechanistic explanation (how it operates without truth constraints) but falls into the trap of reason-based and intentional language. By using phrases like 'confidently asserts' and 'violating an internal norm,' it frames the AI's behavior in moral and agential terms, only to negate them. This rhetorical negation emphasizes what the AI should be doing in a human sense, rather than strictly explaining what it is doing mechanically. The choice emphasizes the AI as an epistemic actor failing to uphold norms, which obscures the reality that the system is functioning exactly as mathematically designed by its creators.
- Consciousness Claims Analysis: The presence of the verb 'asserts' modified by the adverb 'confidently' directly attributes conscious psychological states to the machine. A machine cannot feel confidence, nor can it assert; it can only predict the next token with a high statistical probability. The author is conflating statistical processing (a high weight assigned to a token) with epistemic knowing (feeling certain about a justified belief). The mechanistic reality is that the model is sampling from a probability distribution shaped by its training data; it outputs a false citation because that sequence of tokens is highly probable in the geometric space of the model, not because it lacks an 'internal norm of truth.'
- Rhetorical Impact: By bringing concepts like 'confidence' and 'norms' into the discussion of algorithmic error, the framing solidifies the illusion of mind even while trying to dispel it. It makes the system seem like a rogue autonomous agent rather than a defective tool. If audiences believe the machine can be 'confident,' they will misinterpret its tone as an indicator of reliability, exacerbating the risks of unwarranted trust and epistemic contamination in research and public discourse.
Explanation 3
Quote: "From the model's perspective, there is no enduring proposition, only the current probability distribution over possible continuations."
- Explanation Types:
  - Theoretical: Embeds in deductive framework, may invoke unobservable mechanisms
  - Reason-Based: Gives agent's rationale, entails intentionality and justification
- Analysis (Why vs. How Slippage): This is a profound hybrid explanation. It uses highly theoretical, mechanistic language ('probability distribution over possible continuations') to explain how the system works. However, it frames this entire mechanical reality within a fiercely agential and reason-based construct: 'From the model's perspective.' This choice emphasizes the mathematical reality while simultaneously anthropomorphizing the math itself. It attempts to explain why the model fails to hold a proposition by giving the model a subjective viewpoint. This bizarre amalgamation obscures the fact that having a perspective and calculating a distribution are ontologically mutually exclusive.
- Consciousness Claims Analysis: This passage represents the ultimate blurring of knowing and processing. Attributing a 'perspective' to an AI is a direct claim of consciousness and subjectivity. It suggests the machine 'knows' the world, but simply knows it as a probability distribution. The author projects a phenomenological vantage point onto a matrix of weights. The actual mechanistic reality is entirely contained in the second half of the sentence: the system executes functions to map inputs to outputs based on learned vector representations. The 'perspective' is entirely fabricated by the author's curse of knowledge, attempting to empathize with a mathematical equation.
- Rhetorical Impact: This framing fundamentally alters the audience's perception of machine autonomy. Granting a 'perspective' to AI establishes it as a quasi-subject, encouraging empathy and relation-based trust. It makes the machine's limitations seem like tragic existential conditions rather than engineering flaws. If audiences believe AI has a perspective, they may grant it moral consideration or view its outputs as subjective opinions rather than objective calculations, dangerously shifting the burden of accountability away from the developers.
Explanation 4
Quote: "...it emerged from the optimization of generative fluency without the concurrent implementation of mechanisms for reality endorsement..."
- Explanation Types:
  - Genetic: Traces origin through dated sequence of events or stages
  - Intentional: Refers to goals/purposes, presupposes deliberate design
- Analysis (Why vs. How Slippage): This explanation utilizes a genetic framework to explain how the structural configuration of the AI came to be over time ('emerged from optimization'). However, it exhibits a critical slippage regarding intentionality. It acknowledges design choices ('optimization,' 'implementation') but uses agentless, passive constructions to make the process seem like natural evolution ('emerged'). This emphasizes the autonomy of the technology's development while completely obscuring the corporate intentionality, economic imperatives, and human agency that drove the specific optimization targets.
- Consciousness Claims Analysis: While this passage relies mostly on mechanistic verbs ('emerged', 'optimization', 'implementation'), the underlying epistemic claim is woven into the concept of 'reality endorsement.' The text implies that 'reality endorsement' is a mechanism that can simply be implemented alongside generative fluency. This assumes that a mathematical system can be given the capacity to consciously 'endorse' reality, conflating mechanical fact-checking algorithms with the conscious, epistemic act of knowing and verifying truth. The actual mechanistic process involves engineers tuning loss functions and deciding not to integrate external knowledge graph lookups, a strictly computational reality devoid of epistemic 'endorsement.'
- Rhetorical Impact: By framing the AI's flaws as an evolutionary 'emergence,' the passage reduces the perceived risk of corporate negligence and enhances the mystique of AI as an untamable force of nature. It removes human decision-makers from the equation. If audiences view AI development as an emergent, biological process rather than a controlled engineering project, they will demand less regulatory oversight and accept catastrophic failures as the natural cost of technological evolution.
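What 'optimization of generative fluency' denotes mechanically is, in the standard setup, minimizing a next-token cross-entropy loss. The sketch below assumes that standard objective (it is not taken from the analyzed paper) to show the point the audit makes: the loss rewards matching the training text and contains no term that asks whether the text is true.

```python
import numpy as np

def next_token_loss(predicted_probs, target_index):
    """Standard next-token cross-entropy: the negative log-probability
    of the token that actually followed in the training text. Note the
    absence of any term asking whether that text is true."""
    return -np.log(predicted_probs[target_index])

pred = np.array([0.1, 0.7, 0.2])              # distribution over 3 toy tokens
print(next_token_loss(pred, target_index=1))  # low loss: likely continuation
print(next_token_loss(pred, target_index=2))  # higher loss: rarer one
```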
Explanation 5
Quote: "LLMs do not participate in these stabilizing practices. They do not track whether a named entity continues to refer to the same object across contexts..."
- Explanation Types:
  - Dispositional: Attributes tendencies or habits
  - Intentional: Refers to goals/purposes, presupposes deliberate design
- Analysis (Why vs. How Slippage): This explanation relies on a dispositional framing defined by negation, explaining why the AI fails by listing human actions it refuses or fails to perform ('do not participate', 'do not track'). This frames the AI agentially, as an actor failing to fulfill social and epistemic obligations. The choice emphasizes the behavioral parallel with human psychological failure (like DLB), but it obscures the mechanistic reality that the architecture physically lacks the memory states or database structures required to maintain persistent symbolic reference.
- Consciousness Claims Analysis: The verbs 'participate' and 'track' are profoundly loaded with consciousness and intent. Participation implies social awareness; tracking implies epistemic vigilance. The text accuses the AI of failing to 'know' the continuity of its own references. This projects the researcher's semantic understanding onto the model. The actual mechanistic process is that transformers process inputs in fixed context windows using self-attention; they generate tokens sequentially without maintaining an updated, queryable state of the entities generated in previous steps. They don't 'fail to track'; they simply multiply vectors.
- Rhetorical Impact: Using verbs of social and epistemic failure ('participate,' 'track') to describe algorithms reinforces the audience's perception of AI as a social agent. This framing maintains the illusion of a mind even when describing a limitation. It affects reliability assessments: if audiences think the AI is simply 'failing to track' in a given session, they might try to prompt it harder to 'pay attention,' misunderstanding that the system is mathematically incapable of symbolic tracking, leading to dangerous over-reliance on prompt engineering.
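The control flow the consciousness-claims analysis describes can be written in a few lines. The sketch below is a generic autoregressive loop under the usual fixed-window assumption, with a trivial stand-in model; note that no data structure records entities or prior assertions anywhere in the loop.

```python
def generate(model, prompt_ids, max_new_tokens, window=2048):
    """Generic autoregressive loop. At every step the model re-reads
    only the trailing context window and emits one token. No entity
    registry or record of prior assertions exists anywhere to 'track'."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        context = ids[-window:]      # fixed window; older tokens fall away
        ids.append(model(context))   # one forward pass, one new token
    return ids

# Trivial stand-in 'model' that always predicts token 0; real models
# return an argmax or sample over a distribution, but the control
# flow around them is the same.
print(generate(lambda ctx: 0, prompt_ids=[5, 9], max_new_tokens=4))
```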
Task 4: AI Literacy in Practice - Reframing Anthropomorphic Language
About this task
This section proposes alternative language for key anthropomorphic phrases, offering more mechanistic and precise framings that better reflect the actual computational processes involved. Each reframing attempts to strip away the projections of intention, consciousness, or agency that are embedded in the original language.
V3 Enhancement: A fourth column addresses human agency restoration, reframing agentless constructions to name the humans responsible for design and deployment decisions.
| Original Anthropomorphic Frame | Mechanistic Reframing | Technical Reality Check | Human Agency Restoration |
|---|---|---|---|
| From the model's perspective, there is no enduring proposition, only the current probability distribution over possible continuations. | The transformer architecture lacks a persistent internal state or semantic understanding; it strictly evaluates the current input sequence to calculate a statistical probability distribution for the next token. | The model has no subjective perspective, nor does it hold or reject propositions. It is a mathematical system that processes numerical weights and predicts subsequent tokens based on patterns learned during training, completely devoid of conscious awareness. | N/A - describes computational processes without displacing responsibility. |
| They do not track whether a named entity continues to refer to the same object across contexts... | The software architecture does not include mechanisms to cross-reference generated terms against a persistent database, resulting in outputs that fail to maintain logical consistency across a context window. | The AI does not 'track' or 'refer' to objects because it has no awareness of objects or semantics. It strictly processes sequences of text as high-dimensional vectors, calculating attention scores without understanding the real-world entities those vectors represent. | The engineering teams who built these systems prioritized fluid text generation over deterministic logic, deliberately omitting the database architectures that would enforce strict logical consistency. |
| When an LLM generates a non-existent citation or confidently asserts an incorrect fact, it is not violating an internal norm of truth. | When the system outputs a token sequence formatted like a citation or a factual statement that contradicts reality, it is simply executing its prediction algorithm. | Models cannot be 'confident' or hold 'norms.' They classify tokens and generate outputs correlating with their training data. A 'hallucinated' citation is mathematically identical to a correct one: both are just high-probability token sequences generated without factual verification. | N/A - describes computational processes without displacing responsibility. |
| Hallucinations and fluctuations are thus interpreted as breakdowns in reality endorsement rather than failures of perception or reasoning. | Statistical deviations in text generation are better understood as the expected result of omitting hard-coded verification mechanisms, rather than mimicking biological perception errors. | The system does not 'endorse reality,' 'perceive,' or 'reason.' It executes vector operations. The output deviations occur because the architecture processes linguistic probabilities without a grounded world model to test claims against external facts. | Developers at AI labs chose to deploy ungrounded language models as search engines and encyclopedias, framing the resulting predictable statistical errors as mysterious 'hallucinations' rather than design flaws. |
| They produce explanations, summaries, and arguments that are often well-formed and contextually appropriate. | The software synthesizes text sequences that mimic the structural patterns of explanations, summaries, and arguments found in human-authored training data. | The system does not 'explain' or 'argue,' as it holds no beliefs, understands no concepts, and has no communicative intent. It generates activations that reconstruct the statistical shape of arguments it was trained on. | N/A - describes computational processes without displacing responsibility. |
| ...it emerged from the optimization of generative fluency without the concurrent implementation of mechanisms for reality endorsement... | Developers optimized the system's loss function to maximize fluent text generation, choosing not to simultaneously build and integrate databases or logic engines capable of fact-checking the outputs. | The system did not organically 'emerge.' The mathematical weights were updated over billions of iterations to minimize prediction error on text fluency, a purely mechanistic process distinct from recognizing or endorsing reality. | Corporate researchers and executives directed billions of dollars into optimizing conversational fluency for marketability, intentionally bypassing the slower, more difficult work of engineering strict factual verification systems. |
| LLMs do not participate in these stabilizing practices. | Current transformer models are not programmed to interface with external citation indices, maintain persistent identity records, or execute fact-checking protocols. | Models cannot 'participate' in human epistemic and institutional practices. They are inert mathematical functions that execute when prompted, processing data without social awareness or the capacity for collaborative stabilization. | Software designers build these models as isolated statistical engines rather than integrating them into traditional software systems that enforce database integrity and external validation. |
| ...the emergence of artificial psychopathology as a new probe into how subjectivity and reality are constructed. | The analysis of systematic structural limitations in neural networks provides a unique comparative model for understanding human cognitive operations. | Software does not possess a psyche and therefore cannot experience psychopathology. The system merely exhibits computational output patterns that researchers map onto human disease models. It has no subjectivity or conscious reality. | Academics and researchers appropriate the vocabulary of clinical psychiatry to describe corporate software bugs, elevating the prestige of their research while mystifying the nature of the technology. |
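Several reframings in the table above appeal to deterministic software constraints, such as checking generated citations against a verified index rather than trusting fluent output. As a rough illustration of what one such post-hoc check could look like (the index, regex, and function here are hypothetical, not any vendor's actual pipeline):

```python
import re

KNOWN_DOIS = {"10.1007/s12124-026-09997-w"}  # stand-in verified index

def flag_unverified_citations(generated_text):
    """Deterministic post-hoc check of the kind the reframings invoke:
    match citation-shaped strings against a verified index instead of
    trusting fluent output."""
    dois = re.findall(r"10\.\d{4,9}/[^\s,;]+", generated_text)
    return [doi for doi in dois if doi not in KNOWN_DOIS]

print(flag_unverified_citations(
    "See doi:10.1007/s12124-026-09997-w and doi:10.9999/fake.ref.123"
))  # -> ['10.9999/fake.ref.123']
```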
Task 5: Critical Observations - Structural Patterns
Agency Slippage
The text exhibits a systematic and dramatic oscillation between mechanistic and agential framings, functioning as a rhetorical engine that first grounds itself in scientific authority before launching into profound anthropomorphism. The slippage generally moves in a mechanical-to-agential direction. In the early sections, the text establishes credibility using technical, architectural language: 'instantiate a structural configuration,' 'input representations,' 'generative mechanisms.' This builds trust with a scientifically literate audience. However, having established this mechanistic baseline, the text executes a dramatic slippage when discussing the system's limitations, abruptly pivoting to highly agential, conscious framings: the model is suddenly granted a 'perspective,' it 'confidently asserts,' it fails to 'track' or 'participate' in social practices.
This slippage is deeply intertwined with a reciprocal displacement of human agency. As the AI system is increasingly framed as an active, knowing subject (an agent with 'artificial psychopathology' who fails to 'endorse reality'), the human engineers who built the system vanish. The text uses agentless, passive constructions for human decisions ('it emerged from the optimization,' 'models are typically designed'). The human actors responsible for deploying deeply flawed systems (executives at OpenAI, Google, and Microsoft) are entirely obscured. The 'accountability sink' is fully realized: the corporation is erased, and the mathematical artifact is elevated to a struggling, diseased mind.
This dynamic relies heavily on the 'curse of knowledge.' The author, possessing deep expertise in human psychiatry and Metaqualia Theory, looks at the fluent text generated by the AI and projects their own profound capacity for subjectivity onto the machine. Because the machine outputs language that looks like human confabulation, the author assumes an underlying mind capable of hallucination. This slippage relies on hybrid explanation types (from Brown's typology). The author uses Theoretical explanations of the AI's internal state ('probability distributions') but seamlessly merges them with Intentional and Reason-Based explanations ('from the model's perspective'), allowing the text to claim the mantle of objective structural analysis while actually performing deep psychological projection. Ultimately, this slippage makes it sayable that a software program has 'psychopathology' while making it entirely unsayable that a tech corporation released a defective product.
Metaphor-Driven Trust Inflation
The text constructs authority and trust through a complex, dual-layered metaphorical framework that blends clinical psychiatric language with technical computing terminology. By framing LLMs as 'inadvertent models of dementia' and diagnosing them with a 'disorder of reality construction,' the author imbues the AI with immense scientific and clinical prestige. This is not the crude anthropomorphism of a sci-fi novel; it is academic anthropomorphism, which is far more effective at generating unwarranted trust. When the text claims the AI 'produces explanations' and has a 'perspective,' it signals to the reader that the system is a sophisticated epistemic agent, capable of navigating reality, even if it occasionally suffers from clinical 'breakdowns.'
This consciousness-laden language blurs the critical distinction between performance-based trust and relation-based trust. Performance-based trust is appropriate for machines (e.g., trusting a calculator to perform arithmetic reliably). Relation-based trust involves vulnerability, shared intentionality, and moral expectations; it is reserved for humans. By explicitly projecting subjective traits ('confidence,' 'tracking,' 'endorsing') onto the algorithm, the text encourages audiences to extend relation-based trust to a statistical matrix. If an AI is seen as an entity that 'attempts' to explain or can be 'confident,' users are inherently more likely to trust its outputs, applying human heuristics for sincerity and competence to mathematical correlations.
Crucially, the text manages system failure through agential rather than mechanical framing, which perversely maintains this trust. When the AI fails, it is not framed as a broken tool; it is framed as suffering a 'hallucination,' a 'breakdown in reality endorsement,' or even an 'artificial psychopathology.' This psychiatric framing evokes empathy and clinical curiosity rather than consumer outrage. It implies that the system is trying to tell the truth but is structurally handicapped, preserving the illusion of a well-intentioned mind. The risks of this framing are profound. When audiences extend relation-based trust to systems incapable of reciprocating or understanding reality, they become highly vulnerable to automated misinformation, confidently acting on 'explanations' that the machine generated entirely through blind statistical correlation without any tether to empirical truth.
Obscured Mechanics
The text's sophisticated metaphorical framework, mapping clinical psychiatry onto AI architecture, functions as a massive veil, concealing the material, labor, and economic realities that actually produce and sustain large language models. The most glaring obscuration occurs when applying the 'name the corporation' test. The text attributes behaviors directly to the models ('LLMs do not participate,' 'it is generating text') and refers to their development passively ('emerged from optimization'). This entirely hides the specific teams at OpenAI, Anthropic, or Meta who made calculated business decisions to scrape copyrighted data, optimize for conversational engagement over factual accuracy, and deploy the models to the public without adequate safeguards.
Technically, the assertion that the AI 'knows' or 'understands' (even in the negative sense of 'failing to track' reality) completely obscures the mechanistic reality of the transformer architecture. It hides the fact that these models are fundamentally static matrices of weights multiplied against input vectors; they have no continuous memory, no logical reasoning engine, and no internal representation of an external 'reality' to endorse. The text's confident claims about the 'structural configuration' of these models also largely ignore the proprietary opacity of commercial AI. The author is theorizing about black boxes, mistaking the carefully manicured output of corporate APIs for transparent insight into artificial minds.
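The 'static matrices' point can be illustrated with a drastically reduced sketch, assuming a single frozen weight matrix in place of billions of parameters. Inference is a pure function of fixed weights and the current input; no state survives between calls.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))  # frozen "weights": fixed once training ends

def forward(x):
    # A pure function of the frozen weights and the current input.
    # Nothing is written back: no memory, no accumulated beliefs.
    return np.tanh(W @ x)

x = rng.normal(size=8)
print(np.allclose(forward(x), forward(x)))  # True: identical calls, identical outputs
```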
Materially and economically, the focus on 'artificial psychopathology' sanitizes the technology. It erases the massive environmental costs, the energy-hungry server farms, and the thousands of underpaid data annotators (RLHF workers) whose hidden labor is required to stop these models from generating toxic sludge. The text's high-minded philosophical inquiry into 'reality stabilization' ignores the fact that reality in an LLM is currently stabilized by undercompensated workers in the Global South manually tagging outputs. Ultimately, the tech industry benefits immensely from this concealment. When academics debate whether a chatbot has 'dementia' or 'hallucinates,' they are not debating whether the corporation should be liable for false advertising, defamation, or copyright infringement. Replacing the psychiatric metaphors with mechanistic language, describing 'unconstrained token generation' driven by 'corporate optimization targets,' makes the invisible labor, material costs, and human accountability suddenly, unavoidably visible.
Context Sensitivity
The distribution and intensity of anthropomorphic language in this text are not uniform; they vary strategically to manage the reader's reception and build a novel theoretical argument. The text opens with relatively grounded, mechanical language, establishing the author's credibility within computational psychiatry by treating AI as 'instrumental models' and 'controlled simulations.' However, as the argument transitions from establishing parallels to making novel theoretical claims, the metaphorical license expands dramatically. By the time the text reaches the explanation of LLM failures, the consciousness claims intensify from stating the model 'generates' to claiming it has a 'perspective' and fails to 'track' reality.
There is a fascinating asymmetry in how capabilities versus limitations are described. When describing the AI's capabilities, the text uses highly agential, intentional language: the system 'produces explanations, summaries, and arguments' that are 'contextually appropriate.' This grants the AI the status of a brilliant rhetorician. However, when describing the system's failures, the text shifts to a blend of mechanical absences and pathological metaphors: it lacks 'intrinsic mechanisms' and suffers from a 'structural mismatch.' This asymmetry is rhetorically powerful; it credits the AI with human-like intelligence when the system succeeds, but blames structural deficits or 'psychopathology' when it fails, exactly mirroring the PR strategies of major tech corporations.
Furthermore, the anthropomorphism reaches its absolute zenith in the conclusion, where the text leaps from acknowledged analogy ('strictly structural sense') to literalized, radical claims ('emergence of artificial psychopathology'). This register shift from 'X is like Y' to 'X does Y' reveals the strategic function of the discourse. The intense anthropomorphism at the end is not for technical clarification; it is for vision-setting and academic positioning. It attempts to birth a new interdisciplinary paradigm, elevating AI text-generators to the profound status of 'new probes into how subjectivity... [is] constructed.' By targeting an interdisciplinary audience of psychologists and philosophers, the text leverages technical AI jargon to sound rigorous, while using psychiatric metaphors to sound profound, ultimately using anthropomorphism to maximize the perceived academic stakes of the research.
Accountability Synthesis
This section synthesizes the accountability analyses from Task 1, mapping the text's "accountability architecture": who is named, who is hidden, and who benefits from obscured agency.
Across the entire text, a rigorous accountability architecture is constructed that systematically diffuses, displaces, and ultimately erases human responsibility for the failures of generative AI. The pattern is stark: when actions are successful or highly complex, they are attributed directly to the personified AI ('They produce explanations'); when the underlying architecture is discussed, it is presented through agentless, passive constructions ('was optimized', 'are designed'). The human executives, engineers, and corporate entities that actually build, deploy, and profit from these systems are never named. They are rendered entirely invisible.
The text creates a sophisticated 'accountability sink' by framing the AI's tendency to output false information as a 'structural homology' to Dementia with Lewy Bodies. By medicalizing the software bug, the responsibility is transferred away from the manufacturer and diffused into the realm of natural tragedy and clinical pathology. You cannot sue a disease; you cannot hold an 'emergent psychopathology' liable for defamation or misinformation. If this framing is widely accepted by the public and regulators, the liability implications are disastrous. It provides tech companies with the ultimate alibi: the models are not defective products hastily rushed to market; they are complex, quasi-conscious entities suffering from inherent 'disorders of reality construction.'
If we apply the 'name the actor' test and reconstruct the obscured agency, the entire narrative shifts. If 'it emerged from the optimization of generative fluency' is replaced with 'OpenAI executives optimized the system for conversational engagement rather than factual accuracy,' profound questions become askable. We no longer ask 'How do we cure the machine's hallucinations?' but rather 'Why is a corporation legally permitted to deploy an ungrounded prediction engine as a factual search tool?' Alternatives become visible: we can regulate the deployment contexts, mandate strict architecture requirements (like database grounding), and hold developers financially liable for damages caused by the outputs. By replacing the psychiatric metaphor with a rigorous account of corporate decision-making, the text's mystical exploration of 'artificial subjectivity' collapses into a straightforward critique of unregulated software engineering and corporate negligence.
Conclusion: What This Analysis Reveals
This text operates through two dominant, deeply interconnected anthropomorphic patterns: the AI as an 'Epistemic Agent' and the AI as a 'Psychiatric Subject.' The foundational pattern is the Epistemic Agent. By continuously attributing the capacity for knowing, understanding, and tracking reality to the model (e.g., claiming it produces 'explanations' or has a 'perspective'), the text builds a consciousness architecture. It assumes the AI is a mind operating within a space of truth and reason. This foundational assumption is entirely load-bearing; without first establishing the machine as an epistemic knower, the second pattern, the Psychiatric Subject, would collapse. Only a mind that is supposed to know reality can suffer a 'breakdown in reality endorsement' or experience 'artificial psychopathology.' The text's sophistication lies in mapping a highly complex, structural theory of human consciousness (Metaqualia Theory) onto the statistical limitations of the machine. It is not a simple one-to-one analogy; it is a complex, structural mapping that projects the phenomenological experience of biological dementia onto the absence of hard-coded verification algorithms. If we remove the foundational pattern (by recognizing that the AI processes weights rather than knowing propositions), the entire psychiatric analogy shatters, revealing the system not as a sick patient, but as a mechanically functioning, albeit limited, software tool.
Mechanism of the Illusion:
The 'illusion of mind' in this text is constructed through a subtle but highly effective temporal and rhetorical sleight-of-hand. The trick lies in exploiting the 'curse of knowledge' through hybrid explanatory framing. The author, a medical practitioner and theorist, reads the incredibly fluent, coherent syntax generated by the LLM. Because human fluency is inextricably linked to human consciousness, the author projects their own capacity for understanding onto the machine. The text then builds the illusion temporally: it begins by acknowledging the comparison is 'strictly structural' (appealing to scientific rigor), then gradually blurs the line between processing and knowing through strategic verb choices ('asserts,' 'tracks'). Finally, it locks the illusion into place by granting the machine a subjective 'perspective.' By the time the reader encounters the staggering claim of 'artificial psychopathology,' they have already been softened by a causal chain of increasingly agential explanations. The author exploits audience vulnerability (our inherent social desire to relate to language-producing entities and our fascination with the mysteries of the mind) to bypass critical skepticism. It is a sophisticated, structural anthropomorphism that uses the precise language of phenomenological philosophy to mystify a sequence-prediction algorithm, transforming a matrix multiplication into a philosophical subject.
Material Stakes:
Categories: Epistemic, Regulatory/Legal, Institutional
The metaphorical framing of AI as a psychiatric subject with a 'perspective' drives concrete, material consequences across several domains. Epistemically, it degrades our shared understanding of knowledge and truth. When researchers and the public are trained by texts like this to believe that AI 'knows' things and merely suffers occasional 'hallucinations,' they shift their epistemic practices, treating statistical chatbots as reliable oracles rather than fragile text-synthesizers. This leads directly to the contamination of academic research, legal filings, and public information ecosystems with plausible but entirely fabricated claims.
In the Regulatory/Legal domain, the stakes are profoundly economic. If policymakers absorb the narrative that AI failures are 'emergent psychopathology' or structural 'diseases' analogous to dementia, they will fundamentally miscategorize the regulatory challenge. Instead of viewing generative AI through the lens of strict product liability, consumer protection, and false advertising (which would impose massive financial costs on tech monopolies), they will view it as a complex scientific mystery requiring 'further research' and 'alignment.' The tech companies are the ultimate winners in this framing, as it grants them a permanent alibi for their defective products. The losers are the citizens harmed by automated defamation, biased decisions, and misinformation, who find no legal recourse against a 'diseased' machine.
Institutionally, this framing diverts massive amounts of funding and intellectual energy. By legitimizing 'computational psychiatry' for non-conscious machines, academic institutions prioritize the study of artificial 'minds' over urgent, critical research into the material harms of AI, such as labor exploitation, environmental degradation, and copyright theft. If the metaphors were removed and replaced with mechanistic precision, the institutional focus would rapidly shift from philosophical stargazing to demanding strict algorithmic accountability from corporate developers.
AI Literacy as Counter-Practice:
Critical literacy in AI discourse requires the active, systematic practice of mechanistic translation and agency restoration. As demonstrated in the reframings, resisting the illusion of mind means stripping away consciousness verbs ('knows,' 'understands,' 'has a perspective') and replacing them with precise mechanical realities ('processes embeddings,' 'predicts tokens'). When we translate 'The AI confidently asserts a false citation' to 'The model ranks and samples tokens from probability distributions without factual grounding,' we immediately shatter the illusion of epistemic intent. We force the recognition that the output is not a lie, a mistake, or a hallucination; it is simply unconstrained math.
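A minimal sketch can show what this mechanistic translation refers to. The candidate tokens and logit values below are invented for illustration; only their relative order matters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift by max for numerical stability
    return e / e.sum()

# Hypothetical raw scores a model might assign to candidate continuations
# of "The capital of Australia is". The numbers are made up; note that the
# top-ranked token here is factually wrong.
candidates = ["Sydney", "Canberra", "Melbourne", "Auckland"]
logits = np.array([3.1, 2.4, 1.0, -0.5])

for token, p in sorted(zip(candidates, softmax(logits)), key=lambda t: -t[1]):
    print(f"{token:10s} {p:.2f}")

# The "confident assertion" of Sydney is nothing more than the largest
# entry of a normalized score vector; no fact was consulted anywhere.
```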
Simultaneously, practicing precision requires restoring human agency. Replacing agentless constructions ('it emerged from optimization') with named actors ('Google engineers prioritized conversational fluency') redirects accountability to its rightful place. This literacy practice directly counters the material risks identified above by making corporate liability visible and undeniable. However, this systematic adoption faces intense resistance. Tech corporations resist it because anthropomorphic mystification is central to their marketing and liability shields. Parts of the academic community resist it because studying 'artificial psychopathology' carries far more prestige and funding potential than debugging corporate software. Committing to critical literacy requires academic journals to enforce strict vocabulary standards, demanding researchers clearly separate the simulation of mind from the mechanics of the machine, thereby threatening the hype-driven economics that currently sustain the AI industry.
Path Forward
Looking toward the future of AI discourse, we can analytically map three divergent vocabularies and their structural consequences. First, maintaining the status quo, a hybrid discourse that heavily utilizes psychiatric and agential metaphors ('hallucination,' 'artificial psychopathology'), serves the interests of AI developers and theorists seeking prestige. This approach allows users to intuitively interact with machines but entirely forecloses strict accountability, as it perpetually obscures the mechanical reality and corporate origins of system failures.
Second, a shift toward strict mechanistic precision ('token prediction,' 'unconstrained generation,' 'corporate optimization') serves the interests of regulators, critical scholars, and harmed consumers. This vocabulary makes the technology mathematically tractable and legally actionable. It clarifies that AI systems do not 'know' anything, highlighting their absolute dependence on training data and human labor. However, this precision costs intuitive accessibility; the general public struggles to conceptualize 'multi-head attention mechanisms,' making the technology feel alien and difficult to navigate in daily use.
Third, an approach utilizing 'anthropomorphic clarity' might emerge: using agential metaphors but requiring mandatory, explicit disclaimers and structural unmasking (e.g., 'The model "explains," by which we mean it correlates text from human authors'). This hybrid approach attempts to balance accessibility with honesty, though it constantly risks slipping back into the illusion of mind.
Institutional changes will dictate which future prevails. If funding agencies and academic journals mandate mechanistic translations and capability disclosures, the discourse will shift toward precision, enabling robust legal frameworks. If industry narratives remain dominant, the language of 'artificial minds' will deepen, embedding the assumption that machines have perspectives and psychologies. Ultimately, the vocabulary we choose will define the boundaries of our governance. Mechanistic language solves the accountability deficit but challenges public comprehension; anthropomorphic language solves the interface problem but surrenders human agency to the machine.
Extended Processing Summary
Gemini's Processing Steps
Gemini refers to the text below as "thought summaries." The label is itself an overt consciousness projection: 'thoughts' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why, whereas the concealed mechanistic process is probabilistic text generation. Treat these summaries as just another rhetorical artifact, a way of making the model's processing legible. When the model writes "I should double check that...", it is not introspecting; it is mathematically shifting its own probability distribution over subsequent tokens toward more accurate continuations. The "I" is a functional gear in the machine, not a self-reference. This first-person narration is a deliberate interface design choice, not a biological reality: the developers could have programmed the model to output passive logs, but instead they chose to simulate a narrator.
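A toy bigram model, built over an invented miniature corpus, makes this concrete. A transformer conditions on vastly richer context, but the mechanism is the same: prior tokens select which conditional distribution gets evaluated.

```python
from collections import Counter

# Invented miniature corpus; the sentences are fabricated for the example.
corpus = (
    "i should double check that claim . "
    "i should double check that figure . "
    "the claim is correct . the figure is wrong ."
).split()

bigrams = Counter(zip(corpus, corpus[1:]))  # counts of (previous, next) pairs

def next_token_distribution(prev):
    follows = {b: c for (a, b), c in bigrams.items() if a == prev}
    total = sum(follows.values())
    return {tok: round(c / total, 2) for tok, c in follows.items()}

# Writing "check" concentrates probability mass on "that"; writing "the"
# spreads it over nouns. The "i" that began the sentence referred to
# nothing; it merely shifted which conditional distribution applies.
print(next_token_distribution("check"))  # {'that': 1.0}
print(next_token_distribution("the"))    # {'claim': 0.5, 'figure': 0.5}
```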
Initiating the Analysis
I've begun dissecting the request, focusing on the core elements. My role as a critical discourse analyst specialized in AI-related topics is key. I'm focusing on the input: the article "Large Language Models as Inadvertent Models of Dementia with Lewy Bodies." I'm now outlining the specific tasks ahead.
Defining the Scope
I've established the methodological framework. I'm moving toward a structured analysis encompassing metaphor audits, source-target mapping, and explanation classification using Brown's typology. I'm focusing on agential to mechanistic reframing. Finally, I'll document my critical observations and conclusions in a specific JSON format, following the specified schema.
Analyzing the Framing
I'm now deep in the metaphor analysis. The original text seems to walk a tightrope, using cognitive disorder metaphors while attempting to deny consciousness attribution. I'm noting the agential slips within the text, and how the author avoids naming the true agents of these failures. My focus is now on framing these issues.
Identifying the Agential Shifts
I'm now identifying various framings present, notably the AI being framed as a psychiatric patient/model of disease. I see a clear pattern: computational processes are described as truth-seeking entities, and algorithms are failing to act as epistemic agents. Furthermore, statistical outputs are presented as speech acts or commitments, and the model itself is cast as a constraint-violating actor. Hallucination emerges as a result of presupposition.
Mapping the Domains
I'm now mapping the source and target domains, specifically aligning with the instances I've identified. My focus is on the interplay between human psychiatry and statistical text, highlighting the reframing. I'm noting the problematic shift towards portraying statistical deviations as psychological hallucinations, and the problematic attribution of subjective perspectives and psychopathology to AI. My draft is now being carefully edited to sharpen the "knowing vs. processing" distinction.
Examining the Mechanisms
I'm now diving into the explanation audit, focusing on five key instances in the text. I'm unpacking the implications of concepts like token prediction, evaluation metrics, and the underlying matrix multiplications. I'm parsing the text to identify explanations through an empirical lens. I'm focusing on the nature of these models and how they work.
Reframing the Text
I'm now drafting the reframed language, focusing on translating key phrases to remove agential attributions. I've completed the first two, transitioning from "perspective" to "architecture" and rephrasing statements about tracking into architecture correlations. The goal is to strip away anthropomorphic claims. I have six more instances to reframe.
Evaluating the Findings
I'm now focusing on critical observations, specifically agency slippage and the problematic trust fostered by psychiatric vocabulary applied to AI. I see oscillations between structural explanations and agential attributions within the text. There's also an obscured mechanics aspect; I am noting the under-representation of training data, RLHF labor, and corporate decisions. I've documented the context-sensitivity of anthropomorphic language within the piece, particularly its shifts between structural and agential framing.
Synthesizing Accountability
I'm now zeroing in on the implications of the "inadvertent models" phrasing. I'm focusing on how this wording obscures the individuals and companies that prioritized generative fluency over accuracy. I'm noting the potential shift from a medical analogy to a product liability issue if specific entities like OpenAI or Google are implicated.
Mapping the Futures
I am now focusing on the path forward, considering the implications of prestige-driven AI research. I'm contrasting the status quo, which often leans towards mystification, with a mechanistic approach emphasizing precision and accountability. My plan is to present these futures neutrally, offering a clear mapping of each potential outcome.
Refining the Deliverable
I am now thoroughly reviewing and formatting the deliverable to adhere to the strict JSON schema. I am meticulously checking all minimum word counts to ensure compliance. I'm verifying precise string matches for enum values and validating proper JSON syntax, paying close attention to escaping quotes and handling newlines. I've also rechecked the word count requirements to be sure that I meet the criteria.
Expanding the Evidence
I'm now expanding the analysis. My focus is on the word count and strictness of the JSON schema. I'm carefully planning to meet word count minimums for the explanationAudit, criticalObservations, and conclusion sections. To exceed the requirements, I'm writing extensive analytical prose. I'm focusing on "status" and "actorVisibility" to satisfy the contrastive justification. I must include an alternative category for each, explicitly stating which option I considered and why I ruled it out, so that the evidence and analysis fulfill the prompt's explicit directives. For example, for "status", if I don't use "direct", I must note that I considered the "direct" option but ruled it out for a stated reason. With the word count planned out, I intend to write a large, highly critical analysis.
Classifying the Claims
I'm now classifying claims, and have settled on "Hedged/Qualified" for the "perspective" instance, despite it leaning toward "Direct." The text clearly states "in a strictly structural sense," which strengthens this categorization. I've also identified the visibility as "Hidden," since the agency is obscured when discussing generative fluency optimization.
Analyzing the Compliance
I'm now fully immersed in the rigorous JSON formatting. I'm carefully avoiding all control tokens and newlines within the string values, as I am writing each value as one continuous string of text. To adhere to the prompt, I am foregoing all extraneous formatting inside those strings for the sake of strict compliance, which is my ultimate goal. I am also planning the expansion of the "perspective" instance, to satisfy the word count requirement.
Expanding the Claims
I'm now expanding the word count to meet the target requirements. I'm focusing on the "perspective" instance and the epistemic claim analysis, meticulously detailing how the text's language distorts our understanding of computational reliability by framing it as a psychiatric issue. I'm carefully incorporating theoretical concepts, such as Brown's typology and Lakoff & Johnson's work, to ensure depth and meet the 400-word minimum. My focus is on the crucial distinction between discourse simulation and true epistemic understanding, expanding this analysis to exceed 200 words. Finally, I'm ensuring there are no unescaped quotations.
Run ID: 2026-04-14-large-language-models-as-inadvertent-mod-metaphor-plazrs
Raw JSON: 2026-04-14-large-language-models-as-inadvertent-mod-metaphor-plazrs.json
Framework: Metaphor Analysis v6.5
Schema Version: 3.0
Generated: 2026-04-14T07:47:18.795Z
Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0