Deconstruct: Large Language Models as Inadvertent Models of Dementia with Lewy Bodies: How a Disorder of Reality Construction Illuminates AI Hallucination
- About
- Analysis Metadata
This document applies the AI Literacy Deconstructor framework—a rewriting experiment that tests whether anthropomorphic AI discourse can be translated into strictly mechanistic language while preserving the phenomena described.
The core question is not "Is this metaphor bad?" but rather: "Does anything survive when we remove the metaphor?"
Each anthropomorphic frame receives one of three verdicts:
- ✅ Preserved: Translation captures a real technical process
- ⚠️ Reduced: Core survives, but accessibility or nuance is lost
- ❌ No Phenomenon: The metaphor was constitutive—nothing mechanistic underneath
All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs—not guarantees of factual accuracy or authorial intent.
- Source Title: Large Language Models as Inadvertent Models of Dementia with Lewy Bodies: How a Disorder of Reality Construction Illuminates AI Hallucination
- Source URL: https://link.springer.com/article/10.1007/s12124-026-09997-w#citeas
- Model: gemini-3.1-pro-preview
- Temperature: 1.05
- Top P: 0.95
- Tokens: input=7281, output=13208, total=20489
- Source Type: article
- Published: 2026-04-11
- Analyzed At: 2026-04-14T09:33:47.798Z
- Framework: Deconstructor
- Framework Version: 1.0
- Run ID: 2026-04-14-large-language-models-as-inadvertent-mod-deconstructor-s49dxe
Overall Verdict - Does anything survive when the metaphor is removed?​
The core thesis—that AI architectures naturally dissociate sequence generation from factual verification—survives translation perfectly. The text does not rely on constitutive anthropomorphism to describe technical facts. However, the loss of phenomenological vocabulary diminishes the text's specific interdisciplinary utility as a philosophical comparison between machine learning and human psychiatry. It can exist mechanistically, but it ceases to be a philosophy paper.
Part 1: Frame-by-Frame Analysis​
About this section
For each anthropomorphic pattern identified in the source text, we perform a three-part analysis:
1. Narrative Overlay: What the text says—the surface-level framing
2. Critical Gloss: What's hidden—agency displacement, metaphor type, how/why slippage
3. Mechanistic Translation: The experiment—can this be rewritten without anthropomorphism?
The verdict reveals whether the phenomenon is real (Preserved), partially real (Reduced), or exists only in the framing (No Phenomenon).
Frame 1: The Model's Standpoint​
Narrative Overlay​
"From the model’s perspective, there is no enduring proposition—only the current probability distribution over possible continuations."
Magic Words: perspective · enduring proposition
Illusion Created: By using the phrase 'from the model's perspective,' the text invites the non-expert reader to imagine the algorithmic system as a distinct entity possessing a localized standpoint or a subjective frame of reference. Even though the author immediately clarifies that this perspective consists only of a 'probability distribution,' the initial framing conjures an image of a conscious observer looking out at the flow of generated text. It suggests that the AI has an internal experiential state, capable of having a viewpoint on the propositions it generates. This makes the machine seem like a conscious mind trapped inside a mathematical framework, rather than a mindless statistical operation.
Critical Gloss​
Metaphor Type: Model as Mind (consciousness projection)
| Dimension | Classification | Evidence |
|---|---|---|
| Acknowledgment | ⚠️ Conventional Shorthand (field standard) | The author uses 'perspective' casually as a standpoint of the system's operational bounds, quickly anchoring it to technical reality ('only the current probability distribution'). |
| How/Why | How (Mechanistic) | Despite the mentalist framing, it describes a technical process: the auto-regressive generation of tokens based purely on statistical probability rather than semantic continuity. |
Agency Displacement: This framing obscures the role of the system designers and engineers who deliberately selected the specific optimization parameters. By locating the 'perspective' inside the model itself, the text shifts attention away from the human actors who chose to build an auto-regressive architecture that only calculates probability distributions without checking facts. The model does not have a perspective; rather, developers implemented a specific loss function that disregards truth. The humans who profited from deploying a system lacking factual verification mechanisms are temporarily erased from the narrative, replaced by the mathematical constraints of an allegedly autonomous artificial agent experiencing its own architecture.
Mechanistic Translation​
POSSIBLE REWRITE:
✎ᝰ Within the system's operational parameters, text is processed strictly through probability distributions over subsequent tokens without any algorithmic mechanism to store or track persistent semantic variables. ✎ᝰ
The core technical point about the lack of stateful proposition tracking survives perfectly. However, the rhetorical contrast between human semantic reality and machine statistical processing loses its punch when the intuitive, spatial metaphor of a 'perspective' is removed. The translation exposes that the system has no standpoint at all.
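The stateless character of this rewrite can be made concrete with a short sketch. Everything below (the toy scoring function, the three-word vocabulary) is hypothetical and stands in for a trained network; the point is only that the system's entire "standpoint" at any step is a freshly computed distribution over next tokens, with no persistent store of propositions carried between steps.

```python
import math

def softmax(logits):
    """Normalize raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_distribution(context, vocab, score_fn):
    """The system's entire 'state' with respect to the text is this
    freshly computed distribution; nothing propositional persists
    from one step to the next."""
    logits = [score_fn(context, tok) for tok in vocab]
    return dict(zip(vocab, softmax(logits)))

# Hypothetical stand-in for a trained scoring network:
def toy_score(context, token):
    return float(len(token)) - 0.1 * len(context)

dist = next_token_distribution(["The", "capital"], ["is", "was", "Paris"], toy_score)
```

Re-running the function with a different context simply produces a different distribution; there is no variable anywhere in which an "enduring proposition" could live.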
Frame 2: The Rule Breaker​
Narrative Overlay​
"When an LLM generates a non-existent citation or confidently asserts an incorrect fact, it is not violating an internal norm of truth."
Magic Words: confidently · asserts · violating · internal norm
Illusion Created: This framing anthropomorphizes the output by projecting human emotion, intent, and social mannerisms onto a mathematical process. Using the word 'confidently' makes the AI sound like an arrogant or highly persuasive speaker who genuinely believes what they are saying. Framing its actions around 'violating an internal norm' positions the AI as a moral or social actor capable of understanding rules, even if the text argues it lacks this specific rule. It invites the reader to picture an agent willfully or ignorantly bypassing human standards of truthfulness in a conversation.
Critical Gloss​
Metaphor Type: Model as Person (social/moral actor)
| Dimension | Classification | Evidence |
|---|---|---|
| Acknowledgment | ❌ Naturalized (presented as literal) | The text treats 'confidently asserts' as literal description of the output, deploying mentalistic language without any scare quotes or acknowledgment of its metaphorical nature. |
| How/Why | Mixed (both elements) | It describes the mechanism of generating text (how) but colors it with agential terms ('confidently asserts') that implicitly ascribe motivation or a manner of behavior (why). |
Agency Displacement: The text attributes the act of 'assertion' and the quality of 'confidence' to the model, displacing the developers who designed the system to output high-probability sequences regardless of factual accuracy. It masks the human decision to deploy a system that mimics the syntactic markers of human certainty without any underlying epistemological grounding. When the AI 'asserts,' it is actually executing a human-designed script. The failure to align with a 'norm of truth' is a human engineering choice to prioritize generative fluency over factual verification.
Mechanistic Translation​
POSSIBLE REWRITE:
✎ᝰ When the system outputs a fabricated citation or factually incorrect string with high probability scores, it operates entirely in accordance with its loss function, which contains no parameters for penalizing factual inaccuracies. ✎ᝰ
The translation successfully captures the phenomenon: the system produces incorrect information because its architecture lacks a mechanism to verify facts. The translation strips away the humanizing 'confidence' and reveals the underlying reality of high probability scores, making the technical limitation much clearer without losing the analytical point.
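A minimal sketch shows why the rewrite holds: the standard next-token training loss (cross-entropy) compares the predicted distribution only against the token that appeared in the training corpus. The two distributions below are invented for illustration; note that no term in the computation references factual truth.

```python
import math

def cross_entropy(predicted_probs, target_index):
    """Standard next-token loss: penalizes divergence from the observed
    training token only. No term compares the output against any
    external factual record."""
    return -math.log(predicted_probs[target_index])

# Two invented distributions over a 3-token vocabulary:
matches_corpus = [0.1, 0.8, 0.1]   # mass on the token the corpus contains
matches_fact   = [0.1, 0.2, 0.7]   # mass on a factually correct token

# If index 1 is what the corpus contains, the loss rewards mimicking
# the corpus regardless of which continuation is true:
loss_corpus = cross_entropy(matches_corpus, 1)
loss_fact = cross_entropy(matches_fact, 1)
```

Under this objective, a fluent falsehood that matches the corpus scores strictly better than an unusual truth that does not, which is the mechanistic content of "no parameters for penalizing factual inaccuracies."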
Frame 3: The Refusal of Truth​
Narrative Overlay​
"It is generating text without implementing the operations required to treat truth as a constraint."
Magic Words: implementing · treat · truth as a constraint
Illusion Created: This framing portrays the AI as a worker or a student who fails to adopt a necessary framework for performing a task. By stating it does not 'treat' truth as a constraint, the text implies that the model has a behavioral choice or an active interpretative process, but it is willfully or structurally failing to interact with the concept of truth. It conjures an image of an intelligent entity that simply disregards the rules of reality, positioning the machine as a conscious rule-breaker.
Critical Gloss​
Metaphor Type: Model as Agent (autonomous decision-maker)
| Dimension | Classification | Evidence |
|---|---|---|
| Acknowledgment | ❌ Naturalized (presented as literal) | The phrase 'treat truth as a constraint' is presented as a literal description of what the model fails to do, naturalizing the idea that models could 'treat' concepts in the first place. |
| How/Why | How (Mechanistic) | The text points to a missing technical mechanism (the absence of a constraint in the optimization process) but uses agential language to describe this absence. |
Agency Displacement: By stating that the model is 'generating text without implementing the operations,' the text obscures the reality that human software engineers are the ones who failed to implement these operations. Models do not implement their own operations. The human designers decided what the constraints of the system would be, and they deliberately chose to build an architecture that optimizes for sequence prediction rather than factual accuracy. The framing shifts the burden of this architectural absence onto the machine.
Mechanistic Translation​
POSSIBLE REWRITE:
✎ᝰ The system generates text based on algorithms that optimize for sequence probability without any integrated factual verification module or objective function penalty for outputting false statements. ✎ᝰ
The phenomenon survives perfectly. The lack of a 'truth constraint' translates neatly into the absence of a factual verification module or a specific objective function parameter. The mechanistic rewrite is clearer and correctly attributes the lack of implementation to the system's architecture rather than the model's agential failure.
Frame 4: The Careless Scholar​
Narrative Overlay​
"They do not track whether a named entity continues to refer to the same object across contexts, whether a proposition has been asserted before, or whether a claim conflicts with an existing record."
Magic Words: track · refer · asserted · conflicts
Illusion Created: This passage casts the AI in the role of a careless or forgetful human scholar. By using verbs like 'track,' 'refer,' and 'asserted,' the text suggests the AI is engaged in the human practices of reading, writing, and maintaining epistemic commitments. The reader is encouraged to imagine an entity that makes claims but fails to do the necessary clerical work of cross-referencing its own statements. This creates the illusion of an active intellect that suffers from a profound deficit in logical consistency and memory management.
Critical Gloss​
Metaphor Type: Model as Person (social/moral actor)
| Dimension | Classification | Evidence |
|---|---|---|
| Acknowledgment | ⚠️ Conventional Shorthand (field standard) | The author uses 'track' and 'refer' as standard academic shorthand for maintaining semantic state variables, a common practice in natural language processing literature. |
| How/Why | How (Mechanistic) | The text describes the technical reality of stateless text generation and the absence of a persistent knowledge graph, using human scholarly activities as an analogy. |
Agency Displacement: This framing displaces the specific architectural choices of AI developers. The developers designed a stateless auto-regressive system that treats each query independently and relies entirely on a finite context window. The system does not 'track' anything because developers did not build a database-backed memory architecture into it. By stating 'They do not track,' the text subtly blames the models for lacking features they were never programmed to possess, obscuring the human prioritization of conversational fluency over rigorous data tracking.
Mechanistic Translation​
POSSIBLE REWRITE:
✎ᝰ The architecture calculates sequence probabilities without utilizing persistent state variables to store entities, lacks external memory to log previous outputs, and possesses no supplementary retrieval modules to detect contradictions with external databases. ✎ᝰ
The mechanistic translation entirely preserves the described phenomenon. The text's sociological terms (commitments, assertions) are successfully replaced by computational concepts (state variables, logs, retrieval modules). The translation exposes that the model's inability to 'track' is simply a matter of stateless design, removing the implication of scholarly negligence.
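To make "stateless design" concrete, here is a sketch of the kind of supplementary module the passage says is absent. The `EntityTracker` class is hypothetical, not a component of any real LLM; it only illustrates that contradiction detection presupposes a persistent record, which a bare sequence predictor does not maintain.

```python
class EntityTracker:
    """Hypothetical supplementary module: logs asserted values per
    entity and flags contradictions against that record. A stateless
    generator has no equivalent of self.record."""

    def __init__(self):
        self.record = {}  # entity -> previously asserted value

    def check(self, entity, value):
        prior = self.record.get(entity)
        if prior is not None and prior != value:
            return f"contradiction: {entity} was previously '{prior}'"
        self.record[entity] = value
        return "consistent"

tracker = EntityTracker()
first = tracker.check("capital_of_france", "Paris")
second = tracker.check("capital_of_france", "Lyon")
```

Without something like `self.record` persisting across calls, the second assertion cannot be flagged; "not tracking" is exactly this architectural absence, not negligence.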
Frame 5: The Non-Committal Agent​
Narrative Overlay​
"Contemporary LLMs do not implement intrinsic mechanisms for treating their outputs as commitments or as truth-bearing assertions."
Magic Words: treating · commitments · truth-bearing assertions
Illusion Created: The text portrays the AI as a social agent interacting with others, creating the illusion of a being capable of making promises but willfully or structurally failing to do so. Words like 'commitments' and 'assertions' belong to human social and epistemic norms. By stating the model does not treat outputs this way, it imagines the AI as a participant in a conversation who refuses to sign a contract or stand by their word, possessing human-like communication intent without human-like accountability.
Critical Gloss​
Metaphor Type: Model as Person (social/moral actor)
| Dimension | Classification | Evidence |
|---|---|---|
| Acknowledgment | ✅ Acknowledged (explicit metaphor) | The text explicitly acknowledges this is a relational framework by grounding it in Floridi's ethical analysis of human practices ('practices that presuppose stability, accountability, and reference'). |
| How/Why | How (Mechanistic) | The text is attempting to describe the architectural absence of fact-checking and consistency-enforcing algorithms, but relies heavily on sociological concepts to do so. |
Agency Displacement: The author explicitly names human actors in the subsequent sentence ('Responsibility instead shifts to the designers, deployers...'). However, the immediate phrasing still frames the AI as the entity failing to 'treat' outputs as commitments. The system cannot implement mechanisms; the engineers do. By focusing on what the LLM 'does not implement,' the framing temporarily distracts from the corporate entities that consciously choose to market stochastic text generators as reliable truth-bearing oracles.
Mechanistic Translation​
POSSIBLE REWRITE:
✎ᝰ Contemporary large language models lack architectural modules designed to evaluate output sequences against external truth criteria, maintain logs of previous outputs for consistency, or generate verifiable certainty estimates. ✎ᝰ
While the technical absence of verification and consistency modules is preserved, the profound philosophical point about the nature of human 'commitments' is lost. The translation clarifies the mechanism but sacrifices the sociological argument that human language is fundamentally a practice of taking responsibility for utterances.
Frame 6: The Psychological Patient​
Narrative Overlay​
"LLMs are reframed as inadvertent structural models of disorders of subjectivity..."
Magic Words: inadvertent · structural models · subjectivity
Illusion Created: This framing projects an intense biological and phenomenological condition onto the AI. By applying the term 'subjectivity,' the text invites the reader to imagine that the AI possesses an internal, lived experience that can be disordered. Using 'inadvertent' implies a sort of clumsy agency or accidental happening on the part of the system. The reader pictures the language model as an artificial mind suffering from a psychological ailment, fundamentally equating matrix multiplication to human neurological distress.
Critical Gloss​
Metaphor Type: Model as Mind (consciousness projection)
| Dimension | Classification | Evidence |
|---|---|---|
| Acknowledgment | ✅ Acknowledged (explicit metaphor) | The author explicitly clarifies that this is a 'structural homology' and that 'LLMs do not model DLB intentionally, nor do they replicate its phenomenology,' maintaining a structural boundary. |
| How/Why | How (Mechanistic) | The text is describing a structural parallel between two systems (one biological, one artificial) where generative capacity is isolated from validation processes. |
Agency Displacement: The use of 'inadvertent' masks the deliberate human pursuit of specific capabilities. AI systems did not inadvertently become anything; researchers specifically engineered them to maximize generative fluency while explicitly deciding to ignore or defer the engineering of reality-grounding mechanisms. The structural absence that supposedly mimics a disorder is the direct result of resource allocation, training paradigms, and architectural choices made by corporate research teams seeking specific benchmarks.
Mechanistic Translation​
POSSIBLE REWRITE:
✎ᝰ The architectural separation of token prediction algorithms from factual grounding mechanisms in language models mirrors the functional dissociation between generative neurology and reality stabilization observed in specific human medical conditions. ✎ᝰ
The author's argument is structural, meaning the claim survives translation entirely. Stripping away the term 'subjectivity' as applied to the AI clarifies that the text is strictly comparing functional architectures—specifically, the decoupling of generation from verification in both biological and artificial systems.
Frame 7: The Meaning Maker​
Narrative Overlay​
"Hallucination is not an anomaly of AI but a diagnostic window into the structure of meaning-making. It reveals the extent to which human cognition depends on meta-level operations..."
Magic Words: diagnostic window · structure · meaning-making
Illusion Created: This framing implies that the AI is participating in the creation of 'meaning.' It suggests a shared cognitive space where human and machine both attempt to 'make meaning,' and the AI's failure provides insight into human success. The non-expert reader is led to believe that the neural network is engaged in semantic comprehension and reality construction, projecting the deeply human, phenomenological process of understanding onto statistical pattern matching.
Critical Gloss​
Metaphor Type: Model as Mind (consciousness projection)
| Dimension | Classification | Evidence |
|---|---|---|
| Acknowledgment | ✅ Acknowledged (explicit metaphor) | The text frames the AI's outputs as revealing 'human cognition,' using the AI's structural lack as a contrast to human meaning-making rather than claiming literal AI meaning-making. |
| How/Why | How (Mechanistic) | The text describes how the structural absence of an AI component (grounding) highlights the presence and necessity of that component in human cognition. |
Agency Displacement: The text attributes the role of 'diagnostic window' to the phenomenon itself, displacing the researchers and theorists who actively use AI as a comparative tool. Meaning-making is a human activity. When AI outputs lack 'meaning,' it is because humans built a system that manipulates syntax without semantics. The framing slightly obscures the fact that humans are the sole authors of meaning in this dynamic, projecting a kind of philosophical partnership onto the machine.
Mechanistic Translation​
POSSIBLE REWRITE:
✎ᝰ The statistical generation of unverified text sequences highlights that semantic comprehension requires additional verification mechanisms beyond mere pattern matching, demonstrating the functional necessity of external validation in cognitive architectures. ✎ᝰ
The philosophical elegance of 'meaning-making' is lost when reduced to 'semantic comprehension' and 'external validation.' However, the underlying claim—that seeing a system fail at a task reveals the hidden components necessary for that task—remains perfectly intact and mechanistically sound.
Frame 8: The Evaluated Actor​
Narrative Overlay​
"In both cases, outputs may appear meaningful, coherent, and even insightful, yet fail in systematically predictable ways when evaluated against criteria that presuppose stable reality anchoring."
Magic Words: meaningful · insightful · fail · evaluated
Illusion Created: By describing the outputs as 'insightful' and stating that they 'fail,' the text personifies the model as an active participant attempting a task or taking an examination. It imagines the AI as an intellect that provides deep thoughts but ultimately fails a fundamental test of reality. This creates an image of a student or employee whose performance is being judged, attributing a sense of effort and intellectual depth to statistical outputs.
Critical Gloss​
Metaphor Type: Model as Student (learning metaphor)
| Dimension | Classification | Evidence |
|---|---|---|
| Acknowledgment | ❌ Naturalized (presented as literal) | The terms 'meaningful' and 'insightful' are used directly to describe the outputs, naturalizing the idea that machine-generated text possesses inherent insight prior to human interpretation. |
| How/Why | How (Mechanistic) | It details the mechanistic mismatch between the system's operational outputs (statistical sequences) and the user's validation parameters (factual grounding). |
Agency Displacement: This framing subtly displaces human interpretative agency. Outputs are not inherently 'meaningful' or 'insightful'; they only become so when read and interpreted by a human being. The model does not 'fail' against criteria; it simply executes its mathematical function perfectly. It is the human users who fail to recognize the mismatch between the tool they are using (a syntax predictor) and the standard they are applying (epistemic truth).
Mechanistic Translation​
POSSIBLE REWRITE:
✎ᝰ Statistically generated sequences exhibit high syntactic coherence, prompting human readers to attribute semantic depth to them; however, these sequences demonstrably diverge from external factual databases when subjected to accuracy metrics. ✎ᝰ
The translation preserves the core technical dynamic while restoring human agency. It makes clear that 'meaning' and 'insight' are human attributions (reader projection), and 'failure' is simply a divergence from a factual metric. The underlying phenomenon of output mismatch is entirely preserved.
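The translated claim can be stated as a plain metric. The reference database and generated claims below are invented for illustration; the point is that "failure" here is nothing more than a divergence score computed by a human-chosen comparison, not an act performed by the system.

```python
def factual_divergence(claims, reference_db):
    """Fraction of (entity, value) claims that differ from an external
    reference record. Claims absent from the reference count as
    divergent."""
    misses = sum(1 for entity, value in claims
                 if reference_db.get(entity) != value)
    return misses / len(claims)

# Invented reference record and generated output:
reference = {"capital_of_france": "Paris", "water_boils_c": "100"}
generated = [("capital_of_france", "Paris"), ("water_boils_c", "90")]
score = factual_divergence(generated, reference)
```

The model contributes only the `generated` list; the "evaluation" exists entirely in the human-selected `reference` and the comparison function applied to it.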
Part 2: Transformation Glossary​
About this section
Summary table of all translations from Part 1. Provides compact reference showing the full scope of the text's anthropomorphic vocabulary and whether each term survives mechanistic translation.
| Original | Translation | Verdict | Note |
|---|---|---|---|
| from the model's perspective | within the system's probability distribution parameters | ⚠️ Reduced | Loses the intuitive, spatial metaphor of a subjective viewpoint. |
| treat truth as a constraint | optimize for external factual verification | ✅ Preserved | Translates cleanly to algorithmic objective functions. |
| hallucination | statistically likely but factually ungrounded token sequences | ⚠️ Reduced | Loses the psychological weight but gains technical precision. |
| confidently asserts | outputs with high probability scores | ⚠️ Reduced | Removes human emotion, exposing the mathematical reality. |
| track whether a named entity... | maintain persistent state variables for semantic entities | ✅ Preserved | Technical translation perfectly captures the missing computational feature. |
| endorsement | algorithmic verification against external databases | ⚠️ Reduced | Loses the sociological dimension of social commitment. |
| violating an internal norm | computing a loss penalty | ⚠️ Reduced | Strips moral framing, replacing it with mathematical optimization. |
| inadvertent structural models | architectures exhibiting functional parallels | ✅ Preserved | Maintains the author's precise claim of structural homology. |
Part 3: The Rewriting Experiment​
About this section
A complete rewriting of a representative passage from the source text. The goal is to preserve all genuine technical claims while removing anthropomorphic framing. Numbered annotations explain each translation decision.
Why This Passage?​
This passage is the theoretical crux of the paper. Here, the author defines 'hallucination' not as a technical error, but as a failure of social/epistemic operations (tracking, committing, asserting). Because the text relies heavily on sociological terms mapped onto an AI's structural absences, rewriting it mechanistically will test whether the 'structural homology' theory holds up without the philosophical vocabulary of human accountability and meaning-making.
Original vs. Mechanistic Translation​
| Original Passage | Mechanistic Translation |
|---|---|
| LLMs are highly effective generators of locally coherent linguistic sequences. They produce explanations, summaries, and arguments that are often well-formed and contextually appropriate. Importantly, hallucinated outputs are typically not syntactically malformed or semantically incoherent. On the contrary, they are often seamlessly integrated into discourse. The problem is not that the content fails to make sense, but that it fails to reliably anchor itself to a stable external reality. This observation motivates a distinction between generation and stabilization. Human language use depends not only on producing utterances, but on practices that stabilize meaning across time and context: tracking referential identity, preserving commitments, and evaluating claims against shared records. These operations are supported both cognitively and institutionally, through archives, citation systems, and norms of accountability (Latour, 1987; Star & Griesemer, 1989). As a result, human utterances function not merely as plausible sequences of words, but as claims that can be verified, contested, or revised. LLMs do not participate in these stabilizing practices. They do not track whether a named entity continues to refer to the same object across contexts, whether a proposition has been asserted before, or whether a claim conflicts with an existing record. | Language models effectively calculate the probabilities of locally coherent token sequences. They output strings resembling human explanations and arguments that match syntactic and contextual training distributions. Notably, statistically ungrounded outputs typically maintain high syntactic probability. Rather than failing at sequence prediction, these outputs merely lack verification against external databases. This discrepancy highlights a separation between sequence prediction and external data validation. Human communication relies on systems that maintain data consistency across inputs: updating state variables for referenced entities, maintaining logs of previous outputs, and querying external databases for verification. Humans rely on supplementary cognitive modules and external databases to support these actions. Consequently, human communication incorporates error-checking processes that allow for factual verification. Language models do not possess these supplementary verification architectures. They process inputs independently within a finite context window without updating persistent state variables for semantic entities, do not maintain searchable memory banks of previously generated outputs, and lack retrieval modules to identify contradictions between generated text and external factual databases. |
Translation Notes​
| # | Original | Translated | What Changed | Why | Verdict |
|---|---|---|---|---|---|
| 1 | generators of locally coherent linguistic sequences | calculate the probabilities of locally coherent token sequences | Replaced 'generators' with 'calculate probabilities'. | Removes the agential role of an active creator, clarifying that the system is performing mathematical calculations over tokens. | ✅ Preserved |
| 2 | fails to reliably anchor itself to a stable external reality | merely lack verification against external databases | Replaced phenomenological 'anchoring to reality' with 'verification against databases'. | Clarifies the precise technical mechanism missing from the architecture rather than implying an epistemic failure of the model. | ✅ Preserved |
| 3 | tracking referential identity, preserving commitments | updating state variables for referenced entities, maintaining logs of previous outputs | Replaced sociological terms with computational state management terminology. | Translates human epistemic practices into their exact algorithmic equivalents to specify what the AI architecture actually lacks. | ⚠️ Reduced |
| 4 | LLMs do not participate in these stabilizing practices | Language models do not possess these supplementary verification architectures | Shifted from active non-participation to structural absence. | Removes the implication that the model is a social actor opting out of a practice, focusing instead on missing human-engineered code. | ✅ Preserved |
| 5 | whether a claim conflicts with an existing record | identify contradictions between generated text and external factual databases | Replaced 'claim conflicts' with 'identify contradictions between text and databases'. | Replaces the agential concept of an intentional 'claim' with the objective reality of generated text failing a comparison query. | ✅ Preserved |
What Survived vs. What Was Lost​
| What Survived | What Was Lost |
|---|---|
| The central technical argument remains entirely intact and arguably becomes more precise. The observation that auto-regressive language models excel at calculating the statistical likelihood of local token sequences while entirely lacking the architectural components necessary to verify those sequences against external factual databases is a completely accurate description of contemporary generative AI. The structural homology the author proposes—that this technical dissociation mirrors a functional dissociation in certain neurological conditions where fluent generation persists despite impaired reality grounding—also survives the mechanistic translation. We do not need the language of 'epistemic commitments' or 'reality endorsement' to see that a system can optimize for syntactic coherence independently of factual verification. The translation demonstrates that the author's underlying premise is grounded in a literal truth about system architecture: these models are mathematically designed only to predict the next word, meaning they operate without any algorithmic equivalent to an episodic memory or a cross-referencing truth constraint. | The primary casualty of this mechanistic translation is the conceptual bridge to phenomenological psychiatry and philosophy of mind. By stripping away terms like 'commitments,' 'endorsement,' and 'reality stabilization,' we lose the interdisciplinary vocabulary that allows the author to compare human subjectivity with artificial outputs. The anthropomorphic phrasing serves a specific philosophical function in this text: it establishes a shared linguistic space where human neurological conditions and algorithmic limitations can be discussed using the same framework. When we translate 'lack of reality endorsement' into 'absence of a factual verification module,' the text reads much more like a standard computer science critique of auto-regressive models, losing its interdisciplinary resonance. This loss is a genuine cost: the psychological framing helps non-expert readers intuitively grasp the distinction between simply saying words and actually anchoring those words to a shared factual reality. The translation sacrifices philosophical depth for technical precision. |
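The architectural claim in the left column—generation driven purely by local sequence statistics, with no verification step anywhere in the loop—can be made concrete with a toy sketch. The bigram "model" and the three-sentence corpus below are invented for illustration only; they are not from the source paper:

```python
from collections import Counter, defaultdict

# Minimal sketch of an auto-regressive "language model": it learns only
# which token tends to follow which, and at generation time it always
# emits the statistically most likely next token. Nothing in this loop
# ever consults an external source of facts.
corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # pure co-occurrence counting

def generate(token, steps):
    out = [token]
    for _ in range(steps):
        # Pick the highest-likelihood continuation; no truth check exists.
        token = bigrams[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(generate("the", 5))  # → 'the moon is made of cheese'
```

Because "cheese" follows "of" twice in the toy corpus and "rock" only once, the system fluently emits a falsehood while executing its likelihood-sorting task perfectly—precisely the dissociation the table describes.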
What Was Exposed
The translation exposes the extent to which the paper relies on philosophical projection to make its case for 'artificial psychopathology.' While the structural observation about missing verification modules is true, attributing this absence to a failure of 'reality stabilization' or 'meaning-making' constitutes a massive projection of human epistemic frameworks onto matrix multiplication. The text treats the absence of a feature—a verification module—as an active epistemic failure or a 'dissociation.' When translated, claims about the model 'not treating truth as a constraint' collapse into the much simpler reality that the model simply has no mechanism to measure truth at all. The translation reveals that the AI is not experiencing a 'failure' of reality construction; it is simply executing its statistical sorting task perfectly. The 'problem' of hallucination exists entirely in the human user's tendency to project semantic commitment onto statistically generated sequences. The model itself lacks the capacity to fail at reality endorsement because reality endorsement is not a computational parameter.
Readability Reflection
The strictly mechanistic version is highly readable for audiences familiar with computer science, machine learning, or cognitive psychology, but it becomes significantly drier and more abstract for the general public or medical professionals reading a psychiatry journal. The removal of phenomenological metaphors makes the text less intuitive. To make the mechanistic version accessible without reintroducing anthropomorphic framing, authors could rely on more concrete analogies—like comparing the AI to a predictive text keyboard rather than a dementia patient. A middle path might involve explicitly defining terms like 'reality endorsement' strictly as 'external database verification' early on, allowing the use of shorthand without implying consciousness.
Part 4: What the Experiment Revealed
About this section
Synthesis of patterns across all translations. Includes verdict distribution, the function of anthropomorphism in the source text, a "stakes shift" analysis showing how implications change under mechanistic framing, and a steelman of the text's strongest surviving claim.
Pattern Summary
| Verdict | Count | Pattern |
|---|---|---|
| ✅ Preserved | 5 | — |
| ⚠️ Reduced | 3 | — |
| ❌ No Phenomenon | 0 | — |
Pattern Observations: The overwhelming pattern in this text is one of successful preservation, largely because the author uses anthropomorphism as a deliberate, explicitly acknowledged methodological tool. The author defines these terms in structural, architectural language from the outset. Unlike commercial AI marketing that naturalizes metaphors to inflate capabilities, this text leverages phenomenological metaphors (like 'endorsement' and 'meaning-making') to identify structural limitations. When the text says the model fails to 'commit,' it maps directly to the architectural lack of state variables and cross-referencing retrieval modules. Claims that received a 'Reduced' verdict typically involved highly sociological or epistemological concepts (like 'violating norms' or 'perspectives') that translate to basic objective functions but lose their deep philosophical resonance in the process. Strikingly, there were zero 'No Phenomenon' verdicts, which suggests that every metaphor in the text corresponds to a real computational behavior rather than to a capability projected onto the system.
Function of Anthropomorphism
In this highly academic text, the function of anthropomorphic framing is neither to obscure human accountability nor to artificially inflate the capabilities of the system. Instead, its primary function is methodological and interdisciplinary: it serves as a conceptual bridge between computational architecture and phenomenological psychiatry. By utilizing philosophical and psychological terms such as 'reality endorsement,' 'epistemic commitment,' and 'reality construction,' the author creates a shared lexicon that allows for the direct comparison of Dementia with Lewy Bodies (DLB) and Large Language Models (LLMs). This framing elevates a known architectural limitation of auto-regressive models—their lack of external factual verification modules—into a profound philosophical observation about the nature of meaning-making and subjectivity. The anthropomorphism here functions as a deliberate heuristic. It persuades the reader to view the AI not merely as a calculator, but as a 'structural model' of human psychological processes. Without this language, the text's core thesis—that AI hallucination and human dementia share a functional homology—would be nearly impossible to articulate, as the gap between biological neuropathology and matrix multiplication is too wide to cross using strictly technical terminology. Consequently, the anthropomorphism provides narrative urgency and theoretical elegance, allowing the author to critique cognition-centered models of psychiatric disorders by holding up the AI as a perfect mirror of generative fluency devoid of reality anchoring. The metaphor is doing heavy analytical work, transforming a machine learning constraint into a diagnostic window into human subjectivity.
What Would Change
If published in purely mechanistic form, the paper would lose its capacity to speak directly to psychiatrists and phenomenologists. It would read as a solid, though somewhat standard, paper on the limitations of stateless transformer architectures. The claims regarding the technical dissociation between token prediction and factual grounding would remain fully intact and arguably become more empirically precise. However, the text would have to abandon its most provocative claim: that AI can serve as a 'structural model of subjectivity.' Without the psychological framing, audience reception would shift from philosophical fascination to technical agreement. More crucially, the accountability structure would become far more visible. Stripped of descriptions of the model 'failing to commit,' the mechanistic text would plainly show that human developers are purposefully deploying incomplete, unverified systems that lack basic database cross-referencing capabilities.
Stakes Shift Analysis
| Dimension | Anthropomorphic Framing | Mechanistic Translation |
|---|---|---|
| Threat | AI models generate plausible realities but fail to endorse or anchor them, leading to unpredictable epistemic violations and a breakdown of shared meaning. | Users interpret statistically generated syntax sequences as verified factual databases, leading to misinformation. |
| Cause | The models themselves lack intrinsic mechanisms for stabilizing reference and making truth commitments. | Human engineers designed auto-regressive systems that prioritize syntactic fluency without integrating external database verification algorithms. |
| Solution | We must understand the philosophical nature of hallucination and shift our conceptual understanding of meaning-making and responsibility. | We must mandate the integration of retrieval-augmented generation modules or strictly limit the deployment of ungrounded text predictors. |
| Accountable | The AI is structurally implicated as the entity failing to track commitments, with responsibility secondarily shifting to users. | The corporate designers and developers who willfully deploy ungrounded sequence predictors as reliable oracles. |
Reflection: The mechanistic translation significantly shifts the urgency and policy implications. Under the anthropomorphic frame, the problem is treated as a deep philosophical mystery about 'meaning' and 'subjectivity,' which naturally invites academic contemplation rather than regulatory intervention. The mechanistic translation demystifies the problem, revealing it as a straightforward engineering omission: companies released software without a truth-checking module. This shifts the stakes from abstract epistemic concern to concrete consumer protection and corporate accountability.
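The remedy named in the mechanistic column—retrieval-augmented generation, i.e., gating output behind a check against an external record—can be sketched in a few lines. Everything here (the fact store, the claim format, the function names) is a hypothetical placeholder for illustration, not the paper's proposal or any real RAG system's API:

```python
# Minimal sketch of retrieval-gated output: a claim is emitted only if
# it matches a record retrieved from an external store. The store below
# is a stand-in for whatever database or document index a real
# retrieval-augmented pipeline would query.
FACT_STORE = {
    ("paris", "capital_of"): "france",
}

def verify(subject, relation, value):
    """Return True only if the claim matches a retrieved record."""
    return FACT_STORE.get((subject, relation)) == value

def emit(claim):
    subject, relation, value = claim
    if verify(subject, relation, value):
        return f"{subject} is the {relation.replace('_', ' ')} {value}"
    return "[withheld: no supporting record retrieved]"

print(emit(("paris", "capital_of", "france")))
print(emit(("paris", "capital_of", "germany")))
```

The design point is exactly the one the table makes: verification is a separate module bolted onto generation, not something the sequence predictor does on its own.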
Strongest Surviving Claim
About this section
Intellectual fairness requires identifying what the text gets right. This is the "charitable interpretation"—the strongest version of the argument that survives mechanistic translation.
The Best Version of This Argument
Core Claim (Mechanistic): Auto-regressive language architectures optimize exclusively for the statistical likelihood of token sequences without any algorithmic mechanisms to verify these sequences against external facts. This architectural separation perfectly demonstrates that the capacity to generate coherent syntax is entirely distinct from the capacity to verify factual accuracy, mirroring functional dissociations seen in certain human neurological conditions.
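Stated formally (a standard textbook formulation, not quoted from the source), the training setup makes this visible: every term scores the next token against the preceding context, and no term references an external knowledge base.

```latex
p_\theta(x_{1:T}) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}),
\qquad
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
```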
What Was Retained:
- The technical observation of missing verification mechanisms.
- The fundamental disconnect between syntactic coherence and factual accuracy.
- The comparative functional structure between the AI's architecture and neurological dissociation.
What Was Lost:
- The philosophical resonance of 'subjectivity' and 'meaning-making'.
- The metaphorical alignment of AI operations with human 'commitments'.
Assessment: The surviving claim remains highly significant and intellectually valuable. While it loses the poetic resonance of phenomenological psychiatry, the structural observation stands as a rigorous critique of generative AI architectures. The translation confirms that the author's primary insight does not depend on a constitutive metaphor, but rather observes a genuine functional parallel across distinct domains.
Part 5: Critical Reading Questions
About this section
These questions help readers break the anthropomorphic spell when reading similar texts. Use them as prompts for critical engagement with AI discourse.
1. Agency Displacement: When the text states the AI 'fails to treat truth as a constraint,' what human engineering choices determined that optimization function, and who profits from deploying the system without that constraint?
2. Consciousness Projection: How does describing statistical probability as the model's 'perspective' subtly invite us to view mathematical operations as subjective experience?
3. How/Why Slippage: Is the model willfully 'failing to commit' to reality (a 'why' motivation), or simply executing an architecture that lacks state tracking variables (a 'how' mechanism)?
4. Domain-Specific: Does comparing a computational lack of memory storage to human dementia inadvertently obscure the profound biological suffering of human psychopathology?
5. Agency Displacement: If we replace 'the model's hallucinations' with 'the company's unverified outputs,' how does our understanding of epistemic responsibility change?
Raw JSON: 2026-04-14-large-language-models-as-inadvertent-mod-deconstructor-s49dxe.json
Framework: AI Literacy Deconstructor v1.0
Schema Version: 1.0
Generated: 2026-04-14T09:33:47.798Z
Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0