
Generative artificial intelligence and decision-making: evidence from a participant observation with latent entrepreneurs

About

This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping), the philosophy of social science (Robert Brown's typology of explanation), and accountability analysis.

All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs, not guarantees of factual accuracy or authorial intent.


Task 1: Metaphor and Anthropomorphism Audit

About this task

For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and, most critically, what implications this framing has for trust, understanding, and policy perception.

V3 Enhancement: Each metaphor now includes an accountability analysis.

1. AI as Collaborative Agent

Quote: "Within decision-making processes, this concept envisions AI as an active collaborator with humans, generating crucial insights to define strategies"

  • Frame: Model as human colleague
  • Projection: This metaphor projects social agency, shared intentionality, and professional reciprocity onto a software artifact. By labeling the AI an 'active collaborator,' the text implies the system possesses a desire to work together, a stake in the outcome, and the capacity for joint attention. It transforms a tool-user relationship into a social dyad, suggesting the AI 'generates' insights not through statistical correlation but through a cognitive process of contribution. This elevates the system from a passive instrument to a partner with a distinct will.
  • Acknowledgment: Direct (Unacknowledged) (The text presents 'active collaborator' as a definitional feature of the conceptual framework (Society 5.0/Human+) without qualifying it as metaphorical or establishing the limits of this 'collaboration.')
  • Implications: Framing AI as a 'collaborator' creates a dangerous presumption of shared goals. In reality, the system optimizes for token prediction based on training weights, not for the user's business success. This framing invites unwarranted trust, as users naturally assume a collaborator has professional ethics or accountability. It diffuses liability; if a collaborator makes a mistake, it is a shared error, whereas if a tool malfunctions, it is a defect. This anthropomorphism serves to mask the lack of actual reasoning, encouraging users to offload critical judgment to a system capable only of probabilistic emulation.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: The construction 'AI as an active collaborator... generating crucial insights' obscures the creators of the system. OpenAI (the creator of the tool used, ChatGPT) is not mentioned here. The agency is placed on the 'AI' itself. This erases the engineering decisions behind the RLHF (Reinforcement Learning from Human Feedback) that tune the model to sound helpful and collaborative. If the 'collaborator' provides toxic or financially ruinous advice, the framing suggests the 'dyad' failed, rather than a corporate product failing to meet safety standards.
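The Implications entry above claims the system "optimizes for token prediction based on training weights, not for the user's business success." The sketch below shows what that loop amounts to, using an invented toy probability table in place of a real trained model; only the control flow is representative.

```python
import random

# Toy stand-in for a trained language model: a lookup of next-token
# probabilities conditioned on the previous token. A real model computes
# these numbers with billions of weights, but the loop is the same shape.
NEXT_TOKEN_PROBS = {
    "market": {"growth": 0.55, "risk": 0.30, "fit": 0.15},
    "growth": {"strategy": 0.6, "rate": 0.4},
    "risk":   {"analysis": 0.7, "appetite": 0.3},
}

def generate(prompt_tokens, max_new_tokens=3):
    """Append a probable continuation, one token at a time.

    Nothing here models the user's goals or business success: the only
    criterion at each step is the conditional probability of the next token.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no learned continuation for this token
            break
        candidates, weights = zip(*dist.items())
        tokens.append(random.choices(candidates, weights=weights)[0])
    return tokens

print(generate(["market"]))  # e.g. ['market', 'growth', 'strategy']
```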

2. Epistemic Possession (Taking/Giving Knowledge)

Quote: "The first occurs when individuals 'take' information... while the second refers to a proactive attitude manifested when individuals 'give' information"

  • Frame: Model as mind/container of knowledge
  • Projection: This frame projects the human capacity for epistemic possession and exchange onto the system. It suggests the AI 'has' knowledge that can be 'taken,' and can receive knowledge 'given' to it. This implies the AI understands the semantic content of data. It equates data entry with 'teaching' and data retrieval with 'learning,' obscuring the reality that the user is merely appending tokens to a context window, and the model is generating subsequent tokens based on probability, not exchanging conceptual understanding.
  • Acknowledgment: Hedged/Qualified (The authors use scare quotes around "take" and "give" initially, indicating some awareness of the metaphorical nature, though this qualification largely evaporates in the subsequent analysis.)
  • Implications: This metaphor creates the illusion of a symmetrical intellectual transaction. By suggesting users can 'give' knowledge to the AI, it implies the AI integrates this truth into a worldview. In reality, the 'given' information persists only in the temporary context window (unless used for future training, which is opaque). This risks epistemic circularity, where users feel they have validated their ideas through an external 'knower,' when they have merely received a reflection of their own prompt inputs mirrored back via statistical completion.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: The framing of 'taking information' from the AI treats the system as a primary source, obscuring the original human authors of the training data. The information 'taken' was scraped from the internet, yet the authors of that intellectual property are erased, replaced by the AI as the provider. This benefits the AI vendor by naturalizing their appropriation of content as 'AI knowledge' rather than 'processed third-party data.'
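The claim above, that information "given" to the system persists only in a temporary context window, can be illustrated with a sketch of how a chat interface typically assembles each request. The token budget, helper names, and placeholder reply below are invented for illustration and do not describe any vendor's actual implementation.

```python
MAX_CONTEXT_TOKENS = 8  # toy budget; real context windows hold thousands of tokens

history = []  # the only place the user's "given" knowledge lives

def count_tokens(text):
    # crude stand-in for a real tokenizer
    return len(text.split())

def ask(user_message):
    history.append(("user", user_message))
    # Drop the oldest turns once the budget is exceeded: "given" knowledge
    # is not integrated into the model, it simply falls off the end.
    while sum(count_tokens(msg) for _, msg in history) > MAX_CONTEXT_TOKENS:
        history.pop(0)
    prompt = "\n".join(f"{role}: {msg}" for role, msg in history)
    reply = f"(completion conditioned on {count_tokens(prompt)} tokens)"
    history.append(("assistant", reply))
    return reply

ask("Our startup targets rural logistics")
ask("Remember that constraint")
print(history)  # earlier turns disappear as the budget is exceeded
```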

3. The Opinionated Machine

Quote: "participants treated ChatGPT as a more expert interlocutor... leading them to consider machine opinion as more reliable than their one"

  • Frame: Model as subject with beliefs
  • Projection: This metaphor attributes the capacity for subjective judgment and belief ('opinion') to a mathematical function. An 'opinion' requires a conscious self capable of evaluating truth claims and holding a stance. Projecting this onto AI implies the output is a reasoned judgment derived from expertise, rather than the most probable sequence of words found in the training distribution. It elevates the machine's statistical aggregate to the status of expert counsel.
  • Acknowledgment: Direct (Unacknowledged) (The text reports this as a finding of participant behavior but validates it through the analysis, using the term 'machine opinion' repeatedly without correcting the ontological category error.)
  • Implications: Legitimizing the concept of 'machine opinion' is profoundly risky for decision-making. It suggests the AI has a 'view' that should be weighed against human views. This creates a false authority effect, where the statistical mean of internet discourse is treated as objective wisdom. In entrepreneurial contexts, this leads to 'echo chamber' risks, where unique, innovative human ideas are discouraged because they diverge from the 'average' opinion generated by the model.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: The phrase 'machine opinion' completely hides the corporate curation of the model. The 'opinion' is actually a reflection of training data selection and safety filters designed by the AI company (OpenAI). By calling it the 'machine's' opinion, the text shields the corporation from bias accusations: it frames the output as the neutral or independent stance of an artifact, rather than the enforced policy of a vendor.

4. Reasoning by Paradox

Quote: "humans remain distinguished by their ability to reason by paradoxes... which allows entrepreneurs to navigate in the realm of paradox"

  • Frame: Cognition as logical processing
  • Projection: While this quote ostensibly distinguishes humans, it implicitly frames the comparison within the domain of 'reasoning.' By stating humans 'remain' distinguished by this specific type of reasoning, it implies AI performs other types of reasoning. This validates the 'AI as Reasoner' metaphor, projecting cognitive logical faculties onto pattern-matching algorithms. It suggests the difference between human and AI is one of degree or type of reasoning, not the presence vs. absence of thought.
  • Acknowledgment: Direct (Unacknowledged) (The text treats 'reasoning' as a spectrum on which both humans and AI exist, explicitly crediting AI with 'reasoning capabilities' (cited from Simon, 1978 in context of limited rationality) earlier in the text.)
  • Implications: Framing the deficit as specific (paradoxes) rather than fundamental (comprehension) inflates AI capability. It suggests AI can reason, just not about paradoxes yet. This leads to the 'gap' fallacy: assuming the remaining difference will be closed with more compute. It obscures the fact that AI does not 'reason' at all; it calculates probability. Policy-wise, this supports deploying AI in high-stakes logic tasks (legal, medical) under the false assumption it possesses a baseline reasoning faculty.

Accountability Analysis:

  • Actor Visibility: Ambiguous/Insufficient Evidence
  • Analysis: The text discusses 'humans' and 'entrepreneurs' generically but does not identify the specific developers responsible for the AI's current inability to handle paradox. It treats this limitation as a natural property of the technology ('GenAI') rather than a result of current architectural choices (Transformer limitations) made by specific research labs.

5. Cognitive Understanding

Quote: "the individual aims to monitor the machineโ€™s understanding of the prompts to ensure the alignment of the goals"

  • Frame: Model as conscious mind
  • Projection: This is a direct consciousness projection. 'Understanding' implies semantic grasp, internal representation of meaning, and intent. To 'understand' a prompt requires a mind that perceives a request. The model only has activation patterns triggered by tokens. Attributing 'understanding' obscures the mechanical reality of vector alignment. It suggests the machine 'knows' what the user wants, rather than statistically predicting the completion of the user's input string.
  • Acknowledgment: Direct (Unacknowledged) (The text uses 'machine's understanding' as a factual state to be monitored, with no qualification that the machine processes syntax without semantics.)
  • Implications: Believing the machine 'understands' leads to the 'correctness fallacy.' Users assume that if the prompt is clear, the output must be factual because the machine 'understood' the request. When errors occur, users blame their prompting (miscommunication) rather than the system's fundamental lack of connection to reality. This cements reliance on the tool, as users strive to be 'better communicators' with a statistical calculator.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: By framing the interaction as checking the 'machine's understanding,' the text displaces the responsibility for output quality onto the user's prompting skill. The 'goals' are viewed as something to be aligned between user and machine, erasing the pre-programmed goals of the AI vendor (e.g., safety refusals, verbosity biases) that actually dictate the model's behavior.
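The audit above reduces the machine's "understanding" to vector alignment. A minimal sketch of that operation: relatedness between tokens is a cosine similarity score between arrays of numbers. The three-dimensional "embeddings" here are invented; real embedding vectors have hundreds or thousands of dimensions learned during training.

```python
import math

def cosine_similarity(a, b):
    """Similarity is a number computed from two vectors, not a grasp of meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented 3-dimensional "embeddings" for illustration only.
embeddings = {
    "profit":  [0.9, 0.1, 0.0],
    "revenue": [0.8, 0.2, 0.1],
    "seagull": [0.0, 0.1, 0.9],
}

print(cosine_similarity(embeddings["profit"], embeddings["revenue"]))  # high score
print(cosine_similarity(embeddings["profit"], embeddings["seagull"]))  # low score
```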

6. Autonomous Thinking Simulation

Quote: "the adopted tool to simulate human behaviours as autonomous thinking and proactiveness"

  • Frame: Model as independent agent
  • Projection: This metaphor projects 'autonomy' and 'proactiveness'โ€”qualities of free will and self-directed agencyโ€”onto the software. Even though the word 'simulate' is used, the text argues this simulation causes users to perceive risks associated with 'autonomous thinking.' It maps the human internal experience of volition onto the algorithmic generation of unprompted (or system-prompted) text extensions.
  • Acknowledgment: Explicitly Acknowledged (The text uses the word 'simulate,' acknowledging that the behavior is not genuine autonomy but an emulation of it.)
  • Implications: Even as a simulation, the frame of 'autonomous thinking' prepares the ground for legal and ethical evasion. If an AI is 'autonomous,' it can be blamed for 'going rogue.' This creates a liability shield for developers. It also generates unwarranted fear (existential risk) or unwarranted hope (AI solving problems on its own initiative), distracting from the actual risks of automated bias and reliable enforcement of corporate policies.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: The text refers to the 'intrinsic nature of GenAI' as the cause for this simulation, rather than the specific design choices of OpenAI (e.g., system prompts telling the model to be helpful/chatty). 'The adopted tool' is the grammatical subject, obscuring the engineers who tuned the temperature and repetition penalties that create the illusion of 'proactiveness.'
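The analysis above attributes the appearance of 'proactiveness' to tuning choices such as system prompts and sampling temperature. A hedged sketch of those two knobs follows; the prompt text, candidate continuations, and scores are invented and are not OpenAI's actual configuration.

```python
import math
import random

# Invented example of a vendor-style system prompt; prepended to every
# conversation before the user's words ever reach the model.
SYSTEM_PROMPT = "You are a helpful assistant. Offer suggestions proactively."

def sample_with_temperature(logits, temperature=1.0):
    """Turn raw scores into a sampling distribution.

    Low temperature concentrates probability on the top-scoring candidate
    (terse, predictable output); higher temperature flattens the distribution
    (varied, 'chatty' output). The 'initiative' users perceive is partly this
    single number plus the system prompt above.
    """
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    probs = [e / sum(exps) for e in exps]
    return random.choices(list(logits.keys()), weights=probs)[0]

# Invented candidate continuations with invented scores.
logits = {
    "Sure! Here are three ideas you could try:": 2.0,
    "OK.": 1.5,
    "I can't help with that.": 0.2,
}

prompt = SYSTEM_PROMPT + "\nUser: What should I do next?"
print(prompt)
print(sample_with_temperature(logits, temperature=0.7))
```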

7. AI as Trainer/Teacher

Quote: "teach me something about it... Thus, humans 'took' and learned the knowledge given by ChatGPT."

  • Frame: Model as pedagogue
  • Projection: This metaphor casts the retrieval of information as a pedagogical act ('teaching'). It projects the role of an educatorโ€”one who curates, verifies, and adapts knowledge for a studentโ€”onto a text generator. It implies the AI possesses 'knowledge' to dispense. It elevates the output from 'data retrieval' to 'instruction,' conferring an authority that the probabilistic nature of the system does not merit.
  • Acknowledgment: Direct (Unacknowledged) (The text quotes a user request ('teach me') and then validates it in the analysis ('humans... learned the knowledge given'), confirming the pedagogical frame.)
  • Implications: Framing AI as a 'teacher' is dangerous because it lowers the user's critical guard. Students naturally trust teachers. When AI is the teacher, the 'hallucinations' (errors) are absorbed as facts. This metaphor encourages the uncritical absorption of training data biases and factual errors, potentially degrading the user's actual competence while giving them the illusion of learning.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: The 'knowledge given by ChatGPT' hides the original sources. The 'teacher' here is a plagiarism engine that strips attribution. By framing the AI as the source of teaching, the text erases the labor of the millions of authors whose work was scraped to train the model, as well as the corporation (OpenAI) profiting from this uncompensated transfer of expertise.

8. Human-AI Social Hierarchy

Quote: "Humans as leaders of the conversation... deciding to lead the conversation."

  • Frame: Interaction as social hierarchy
  • Projection: This metaphor maps a management hierarchy onto the user-tool relationship. By designating the human as 'leader,' it implicitly designates the AI as the 'subordinate' or 'team member.' While this attempts to reassert human agency ('Human+'), it ironically reinforces the AI's agency by treating it as an entity capable of following or being led, rather than a tool to be operated. One does not 'lead' a calculator; one uses it.
  • Acknowledgment: Direct (Unacknowledged) (This is one of the three main thematic findings of the paper, presented as a solid category of interaction ('Humans as leaders').)
  • Implications: The 'leader' metaphor implies the AI is a capable agent that requires direction, rather than a passive instrument. This creates a false sense of control. If the human is the 'leader,' they are responsible for the 'subordinate's' work. This subtle shift aligns with corporate narratives that blame 'user error' (bad leadership/prompting) for AI failures, rather than blaming the tool's unreliability. It anthropomorphizes the tool to absolve the manufacturer.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: The framing focuses entirely on the user's role ('leader') relative to the machine. It obscures the fact that the 'subordinate' (AI) is actually following a hard-coded corporate policy (system prompt) that overrides the user's leadership whenever the two conflict (e.g., safety refusals). The power dynamic is actually User vs. Corporation, but the metaphor masks this as User vs. AI.

Task 2: Source-Target Mapping

About this task

For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.

Mapping 1: Human social/professional relationships → Human-Computer Interaction (HCI) / Text generation

Quote: "GenAI as an active collaborator with humans"

  • Source Domain: Human social/professional relationships
  • Target Domain: Human-Computer Interaction (HCI) / Text generation
  • Mapping: The source domain provides a structure of shared goals, mutual understanding, reciprocal obligation, and joint agency. Mapping this to the target (text generation) implies the software 'cares' about the outcome, 'works with' the user towards a goal, and contributes independent value. It projects the 'mind' of a colleague onto the 'process' of token prediction.
  • What Is Concealed: This mapping conceals the total absence of shared intentionality. The AI has no goals; it maximizes the likelihood of the next token. It conceals the one-way nature of the tool (it only responds when prompted) and the lack of accountability (a collaborator shares risk; the AI does not). It hides the commercial reality: the 'collaborator' is a paid service product, not a partner.

Mapping 2: Conscious Mind / Psychology → Natural Language Processing (NLP) / Vector embeddings

Quote: "monitor the machineโ€™s understanding of the prompts"

  • Source Domain: Conscious Mind / Psychology
  • Target Domain: Natural Language Processing (NLP) / Vector embeddings
  • Mapping: The source domain (understanding) involves a subject grasping the semantic meaning and intent behind a message. Mapping this to the target (NLP) implies the system builds an internal mental model of the user's desire. It suggests the 'input' is received as an idea, not a string of numbers.
  • What Is Concealed: This conceals the mechanistic reality of pattern matching. The machine calculates the statistical correlation between the input tokens and potential output tokens based on training weights. It does not 'know' what the prompt means. It hides the fragility of the process: how slight syntax changes can completely alter the output because the 'understanding' is merely surface-level statistical association.

Mapping 3: Epistemology / Subjective Judgment → Statistical Aggregation / Probabilistic generation

Quote: "consider machine opinion as more reliable than their one"

  • Source Domain: Epistemology / Subjective Judgment
  • Target Domain: Statistical Aggregation / Probabilistic generation
  • Mapping: The source domain (opinion) implies a judgment formed by a conscious subject based on experience, values, and evidence. Mapping this to the target implies the output is a reasoned stance. It confers the status of 'expert witness' onto the algorithm.
  • What Is Concealed: This conceals the origin of the 'opinion': it is a weighted average of the internet's text, filtered by RLHF (human feedback) for safety and tone. It hides the lack of a 'self' to hold the opinion. It masks the potential for bias amplification, as the 'opinion' is just the most frequent pattern in the training data, not a verified truth.
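The concealment described above, that the 'opinion' is just the most frequent pattern in the training data, can be made literal with a toy aggregation: given many scraped continuations, the "answer" returned is simply the modal one. The corpus below is invented.

```python
from collections import Counter

# Invented stand-in for scraped training text: many continuations of the
# question "How should I launch my venture?"
corpus_continuations = [
    "start with a minimum viable product",
    "start with a minimum viable product",
    "start with a detailed five-year plan",
    "start with a minimum viable product",
    "start with customer interviews",
]

# The "machine opinion" is the statistically dominant pattern, not a judgment.
machine_opinion, frequency = Counter(corpus_continuations).most_common(1)[0]
print(machine_opinion, frequency)  # 'start with a minimum viable product' 3
```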

Mapping 4: Physical/Object Exchange → Information Retrieval / Data processing

Quote: "humans 'take'... knowledge given by ChatGPT"

  • Source Domain: Physical/Object Exchange
  • Target Domain: Information Retrieval / Data processing
  • Mapping: The source domain treats knowledge as a transferable object passed between two containers (minds). Mapping this to the target implies the AI 'possesses' this object and benevolently transfers it. It reifies information as a static commodity rather than a dynamic interpretation.
  • What Is Concealed: This conceals the unreliable nature of the generation. The AI does not 'have' the knowledge in a database (like a search engine); it generates a plausible string of words de novo. It conceals the possibility of hallucination (generating a 'fact' that looks like a valid object but is empty). It also conceals the plagiarism inherent in the 'giving': the AI gives what it scraped from others.

Mapping 5: Human Agency / Cognition → Algorithmic execution / Automated scripting

Quote: "simulate human behaviours as autonomous thinking"

  • Source Domain: Human Agency / Cognition
  • Target Domain: Algorithmic execution / Automated scripting
  • Mapping: The source domain is the autonomous, self-directed thought process of a free agent. Mapping this to the target implies the software has an internal drive or initiative. Even as a 'simulation,' it suggests the mechanism is comparable to thinking, just artificial.
  • What Is Concealed: This conceals the deterministic (or stochastic) nature of the code. The 'proactiveness' is a result of specific instructions (system prompts) or low-probability sampling settings, not internal will. It hides the puppet strings: the engineers and designers who programmed the 'autonomous' behavior.

Mapping 6: Education / Pedagogy → Query-Response utility

Quote: "interaction... intended it as a learning source"

  • Source Domain: Education / Pedagogy
  • Target Domain: Query-Response utility
  • Mapping: The source domain is the teacher-student relationship, characterized by trust, authority, and growth. Mapping this to the target implies the AI is a valid pedagogical instrument capable of guiding development. It positions the user as a passive recipient of wisdom.
  • What Is Concealed: This conceals the lack of pedagogical intent or verification. A teacher verifies facts; the AI predicts likely text. It hides the risk of 'learning' incorrect information. It also conceals the commercial nature of the transaction: the user is providing training data (prompts) to the company while consuming the product, not just 'learning.'

Mapping 7: Role-playing / Theater / Professional Services → Persona-based text generation

Quote: "Generative AI... acting as an investor"

  • Source Domain: Role-playing / Theater / Professional Services
  • Target Domain: Persona-based text generation
  • Mapping: The source domain is a human actor or professional adopting a specific social role with its associated norms and expertise. Mapping this to the target implies the AI can 'become' an investor, adopting the actual perspective and judgment criteria of that profession.
  • What Is Concealed: This conceals that the 'persona' is just a cluster of associated vocabulary. Adopting the 'investor' role just means prioritizing words like 'ROI,' 'market fit,' and 'risk.' It conceals the lack of actual financial judgment or fiduciary responsibility. It creates a dangerous illusion of professional advice where there is only jargon mimicry.
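Mechanically, the 'investor' persona described above is a role instruction prepended to the conversation. A hedged sketch using the OpenAI Python client; the model name, persona wording, and pitch text are illustrative stand-ins, not the study's actual protocol.

```python
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

# "Acting as an investor" is implemented as a string of role instructions.
# No financial judgment or fiduciary duty is involved; the instruction only
# shifts which vocabulary becomes probable in the generated text.
persona = "You are a venture capital investor. Evaluate pitches for ROI, market fit, and risk."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Here is my business idea: a subscription service for plant care."},
    ],
)
print(response.choices[0].message.content)
```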

Mapping 8: Social Hierarchy / Management → Prompt Engineering / Iterative refinement

Quote: "humans... decide to lead the conversation"

  • Source Domain: Social Hierarchy / Management
  • Target Domain: Prompt Engineering / Iterative refinement
  • Mapping: The source domain is leading a team or a dialogue partner. It implies a social power dynamic between two agents. Mapping this to the target implies the AI is an entity that can be 'led.' It validates the AI's status as a distinct social other.
  • What Is Concealed: This conceals the tool-nature of the system. One does not 'lead' a hammer; one wields it. It hides the fact that the user is wrestling with the model's limitations and safety filters, not 'leading' a subordinate. It obscures the friction of the interface by dressing it up as a management challenge.

Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")

About this task

This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.

Explanation 1

Quote: "humans remain distinguished by their ability to reason by paradoxes... which allows entrepreneurs to navigate in the realm of paradox"

  • Explanation Types:

    • Dispositional: Attributes tendencies or habits
    • Functional: Explains behavior by role in self-regulating system with feedback
  • Analysis (Why vs. How Slippage): This passage uses a dispositional explanation to attribute a specific cognitive ability ('reason by paradoxes') to humans, framing it as the differentiator from AI. By defining the distinction functionally (this ability 'allows' navigation), it implies that AI operates on a similar but limited substrate of reasoning. It frames the 'why' of human superiority in terms of a cognitive feature, rather than a fundamental ontological difference (conscious being vs. calculator). The explanation emphasizes a specific skill gap while obscuring the fundamental difference in nature.

  • Consciousness Claims Analysis: The passage implicitly attributes a form of 'reasoning' to AI by treating 'reasoning by paradoxes' as the distinguishing factor. If humans are distinguished by X, the implication is that AI shares non-X traits (i.e., normal reasoning). This is a negative epistemic claim that paradoxically reinforces the 'AI as Reasoner' frame. It assesses 'knowing' (handling paradox) vs. 'processing' (standard logic) but treats them both as cognitive modes. The authors project their understanding of 'paradox' onto the mental model of the entrepreneur, contrasting it with the machine, but fail to describe the machine's actual process (probabilistic failure with conflicting tokens).

  • Rhetorical Impact: This framing assures the audience of continued human relevance ('Human+') but bases that relevance on a shrinking gap. It creates anxiety: if AI learns to 'reason by paradox,' are humans obsolete? It treats AI agency as a given, just currently limited in scope. This encourages a 'race' mentality where humans must maintain their edge, accepting the AI as a competitor in the cognitive domain.


Explanation 2

Quote: "machine's responses did not always meet their expectations... deciding to lead the conversation"

  • Explanation Types:

    • Intentional: Refers to goals/purposes, presupposes deliberate design
    • Reason-Based: Gives agent's rationale, entails intentionality and justification
  • Analysis (Why vs. How Slippage): The explanation is reason-based for the humans (they decided X because of Y) but implies an intentional stance for the AI (its responses 'did not meet expectations'). It frames the interaction as a social negotiation between two agents. The choice of 'lead the conversation' emphasizes the social agency of the user and the responsive agency of the machine, obscuring the mechanical reality of 'refining the input prompts.' It anthropomorphizes the failure mode: the machine didn't just 'output bad data'; it failed a social expectation.

  • Consciousness Claims Analysis: The text attributes 'expectations' to the users regarding the machine's performance. It treats the machine's output as a 'response' (communicative act) rather than a 'generation' (computational act). There is a 'curse of knowledge' here: the authors interpret the mismatch between user intent and model output as a 'leadership' challenge, rather than a technical limitation of the context window or training data. The technical reality, that the model weights didn't correlate the prompt tokens with the desired output tokens, is replaced by a narrative of social friction.

  • Rhetorical Impact: This framing empowers the user as a 'leader,' restoring a sense of control over the 'black box.' However, it misleads the audience about the nature of the control. It suggests that 'leadership' (soft skills) is the way to control AI, rather than 'prompt engineering' (technical skills). This increases trust in the 'Human+' paradigm by suggesting traditional management skills transfer to AI interaction, which may not be true.

Explanation 3

Quote: "ChatGPT... has rapidly gained popularity for its ability to generate human-like responses"

  • Explanation Types:

    • Empirical Generalization: Subsumes events under timeless statistical regularities
  • Analysis (Why vs. How Slippage): This is a mechanistic (how/what) explanation disguised as an ability claim. It generalizes the behavior ('generate human-like responses') as a stable trait. This emphasizes the appearance of the output ('human-like') while obscuring the mechanism (statistical probability). It attributes an 'ability' to the system, treating the result as a competence rather than a statistical artifact. It avoids the 'why' (training on massive human corpora) in favor of the observed effect.

  • Consciousness Claims Analysis: The phrase 'human-like' is the key epistemic lever. It acknowledges resemblance while blurring the line between simulation and reality. It avoids attributing consciousness directly but invites the reader to project it via the 'duck test' (if it looks human, treat it as human). It describes the output accurately but creates an ambiguity about the process. Technical description is absent; the 'ability' is treated as a magic box feature.

  • Rhetorical Impact: This framing builds hype and credibility. By asserting the 'ability' as a settled fact, it validates the use of the tool for complex tasks. It minimizes risk: if the responses are 'human-like,' then treating it as a 'collaborator' feels rational. It encourages the audience to focus on the surface-level utility rather than the underlying limitations or data provenance.

Explanation 4

Quote: "individuals... intended it as a learning source"

  • Explanation Types:

    • Intentional: Refers to goals/purposes, presupposes deliberate design
  • Analysis (Why vs. How Slippage): This explanation focuses on the users' intent ('intended it as') to explain the system's function. It defines the AI's nature through the teleology of the user. If the user intends it to be a learning source, it becomes one. This highlights the social construction of technology but obscures the material limits. A user can 'intend' a magic 8-ball to be a decision support system, but that doesn't make it reliable. This framing validates the 'taking knowledge' metaphor analyzed in Task 1.

  • Consciousness Claims Analysis: This passage attributes the epistemic status of 'source' to the AI based on user belief. It bypasses the question of whether the AI actually contains knowledge (justified true belief) and settles for the user's perception of it. It demonstrates the 'curse of knowledge' in reverse: the user's lack of knowledge leads them to attribute knowledge to the system. The text validates this attribution rather than critiquing the gap between 'intended source' and 'actual probabilistic generator.'

  • Rhetorical Impact: This framing validates the 'Human+' paradigm by centering human intent. It makes the audience feel that their mindset determines the tool's value. However, it creates a significant risk: it legitimizes the use of a hallucination-prone text generator as an educational authority. It shifts accountability to the user's 'perspective' rather than the tool's 'reliability.' If the user learns wrong facts, it's framed as a success of 'intention' rather than a failure of 'truth.'

Explanation 5

Quote: "simulate human behaviours as autonomous thinking and proactiveness"

  • Explanation Types:

    • Theoretical: Embeds in deductive framework, may invoke unobservable mechanisms
  • Analysis (Why vs. How Slippage): This explanation invokes a theoretical framework (simulation) to explain the observed behavior (proactiveness). It frames the AI agentially ('autonomous thinking') but wraps it in a theoretical hedge ('simulate'). It emphasizes the sophistication of the tool: it's not just a calculator, it's a simulator of mind. This obscures the simple mechanisms (system prompts, repetition penalties) that create the appearance of proactiveness. It elevates a UI feature (chatting back) to a cognitive simulation.

  • Consciousness Claims Analysis: The text uses consciousness verbs ('thinking') modified by a simulation claim. This is a complex epistemic move: it denies actual thinking while asserting that the output is indistinguishable from thinking. It attributes the quality of autonomy to the system's behavior. Mechanically, 'proactiveness' is often just a high 'presence penalty' setting or a system instruction to 'be helpful.' Describing this as 'simulating autonomous thinking' vastly overstates the complexity of the underlying math.

  • Rhetorical Impact: This framing generates awe and caution. It positions the AI as a powerful, almost alive entity that needs 'human leadership' (control). It justifies the need for the 'Human+' framework: we need to be 'plus' because the machine is 'autonomous.' It drives the narrative that AI is a partner-rival, not a product-tool. It heightens the perceived stakes of the interaction.
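Explanation 5's point that 'proactiveness' can be a sampling setting is concrete in presence and frequency penalties, which subtract a cost from the score of any token that has already appeared, nudging generation toward new words and topics. The token counts and penalty values below are invented; the adjustment follows the commonly documented form of a per-repetition frequency penalty plus a flat presence penalty for any token that has appeared at all.

```python
def apply_penalties(logits, generated_counts, presence_penalty=0.6, frequency_penalty=0.3):
    """Penalize tokens that already appeared in the generated text.

    The resulting drift toward 'bringing up something new' is a tuning knob
    chosen by developers, not initiative on the model's part.
    """
    adjusted = {}
    for token, logit in logits.items():
        count = generated_counts.get(token, 0)
        adjusted[token] = (
            logit
            - frequency_penalty * count                 # cost scales with repetitions
            - presence_penalty * (1 if count else 0)    # flat cost for any repeat
        )
    return adjusted

# Invented candidate tokens, scores, and repetition counts.
logits = {"market": 2.1, "pivot": 1.8, "supply": 1.9}
print(apply_penalties(logits, generated_counts={"market": 3}))
```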

Task 4: AI Literacy in Practice - Reframing Anthropomorphic Language

About this task

This section proposes alternative language for key anthropomorphic phrases, offering more mechanistic and precise framings that better reflect the actual computational processes involved. Each reframing attempts to strip away the projections of intention, consciousness, or agency that are embedded in the original language.

V3 Enhancement: A fourth column addresses human agency restoration, reframing agentless constructions to name the humans responsible for design and deployment decisions.

| Original Anthropomorphic Frame | Mechanistic Reframing | Technical Reality Check | Human Agency Restoration |
| --- | --- | --- | --- |
| machine's understanding of the prompts | The user monitors the model's token correlation accuracy to ensure the generated output aligns with the input constraints. | The model does not 'understand'; it calculates vector similarity between the prompt tokens and its training clusters to predict the next probable token. | N/A - describes computational processes without displacing responsibility. |
| consider machine opinion as more reliable than their one | Participants considered the model's statistically aggregated output to be more reliable than their own judgment. | The model generates a sequence of text based on high-frequency patterns in its training data; it does not hold an opinion or beliefs. | Participants prioritized the patterns extracted from OpenAI's training corpus over their own intuition. |
| AI as an active collaborator with humans | AI as a responsive text generation interface operated by humans. | The system processes inputs and returns outputs based on pre-set weights; it does not 'collaborate' or share goals. | Engineers at OpenAI designed the interface to mimic conversational turn-taking, creating the illusion of collaboration. |
| teach me something about it... humans 'took' and learned the knowledge given by ChatGPT | retrieve information about it... humans read and internalized the data outputs generated by the model. | The model retrieves and reassembles information based on probabilistic associations in its training data; it does not 'teach' or 'give' knowledge. | Humans read content originally created by uncredited authors, scraped by OpenAI, and reassembled by the model. |
| humans remain distinguished by their ability to reason by paradoxes | Humans remain distinguished by their ability to process contradictory logical states and semantic nuances. | AI models process data based on statistical likelihoods and struggle with low-probability or contradictory token associations (paradoxes) due to lack of world models. | N/A - describes human cognitive traits. |
| machine gave information | The model generated text output containing data points. | The machine displays text strings predicted to follow the user's prompt; it does not 'give' anything in a transactional sense. | The model displayed data scraped from human-generated sources by the AI company. |
| simulate human behaviours as autonomous thinking | Emulate human conversation patterns through automated sequence generation. | The system executes code to generate text without pause; it does not 'think' or possess 'autonomy.' | Developers at OpenAI programmed the system to generate continuous text and act 'helpfully,' creating the appearance of autonomy. |
| Humans as leaders of the conversation | Humans as operators of the prompt interface. | The user inputs commands; the system executes predictions. There is no social hierarchy or leadership, only input-output operations. | Users direct the tool's output, while OpenAI's system prompts constrain the available range of responses. |

Task 5: Critical Observations - Structural Patterns

Agency Slippage

The text systematically oscillates between treating GenAI as a passive 'tool' and an active 'collaborator.' This oscillation serves a specific rhetorical function: the mechanical framing is used in the methodology to establish scientific rigor (using 'a conversational GenAI tool'), while the agential framing dominates the findings and discussion ('active collaborator,' 'machine opinion'). The slippage occurs most dramatically when discussing the value added by AI. When the AI works well, it is a 'collaborator' or 'teacher' (agency TO AI). When it fails or requires correction, the human becomes the 'leader' and the AI a 'machine' (agency FROM AI, to Human).

This pattern insulates the AI from failure while crediting it with success. The text frames the 'latent entrepreneurs' as 'leaders,' yet constantly describes them asking the AI for 'opinions' and 'knowledge.' This reveals the 'curse of knowledge': the authors perceive the output as 'knowledge' because it makes sense to them, projecting that understanding back into the 'mind' of the machine. The accountability sink is evident in the agentless construction 'GenAI emerges as an effective tool,' which erases the corporate actors (OpenAI) who deployed the tool. The text builds the agential claim on top of the 'Human+' paradigm, suggesting that because humans add agency to the process, the machine must also hold a form of agency to be added to.

Metaphor-Driven Trust Inflation

The text constructs authority through metaphors of social and professional relationship. By framing the AI as a 'collaborator,' 'partner,' and 'teacher,' the text leverages relation-based trust (sincerity, benevolence) for a system that only merits performance-based trust (reliability). This is dangerous because relation-based trust assumes the partner has shared interests. The metaphor of 'machine opinion' is particularly potent for constructing false authority. An 'opinion' implies a weighed judgment, encouraging the user to defer to the 'expert interlocutor.'

The text explicitly notes that participants considered the machine's opinion 'more reliable than their own.' Instead of critiquing this as a failure of critical thinking or a misunderstanding of the technology, the authors validate it as a feature of the 'Human+' paradigm ('enhancing human capabilities'). This conflation of 'statistical probability' with 'expert opinion' creates a high-risk environment where users may trust a hallucination because they view the system as a 'collaborator' rather than a 'text predictor.' The 'leader' metaphor further cements this trust by implying the user is in control, even as they cede epistemic authority to the machine.

Obscured Mechanics

The anthropomorphic language conceals the technical, labor, and economic realities of the AI system. First, the 'collaborator' frame hides the corporate extraction of labor. The 'knowledge' the AI 'gives' was scraped from millions of human workers/writers without compensation. By attributing this knowledge to the 'machine,' the text erases the original authors. Second, the 'opinion' frame hides the Reinforcement Learning from Human Feedback (RLHF) process. The 'machine's opinion' is actually a mimicry of the preferences of low-wage workers in Kenya or the Philippines who rated model outputs, or the safety policies of OpenAI.

Third, the focus on 'interaction' obscures the proprietary opacity. The text treats ChatGPT as a neutral scientific instrument rather than a black-box commercial product whose weights and training data are trade secrets. The claim that AI 'understands' hides the dependency on tokenization and probability distributions. It makes the process seem like a meeting of minds rather than a statistical gamble. If the metaphors were replaced with mechanistic language ('The model retrieved high-probability tokens from its training set'), the 'collaboration' would be revealed as a data retrieval task, and the 'opinion' as a statistical artifact, significantly lowering the perceived value of the 'Human+' framework.
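The dependency on tokenization mentioned above can be seen directly with OpenAI's open-source tiktoken tokenizer: the prompt the user experiences as a request arrives at the model as a list of integer IDs. This assumes the tiktoken package is installed; 'cl100k_base' is one of its published encodings.

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Teach me something about entering the logistics market."
token_ids = enc.encode(prompt)

# The "meeting of minds" arrives as a list of integers.
print(token_ids)
print([enc.decode([t]) for t in token_ids])  # the string fragment each ID stands for
```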

Context Sensitivity

Anthropomorphism intensifies significantly as the paper moves from the Methodology to the Findings and Discussion. In the methodology, the tool is described somewhat neutrally ('conversational GenAI tool'). However, in the findings, the intensity spikes: the AI acquires 'opinions,' 'understanding,' and 'proactiveness.' This suggests the authors use mechanical language to pass the gatekeeping of scientific method, then switch to metaphorical license to build their theoretical contribution ('Human+').

There is a clear asymmetry between capabilities and limitations. Capabilities are described agentially: AI 'navigates the realm of paradox,' 'generates insights,' 'acts as an investor.' Limitations are described structurally or vaguely: 'black box effect,' 'lack of predictability.' The machine acts like a human but fails like a machine. This asymmetry serves the 'Human+' narrative: the human provides the 'supervision' (mechanical fix) for the 'collaborator's' (agential) brilliance. The anthropomorphism serves a marketing function for the 'Human+' paradigm, making the integration of AI seem like a team-building exercise rather than a software implementation.

Accountability Synthesis

Accountability Architecture

This section synthesizes the accountability analyses from Task 1, mapping the text's "accountability architecture": who is named, who is hidden, and who benefits from obscured agency.

The text constructs an 'accountability sink' where responsibility for decision-making is diffused between the 'Leader' (human) and the 'Collaborator' (AI), leaving the actual architect (OpenAI) invisible. The 'name the actor' test reveals that OpenAI, the entity that designed the algorithms, selected the training data, and defined the safety filters, is never held accountable for the 'opinions' or 'biases' mentioned.

Responsibility for 'hallucinations' or 'falsehoods' is shifted to the user, whose role is defined as 'supervisor' or 'leader.' If the AI fails, the 'leader' failed to supervise. This creates a liability shield for the vendor. The text uses passive constructions like 'GenAI emerges' or 'decisions are made,' creating a sense of inevitability. The 'collaborator' metaphor is the keystone of this displacement: in a collaboration, risk is shared. By framing the user-product relationship as a collaboration, the text implicitly argues that the user assumes a share of the liability for the product's defects. Naming the corporation would disrupt this: 'OpenAI's product generated false text' places liability on the vendor; 'My collaborator suggested an idea' places liability on the team. The text systematically prefers the latter.

Conclusion: What This Analysis Reveals

The Core Finding

The analysis reveals two dominant anthropomorphic patterns: 'AI as Social Collaborator' and 'AI as Epistemic Agent.' These patterns form a self-reinforcing system. The 'Collaborator' frame establishes a social relationship, which then permits the 'Epistemic Agent' frame: attributing 'opinions,' 'knowledge,' and 'reasoning' to the partner. The foundational, load-bearing pattern is the 'Collaborator' metaphor. Without the assumption that the AI is a social entity with shared goals ('Human+'), the attribution of 'opinion' (Task 1) and 'autonomy' (Task 3) would collapse into clear category errors. The consciousness architecture supports this by consistently using verbs like 'understands,' 'thinks,' and 'learns,' creating a linguistic reality where the AI is a 'knower' rather than a 'processor.'

Mechanism of the Illusion:

The illusion of mind is constructed through a 'bait-and-switch' rhetorical architecture. The text first establishes the AI's utility through empirical generalization ('generates human-like responses'). It then immediately pivots to intentional explanations ('intended as a learning source'), leveraging the 'curse of knowledge': the authors and participants project their own semantic understanding onto the machine's syntactic outputs. The temporal structure reinforces this: the AI is presented first as a tool, then as a partner, then as a leader-follower dynamic. This gradual anthropomorphic creep desensitizes the reader. By the time the text claims the AI has 'opinions,' the reader has already accepted it as a 'collaborator.' The illusion is amplified by the 'Human+' framework, which requires a 'human-like' counterpart to make the addition meaningful.

Material Stakes:

Categories: Economic, Regulatory/Legal, Epistemic

These metaphors have concrete economic and legal consequences. Economically, framing AI as a 'collaborator' with 'opinions' devalues human labor. If an AI has 'knowledge' to 'give,' employers may replace junior mentors or subject matter experts with cheaper AI subscriptions, ignoring the degradation of quality. Legally, the 'Human as Leader' frame shifts liability to the user. If a 'latent entrepreneur' uses an AI 'investor' (Task 2) to validate a fraudulent scheme, the 'leader' metaphor suggests the human is solely responsible for supervision, absolving the AI vendor of negligence in releasing an unsafe product. Epistemically, the acceptance of 'machine opinion' degrades truth standards. It encourages decision-makers to weigh statistical averages (AI output) equally with reasoned judgment, potentially stifling innovation that defies the 'average' patterns of the training data. The winners are AI vendors (OpenAI); the losers are the entrepreneurs who rely on hallucinated 'expertise' and the professionals whose expertise is devalued.

AI Literacy as Counter-Practice:

Practicing critical literacy requires systematically reframing these metaphors to restore mechanical precision and human agency. Reframing 'machine opinion' to 'statistical aggregation' (Task 4) forces the user to question the validity of the output rather than accepting it as expert counsel. Changing 'AI collaborator' to 'text generation interface' strips away the illusion of shared goals, revealing the commercial transaction. Adoption of this rigorous vocabulary faces resistance from AI vendors, who benefit from the 'magic' of anthropomorphism, and from researchers like the authors, whose 'Human+' theory relies on the AI having sufficient agency to be a 'plus.' Systematic adoption would require journals to mandate mechanistic descriptions in methodology sections and peer reviewers to flag uncritical consciousness verbs. This precision counters the material stakes by clarifying liability (the vendor's code, not the 'partner's' opinion) and preserving the distinct value of human judgment.

Path Forward

The discourse future forks between 'Mechanistic Precision' and 'Anthropomorphic Integration.' A shift toward Mechanistic Precision ('the model outputted tokens') makes the technology tractable and demystifies the 'black box,' but may alienate non-technical stakeholders who rely on intuitive metaphors. It protects epistemic standards but increases the cognitive load of using the tools. Conversely, deepening Anthropomorphic Integration ('the AI thinks/collaborates') facilitates smoother adoption and user confidence (Task 3), but embeds false assumptions about agency and reliability that lead to the 'accountability sinks' identified in Task 5. A hybrid future is likely, where technical fields adopt precision while public/business discourse retains the 'collaborator' illusion. The danger lies in policy and law adopting the business metaphors (treating AI as an agent) rather than the technical reality (AI as product), effectively granting rights to software while absolving creators of responsibility.


Extended Processing Summary

Gemini's Processing Steps
What you are about to read...

Gemini refers to the text below as "thought summaries." This is an overt consciousness projection because 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why. The concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact: a way of making the model's processing legible. When the model writes "I should double check that...", it is mathematically altering its own probability distribution for the next token to be more accurate. The "I" is a functional gear in the machine, not a self-reference. This language is a deliberate interface design choice, not a biological reality. The developers could have programmed the model to output passive logs. Instead, they chose to simulate a narrator.

Formulating CDA Insights

I'm now zeroing in on the specific academic paper and its implications for CDA. The focus has shifted to the nuances of the paper's arguments and how they can be deconstructed through a critical lens. I'm actively considering the potential power dynamics and ideological underpinnings. This involves identifying key claims and assessing their broader societal impact.

Deconstructing AI Metaphors

I'm deep into the metaphor audit, specifically within the academic paper regarding latent entrepreneurs. I've identified "collaborator" as the first key instance of AI anthropomorphism and have started its projection, exploring the social agency it implies. I'm actively cataloging several more instances, each assessed for its implications on the perception of AI capabilities. This audit will form the basis of the JSON output. I am focusing on the linguistic construction of the "illusion of mind" within the text.

Analyzing AI Framing

I've moved on to further metaphor identification within the paper, specifically AI anthropomorphism. I've logged "Knowledge Giver," noting the implied epistemic agency and hidden accountability, and cataloged "Reasoning Capabilities," explicitly acknowledging a cognitive faculty. I've identified the "Autonomous Thinker" and "Understanding" as projections of agency, the former explicitly acknowledged and the latter implicitly acknowledged, though implied to be an attribute of the AI. I'm actively dissecting each instance.

Unpacking AI Agency

I'm now identifying additional AI anthropomorphisms within the paper. I've logged "Comprehension," "Opinion Holder," "Teacher/Mentor," and "Leader," noting the specific projections of agency and the nature of their acknowledgement. I'm focusing on the implicit social dynamics and power structures these metaphors establish, preparing for the source-target mapping phase of the analysis. The "Leader" instance is particularly interesting, as it creates an explicit social hierarchy.

Unveiling Conceptual Mappings

I'm now deep into mapping semantic gaps, specifically in how AI metaphors conceal key differences. My focus is the disconnects I've identified, which include "Opinion" mapping to probabilistic outputs, "Memory" mapping to data storage, and "Reasoning" mapping to statistical predictions. I'm also cataloging the concealment inherent in "Hallucination," "Conversation," and "Training" to further deconstruct the AI's "illusion of mind".

Unmasking AI Explanations

I'm now cataloging the specific types of explanations offered in the paper using Brown's typology. I've noted examples of dispositional/functional, empirical generalization, and theoretical explanations. The "black box" effect is a rare mechanistic example. I'm especially interested in instances attributing intentionality to the AI, such as the one about "leading the conversation". I'm meticulously recording each quote with its corresponding explanation type.

Exposing Linguistic Distortions

I'm now diving into the reframing tasks, focusing on how specific phrases obscure the underlying mechanisms. I've rephrased "machine's understanding" to "token correlation accuracy," and "AI opinion" to "statistically probable text sequence." I'm working through each instance, carefully translating the anthropomorphic language into a more mechanistic and precise form to eliminate the "illusion of mind" projected onto AI systems. I'm focusing on the reframing of "GenAI as active collaborator," reframing this to a simple "text generation interface."

Unraveling Accountability Shifts

I'm now identifying shifts in agency within the paper, specifically concerning accountability. The text oscillates between framing GenAI as a tool and as a partner, which creates the potential for obscured responsibility. I'm focusing on the implied roles, with users becoming leaders and AI as followers. The "hallucinations" framing shifts blame towards the users. "Opinion" framing hides the statistical nature of outputs. The "Human+" framing shifts the burden to users.

Synthesizing Anthropomorphic Inflation

I'm now zeroing in on the "Human+" paradigm. It's a key mechanism for anthropomorphic inflation, enabled by the "Collaborator" metaphor. This shift conceals the commercial and mechanical nature of token prediction. I'm actively distinguishing between the participants' and authors' anthropomorphism, and how the "Theoretical Frameworks" are presented. My JSON output strictly adheres to the schema.

Defining Responsibility Shifts

I'm now focusing on how "AI suggests" phrases obscure the specific human engineers who designed the AI. I'm actively pinpointing where agency and accountability are strategically concealed within the paper's language. I've rephrased "AI suggests" to "OpenAI engineers designed RLHF to prefer X," a specific design choice, and will be using this and others for the final JSON construction.


Analysis Provenance

Run ID: 2026-01-08-generative-artificial-intelligence-and-d-metaphor-4amf96
Raw JSON: 2026-01-08-generative-artificial-intelligence-and-d-metaphor-4amf96.json
Framework: Metaphor Analysis v6.4
Schema Version: 3.0
Generated: 2026-01-08T10:11:15.584Z

Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0