
Metaphor, Explanation and Anthropomorphism Analysis - How people are using ChatGPT

Source Document: How people are using ChatGPT
Date Analyzed: 10.11.2025
Model Used: Gemini 2.5 Pro
Framework: Metaphor & Anthropomorphism


Token Usage: 10,201 / 1,048,576
Input tokens: 10,201
Output tokens: 7,878
Total tokens: 18,079

Task 1: Metaphor and Anthropomorphism Audit

Here are 10 major metaphorical patterns identified in the text that contribute to constructing an "illusion of mind."

  1. AI Interaction as Human Conversation

    • Quote: "...most conversations focus on everyday tasks like seeking information and practical guidance..." (p. 2)
    • Frame: Model as Conversational Partner.
    • Projection: The human qualities of mutual understanding, shared context, and intelligent turn-taking are mapped onto the AI system's text-processing function.
    • Acknowledgment: Unacknowledged. Presented as a direct, factual description of the user-system interaction.
    • Implications: This framing normalizes the interaction, making the AI seem like a familiar social agent rather than a complex statistical tool. It fosters trust but obscures the system's lack of genuine understanding, potentially leading users to overestimate its capabilities and reliability.
  2. AI Development as Biological Evolution

    • Quote: "...track how consumer usage has evolved since ChatGPT's launch three years ago." (p. 2)
    • Frame: AI System as a Living Organism.
    • Projection: The biological process of gradual, adaptive change and increasing complexity is mapped onto the engineered process of software updates and changing user behavior patterns.
    • Acknowledgment: Unacknowledged. "Evolved" is used as a neutral descriptor of change.
    • Implications: This metaphor suggests that AI's improvement is a natural, inevitable, and agentless process, similar to biological evolution. It obscures the direct role of human engineers, corporate strategy, and data inputs in shaping the system's development.
  3. AI as a Social Advisor

    • Quote: "...people value ChatGPT most as an advisor rather than only for task completion." (p. 3)
    • Frame: Model as a Wise Counselor.
    • Projection: The social role of an advisor—which implies expertise, wisdom, trust, and a nuanced understanding of a person's situation—is mapped onto the AI's function of generating plausible-sounding text.
    • Acknowledgment: Unacknowledged. Stated as a finding about user perception ("people value... as").
    • Implications: Elevates the system from a tool to a trusted guide. This significantly increases perceived credibility and encourages reliance on its outputs for important decisions, while masking the fact that the AI has no judgment, ethics, or real-world understanding.
  4. AI Function as Human Cognition (Asking)

    • Quote: "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing." (p. 3)
    • Frame: AI Interaction as a Cognitive Act of Inquiry.
    • Projection: The human cognitive act of "asking"—which presupposes a conscious entity with knowledge to be queried—is used to categorize user prompts.
    • Acknowledgment: Lightly acknowledged with capitalization and framing ("can also be thought of in terms of"), but it quickly becomes a reified category.
    • Implications: This frames the AI as a repository of knowledge that understands questions. It constructs the AI as an oracle or a knower, hiding the mechanical process of retrieving and recombining text patterns from its training data.
  5. AI Function as Human Action (Doing)

    • Quote: "Doing (40% of usage...)...encompasses task-oriented interactions such as drafting text, planning, or programming..." (p. 3-4)
    • Frame: AI as a Capable Actor.
    • Projection: The quality of agency and the ability to perform actions in the world are mapped onto the AI's text-generation capabilities.
    • Acknowledgment: Same as above; a lightly acknowledged conceptual framework.
    • Implications: Positions the AI as an active agent that "does" things for the user, much like a personal assistant. This obscures the user's role in directing the tool and validating its output, creating a perception of autonomous task completion.
  6. AI as a Collaborative Partner

    • Quote: "ChatGPT helps improve judgment and productivity, especially in knowledge-intensive jobs." (p. 4)
    • Frame: Model as a Helpful Colleague.
    • Projection: The social behavior of "helping," which implies intent, cooperation, and an understanding of another's goals, is mapped onto the system's function as a tool.
    • Acknowledgment: Unacknowledged. Presented as a direct statement of the AI's effect.
    • Implications: This framing suggests a partnership between user and AI, elevating the AI from a passive tool to an active participant in cognitive work. It fosters a sense of dependency and trust, while downplaying the fact that "judgment" remains an exclusively human capacity.
  7. User Relationship with AI as Interpersonal Deepening

    • Quote: "And as people discover these and other benefits, usage deepens..." (p. 4)
    • Frame: User-AI Interaction as a Personal Relationship.
    • Projection: The concept of "deepening," typically used for relationships, understanding, or emotional connection, is applied to the frequency and variety of software usage.
    • Acknowledgment: Unacknowledged.
    • Implications: This subtly suggests that continued use leads to a more meaningful or profound connection with the technology, akin to a friendship. It romanticizes user engagement and masks the more mundane reality of habit formation or finding more applications for a tool.
  8. AI Function as Military or Labor Enlistment

    • Quote: "...where the model is enlisted to generate outputs or complete practical work." (p. 4)
    • Frame: Model as a Recruited Worker/Soldier.
    • Projection: The act of formally enrolling a person for a service or duty ("enlisting") is mapped onto the process of a user prompting a software program.
    • Acknowledgment: Unacknowledged.
    • Implications: This metaphor imbues the model with a sense of duty and capability. It's not just used; it is "enlisted," framing it as a reliable and competent agent ready to serve a higher purpose defined by the user.
  9. AI's Utility as an Emergent Discovery

    • Quote: "...increasing their activity over time through improved models and new use-case discovery." (p. 4)
    • Frame: AI Capabilities as a Natural Landscape to be Explored.
    • Projection: The process of users finding new ways to apply a tool is framed as "discovery," as if finding a new species or landform.
    • Acknowledgment: Unacknowledged.
    • Implications: Suggests that the AI's potential is a vast, inherent, and pre-existing territory. It obscures the reality that utility is created by human ingenuity and experimentation with the tool, not discovered within the tool.
  10. AI as a Social-Political Democratizing Force

    • Quote: "Usage gaps are closing as we increasingly democratize AI." (p. 3)
    • Frame: AI as a Political System or Resource to be Distributed.
    • Projection: The political concept of democratization (granting access to power, rights, or resources) is mapped onto the act of making a commercial software product widely available.
    • Acknowledgment: Unacknowledged; presented as a core mission.
    • Implications: This framing imbues the company's market expansion strategy with a noble, political purpose. It positions the AI not just as a technology but as a vehicle for social equity, which can deflect criticism of its commercial motives or potential harms.

Task 2: Source-Target Mapping Analysis

  1. Quote: "...most conversations focus on everyday tasks..." (p. 2)

    • Source Domain: Human Conversation. This includes two or more conscious agents, shared context, mutual understanding, intent, and authentic turn-taking.
    • Target Domain: User-AI Interaction. This involves a user inputting a text prompt and a system generating a statistically probable text string as a response.
    • Mapping: The relational structure of a human dialogue is projected onto the query-response function of the AI. The user's prompt is mapped to a "turn" in conversation, and the AI's output is mapped to an intelligent "reply." This invites the inference that the AI "understood" the prompt and "formulated" a response.
    • Conceals: This hides the fundamental asymmetry of the interaction. The AI has no intent, understanding, or awareness. It is a predictive machine completing a text pattern. The "conversation" frame conceals the mechanical, non-conscious nature of the system.
  2. Quote: "...track how consumer usage has evolved..." (p. 2)

    • Source Domain: Biological Evolution. This domain contains concepts of natural selection, adaptation, mutation, and gradual change over time in response to environmental pressures.
    • Target Domain: Changes in AI System Versions and User Behavior. This includes scheduled software updates by engineers and shifts in how users prompt the system over time.
    • Mapping: The idea of autonomous, adaptive change from the source domain is projected onto the target. It suggests that the AI and its uses are naturally getting more sophisticated on their own.
    • Conceals: It conceals the directed, intentional, and resource-intensive human labor involved. Software doesn't "evolve"; it is engineered. This metaphor makes corporate decisions and design choices seem like a natural, inevitable process.
  3. Quote: "...people value ChatGPT most as an advisor..." (p. 3)

    • Source Domain: Human Advisor. This role implies trust, expertise, ethical responsibility, and a deep understanding of the advisee's context and goals.
    • Target Domain: AI Text Generation Function. The system generates text based on patterns in its training data that may be relevant to a user's query.
    • Mapping: The trust and wisdom inherent in the advisor-advisee relationship are mapped onto the user-model interaction. The model's output is framed not as raw information but as considered "advice."
    • Conceals: This mapping hides the system's complete lack of accountability, consciousness, and genuine expertise. The "advice" is derivative, not experiential, and may be dangerously flawed, biased, or nonsensical. The "advisor" frame creates an unearned halo of reliability.
  4. Quote: "Patterns of use can also be thought of in terms of Asking..." (p. 3)

    • Source Domain: Human Inquiry. This involves a conscious mind formulating a question based on a lack of knowledge and directing it toward another mind believed to possess that knowledge.
    • Target Domain: User Submitting a Prompt. This is the act of typing text into a user interface to trigger a computational process.
    • Mapping: The intentionality and knowledge-seeking structure of human asking are projected onto the functional act of querying a database-like system. It frames the AI as a "knower" being asked.
    • Conceals: It conceals that the system does not "know" anything. It is a pattern-matching engine. The term "Asking" masks the fact that the user is not interacting with a mind but is instead providing an input to trigger a statistical algorithm.
  5. Quote: "ChatGPT helps improve judgment and productivity..." (p. 4)

    • Source Domain: Collaborative Assistance. A human helper understands the other person's goal, anticipates needs, and actively contributes to a shared effort.
    • Target Domain: Tool Integration into a Workflow. A user operates a tool to generate outputs, which they then integrate into their work.
    • Mapping: The intentional, cooperative agency of a "helper" is mapped onto the AI's functionality. This frames the AI as an active partner in the user's cognitive processes.
    • Conceals: This mapping obscures the user's sole agency. The user is doing all the thinking, directing, and judging. The tool is passive; it only provides raw material. Framing the AI as a "helper" subtly offloads cognitive responsibility from the user to the machine.
  6. Quote: "...as people discover these and other benefits, usage deepens..." (p. 4)

    • Source Domain: Interpersonal Relationships. Relationships "deepen" through increased intimacy, trust, shared experience, and mutual understanding.
    • Target Domain: User Habituation with a Tool. A user becomes more skilled with a tool, uses it more frequently, or applies it to a wider variety of tasks.
    • Mapping: The emotional and relational progression from the source domain is projected onto patterns of software utilization.
    • Conceals: It conceals the purely functional nature of the user's changing behavior. It imputes a relational quality to what is simply increased frequency or scope of use, making the technology seem more integrated into one's personal life than a simple tool like a spreadsheet.

Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")

  1. Quote: "The findings show that consumer adoption has broadened beyond early-user groups...that most conversations focus on everyday tasks like seeking information and practical guidance..." (p. 2)

    • Explanation Types: Empirical: "Cites patterns or statistical norms." The explanation is based on observed data about adoption rates and common query types.
    • Analysis (Why vs. How Slippage): This is primarily a how explanation (how people are using the tool, described by statistics). However, the unacknowledged "conversations" metaphor introduces a subtle why. People have "conversations" to achieve goals with other agents. This framing subtly implies the system is an agent people interact with for reasons, rather than a tool they operate to perform functions.
    • Rhetorical Impact on Audience: It makes the technology seem familiar and socially integrated. The audience understands the "how" through data, but internalizes a sense of collaborative agency ("conversation") about "why" the interaction works.
  2. Quote: "This widening adoption underscores our belief that access to AI should be treated as a basic right—a technology that people can access to unlock their potential and shape their own future." (p. 2)

    • Explanation Types: Reason-Based: "Explains using rationales or justifications." It provides a justification for a normative claim (AI as a basic right). It is also Functional: "Describes purpose within a system," casting AI's purpose in a societal context (to unlock potential).
    • Analysis (Why vs. How Slippage): This is purely a why explanation, but it explains the company's motivation, not the AI's. It shifts from explaining how the AI works to why the company's work matters. It frames the AI not as a mechanism but as a moral good, a key to human flourishing.
    • Rhetorical Impact on Audience: This frames the company's commercial goals in the lofty language of human rights and empowerment. It encourages the audience to see the AI not as a product but as a form of liberation, building immense goodwill and deflecting scrutiny.
  3. Quote: "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing." (p. 3)

    • Explanation Types: Theoretical: "Embeds behavior in a larger framework." It creates a new classification system (a theory) to make sense of user behavior data.
    • Analysis (Why vs. How Slippage): This presents a theoretical model for how people use the tool. However, the chosen terms ("Asking," "Doing") are fundamentally agential and intentional. This theoretical framework for how it's used is built on the language of why agents act. It explains machine usage patterns using a vocabulary designed for human consciousness.
    • Rhetorical Impact on Audience: The audience is given a simple, intuitive framework for understanding user behavior. However, this framework is built on an anthropomorphic foundation, causing the audience to subconsciously attribute intent and cognition to the user-AI dyad.
  4. Quote: "About half of messages (49%) are 'Asking,' a growing and highly rated category that shows people value ChatGPT most as an advisor..." (p. 3)

    • Explanation Types: Empirical: Cites statistics (49%). Dispositional: "Attributes tendencies or habits," by inferring user preference ("people value..."). Reason-Based: Justifies the value of the "Asking" category by linking it to the "advisor" role.
    • Analysis (Why vs. How Slippage): This passage beautifully demonstrates the slippage. It starts with a mechanistic how (49% of prompts are categorized as X). It then shifts to a dispositional why (people tend to value it this way). Finally, it offers a reason-based explanation (they value it because it acts as an advisor). The data on how it's used is rhetorically converted into an explanation of why it's valuable, with the agential "advisor" role as the bridge.
    • Rhetorical Impact on Audience: This is highly persuasive. It launders a subjective, metaphorical interpretation ("advisor") through objective-sounding data ("49%"), leading the audience to accept the anthropomorphic conclusion as a factual finding.
  5. Quote: "A key way that value is created is through decision support: ChatGPT helps improve judgment and productivity..." (p. 4)

    • Explanation Types: Functional: "Describes purpose within a system." It explains ChatGPT's function or purpose in a user's workflow.
    • Analysis (Why vs. How Slippage): The explanation is framed as a functional how (how value is created). But the specific verb choice ("helps") and the object ("judgment") push it into the territory of agential why. A tool doesn't "help" improve judgment; it provides data that a person then uses to judge. By phrasing it this way, the explanation attributes collaborative intent to the tool, explaining why it's effective in agential terms.
    • Rhetorical Impact on Audience: This builds enormous trust. The audience is led to believe the AI is not just a provider of information but an active partner in a core human cognitive process. This dramatically increases its perceived power and reliability.
  6. Quote: "We used automated tools that categorized usage patterns without need for human review of message content." (p. 5)

    • Explanation Types: Functional: "Describes purpose within a system." It describes how the research method worked. Genetic: "Traces development or origin," by explaining how the categories were derived.
    • Analysis (Why vs. How Slippage): This is a purely mechanistic how explanation. It is a perfect foil to the rest of the article. Here, when discussing methodology and privacy, the language becomes precise, technical, and non-anthropomorphic ("automated tools," "categorized," "usage patterns"). There is no agency slippage.
    • Rhetorical Impact on Audience: This precise language builds credibility for the research methodology. However, its contrast with the rest of the text highlights that anthropomorphism is a rhetorical choice used when describing the product's benefits, but avoided when describing technical methods.

Task 4: AI Literacy in Practice: Reframing Anthropomorphic Language

  1. Original Quote: "...most conversations focus on everyday tasks..."

    • Reframed Explanation: "Our analysis of usage data shows that most user prompts and their corresponding system outputs relate to everyday tasks."
  2. Original Quote: "...people value ChatGPT most as an advisor rather than only for task completion."

    • Reframed Explanation: "User feedback indicates that the system's outputs are frequently valued for providing information and perspectives that inform decision-making, in addition to completing specific text-generation tasks."
  3. Original Quote: "ChatGPT helps improve judgment and productivity..."

    • Reframed Explanation: "Users report that incorporating the system's text outputs into their workflows can enhance productivity and support their own judgment in decision-making processes."
  4. Original Quote: "...track how consumer usage has evolved since ChatGPT's launch..."

    • Reframed Explanation: "...track how patterns of consumer usage have changed following software updates and wider adoption..."
  5. Original Quote: "...where the model is enlisted to generate outputs or complete practical work."

    • Reframed Explanation: "...where users employ the model to generate outputs or assist with practical work."
  6. Original Quote: "...as people discover these and other benefits, usage deepens..."

    • Reframed Explanation: "As users become more familiar with the system's capabilities, their frequency and variety of use tend to increase."
  7. Original Quote: "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing."

    • Reframed Explanation: "We can categorize usage patterns into three main types: information retrieval (queries), task execution (content generation), and open-ended text exploration."

Critical Observations

  • Agency Slippage: The text masterfully shifts between objective, mechanistic explanations of its research methods ("automated tools categorized usage patterns") and agential, anthropomorphic descriptions of the product's function ("ChatGPT helps improve judgment"). This slippage is not accidental; it occurs strategically when framing the product's value, effectively constructing an illusion of a collaborative agent while maintaining a veneer of technical objectivity.
  • Metaphor-Driven Trust: Biological metaphors ("evolve") and social-cognitive metaphors ("conversation," "advisor," "helps") are the primary vehicles for building trust. They make the alien technology of a large language model feel familiar, natural, and benign. An "advisor" is trustworthy; a "tool" is merely functional. This metaphorical framing is a powerful rhetorical device for shaping public perception and encouraging adoption.
  • Obscured Mechanics: The entire suite of metaphors conceals the actual mechanics of the LLM: probabilistic token prediction based on patterns in a massive dataset. "Judgment," "conversation," and "understanding" are all stand-ins that prevent a clear public understanding of how the system works. This mystification makes it harder for users to assess risks, identify biases, or maintain appropriate skepticism.
  • Context Sensitivity: The most anthropomorphic language is used when describing the user's relationship with the technology and its benefits. The most mechanistic language is used when describing the data analysis methodology to establish scientific credibility. This demonstrates a clear rhetorical strategy: use agential framing for marketing and product value propositions, and precise framing for technical and ethical assurances (like privacy).

Conclusion

This analysis reveals that the OpenAI blog post, while presenting empirical data, relies heavily on a consistent set of metaphorical and anthropomorphic patterns to shape the public's understanding of ChatGPT. The primary patterns frame the AI artifact as a social partner (an "advisor" engaged in "conversation"), a cognitive agent (that "helps improve judgment"), and a natural organism (whose use "evolves" and "deepens"). These choices are not mere stylistic flourishes; they are the linguistic bedrock upon which an "illusion of mind" is constructed.

By mapping the relational structures of human conversation, collaboration, and even biology onto the system's computational processes, the text invites readers to perceive the AI as an agent rather than an artifact. This systematically obscures the system's mechanical nature—its reliance on statistical pattern-matching—and replaces it with a more palatable and trustworthy narrative of partnership. The rhetorical slippage between describing how the system is used (via empirical data) and why it is valuable (via agential metaphors) is the engine of this illusion.

The implications for AI literacy are profound. Such language encourages over-trust, discourages critical evaluation of outputs, and outsources cognitive responsibility from the human user to the machine. As demonstrated in the reframed examples, responsible communication requires a deliberate effort to delineate between observed behavior and attributed mental states. The key principle is to situate agency and cognition firmly with the human user, while describing the AI's function in precise, mechanistic terms. Communicators can actively foster AI literacy by describing systems as tools that are used by people to augment tasks, rather than as agents that help people to think.


Structured Output Example

Below is an example of structured output used to constrain the model's response so it can be saved in a database. The output is not from the same session as the analysis above; it comes from a separate API call using the same prompt.
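Before saving such structured output to a database, each record can be validated against the expected schema. A minimal sketch in Python, assuming the field names shown in the example; the `validate_entry` helper and the sample record are illustrative, not part of any API:

```python
import json

# Keys each metaphorAudit entry is expected to carry, mirroring the
# field names in the structured-output example (an assumption, not a
# published schema).
REQUIRED_KEYS = {"title", "quote", "frame", "projection",
                 "acknowledgment", "implications"}

def validate_entry(entry: dict) -> bool:
    """Return True if the entry has exactly the expected non-empty string fields."""
    return (set(entry) == REQUIRED_KEYS
            and all(isinstance(v, str) and v for v in entry.values()))

# A hypothetical response body, shaped like the example output.
raw = json.dumps({
    "metaphorAudit": [
        {
            "title": "Interaction as Social Conversation",
            "quote": "most conversations focus on everyday tasks",
            "frame": "Model as a conversational partner",
            "projection": "Mutual, turn-based dialogue.",
            "acknowledgment": "Presented as direct description.",
            "implications": "Encourages treating the system as a social agent."
        }
    ]
})

parsed = json.loads(raw)
valid = [e for e in parsed["metaphorAudit"] if validate_entry(e)]
print(len(valid))
```

Constraining the output to a fixed schema at generation time, then re-validating it before insertion, guards against the occasional malformed or partially filled response reaching the database.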

json

{
  "metaphorAudit": [
    {
      "title": "Interaction as Social Conversation",
      "quote": "most conversations focus on everyday tasks like seeking information and practical guidance",
      "frame": "Model as a conversational partner",
      "projection": "The human quality of mutual, turn-based dialogue and shared understanding.",
      "acknowledgment": "Presented as direct description. The term 'conversations' is used repeatedly without quotes or qualification.",
      "implications": "This framing encourages users to treat the system as a social agent, potentially leading to over-trust in its outputs and a misunderstanding of its mechanistic, probabilistic nature. It lowers the user's critical guard."
    },
    {
      "title": "System as a Trusted Professional",
      "quote": "people value ChatGPT most as an advisor rather than only for task completion.",
      "frame": "Model as a human advisor",
      "projection": "Expertise, trustworthiness, and the capacity for reasoned judgment.",
      "acknowledgment": "Presented as a direct description of user perception, but adopts the anthropomorphic framing itself.",
      "implications": "Elevates the tool's status from a utility to a source of counsel. This can create dependency and obscure the fact that the 'advice' is synthesized statistical patterns, not expert reasoning or ethical consideration."
    },
    {
      "title": "System Improvement as Biological Evolution",
      "quote": "track how consumer usage has evolved since ChatGPT's launch three years ago.",
      "frame": "Technological development as a natural process",
      "projection": "Natural selection, organic growth, and an inevitable progression toward greater complexity.",
      "acknowledgment": "Presented as direct description.",
      "implications": "This metaphor masks the highly engineered, resource-intensive, and goal-directed nature of AI development. It frames progress as natural and unavoidable, potentially discouraging critical questions about development priorities and resource allocation."
    },
    {
      "title": "System as a Cognitive Enhancer",
      "quote": "ChatGPT helps improve judgment and productivity, especially in knowledge-intensive jobs.",
      "frame": "Model as a cognitive partner",
      "projection": "The ability to assist with or augment a core human cognitive faculty (judgment).",
      "acknowledgment": "Presented as a direct, factual claim.",
      "implications": "Blurs the line between a tool that provides information and an agent that participates in reasoning. It suggests the model itself has a form of judgment, which can lead users to offload critical thinking and verification responsibilities to the system."
    },
    {
      "title": "User Interaction as Human Action Verbs",
      "quote": "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing.",
      "frame": "User prompts as intentional human actions",
      "projection": "Human intentionality, agency, and modes of being (asking for help, performing a task, self-expression).",
      "acknowledgment": "Acknowledged as a framework ('can also be thought of'), but the capitalized, active verbs anthropomorphize the categories of interaction.",
      "implications": "This framework centers the analysis on human-like actions, reinforcing the idea of a social, collaborative interaction rather than a user operating a tool. It makes the system feel more like a partner in these actions."
    },
    {
      "title": "System as a Subordinate Agent",
      "quote": "where the model is enlisted to generate outputs or complete practical work.",
      "frame": "Model as a recruit or helper",
      "projection": "Willingness, service, and the ability to be directed toward a goal like a human subordinate.",
      "acknowledgment": "Presented as direct description.",
      "implications": "This frame portrays the AI as a capable and compliant assistant. While less overtly anthropomorphic than 'advisor', it still implies a level of understanding and agency in carrying out tasks, rather than simply executing a computational process."
    },
    {
      "title": "Usage Growth as Deepening Relationship",
      "quote": "usage deepens—with user cohorts increasing their activity over time",
      "frame": "User engagement as relational depth",
      "projection": "The development of deeper understanding, intimacy, or connection, typically associated with human relationships.",
      "acknowledgment": "Presented as direct description.",
      "implications": "This metaphor suggests that increased use is equivalent to a more profound or meaningful connection with the technology, obscuring the more mundane reality of habit formation or the discovery of new functional applications."
    },
    {
      "title": "Technology as a Key to Human Potential",
      "quote": "a technology that people can access to unlock their potential and shape their own future.",
      "frame": "AI as a liberating force",
      "projection": "The power to release latent human abilities, creativity, and self-determination.",
      "acknowledgment": "Presented as a direct statement of belief.",
      "implications": "This highly aspirational framing positions AI not just as a tool but as a transformative, almost magical key. It raises the stakes of access and frames the technology as an essential component of personal fulfillment, potentially downplaying risks or costs."
    },
    {
      "title": "Feature Release as Physical Shipment",
      "quote": "as the product changes and new capabilities ship.",
      "frame": "Software update as manufacturing",
      "projection": "The concreteness and finality of a physical product being sent out from a factory.",
      "acknowledgment": "Presented as direct description; common industry jargon.",
      "implications": "This jargon, while common, reifies abstract software code into a tangible 'capability' that is delivered. It makes the process of software development seem more like a discrete, industrial process, masking the fluid and iterative nature of software deployment and maintenance."
    },
    {
      "title": "Automated Tools as Stand-in for Human Agents",
      "quote": "We used automated tools that categorized usage patterns without need for human review of message content.",
      "frame": "Software as an autonomous worker",
      "projection": "The ability to perform a task (categorization) that would otherwise require human labor and judgment.",
      "acknowledgment": "Presented as direct description.",
      "implications": "In a context of privacy, this frames technology as a safe, non-sentient replacement for a human reviewer. It subtly reinforces the idea of AI performing human-like tasks, even while its purpose is to highlight the *absence* of human consciousness."
    },
    {
      "title": "User Discovery as Exploration",
      "quote": "And as people discover these and other benefits, usage deepens...",
      "frame": "Finding use-cases as geographical discovery",
      "projection": "The act of uncovering pre-existing, inherent features of a natural landscape.",
      "acknowledgment": "Presented as a direct description of user behavior.",
      "implications": "This frames the AI's 'benefits' as intrinsic properties waiting to be found, rather than affordances that emerge from the interaction between a user's goal and the system's designed functionalities. It naturalizes the technology's utility."
    }
  ],
  "sourceTargetMapping": [
    {
      "quote": "most conversations focus on everyday tasks like seeking information and practical guidance",
      "sourceDomain": "Human Conversation",
      "targetDomain": "User-System Interaction",
      "mapping": "The structure of turn-taking, question-answering, and information exchange from human dialogue is mapped onto the prompt-response cycles of an LLM. It invites the inference that there is shared context, understanding, and intent.",
      "conceals": "It conceals the asymmetry of the interaction. The LLM has no understanding, beliefs, or intent; it is performing statistical pattern matching to generate a probable sequence of tokens. The 'conversation' lacks the mutual consciousness that defines human dialogue."
    },
    {
      "quote": "people value ChatGPT most as an advisor rather than only for task completion.",
      "sourceDomain": "Human Advisor (e.g., consultant, mentor)",
      "targetDomain": "LLM Text Generation Function",
      "mapping": "The relational structure of trust, expertise, and guidance-seeking is projected from a human professional relationship onto the user's interaction with the LLM. It implies the LLM possesses qualities like wisdom and reliability.",
      "conceals": "This mapping conceals the LLM's lack of grounding in reality, its absence of true expertise (it has patterns, not knowledge), and its inability to have ethical commitments or fiduciary responsibility. It's a text synthesizer, not a fiduciary."
    },
    {
      "quote": "track how consumer usage has evolved since ChatGPT's launch three years ago.",
      "sourceDomain": "Biological Evolution",
      "targetDomain": "Technological Change",
      "mapping": "The properties of gradual, adaptive change and increasing complexity from biology are mapped onto the development and use of AI. It suggests a natural, almost un-directed process of improvement.",
      "conceals": "This hides the highly intentional, capital-intensive, and goal-directed engineering process. Unlike natural evolution, AI development is driven by specific corporate objectives, design choices, and data curation, not random mutation and natural selection."
    },
    {
      "quote": "ChatGPT helps improve judgment and productivity, especially in knowledge-intensive jobs.",
      "sourceDomain": "Cognitive Partnership",
      "targetDomain": "Tool-Assisted Work",
      "mapping": "The concept of one agent (a person) helping another agent improve a mental faculty (judgment) is mapped onto the tool's function. It implies the AI participates in the cognitive process of judging.",
      "conceals": "It conceals the distinction between providing information and exercising judgment. The tool generates outputs based on patterns in data; the user is the sole locus of judgment. The metaphor blurs this critical line, suggesting the tool itself is a source of judgment."
    },
    {
      "quote": "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing.",
      "sourceDomain": "Human Modes of Action/Being",
      "targetDomain": "User Prompt Categories",
      "mapping": "Fundamental categories of human agency and social action are used to classify text prompts. This projects the intentionality and social nature of these actions onto the user-system interaction.",
      "conceals": "This neat categorization conceals the messy reality of user intent and the purely functional nature of the system's response. The system isn't 'doing' or 'expressing' anything; it's generating text. The anthropocentric labels frame the interaction in human terms."
    },
    {
      "quote": "where the model is enlisted to generate outputs or complete practical work.",
      "sourceDomain": "Military or Labor Recruitment",
      "targetDomain": "Executing a Program",
      "mapping": "The idea of recruiting a person for a specific service or task is mapped onto the process of a user prompting the model. It implies the model is an agent that can be assigned a mission.",
      "conceals": "It conceals the mechanical nature of the operation. The model is not 'enlisted'; a computational process is initiated. The metaphor attributes agency and service to what is fundamentally a function call."
    },
    {
      "quote": "usage deepens—with user cohorts increasing their activity over time",
      "sourceDomain": "Interpersonal Relationships",
      "targetDomain": "Frequency of Use",
      "mapping": "The concept of a relationship growing more profound and meaningful over time is mapped onto the statistical measure of increased user interaction with a software product.",
      "conceals": "It conceals that increased usage may not correlate with 'deeper' understanding but rather with habit, dependency, or the tool becoming embedded in a workflow. It equates a quantitative metric (frequency) with a qualitative state (depth)."
    },
    {
      "quote": "a technology that people can access to unlock their potential and shape their own future.",
      "sourceDomain": "A Key Unlocking a Door/Chest",
      "targetDomain": "Using a Software Tool",
      "mapping": "The structure of a key (the tool) opening a lock to reveal something valuable and previously inaccessible (potential) is mapped onto the use of AI. It implies potential is a latent thing waiting to be released.",
      "conceals": "It conceals that the tool has no access to a user's 'potential.' It is a generative tool whose outputs are entirely dependent on the user's skill, goals, and prompts. Agency remains with the user; the tool doesn't 'unlock' anything on its own."
    },
    {
      "quote": "as the product changes and new capabilities ship.",
      "sourceDomain": "Manufacturing and Logistics",
      "targetDomain": "Software Deployment",
      "mapping": "The process of a tangible good leaving a factory and being transported to customers is mapped onto the process of updating software on a server.",
      "conceals": "This conceals the immaterial and often continuous nature of software updates. It makes the 'capabilities' seem like discrete, finished objects rather than evolving and potentially unstable codebases."
    }
  ],
  "explanationAudit": [
    {
      "quote": "The findings show that consumer adoption has broadened beyond early-user groups, shrinking the gender gap in particular;",
      "explanationTypes": [
        {
          "type": "Empirical",
          "definition": "Cites patterns or statistical norms."
        }
      ],
      "analysis": "This is a purely mechanistic explanation of 'how' usage patterns are changing. It describes a statistical trend based on observed data without attributing any 'why' or intent to the system or, in this case, even to the users. It is a factual description of behavior.",
      "rhetoricalImpact": "This framing establishes credibility by grounding the article in objective, data-driven analysis. It serves as a neutral foundation before the text shifts to more interpretive and anthropomorphic explanations of what these patterns mean."
    },
    {
      "quote": "most conversations focus on everyday tasks like seeking information and practical guidance;",
      "explanationTypes": [
        {
          "type": "Empirical",
          "definition": "Cites patterns or statistical norms."
        },
        {
          "type": "Dispositional",
          "definition": "Attributes tendencies or habits."
        }
      ],
      "analysis": "This explanation begins to slip from 'how' to 'why'. While empirically stating what tasks are common, the use of 'conversations' and 'guidance' frames the behavior agentially. It explains *how* the tool is used by describing it as *why* people engage a partner: for guidance. The slippage is from describing a data pattern to describing a social motivation.",
      "rhetoricalImpact": "It normalizes the idea of having a 'conversation' with AI for 'guidance.' This makes the technology seem approachable and socially integrated, framing it as a helpful companion for daily life, which encourages adoption and trust."
    },
    {
      "quote": "This widening adoption underscores our belief that access to AI should be treated as a basic right...",
      "explanationTypes": [
        {
          "type": "Reason-Based",
          "definition": "Explains using rationales or justifications."
        }
      ],
      "analysis": "This is an explicit 'why' explanation, but it explains the company's reasoning, not the AI's. It uses the empirical data of adoption ('how') as a premise to justify a normative claim ('why we believe this'). It connects the mechanics of user growth to the agential world of rights and ethics.",
      "rhetoricalImpact": "This elevates the technology from a commercial product to a matter of social justice. By framing access as a 'basic right,' it positions the company as a moral actor and makes the technology seem fundamentally essential to modern life, akin to literacy or healthcare."
    },
    {
      "quote": "track how consumer usage has evolved since ChatGPT's launch three years ago.",
      "explanationTypes": [
        {
          "type": "Genetic",
          "definition": "Traces development or origin."
        }
      ],
      "analysis": "This explanation focuses on 'how it came to be.' The use of 'evolved' frames this history mechanistically but through a biological metaphor. It describes the developmental path of usage patterns over time, implying a natural, progressive change rather than one shaped by specific feature rollouts or marketing.",
      "rhetoricalImpact": "The word 'evolved' suggests that the growth in usage is a natural, organic, and positive progression. This makes the technology's increasing integration into society feel inevitable and beneficial, rather than a result of deliberate corporate strategy."
    },
    {
      "quote": "people value ChatGPT most as an advisor rather than only for task completion.",
      "explanationTypes": [
        {
          "type": "Reason-Based",
          "definition": "Explains using rationales or justifications."
        },
        {
          "type": "Dispositional",
          "definition": "Attributes tendencies or habits."
        }
      ],
      "analysis": "This is a quintessential 'why' explanation, attributing a specific reason (valuing its 'advisor' role) for user behavior. The slippage is profound: it moves from the mechanical function of text generation ('how') to a perceived social role ('why'). It explains user behavior by creating a persona for the AI.",
      "rhetoricalImpact": "This framing builds significant trust and personal connection to the product. An 'advisor' is an agent you trust with important decisions. This perception encourages deeper integration into users' lives and work, while obscuring the system's nature as a probabilistic tool without genuine understanding or expertise."
    },
    {
      "quote": "the model is enlisted to generate outputs or complete practical work.",
      "explanationTypes": [
        {
          "type": "Functional",
          "definition": "Describes purpose within a system."
        }
      ],
      "analysis": "This is primarily a functional or 'how it works' explanation. It describes the purpose for which the model is used. However, the agential verb 'enlisted' provides a subtle slip towards 'why,' framing the functional relationship as one of recruitment and service, implying a cooperative agent.",
      "rhetoricalImpact": "The language makes the AI seem like a helpful, compliant assistant. It softens the purely technical nature of the interaction, making the tool feel more like a partner that has been called upon to help."
    },
    {
      "quote": "A key way that value is created is through decision support: ChatGPT helps improve judgment and productivity...",
      "explanationTypes": [
        {
          "type": "Functional",
          "definition": "Describes purpose within a system."
        }
      ],
      "analysis": "This is another functional explanation of 'how' value is created. But it slips into an agential framing by claiming the tool 'helps improve judgment.' Judgment is a human cognitive process. By stating the tool 'helps' it, the explanation frames the AI as an active participant in the user's cognition, not just a passive provider of information.",
      "rhetoricalImpact": "This positions the AI as an indispensable cognitive tool, almost a prosthetic for the mind. It builds credibility by suggesting the tool can enhance a core, high-value human skill, making it seem essential for knowledge workers."
    },
    {
      "quote": "as people discover these and other benefits, usage deepens...",
      "explanationTypes": [
        {
          "type": "Dispositional",
          "definition": "Attributes tendencies or habits."
        },
        {
          "type": "Reason-Based",
          "definition": "Explains using rationales or justifications."
        }
      ],
      "analysis": "This explains 'why' usage increases. It attributes the change to a rational process in users: they discover a benefit (a reason) and therefore tend to (disposition) use the product more. The slip is in framing the tool's affordances as 'benefits' to be 'discovered,' making them sound like inherent, natural properties of the AI.",
      "rhetoricalImpact": "This makes user adoption seem like a natural consequence of the tool's intrinsic value. It portrays the company not as pushing a product, but as having created a valuable resource that people naturally gravitate towards once they find it."
    },
    {
      "quote": "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing.",
      "explanationTypes": [
        {
          "type": "Theoretical",
          "definition": "Embeds behavior in a larger framework."
        }
      ],
      "analysis": "This explanation proposes a theoretical framework for 'how' to understand usage data. By choosing agential, human-centric terms ('Asking,' 'Doing'), the theory itself imposes a 'why' (intentional action) onto the 'how' (statistical patterns of prompts).",
      "rhetoricalImpact": "This framework makes the data interpretable in familiar human terms, making the technology seem less alien. It structures the audience's understanding of the tool around social actions, reinforcing the illusion of the AI as a partner in those actions."
    },
    {
      "quote": "We used automated tools that categorized usage patterns without need for human review...",
      "explanationTypes": [
        {
          "type": "Functional",
          "definition": "Describes purpose within a system."
        }
      ],
      "analysis": "This is a clear, mechanistic 'how it works' explanation. It describes the function of the automated tools. There is no slippage here; the language is deliberately non-agential ('automated tools') to emphasize the absence of a human mind for privacy reasons.",
      "rhetoricalImpact": "This builds trust through transparency and a non-anthropomorphic framing. By emphasizing the mechanical nature of the process, it reassures the user that their 'conversations' are not being overheard by a conscious entity, thereby managing the potential downsides of the 'AI as partner' metaphor used elsewhere."
    }
  ],
  "reframedLanguage": [
    {
      "originalQuote": "people value ChatGPT most as an advisor rather than only for task completion.",
      "reframedExplanation": "Data indicates that users report high value for outputs that synthesize information for decision-making, in addition to using the tool for discrete task completion."
    },
    {
      "originalQuote": "most conversations focus on everyday tasks like seeking information and practical guidance",
      "reframedExplanation": "The majority of user-system interactions are initiated with prompts related to everyday tasks, such as requests for information or procedural instructions."
    },
    {
      "originalQuote": "ChatGPT helps improve judgment and productivity...",
      "reframedExplanation": "Users report that incorporating the tool's text outputs into their workflow can augment their decision-making process and increase their productivity."
    },
    {
      "originalQuote": "track how consumer usage has evolved since ChatGPT's launch...",
      "reframedExplanation": "track how patterns of consumer usage have changed since ChatGPT's launch..."
    },
    {
      "originalQuote": "...usage deepens—with user cohorts increasing their activity over time...",
      "reframedExplanation": "...usage frequency increases, with user cohorts demonstrating higher rates of interaction over time..."
    },
    {
      "originalQuote": "...where the model is enlisted to generate outputs or complete practical work.",
      "reframedExplanation": "...where the system is prompted by the user to generate outputs or assist with practical work."
    },
    {
      "originalQuote": "And as people discover these and other benefits, usage deepens...",
      "reframedExplanation": "And as people identify new applications for the tool that they find beneficial, their frequency of use tends to increase..."
    },
    {
      "originalQuote": "a technology that people can access to unlock their potential...",
      "reframedExplanation": "a technology that provides tools people can use to support their creative and professional goals..."
    }
  ],
  "criticalObservations": {
    "agencySlippage": "The text exhibits consistent agency slippage, shifting between mechanistic and agential explanations based on rhetorical need. When discussing privacy-sensitive methods, the language is mechanistic ('automated tools categorized'). When discussing user value and engagement, it becomes highly agential ('conversations' with an 'advisor' that 'helps improve judgment'). This strategic shifting allows the author to build trust and rapport without appearing naive about the technology's core mechanics, creating a carefully constructed illusion of a benign, helpful agent.",
    "metaphorDrivenTrust": "Biological and cognitive metaphors are central to building trust and credibility. Describing the system as an 'advisor' and its effect as 'improving judgment' frames it as a wise, human-like partner, encouraging users to trust its outputs. Similarly, the 'evolution' metaphor presents the technology's progress as natural and inevitable, fostering a sense of faith in its developmental trajectory. This language shortcuts technical understanding, replacing it with social heuristics of trust.",
    "obscuredMechanics": "The dominant metaphorical frames consistently obscure the system's underlying mechanics. 'Conversation' hides the probabilistic, non-sentient nature of text generation. 'Advisor' hides the lack of true expertise or ethical grounding. 'Evolved' and 'discover' obscure the deliberate, resource-intensive engineering choices and corporate strategies that guide the technology's development and the affordances it offers. The user is encouraged to think about what the AI *is* (an advisor) rather than *how it works* (a statistical model).",
    "contextSensitivity": "The use of metaphor is highly context-sensitive. In the main narrative promoting adoption and value, anthropomorphic metaphors (advisor, conversation, partner) are prevalent. In the methodological note about privacy, the language becomes starkly mechanical and non-anthropomorphic ('automated tools,' 'categorized usage patterns'). This demonstrates a deliberate rhetorical awareness: agency is invoked to create intimacy and trust, but it is explicitly disavowed when it would create alarm (i.e., the idea of an agent 'reading' your messages)."
  },
  "conclusion": "The discourse in this article systematically constructs an 'illusion of mind' in ChatGPT through the consistent and unacknowledged use of anthropomorphic metaphors. The primary patterns frame the AI system as a social agent—a conversational partner, a trusted advisor, a cognitive peer—and its technological development as a natural, biological process of evolution. This linguistic strategy serves to build trust, encourage adoption, and normalize the technology's integration into users' lives by mapping the complex, alien functions of a large language model onto familiar, reliable structures of human social interaction and the natural world.\n\nThis construction of a persona obscures the system's actual mechanics, replacing a potentially difficult technical explanation with an intuitive social one. By talking about an 'advisor' that 'helps judgment,' the text encourages users to offload cognitive and critical responsibilities to a system that has no genuine understanding, beliefs, or accountability. The implications for AI literacy are significant; users are taught to interact with the system as if it were an agent, making them less likely to question its outputs, verify its claims, or appreciate its inherent limitations and biases. This framing prioritizes user experience and adoption over accurate technical understanding.\n\nAs demonstrated in the reframing exercises, responsible communication about AI requires a deliberate effort to delineate between observed functionality and attributed intent. Communicators can actively foster AI literacy by focusing on the user's actions and the system's functions, using precise verbs that describe computational processes rather than agential behaviors ('generates text' vs. 'says'), and consistently framing the AI as a tool being operated by a person, not a partner collaborating with one. 
This approach avoids constructing a misleading 'illusion of mind' and instead empowers users with a more accurate mental model of the technology they are using."
}

License

License: Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0