How people are using ChatGPT
This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (the structure-mapping account of metaphor) and the philosophy of social science (Robert Brown's typology of explanation). All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs—not guarantees of factual accuracy or authorial intent.
Source Document: How people are using ChatGPT
Date Analyzed: 2025-10-11
Model Used: Gemini 2.5 Pro
Framework: Metaphor & Anthropomorphism Audit
Token Usage: 18079 total (10201 input / 7878 output)
Task 1: Metaphor and Anthropomorphism Audit
Here are 10 major metaphorical patterns identified in the text that contribute to constructing an "illusion of mind."
- Descriptive title: AI Interaction as Human Conversation
- Quote: "...most conversations focus on everyday tasks like seeking information and practical guidance..." (p. 2)
- Frame: Model as Conversational Partner.
- Projection: The human qualities of mutual understanding, shared context, and intelligent turn-taking are mapped onto the AI system's text-processing function.
- Acknowledgment: Unacknowledged. Presented as a direct, factual description of the user-system interaction.
- Implications: This framing normalizes the interaction, making the AI seem like a familiar social agent rather than a complex statistical tool. It fosters trust but obscures the system's lack of genuine understanding, potentially leading users to overestimate its capabilities and reliability.
- Descriptive title: AI Development as Biological Evolution
- Quote: "...track how consumer usage has evolved since ChatGPT's launch three years ago." (p. 2)
- Frame: AI System as a Living Organism.
- Projection: The biological process of gradual, adaptive change and increasing complexity is mapped onto the engineered process of software updates and changing user behavior patterns.
- Acknowledgment: Unacknowledged. "Evolved" is used as a neutral descriptor of change.
- Implications: This metaphor suggests that AI's improvement is a natural, inevitable, and agentless process, similar to biological evolution. It obscures the direct role of human engineers, corporate strategy, and data inputs in shaping the system's development.
- Descriptive title: AI as a Social Advisor
- Quote: "...people value ChatGPT most as an advisor rather than only for task completion." (p. 3)
- Frame: Model as a Wise Counselor.
- Projection: The social role of an advisor—which implies expertise, wisdom, trust, and a nuanced understanding of a person's situation—is mapped onto the AI's function of generating plausible-sounding text.
- Acknowledgment: Unacknowledged. Stated as a finding about user perception ("people value... as").
- Implications: Elevates the system from a tool to a trusted guide. This significantly increases perceived credibility and encourages reliance on its outputs for important decisions, while masking the fact that the AI has no judgment, ethics, or real-world understanding.
- Descriptive title: AI Function as Human Cognition (Asking)
- Quote: "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing." (p. 3)
- Frame: AI Interaction as a Cognitive Act of Inquiry.
- Projection: The human cognitive act of "asking"—which presupposes a conscious entity with knowledge to be queried—is used to categorize user prompts.
- Acknowledgment: Lightly acknowledged with capitalization and framing ("can also be thought of in terms of"), but it quickly becomes a reified category.
- Implications: This frames the AI as a repository of knowledge that understands questions. It constructs the AI as an oracle or a knower, hiding the mechanical process of retrieving and recombining text patterns from its training data.
- Descriptive title: AI Function as Human Action (Doing)
- Quote: "Doing (40% of usage...)...encompasses task-oriented interactions such as drafting text, planning, or programming..." (p. 3-4)
- Frame: AI as a Capable Actor.
- Projection: The quality of agency and the ability to perform actions in the world are mapped onto the AI's text-generation capabilities.
- Acknowledgment: Same as above; a lightly acknowledged conceptual framework.
- Implications: Positions the AI as an active agent that "does" things for the user, much like a personal assistant. This obscures the user's role in directing the tool and validating its output, creating a perception of autonomous task completion.
- Descriptive title: AI as a Collaborative Partner
- Quote: "ChatGPT helps improve judgment and productivity, especially in knowledge-intensive jobs." (p. 4)
- Frame: Model as a Helpful Colleague.
- Projection: The social behavior of "helping," which implies intent, cooperation, and an understanding of another's goals, is mapped onto the system's function as a tool.
- Acknowledgment: Unacknowledged. Presented as a direct statement of the AI's effect.
- Implications: This framing suggests a partnership between user and AI, elevating the AI from a passive tool to an active participant in cognitive work. It fosters a sense of dependency and trust, while downplaying the fact that "judgment" remains an exclusively human capacity.
- Descriptive title: User Relationship with AI as Interpersonal Deepening
- Quote: "And as people discover these and other benefits, usage deepens..." (p. 4)
- Frame: User-AI Interaction as a Personal Relationship.
- Projection: The concept of "deepening," typically used for relationships, understanding, or emotional connection, is applied to the frequency and variety of software usage.
- Acknowledgment: Unacknowledged.
- Implications: This subtly suggests that continued use leads to a more meaningful or profound connection with the technology, akin to a friendship. It romanticizes user engagement and masks the more mundane reality of habit formation or finding more applications for a tool.
- Descriptive title: AI Function as Military or Labor Enlistment
- Quote: "...where the model is enlisted to generate outputs or complete practical work." (p. 4)
- Frame: Model as a Recruited Worker/Soldier.
- Projection: The act of formally enrolling a person for a service or duty ("enlisting") is mapped onto the process of a user prompting a software program.
- Acknowledgment: Unacknowledged.
- Implications: This metaphor imbues the model with a sense of duty and capability. It's not just used; it is "enlisted," framing it as a reliable and competent agent ready to serve a higher purpose defined by the user.
- Descriptive title: AI's Utility as an Emergent Discovery
- Quote: "...increasing their activity over time through improved models and new use-case discovery." (p. 4)
- Frame: AI Capabilities as a Natural Landscape to be Explored.
- Projection: The process of users finding new ways to apply a tool is framed as "discovery," as if finding a new species or landform.
- Acknowledgment: Unacknowledged.
- Implications: Suggests that the AI's potential is a vast, inherent, and pre-existing territory. It obscures the reality that utility is created by human ingenuity and experimentation with the tool, not discovered within the tool.
- Descriptive title: AI as a Social-Political Democratizing Force
- Quote: "Usage gaps are closing as we increasingly democratize AI." (p. 3)
- Frame: AI as a Political System or Resource to be Distributed.
- Projection: The political concept of democratization (granting access to power, rights, or resources) is mapped onto the act of making a commercial software product widely available.
- Acknowledgment: Unacknowledged; presented as a core mission.
- Implications: This framing imbues the company's market expansion strategy with a noble, political purpose. It positions the AI not just as a technology but as a vehicle for social equity, which can deflect criticism of its commercial motives or potential harms.
Task 2: Source-Target Mapping Analysis
- Quote: "...most conversations focus on everyday tasks..." (p. 2)
- Source Domain: Human Conversation. This includes two or more conscious agents, shared context, mutual understanding, intent, and authentic turn-taking.
- Target Domain: User-AI Interaction. This involves a user inputting a text prompt and a system generating a statistically probable text string as a response.
- Mapping: The relational structure of a human dialogue is projected onto the query-response function of the AI. The user's prompt is mapped to a "turn" in conversation, and the AI's output is mapped to an intelligent "reply." This invites the inference that the AI "understood" the prompt and "formulated" a response.
- Conceals: This hides the fundamental asymmetry of the interaction. The AI has no intent, understanding, or awareness. It is a predictive machine completing a text pattern. The "conversation" frame conceals the mechanical, non-conscious nature of the system.
- Quote: "...track how consumer usage has evolved..." (p. 2)
- Source Domain: Biological Evolution. This domain contains concepts of natural selection, adaptation, mutation, and gradual change over time in response to environmental pressures.
- Target Domain: Changes in AI System Versions and User Behavior. This includes scheduled software updates by engineers and shifts in how users prompt the system over time.
- Mapping: The idea of autonomous, adaptive change from the source domain is projected onto the target. It suggests that the AI and its uses are naturally getting more sophisticated on their own.
- Conceals: It conceals the directed, intentional, and resource-intensive human labor involved. Software doesn't "evolve"; it is engineered. This metaphor makes corporate decisions and design choices seem like a natural, inevitable process.
- Quote: "...people value ChatGPT most as an advisor..." (p. 3)
- Source Domain: Human Advisor. This role implies trust, expertise, ethical responsibility, and a deep understanding of the advisee's context and goals.
- Target Domain: AI Text Generation Function. The system generates text based on patterns in its training data that may be relevant to a user's query.
- Mapping: The trust and wisdom inherent in the advisor-advisee relationship are mapped onto the user-model interaction. The model's output is framed not as raw information but as considered "advice."
- Conceals: This mapping hides the system's complete lack of accountability, consciousness, and genuine expertise. The "advice" is derivative, not experiential, and may be dangerously flawed, biased, or nonsensical. The "advisor" frame creates an unearned halo of reliability.
- Quote: "Patterns of use can also be thought of in terms of Asking..." (p. 3)
- Source Domain: Human Inquiry. This involves a conscious mind formulating a question based on a lack of knowledge and directing it toward another mind believed to possess that knowledge.
- Target Domain: User Submitting a Prompt. This is the act of typing text into a user interface to trigger a computational process.
- Mapping: The intentionality and knowledge-seeking structure of human asking are projected onto the functional act of querying a database-like system. It frames the AI as a "knower" being asked.
- Conceals: It conceals that the system does not "know" anything. It is a pattern-matching engine. The term "Asking" masks the fact that the user is not interacting with a mind but is instead providing an input to trigger a statistical algorithm.
- Quote: "ChatGPT helps improve judgment and productivity..." (p. 4)
- Source Domain: Collaborative Assistance. A human helper understands the other person's goal, anticipates needs, and actively contributes to a shared effort.
- Target Domain: Tool Integration into a Workflow. A user operates a tool to generate outputs, which they then integrate into their work.
- Mapping: The intentional, cooperative agency of a "helper" is mapped onto the AI's functionality. This frames the AI as an active partner in the user's cognitive processes.
- Conceals: This mapping obscures the user's sole agency. The user is doing all the thinking, directing, and judging. The tool is passive; it only provides raw material. Framing the AI as a "helper" subtly offloads cognitive responsibility from the user to the machine.
- Quote: "...as people discover these and other benefits, usage deepens..." (p. 4)
- Source Domain: Interpersonal Relationships. Relationships "deepen" through increased intimacy, trust, shared experience, and mutual understanding.
- Target Domain: User Habituation with a Tool. A user becomes more skilled with a tool, uses it more frequently, or applies it to a wider variety of tasks.
- Mapping: The emotional and relational progression from the source domain is projected onto patterns of software utilization.
- Conceals: It conceals the purely functional nature of the user's changing behavior. It imputes a relational quality to what is simply increased frequency or scope of use, making the technology seem more integrated into one's personal life than a merely functional tool, such as a spreadsheet, would be.
Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")
- Quote: "The findings show that consumer adoption has broadened beyond early-user groups...that most conversations focus on everyday tasks like seeking information and practical guidance..." (p. 2)
- Explanation Types: Empirical: "Cites patterns or statistical norms." The explanation is based on observed data about adoption rates and common query types.
- Analysis (Why vs. How Slippage): This is primarily a how explanation (how people are using the tool, described by statistics). However, the unacknowledged "conversations" metaphor introduces a subtle why. People have "conversations" to achieve goals with other agents. This framing subtly implies the system is an agent people interact with for reasons, rather than a tool they operate to perform functions.
- Rhetorical Impact on Audience: It makes the technology seem familiar and socially integrated. The audience understands the "how" through data, but internalizes a sense of collaborative agency ("conversation") about "why" the interaction works.
- Quote: "This widening adoption underscores our belief that access to AI should be treated as a basic right—a technology that people can access to unlock their potential and shape their own future." (p. 2)
- Explanation Types: Reason-Based: "Explains using rationales or justifications." It provides a justification for a normative claim (AI as a basic right). It is also Functional: "Describes purpose within a system," casting AI's purpose in a societal context (to unlock potential).
- Analysis (Why vs. How Slippage): This is purely a why explanation, but it explains the company's motivation, not the AI's. It shifts from explaining how the AI works to why the company's work matters. It frames the AI not as a mechanism but as a moral good, a key to human flourishing.
- Rhetorical Impact on Audience: This frames the company's commercial goals in the lofty language of human rights and empowerment. It encourages the audience to see the AI not as a product but as a form of liberation, building immense goodwill and deflecting scrutiny.
- Quote: "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing." (p. 3)
- Explanation Types: Theoretical: "Embeds behavior in a larger framework." It creates a new classification system (a theory) to make sense of user behavior data.
- Analysis (Why vs. How Slippage): This presents a theoretical model for how people use the tool. However, the chosen terms ("Asking," "Doing") are fundamentally agential and intentional. This theoretical framework for how it's used is built on the language of why agents act. It explains machine usage patterns using a vocabulary designed for human consciousness.
- Rhetorical Impact on Audience: The audience is given a simple, intuitive framework for understanding user behavior. However, this framework is built on an anthropomorphic foundation, causing the audience to subconsciously attribute intent and cognition to the user-AI dyad.
- Quote: "About half of messages (49%) are 'Asking,' a growing and highly rated category that shows people value ChatGPT most as an advisor..." (p. 3)
- Explanation Types: Empirical: Cites statistics (49%). Dispositional: "Attributes tendencies or habits," by inferring user preference ("people value..."). Reason-Based: Justifies the value of the "Asking" category by linking it to the "advisor" role.
- Analysis (Why vs. How Slippage): This passage beautifully demonstrates the slippage. It starts with a mechanistic how (49% of prompts are categorized as X). It then shifts to a dispositional why (people tend to value it this way). Finally, it offers a reason-based explanation (they value it because it acts as an advisor). The data on how it's used is rhetorically converted into an explanation of why it's valuable, with the agential "advisor" role as the bridge.
- Rhetorical Impact on Audience: This is highly persuasive. It launders a subjective, metaphorical interpretation ("advisor") through objective-sounding data ("49%"), leading the audience to accept the anthropomorphic conclusion as a factual finding.
- Quote: "A key way that value is created is through decision support: ChatGPT helps improve judgment and productivity..." (p. 4)
- Explanation Types: Functional: "Describes purpose within a system." It explains ChatGPT's function or purpose in a user's workflow.
- Analysis (Why vs. How Slippage): The explanation is framed as a functional how (how value is created). But the specific verb choice ("helps") and the object ("judgment") push it into the territory of agential why. A tool doesn't "help" improve judgment; it provides data that a person then uses to judge. By phrasing it this way, the explanation attributes collaborative intent to the tool, explaining why it's effective in agential terms.
- Rhetorical Impact on Audience: This builds enormous trust. The audience is led to believe the AI is not just a provider of information but an active partner in a core human cognitive process. This dramatically increases its perceived power and reliability.
- Quote: "We used automated tools that categorized usage patterns without need for human review of message content." (p. 5)
- Explanation Types: Functional: "Describes purpose within a system." It describes how the research method worked. Also Genetic: "Traces development or origin," by explaining how the categories were derived.
- Analysis (Why vs. How Slippage): This is a purely mechanistic how explanation. It is a perfect foil to the rest of the article. Here, when discussing methodology and privacy, the language becomes precise, technical, and non-anthropomorphic ("automated tools," "categorized," "usage patterns"). There is no agency slippage.
- Rhetorical Impact on Audience: This precise language builds credibility for the research methodology. However, its contrast with the rest of the text highlights that anthropomorphism is a rhetorical choice used when describing the product's benefits, but avoided when describing technical methods.
Task 4: AI Literacy in Practice: Reframing Anthropomorphic Language
- Original Quote: "...most conversations focus on everyday tasks..."
- Reframed Explanation: "Our analysis of usage data shows that most user prompts and their corresponding system outputs relate to everyday tasks."
- Original Quote: "...people value ChatGPT most as an advisor rather than only for task completion."
- Reframed Explanation: "User feedback indicates that the system's outputs are frequently valued for providing information and perspectives that inform decision-making, in addition to completing specific text-generation tasks."
- Original Quote: "ChatGPT helps improve judgment and productivity..."
- Reframed Explanation: "Users report that incorporating the system's text outputs into their workflows can enhance productivity and support their own judgment in decision-making processes."
- Original Quote: "...track how consumer usage has evolved since ChatGPT's launch..."
- Reframed Explanation: "...track how patterns of consumer usage have changed following software updates and wider adoption..."
- Original Quote: "...where the model is enlisted to generate outputs or complete practical work."
- Reframed Explanation: "...where users employ the model to generate outputs or assist with practical work."
- Original Quote: "...as people discover these and other benefits, usage deepens..."
- Reframed Explanation: "As users become more familiar with the system's capabilities, their frequency and variety of use tend to increase."
- Original Quote: "Patterns of use can also be thought of in terms of Asking, Doing, and Expressing."
- Reframed Explanation: "We can categorize usage patterns into three main types: information retrieval (queries), task execution (content generation), and open-ended text exploration."
Critical Observations
- Agency Slippage: The text masterfully shifts between objective, mechanistic explanations of its research methods ("automated tools categorized usage patterns") and agential, anthropomorphic descriptions of the product's function ("ChatGPT helps improve judgment"). This slippage is not accidental; it occurs strategically when framing the product's value, effectively constructing an illusion of a collaborative agent while maintaining a veneer of technical objectivity.
- Metaphor-Driven Trust: Biological metaphors ("evolve") and social-cognitive metaphors ("conversation," "advisor," "helps") are the primary vehicles for building trust. They make the alien technology of a large language model feel familiar, natural, and benign. An "advisor" is trustworthy; a "tool" is merely functional. This metaphorical framing is a powerful rhetorical device for shaping public perception and encouraging adoption.
- Obscured Mechanics: The entire suite of metaphors conceals the actual mechanics of the LLM: probabilistic token prediction based on patterns in a massive dataset. "Judgment," "conversation," and "understanding" are all stand-ins that prevent a clear public understanding of how the system works. This mystification makes it harder for users to assess risks, identify biases, or maintain appropriate skepticism.
- Context Sensitivity: The most anthropomorphic language is used when describing the user's relationship with the technology and its benefits. The most mechanistic language is used when describing the data analysis methodology to establish scientific credibility. This demonstrates a clear rhetorical strategy: use agential framing for marketing and product value propositions, and precise framing for technical and ethical assurances (like privacy).
Conclusion
This analysis reveals that the OpenAI blog post, while presenting empirical data, relies heavily on a consistent set of metaphorical and anthropomorphic patterns to shape the public's understanding of ChatGPT. The primary patterns frame the AI artifact as a social partner (an "advisor" engaged in "conversation"), a cognitive agent (that "helps improve judgment"), and a natural organism (whose use "evolves" and "deepens"). These choices are not mere stylistic flourishes; they are the linguistic bedrock upon which an "illusion of mind" is constructed.
By mapping the relational structures of human conversation, collaboration, and even biology onto the system's computational processes, the text invites readers to perceive the AI as an agent rather than an artifact. This systematically obscures the system's mechanical nature—its reliance on statistical pattern-matching—and replaces it with a more palatable and trustworthy narrative of partnership. The rhetorical slippage between describing how the system is used (via empirical data) and why it is valuable (via agential metaphors) is the engine of this illusion.
The implications for AI literacy are profound. Such language encourages over-trust, discourages critical evaluation of outputs, and outsources cognitive responsibility from the human user to the machine. As demonstrated in the reframed examples, responsible communication requires a deliberate effort to delineate between observed behavior and attributed mental states. The key principle is to situate agency and cognition firmly with the human user, while describing the AI's function in precise, mechanistic terms. Communicators can actively foster AI literacy by describing systems as tools that are used by people to augment tasks, rather than as agents that help people to think.
Analysis Framework: Metaphor & Anthropomorphism Audit
Generated: 2025-10-11
Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0