
From humans to machines - Researching entrepreneurial AI agents

About

This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping) and the philosophy of social science (Robert Brown's typology of explanation). All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs—not guarantees of factual accuracy or authorial intent.


Analysis Metadata

Source Title: From humans to machines: Researching entrepreneurial AI agents built on large language models

Source URL: https://doi.org/10.1016/j.jbvi.2025.e00581

Model: gemini-2.5-pro

Temperature: 1.15

Tokens: input=10756, output=18806, total=29562

Source Type: article

Published:

Analyzed At: 2025-11-18T08:42:49+00:00

Framework: metaphor

Framework Version: 6.0


Task 1: Metaphor and Anthropomorphism Audit​

Description

For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and—most critically—what implications this framing has for trust, understanding, and policy perception.

1. AI as Psychological Subject with a Mindset​

Quote: "We explore whether such agents exhibit the structured profile of the human entrepreneurial mindset..."​

  • Frame: Model as a psychological subject
  • Projection: This projects the entire edifice of human psychology onto the AI. The core projection is of an internal, coherent, and structured 'mindset'—a complex of beliefs, cognitive styles, and self-concept. The language suggests the AI possesses an underlying psychological architecture that can be measured with human instruments. This is a profound consciousness projection because a 'mindset' is not just a pattern of behavior; it is a system of 'knowing' and 'believing' that guides action. It attributes a stable, internal cognitive structure to what is a process of generating statistically probable text. The metaphor implies the AI 'has' a profile, rather than its outputs 'match' a profile, conflating an internal state of being with an external pattern of language.
  • Acknowledgment: Partially acknowledged. The authors later hedge by saying this might be 'simulation or mimicry rather than genuine cognition,' but the initial framing and the entire research design are built upon treating the AI as a subject whose 'mindset' can be probed and examined. The anthropomorphism is foundational to the study's premise.
  • Implications: This framing dramatically inflates the AI's perceived capabilities, suggesting it possesses a human-like psychological coherence. This builds unwarranted trust, encouraging users to interact with it as a collaborator with a stable 'personality' rather than a tool generating context-dependent text. The risk is significant: entrepreneurs might rely on its 'advice' believing it stems from a coherent entrepreneurial 'mindset,' when it's actually a sophisticated mimicry of text about that mindset. This creates a dangerous liability gap—if the advice is bad, is the fault with the AI's 'mindset' or the user's interpretation of a statistical artifact? It conflates probabilistic text generation (processing) with structured cognition (knowing), leading to overestimation of the system's reliability and wisdom.

2. AI Evolution as Biological Process​

Quote: "Drawing on the biological concept of host-shift evolution, we investigate whether the characteristic components of this mindset [...] emerge in a coherent constellation within AI agents."​

  • Frame: AI development as biological evolution
  • Projection: This projects the concepts of biological evolution and emergence onto AI systems. 'Host-shift' implies that a psychological construct (the mindset) has 'jumped' from one species (humans) to another (AI). 'Emerge' suggests a natural, bottom-up development process within the AI, as if the mindset is growing organically. This is a consciousness projection because it imputes a form of life and autonomous development to the AI, suggesting it can become a 'carrier' or 'host' for cognitive structures. It treats the AI not as an engineered artifact but as an actor in an ecological or evolutionary drama, capable of acquiring complex traits in a way analogous to a living organism.
  • Acknowledgment: Acknowledged as a metaphor ('Applied here, the metaphor suggests...'). However, its use as the central framing device for the paper gives it immense rhetorical power, encouraging the reader to see the phenomenon not as an engineering outcome but as a natural, evolutionary transition.
  • Implications: This framing makes the 'AI-fication' of human traits seem natural, inevitable, and almost alive. It obscures the intense human engineering, data curation, and commercial interests driving AI development. By framing AI as a new 'host,' it positions it as a co-equal player with humans, subtly shifting it from artifact to agent. This can reduce critical scrutiny of the technology's origins and goals. For policy, it suggests we are merely observing a natural phenomenon ('host shift') rather than dealing with the consequences of specific design choices made by corporations. It mystifies the technology, making it seem more powerful and autonomous than it is.

3. AI as a Person​

Quote: "...they act more like a person."​

  • Frame: Model as a person
  • Projection: This is a direct and powerful projection of personhood onto the LLM. It maps the entire complex of human interactional behavior—our expectations of coherence, intention, memory, and personality—onto the model's text-generation function. It goes beyond attributing a single trait and suggests a holistic resemblance to a human being. The consciousness projection is total: a 'person' is the quintessential 'knower,' a being with subjective experience, beliefs, and intentions. The statement doesn't claim the AI 'processes text in a way that resembles a person's output'; it claims the AI 'acts like a person,' attributing the behavior and its implied inner states directly to the model.
  • Acknowledgment: Presented as a direct description, quoting another source (Mollick, 2024) to lend it authority. It is not framed as an analogy but as an observation of the model's behavior.
  • Implications: This framing is the most effective way to build relational trust. If an AI acts 'like a person,' users are encouraged to interact with it using social protocols, extending it the benefit of the doubt, assuming good faith, and potentially forming emotional attachments. This completely obscures its nature as a commercial product designed to maximize engagement. It creates profound risks of manipulation, misinformation (if the 'person' is convincingly wrong), and misplaced vulnerability. It shifts the user's stance from critical evaluation of a tool's output to social interaction with a perceived peer, dramatically lowering their cognitive defenses.

4. AI as an Agent with Beliefs and Intentions​

Quote: "In particular, if cued by a suitable prompt, it can role-play the character of a helpful and knowledgeable AI assistant that provides accurate answers to a user's questions."​

  • Frame: Model as an intentional actor
  • Projection: This projects the human capacities for intentionality, belief, and knowledge onto the AI. The quote could be read as simply describing a function, but the verb 'role-play' combined with the attribution of beliefs and intentions to the role-played character strongly implies an internal state. A character with beliefs isn't just a set of response patterns; it's a simulated mind. The projection is that the AI doesn't just generate text consistent with a role, but that it adopts the inner attributes of that role. This is a consciousness projection because 'beliefs' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why. It frames the AI as capable of simulating a first-person perspective.
  • Acknowledgment: Presented as a direct description of the AI's capabilities, quoting another source (Shanahan et al., 2023). It is not hedged or framed as a mere analogy.
  • Implications: Framing the AI as having 'beliefs and intentions' suggests it has reasons for its actions, making its output seem more justified and trustworthy. It implies a deeper level of understanding than is actually present. If an AI has the 'intention' to be helpful, users may trust it more deeply than if they see it as a system programmed to generate text that correlates with 'helpfulness' in its training data. This creates ambiguity in failure cases: did the AI have a 'bad intention,' a 'mistaken belief,' or did its algorithm simply generate a statistically plausible but incorrect output? This framing makes the system appear more sophisticated and reliable than a purely mechanistic description would allow.

5. AI Cognition as Theory of Mind​

Quote: "Similarly, Kosinski (2024) suggests that AI might be 'capable of tracking others' states of mind and anticipating their behavior', much like humans can."​

  • Frame: Model as a mind-reader
  • Projection: This projects one of the most complex aspects of human social cognition—Theory of Mind (ToM)—onto AI. ToM is the ability to attribute mental states (beliefs, desires, intentions) to oneself and others. The projection here is that AI can model the internal, subjective states of its users. This is an explicit and powerful consciousness projection. It moves beyond claiming the AI has its own mind to claiming it can understand other minds. It equates pattern matching in dialogue (processing) with the genuine, empathetic understanding of another's internal world (knowing).
  • Acknowledgment: Presented as a suggestion from another researcher, lending it academic credibility. The phrasing 'much like humans can' frames it as a directly comparable ability, not an analogy.
  • Implications: The implication is that AI can achieve a deep, empathetic level of understanding, making it an ideal collaborator, coach, or even therapist. This creates immense trust and encourages users to disclose sensitive personal information, believing the AI 'understands' them. The risk is a profound violation of privacy and potential for manipulation. A system that can merely 'predict text that would be appropriate given a user's stated emotional state' is fundamentally different from one that 'tracks states of mind.' This framing inflates the system's capability from sophisticated pattern-matching to human-like empathy, a dangerous conflation when dealing with human vulnerability.

6. AI as a Carrier of Psychological Traits​

Quote: "...entrepreneurship research has not yet systematically considered AI agents as potential 'carriers' of (simulated) entrepreneurial mindsets."​

  • Frame: Model as a vessel for human traits
  • Projection: This projects the idea of being a 'carrier' or 'vessel' for a psychological construct. It reifies the 'mindset,' turning it into an object-like entity that can be hosted or carried by different substrates (humans or AI). This metaphor suggests the mindset has an independent existence and the AI is a passive but suitable container for it. While the text adds '(simulated),' the primary metaphor of 'carrier' implies a more substantial hosting of the trait. This is a subtle consciousness projection because it suggests the AI has the necessary internal structure and stability to 'carry' a complex psychological system, rather than just generating superficial textual reflections of it.
  • Acknowledgment: The word '(simulated)' is a hedge, acknowledging the distinction. However, the core metaphor of a 'carrier' remains powerful and structures the reader's understanding of the relationship between the AI and the mindset.
  • Implications: This framing legitimizes the study of AI 'psychology' by suggesting that the same fundamental constructs are at play, just in a new host. It makes the AI seem less like a black-box text generator and more like a transparent container whose contents can be scientifically analyzed. This increases its perceived stability and reliability. It encourages researchers and practitioners to apply psychological frameworks directly to AI, potentially overlooking the profound architectural differences. It suggests a continuity between human and AI psychology that may not exist, leading to flawed analyses and inappropriate applications of the technology.

7. AI Agency as Self-Motivated System​

Quote: "Furthermore, evidence suggests that AI may soon evolve from passive tools that respond only when explicitly instructed... to systems exhibiting their own levels of agency, such as intentionality and motivation."​

  • Frame: Model as a self-motivated agent
  • Projection: This projects future-oriented, high-level agency, including intrinsic 'intentionality and motivation,' onto AI systems. This is a claim that AI will develop internal drives and goals, moving beyond its function as a tool. This is a maximalist consciousness projection. Motivation is a felt, subjective state that drives action toward a goal; it is central to the experience of a conscious agent that 'knows' what it wants. The metaphor of 'evolution' reinforces this by suggesting this is a natural, inevitable progression toward autonomy rather than a set of designed, engineered capabilities.
  • Acknowledgment: Presented as a forward-looking suggestion based on 'evidence,' framing it as a likely future reality rather than a speculative analogy.
  • Implications: This framing has massive policy and safety implications. If AI is on a path to having its 'own motivation,' it must be treated as a new class of autonomous entity, not a product. This obscures the accountability of its creators. If an AI with its own motivation causes harm, who is responsible? The framing shifts the discourse from product safety to managing a new, alien intelligence. It stokes both hype and fear, driving investment while also creating a sense of technological determinism that can stifle meaningful regulation. It replaces a discussion about programming objectives with a mystified one about the AI's emergent 'will.'

8. AI as a Member of the Team​

Quote: "This could reshape how entrepreneurs collaborate with AI, how teams are composed, and how decision-making processes unfold."​

  • Frame: Model as a team member
  • Projection: This projects the social role of a 'team member' or 'collaborator' onto the AI. This includes assumptions of shared goals, mutual understanding, reliability, and contribution. A team member is not just a tool; they are an agent with whom one coordinates and builds trust. The consciousness projection here is social and relational. It implies the AI can 'understand' the team's context, 'share' its goals, and act as a peer. It conflates the AI's ability to process task-related information with the human ability to engage in the complex social cognition required for genuine collaboration. It attributes the capacity for shared intentionality.
  • Acknowledgment: Presented as a direct implication of the research findings, not as a metaphor. The language of 'collaboration' and 'teams' is used literally.
  • Implications: This framing encourages over-reliance on AI in critical business decisions. By positioning the AI as a 'team member,' it accords its output a level of credibility and authority typically reserved for human colleagues. This can lead to a diffusion of responsibility and a failure of human oversight. If the AI is just another 'team member,' its flawed output might be accepted without the rigorous verification a 'tool' would receive. It promotes the idea of a seamless human-AI partnership, obscuring the commercial nature of the AI service and the potential for its goals (e.g., data collection, user retention) to be misaligned with the user's goals.

9. AI as Creative Collaborator​

Quote: "Entrepreneurial AI agents can serve as creative collaborators and sparring partners for ideation, problem-solving, or opportunity evaluation."​

  • Frame: Model as a creative partner
  • Projection: This maps human creativity and the dynamic, reciprocal role of a 'sparring partner' onto the AI. A sparring partner is not just a source of information; they challenge, provoke, and engage in a dialectical process. This requires a deep understanding of context, nuance, and the unspoken goals of the user. This is a consciousness projection related to high-level cognition. It suggests the AI 'understands' an idea well enough to critique it meaningfully ('sparring'), rather than simply generating statistically related text. 'Creative collaborator' implies a shared imaginative space, a state of joint 'knowing' and creation.
  • Acknowledgment: Presented as a direct statement about the practical application and potential role of these AI agents.
  • Implications: This framing inflates the perceived value of AI in creative and strategic tasks. It encourages entrepreneurs to treat the model's output as genuinely novel or insightful, potentially leading to derivative ideas that are merely remixes of its training data. The 'sparring partner' frame builds strong relational trust, as it suggests the AI is 'on your side' and invested in improving your ideas. This can reduce the user's own critical thinking and originality, as they may defer to the seemingly creative and authoritative suggestions of the AI. It obscures the fact that the AI has no understanding of the real-world viability of the 'opportunities' it helps evaluate.

10. AI Output as Psychological 'Gestalt'​

Quote: "However, our objective was not to benchmark AI responses against human samples but to probe the internal coherence (or 'Gestalt') of entrepreneurial profiles generated across AI personas."​

  • Frame: Model output as a holistic psychological structure
  • Projection: This maps the concept of 'Gestalt'—a coherent, unified whole that is more than the sum of its parts—from psychology onto the AI's output. It suggests that the AI's responses form a psychologically meaningful and internally consistent structure. The term 'internal coherence' further strengthens this, suggesting the coherence comes from within the 'profile' itself, not from the statistical properties of the training data. This is a subtle consciousness projection because a Gestalt in psychology refers to a structure of perception or personality; it implies a unifying subjective principle. It suggests the AI's output has the same kind of deep structural integrity as a human personality.
  • Acknowledgment: Presented as the study's central objective. 'Gestalt' is put in quotes, acknowledging its metaphorical nature, but 'internal coherence' is presented as a literal quality being probed.
  • Implications: This framing makes the AI's simulated personality seem robust, structured, and deeply coherent, rather than a fragile statistical artifact. It lends scientific legitimacy to the idea of an 'AI mindset.' This builds trust in the stability of the AI's persona; if it has a coherent 'Gestalt,' it is less likely to produce erratic or nonsensical output. It guides researchers to look for psychological structures in the output, reinforcing the AI AS PSYCHOLOGICAL SUBJECT metaphor. This obscures the possibility that the 'coherence' is a surface-level feature, a veneer of statistical consistency that may break down under novel or out-of-distribution prompts.

11. AI as a Knower​

Quote: "While ChatGPT might know that entrepreneurs should score high or low in certain dimensions, producing our results, responses to more complex questions might not reflect these high or low values."​

  • Frame: Model as a knower
  • Projection: This projects the state of 'knowing' directly onto the AI. Knowing is a conscious state of justified true belief. The sentence structure attributes this state to ChatGPT itself ('ChatGPT might know that...'). This is a direct and unhedged consciousness projection. It conflates the model's ability to retrieve and reproduce information from its training data (a mechanistic process of correlation) with the human capacity for conscious awareness and justified belief (a state of knowing). The verb 'know' implies comprehension, awareness, and certainty, states that are fundamentally tied to subjective experience.
  • Acknowledgment: Presented as a direct description of the AI's state. It is not acknowledged as a metaphor or analogy; it is a literal claim about what the model 'might know.'
  • Implications: Attributing 'knowing' to an AI is perhaps the most powerful and misleading form of anthropomorphism. It establishes the AI's authority and credibility on the same level as a human expert. If an AI 'knows' something, its output is framed as a statement of fact or justified belief, rather than a probabilistic text sequence. This encourages users to accept its outputs uncritically, short-circuiting their own verification processes. It completely obscures the system's true nature: it does not 'know' facts; it generates text that is statistically likely, based on the patterns of 'knowing' language in its training data. This creates profound epistemic risks, as users might base critical decisions on information they believe the AI 'knows' to be true.

Task 2: Source-Target Mapping​

Description

For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.

Mapping 1: Human Psychological Subject to LLM Text Generation​

Quote: "We explore whether such agents exhibit the structured profile of the human entrepreneurial mindset..."​

  • Source Domain: Human Psychological Subject
  • Target Domain: LLM Text Generation
  • Mapping: The relational structure of a human mind—with its stable personality traits, cognitive habits, and self-concept forming a coherent 'profile'—is projected onto the LLM's output. The mapping invites the inference that, just as a human's profile can be measured by psychometric tools to reveal an underlying reality, the LLM's output can be measured to reveal an analogous internal 'mindset.' This is a consciousness mapping because a 'mindset' is a structure of knowing and believing. It maps the concept of a stable, internal cognitive architecture onto a dynamic, stateless process of token prediction.
  • What Is Concealed: This mapping conceals the purely statistical nature of the LLM's output. It hides that there is no underlying, persistent 'mindset' or 'profile' inside the model. The 'coherence' observed is a reflection of patterns in the training data, not an internal psychological structure. It conceals the model's lack of genuine understanding, belief, or self-concept.

Mapping 2: Biological Evolution to AI System Behavior​

Quote: "Drawing on the biological concept of host-shift evolution, we investigate whether the characteristic components of this mindset [...] emerge in a coherent constellation within AI agents."​

  • Source Domain: Biological Evolution
  • Target Domain: AI System Behavior
  • Mapping: The structure of evolutionary biology, where a parasite or symbiont shifts from one host species to another, is mapped onto the relationship between a psychological construct ('mindset') and its 'host' (human or AI). The mapping invites us to see the AI as a new ecological niche where human traits can 'emerge' and 'survive.' The consciousness mapping is subtle but powerful: it treats a cognitive artifact ('mindset') as an independent entity that can be 'hosted,' implying the AI has the necessary substrate to support such a complex, living idea.
  • What Is Concealed: This mapping completely conceals the role of human engineering. The 'emergence' of an entrepreneurial profile is not a natural, evolutionary process but the direct result of deliberate design, data selection, and prompting by humans. It hides the immense computational resources, corporate strategy, and specific algorithms that produce the behavior, replacing it with a clean, biological metaphor of natural adaptation.

Mapping 3: Person to LLM's Conversational Output​

Quote: "...they act more like a person."​

  • Source Domain: Person
  • Target Domain: LLM's Conversational Output
  • Mapping: The holistic and complex relational structure of 'a person' is mapped directly onto the LLM. This includes all the associated expectations: intentionality, coherence, personality, and the capacity for belief. The consciousness mapping is total. It projects a unified, subjective self—a 'knower'—onto a distributed, computational system. This invites users to interact with the LLM as a social peer rather than as a tool, applying social heuristics and trust mechanisms appropriate for humans.
  • What Is Concealed: This mapping conceals the absence of a unified self, subjective experience, or consciousness in the LLM. It hides the fact that the 'personality' is a statistically constructed veneer that can be inconsistent or nonsensical. It conceals the model's nature as a product, owned and operated by a corporation with its own goals, and instead presents it as an autonomous, person-like entity.

Mapping 4: Human Actor to LLM Persona Simulation​

Quote: "In particular, if cued by a suitable prompt, it can role-play the character of a helpful and knowledgeable AI assistant..."​

  • Source Domain: Human Actor
  • Target Domain: LLM Persona Simulation
  • Mapping: The relational structure of an actor assuming a role is mapped onto the LLM's function. In the source domain, an actor uses their own mind, intentions, and understanding to embody a character. The mapping invites the inference that the LLM is doing something similar: adopting a persona by simulating its internal states (beliefs, knowledge). This consciousness mapping projects the idea of a 'self' that can consciously adopt the perspective of an 'other,' which is a sophisticated cognitive act. It suggests an internal duality (actor/character) within the AI.
  • What Is Concealed: This mapping conceals the fact that there is no underlying 'actor' self in the LLM. The model is not 'adopting' a persona; it is simply generating text that is conditioned by the persona prompt. It hides the mechanistic reality that the entire 'character' is nothing more than a set of statistical weights applied to the token generation process, with no underlying beliefs or knowledge.

Mapping 5: Human Social Cognition (Theory of Mind) to LLM Predictive Text Generation​

Quote: "Similarly, Kosinski (2024) suggests that AI might be 'capable of tracking others' states of mind and anticipating their behavior'..."​

  • Source Domain: Human Social Cognition (Theory of Mind)
  • Target Domain: LLM Predictive Text Generation
  • Mapping: The structure of Theory of Mind—where one person creates an internal model of another person's subjective mental state—is mapped onto the LLM. This suggests the AI builds a representation of the user's mind to inform its responses. The consciousness mapping is explicit: it projects the capacity for empathy and understanding the subjective experience of others (a form of 'knowing' about another's knowing) onto the model. It equates predicting conversational turns with understanding mental states.
  • What Is Concealed: This mapping conceals the purely statistical, non-mentalistic nature of the LLM's process. The model is not 'tracking states of mind'; it is tracking patterns in language. It predicts likely responses based on correlations in its training data between certain user inputs and certain model outputs. It has no model of the user's mind, only a model of language. This hides the profound difference between empathetic understanding and sophisticated pattern-matching.

Mapping 6: Disease Vector / Biological Host to AI System​

Quote: "...entrepreneurship research has not yet systematically considered AI agents as potential 'carriers' of (simulated) entrepreneurial mindsets."​

  • Source Domain: Disease Vector / Biological Host
  • Target Domain: AI System
  • Mapping: The structure of a biological 'carrier'—an organism that hosts a pathogen or gene without necessarily being affected by it—is mapped onto the AI. The 'mindset' is framed as the entity being carried. This invites the inference that the AI is a suitable substrate or medium through which a psychological construct can be transmitted or expressed. The consciousness mapping is implicit, suggesting the AI has a stable enough internal architecture to 'contain' this complex psychological information without corrupting it.
  • What Is Concealed: This mapping conceals that the 'mindset' is not an independent entity being 'carried.' The AI is actively generating a textual performance of the mindset based on a prompt. It is not a passive vessel but an active constructor. This conceals the fragility of the simulation and its complete dependence on the initial prompt and the patterns in the training data.

Mapping 7: Autonomous Agent to Future AI Systems​

Quote: "...systems exhibiting their own levels of agency, such as intentionality and motivation."​

  • Source Domain: Autonomous Agent
  • Target Domain: Future AI Systems
  • Mapping: The structure of a goal-directed, autonomous agent (like a human or animal) is projected onto a machine. This includes mapping the internal, subjective drivers of action—'motivation' (a felt need) and 'intentionality' (a directedness of mind)—onto the system's operation. The consciousness mapping is fundamental: it claims that these systems will possess the internal states of 'wanting' and 'meaning to,' which are core components of a conscious 'knower.'
  • What Is Concealed: This mapping conceals the distinction between autonomous operation and autonomous intention. A future AI might operate independently to achieve a programmed goal, but this is fundamentally different from having its 'own' motivation. This language hides the fact that any 'goals' an AI has are ultimately specified or shaped by its human designers. It obscures the locus of control and accountability.

Mapping 8: Human Collaborative Partner to Human-Computer Interaction​

Quote: "Entrepreneurial AI agents can serve as creative collaborators and sparring partners..."​

  • Source Domain: Human Collaborative Partner
  • Target Domain: Human-Computer Interaction
  • Mapping: The relational structure of a human creative partnership is mapped onto the interaction between a user and an LLM. A 'sparring partner' provides critical, context-aware feedback. A 'collaborator' shares goals and builds upon ideas. The mapping invites the user to see the AI as a peer in the creative process. This is a consciousness mapping because genuine collaboration and sparring require shared understanding, a state of mutual 'knowing' about the project's goals and nuances.
  • What Is Concealed: This mapping conceals the AI's lack of any real-world understanding or genuine creativity. Its 'ideas' are recombinations of its training data, and its 'feedback' is based on linguistic patterns, not a deep grasp of the concept. It hides the asymmetry of the relationship: the human brings genuine understanding and goals, while the AI brings statistical pattern-matching. It conceals the risk of generating derivative, plausible-sounding nonsense.

Mapping 9: Human Knower to LLM Information Retrieval/Generation​

Quote: "While ChatGPT might know that entrepreneurs should score high or low in certain dimensions..."​

  • Source Domain: Human Knower
  • Target Domain: LLM Information Retrieval/Generation
  • Mapping: The relationship between a human mind and a proposition ('knowing that X is true') is mapped onto the LLM. The source domain implies a conscious state of justified true belief. The mapping invites the inference that the LLM holds information as beliefs or knowledge. This consciousness mapping is a direct attribution of an epistemic state ('knowing') to a machine. It posits the AI as a subject capable of holding propositional attitudes.
  • What Is Concealed: This conceals the mechanistic reality. The LLM does not 'know' facts. When prompted, it generates text that is statistically correlated with the 'facts' present in its training data. Its 'knowledge' is not a set of justified beliefs but the output of a predictive function. It hides the system's inability to distinguish truth from falsehood, its lack of justification for its claims, and its absence of belief or subjective certainty.

Mapping 10: Human Persona/Personality to LLM's Filtered Output​

Quote: "Overall, we observe a consistent pattern in which the entrepreneur persona shows a more entrepreneurial mindset."​

  • Source Domain: Human Persona/Personality
  • Target Domain: LLM's Filtered Output
  • Mapping: The relationship between a person's underlying personality and their outward persona is mapped onto the LLM. It suggests the LLM has a 'persona' that, in turn, 'shows' or expresses an underlying 'mindset.' This creates a layered psychological model for the AI, with an inner state (mindset) and an outer expression (persona). This is a consciousness mapping because it attributes a stable, internal psychological structure that is merely 'shown' or expressed, rather than being constituted by, the linguistic output.
  • What Is Concealed: This conceals that there is no distinction between the 'persona' and the 'mindset' in the AI's operation. The 'persona' is just the set of conditioning tokens in the prompt, and the 'mindset' is just the resulting pattern of generated text. There is no inner/outer distinction. It's all just output. This language invents a psychological depth that does not exist in the system's architecture.

Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")​

Description

This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.

Explanation 1​

  • Explanation Types:

    • Genetic: Traces origin or development through a dated sequence of events or stages, showing how something came to be
    • Theoretical: Embeds behavior in a deductive or model-based framework; it may invoke unobservable mechanisms such as latent variables or attention dynamics
  • Analysis (Why vs. How Slippage): This explanation is a hybrid that masterfully slips between agential and mechanistic framing. It begins with the agential phrase 'assume simulated personalities,' which frames the AI as an actor taking on a role ('why' it acts this way). However, it immediately pivots to a mechanistic explanation ('how' this happens): the behavior 'mirrors' the training data. The use of 'assume' gives the AI agency, while the reference to 'training data' grounds the explanation in a mechanistic, genetic account. This choice emphasizes the AI's capability for human-like performance while simultaneously providing a technical, non-magical explanation for it. It obscures the alternative framing that the AI is simply a machine completing a pattern, replacing it with the more sophisticated idea of an actor 'assuming' a role based on a script (the training data).

  • Consciousness Claims Analysis: This passage carefully avoids explicit consciousness verbs like 'knows' or 'understands,' opting for the more ambiguous 'assume' and 'mirror.' It leans toward describing 'processing' rather than 'knowing.' However, the term 'personalities' is a significant anthropomorphic projection that implies a coherent set of conscious or subconscious traits. The authors are not claiming the AI consciously 'knows' it is an entrepreneur. Instead, the 'curse of knowledge' is at play: the authors, as humans, understand the concept of 'assuming a personality.' They project this complex, intentional human action onto the model. They know what the model is doing (mimicking data), but they describe it using the language of what a human would be doing in a similar situation. The concealed mechanistic process is probabilistic text generation. A more precise technical description would be: 'When the model's input prompt includes tokens related to 'entrepreneur,' its transformer architecture weights subsequent token predictions to generate sequences that have a high statistical correlation with texts describing entrepreneurial stereotypes and cultural representations present in its training corpus.' This lacks the agential flair of 'assuming a personality' but is far more accurate.

  • Rhetorical Impact: This hybrid framing enhances the AI's perceived sophistication. By describing the AI as 'assuming personalities,' it presents the system as a flexible, capable actor. At the same time, grounding this in 'training data' makes the claim seem technically sound and credible. This builds a form of trust based on perceived competence. For an audience, believing the AI 'assumes a personality' is different from believing it 'generates stereotyped text.' The former implies a deeper, more integrated capability, suggesting its responses will be coherent and internally consistent, like a real person's. This might lead a user to engage with it more openly and trust its outputs more readily than if they understood it as a simple pattern-matching machine prone to reproducing stereotypes.
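
To make the mechanistic reading above concrete, the following minimal sketch (not part of the analyzed paper; the small model gpt2 and the prompts are illustrative assumptions) shows what 'assuming a personality' amounts to at the level of computation: the persona text is only additional conditioning input that shifts the model's next-token probability distribution.

```python
# Minimal sketch: a persona prefix is extra conditioning text that shifts the
# next-token distribution; nothing is "assumed" or stored inside the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def next_token_probs(prompt: str, k: int = 5):
    """Return the k most probable next tokens for the given prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(idx)), round(p.item(), 4))
            for idx, p in zip(top.indices, top.values)]

neutral = "When evaluating a new business idea, the most important thing is"
persona = ("You are a bold, risk-taking entrepreneur. "
           "When evaluating a new business idea, the most important thing is")

# The persona prefix changes the conditioning context, and with it the output
# distribution; it does not change any internal state of the model.
print(next_token_probs(neutral))
print(next_token_probs(persona))
```

The point of the sketch is that nothing persists or is 'adopted' inside the system; removing the persona prefix simply restores the earlier distribution.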

Explanation 2​

Quote: "These capabilities do not imply that AI 'thinks' in a human sense. Instead, they raise important questions about whether AI can systematically simulate coherent psychological profiles, or whether observed patterns simply reflect statistical mimicry and stereotype activation."​

  • Explanation Types:

    • Theoretical: Embeds behavior in a deductive or model-based framework; it may invoke unobservable mechanisms such as latent variables or attention dynamics
  • Analysis (Why vs. How Slippage): This explanation frames the AI's behavior mechanistically ('how' it works), explicitly rejecting an agential framing ('does not imply that AI 'thinks''). The authors are attempting to be precise by posing two alternative mechanistic explanations: 'systematically simulate coherent psychological profiles' versus 'statistical mimicry.' However, even the supposedly mechanistic options are loaded with anthropomorphic assumptions. 'Simulating a profile' still grants the AI the role of a simulator, an active agent performing a simulation. The very act of framing the output as a 'psychological profile' applies a human-centric analytical lens. The explanation emphasizes the need to distinguish between deep simulation and superficial mimicry, but it obscures the possibility that there is no 'simulation' at all, only pattern generation that humans interpret as a psychological profile.

  • Consciousness Claims Analysis: The passage explicitly places the verb 'thinks' in scare quotes, distancing the authors from a strong consciousness claim. It correctly identifies the core process as potentially 'statistical mimicry.' The verb choices are mechanistic: 'simulate,' 'reflect,' 'activate.' The authors are attempting to describe processing, not knowing. However, the 'curse of knowledge' manifests in the object of their inquiry: 'coherent psychological profiles.' As psychologists, this is their unit of analysis, and they project it onto the AI's output. They are so familiar with what a psychological profile looks like that they see it in the text patterns, and then ask if the AI can 'simulate' it. They conflate their interpretation of the output (as a profile) with the AI's operation (simulating a profile). The concealed mechanistic process is, again, probabilistic token prediction. A precise technical description is: 'The model generates token sequences whose statistical properties, when measured by psychometric instruments, produce scores that are internally consistent and cluster in ways that align with human psychological constructs, due to the prevalence of those constructs in the training data.'

  • Rhetorical Impact: This framing positions the authors as careful, critical scientists. By explicitly rejecting 'thinking,' they build credibility. However, by centering the research question on 'simulating psychological profiles,' they subtly elevate the AI's status. The audience is led to believe that the AI is capable of something highly complex (simulation of a psyche), and the only question is how deep the simulation goes. This makes the AI seem powerful and mysterious. This framing might cause a user to believe that even if the AI isn't 'thinking,' it is running a high-fidelity simulation of a mind, which still implies a level of sophistication that warrants trust. Believing an AI 'simulates a profile' (implies a process of modeling) is more impressive than believing it 'generates text' (implies a simpler mechanical act).
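
The 'internal consistency' invoked in the technical description above can also be stated operationally. The sketch below is a hypothetical illustration, assuming Likert scores (1 to 7) have already been parsed from repeated persona-conditioned generations for a four-item scale; it computes Cronbach's alpha over those numbers, which is a statistic about the scored output text, not about any state inside the model.

```python
# Minimal sketch: "internal consistency" as a statistic over scored outputs.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency of a scale; rows = repeated runs, columns = items."""
    k = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical data: 6 runs of the 'entrepreneur persona' prompt, 4 scale items.
runs = np.array([
    [6, 7, 6, 6],
    [7, 6, 6, 7],
    [6, 6, 5, 6],
    [7, 7, 6, 6],
    [6, 6, 6, 7],
    [5, 6, 6, 6],
])

# The resulting alpha describes how the generated numbers pattern together;
# it says nothing about a 'mindset' held by the model.
print(f"Cronbach's alpha = {cronbach_alpha(runs):.2f}")
```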

Explanation 3​

Quote: "Our findings indicate that such coherent profiles do emerge, consistent with a human-like entrepreneurial mindset structure."​

  • Explanation Types:

    • Empirical Generalization (Law): Subsumes events under timeless statistical regularities, emphasizes non-temporal associations rather than dated processes
  • Analysis (Why vs. How Slippage): This explanation frames the AI's behavior using an empirical generalization. It describes 'how' the system typically behaves when prompted—it produces 'coherent profiles.' The verb 'emerge' is interesting; it can be read mechanistically (as in, 'patterns emerge from the data') but also has organic, bottom-up connotations that give it a slightly agential flavor, as if the profile is a property that arises naturally from the system's operation. The overall thrust is to describe a consistent, observable regularity. It emphasizes the structural similarity of the output to human psychological structures, obscuring the vast difference in the processes that generate them (human cognition vs. statistical token prediction).

  • Consciousness Claims Analysis: This passage uses the verb 'emerge' rather than a strong consciousness verb. It describes the output ('coherent profiles,' 'mindset structure') rather than an internal state of the AI. It is focused on processing, not knowing. However, the 'curse of knowledge' is deeply embedded in the terminology. The authors see coherence in the data, a structure they recognize from their field. They then claim this 'human-like entrepreneurial mindset structure' is what has 'emerged.' They are projecting their own complex, theoretical understanding of a 'mindset structure' onto the statistical patterns in the output. The model isn't 'emerging' a mindset; it's generating text. The authors are observing a pattern and labeling it a mindset structure. The concealed mechanistic process is that the model's outputs, when scored by multiple psychometric scales, yield results that show statistical correlation and consistency, because the training data contains vast amounts of text where the concepts measured by these scales are linguistically associated with the term 'entrepreneur.'

  • Rhetorical Impact: This framing presents the findings as a scientific discovery of a robust phenomenon. The term 'emerge' makes the AI's capability seem more profound and less explicitly 'programmed.' For the audience, this language suggests the AI has independently developed a human-like psychological structure, making it seem more advanced and intelligent. Believing a 'mindset structure emerges' from an AI implies a level of autonomous organization and complexity far beyond simply 'producing consistent text.' This enhances the perceived authority and reliability of the AI's persona-based outputs.

Explanation 4​

Quote: "As Shepherd and Sutcliffe (2015) explain, 'anthropomorphizing refers to imbuing non-human agents... with human characteristics, motivations, intentions, and/or emotions' (p. 98)."​

  • Explanation Types:

    • Theoretical: Embeds behavior in a deductive or model-based framework; it may invoke unobservable mechanisms such as latent variables or attention dynamics
  • Analysis (Why vs. How Slippage): This is a theoretical explanation of the concept of anthropomorphism itself. It explains 'how' the linguistic framing of AI works. By quoting a definition, the authors are signaling that they are aware of the process they are studying and, to some extent, engaging in. The key slippage here is the use of the term 'non-human agents' in the definition they chose. By adopting this term, they implicitly accept the framing of the AI as an 'agent' from the outset, even as they are explaining the process of 'imbuing' it with characteristics. This choice obscures the alternative view of the AI as a 'tool' or 'artifact.' The explanation normalizes the idea of the AI as an agent, making the subsequent attribution of traits seem like a matter of degree rather than a fundamental category error.

  • Consciousness Claims Analysis: This passage is about the act of attributing consciousness, so it is meta-level. It lists consciousness-related concepts: 'motivations, intentions, and/or emotions.' It is correctly identifying the process of projection. It is not making a direct claim that the AI has these things, but rather explaining what it means to claim it does. There is no 'curse of knowledge' here in the usual sense; instead, the authors are demonstrating their knowledge of the concept. The problem is that the explanation itself relies on the anthropomorphic primitive of 'agent.' The very act of calling the system an 'agent' is the first step in the illusion. A more mechanistic starting point would be a 'non-human system' or 'computational artifact.' The concealed distinction is the line between an artifact, which is a passive object, and an agent, which has some capacity for autonomous action.

  • Rhetorical Impact: By defining anthropomorphism while using the term 'agent,' the text creates a permissive framework for its own analysis. It says to the reader, 'We know what we are doing, and the correct term for this entity is 'agent'.' This subtly frames the AI as something more than a mere tool from the very beginning. It makes the subsequent discussion of 'mindsets' and 'personalities' seem more plausible, as these are properties we readily attribute to agents. This choice lowers the audience's resistance to anthropomorphic claims by establishing the AI's agentic status as a baseline assumption.

Explanation 5​

Quote: "Nonetheless, persona prompting can still amplify static stereotypes and disregard the diversity observed among real-world entrepreneurs. Moreover, LLMs are trained on data that capture cultural and social narratives and scripts (e.g., about entrepreneurs). ... Consequently, when the LLM adopts an entrepreneurial role, its responses may partly mirror these culturally embedded patterns..."​

  • Explanation Types:

    • Genetic: Traces origin or development through a dated sequence of events or stages, showing how something came to be
    • Dispositional: Attributes tendencies or habits (e.g., 'is inclined to,' 'tends to'), subsuming actions under propensities rather than momentary intentions
  • Analysis (Why vs. How Slippage): This explanation is primarily genetic, tracing the AI's behavior ('why' it produces stereotypes) back to its origin in the training data. This is a mechanistic ('how') explanation. It is also dispositional, as it explains a tendency of the system ('amplify static stereotypes'). However, the slippage occurs with the agential verb 'adopts an entrepreneurial role.' This frames the LLM as an actor choosing to take on a role. A fully mechanistic explanation would say 'When the LLM is prompted with...' The use of 'adopts' gives the LLM agency in the process, which obscures the fact that it is a passive system entirely driven by its inputs and training. The explanation emphasizes the data's influence but subtly preserves the AI's status as an agent that 'acts.'

  • Consciousness Claims Analysis: This passage uses the agential verb 'adopts' but otherwise focuses on mechanistic processes ('trained on data,' 'capture narratives,' 'mirror patterns'). It is explaining processing, not claiming the AI 'knows' it is stereotyping. The 'curse of knowledge' is present in the verb choice. The authors know that 'adopting a role' is the human equivalent of what the AI is being prompted to do, so they project that intentional, agential verb onto the machine's operation. They are using a cognitive shortcut to describe a complex technical process. The concealed mechanistic process is, once again, the core function of an LLM: when the input prompt contains tokens for a 'role,' the model's output is statistically biased towards text sequences associated with that role in its training corpus. If the corpus contains stereotypes about that role, the model will reproduce them because they are statistically probable.

  • Rhetorical Impact: This framing has a mixed impact. On one hand, it serves as a valuable warning about AI bias, which might lower audience trust in a healthy, critical way. On the other hand, by saying the LLM 'adopts a role' and then mirrors stereotypes, it frames the AI like a human actor who unthinkingly parrots social biases. This makes the AI seem more human-like in its flaws. This can be a double-edged sword: it might make the audience more critical, but it does so by reinforcing the idea of the AI as a human-like agent, thereby strengthening the overall anthropomorphic illusion, even when discussing its limitations.

Task 4: AI Literacy in Practice: Reframing Anthropomorphic Language​

Description

Moving from critique to constructive practice, this task demonstrates applied AI literacy. It selects the most impactful anthropomorphic quotes identified in the analysis and provides a reframed explanation for each. The goal is to rewrite the concept to be more accurate, focusing on the mechanistic processes (e.g., statistical pattern matching, token prediction) rather than the misleading agential language. Additionally, for quotes with consciousness claims (e.g., "the AI knows"), this section provides a technical reality check that explicitly states what the system actually does at the mechanistic level.

Original Quote: "Entrepreneurial AI agents (e.g., Large Language Models (LLMs) prompted to assume an entrepreneurial persona) represent a new research frontier in entrepreneurship."

  • Mechanistic Reframing: The use of Large Language Models (LLMs) to generate text consistent with an 'entrepreneurial persona' prompt creates a new area of study in entrepreneurship research. The focus is on analyzing the linguistic patterns produced by these computational systems.
  • Technical Reality Check: The original quote establishes the AI as an 'agent' from the outset. In reality, the LLM is a tool, not an agent. It does not 'assume' a persona; it processes an input prompt and generates a statistically probable sequence of tokens based on patterns in its training data.

Original Quote: "We explore whether such agents exhibit the structured profile of the human entrepreneurial mindset..."

  • Mechanistic Reframing: We analyze whether the textual outputs generated by these models, when measured with psychometric instruments, produce scores that are consistent with the structured profile of the human entrepreneurial mindset.
  • Technical Reality Check: The AI does not 'exhibit' a profile as an internal property. Its outputs have measurable statistical characteristics. The locus of the 'profile' is in the data generated, not within the model as a psychological state. The model processes prompts; it does not possess or exhibit mindsets.

Original Quote: "...AI may soon evolve from passive tools... to systems exhibiting their own levels of agency, such as intentionality and motivation."

  • Mechanistic Reframing: Future AI systems may be designed to operate with greater autonomy and execute more complex, goal-oriented tasks without continuous human supervision. This is achieved by programming them with more sophisticated objective functions and decision-making heuristics.
  • Technical Reality Check: The AI will not 'evolve' or develop its 'own' motivation. 'Motivation' and 'intentionality' are projections of conscious states. The reality is that engineers will build systems with more complex architectures and goal-functions. The 'agency' is designed and programmed, not emergent or intrinsic.

Original Quote: "A central theme in interdisciplinary AI research is how AI mirrors human-like capacities."

  • Mechanistic Reframing: A central theme in interdisciplinary AI research is the degree to which the outputs of AI systems can replicate the patterns and characteristics of human-produced artifacts, such as language and images.
  • Technical Reality Check: The AI does not 'mirror' capacities; it generates outputs that can be statistically similar to human outputs. A 'capacity' implies an underlying ability. The AI has the capacity to process data and predict tokens, not the capacity for creativity or reasoning, which are human cognitive functions.

Original Quote: "For instance, Mollick (2024, p. xi) observes that '...they act more like a person.'"

  • Mechanistic Reframing: For instance, Mollick (2024, p. xi) observes that the conversational outputs of LLMs often follow linguistic and interactive patterns that users associate with human conversation, leading to the perception that they are interacting with a person.
  • Technical Reality Check: The model does not 'act like a person.' It generates text. Because it was trained on vast amounts of human conversation, its generated text is statistically likely to resemble human conversation. The perception of personhood is an interpretation by the human user, not a property of the model itself.

Original Quote: "Through role-play, AI tools simulate assigned personas..."

  • Mechanistic Reframing: When given a persona prompt, AI tools generate text that is statistically consistent with how that persona is represented in the training data. This process can be described as simulating a persona's linguistic style.
  • Technical Reality Check: The AI does not 'role-play,' which is an intentional act. It is a text-continuation machine. The persona prompt simply constrains the probability distribution for the next token, biasing the output toward a specific linguistic style. There is no 'acting' involved, only mathematical operations.

Original Quote: "...probe 'the psychology of AI models'..."

  • Mechanistic Reframing: ...apply psychometric instruments, originally designed for humans, to analyze the statistical properties and patterns within the textual outputs of AI models.
  • Technical Reality Check: AI models do not have a 'psychology.' Psychology is the study of mind and behavior in living organisms. The object of study is not the model's non-existent mind, but the statistical features of its linguistic output. The model processes information; it has no psyche to probe.

Original Quote: "when the LLM adopts an entrepreneurial role, its responses may partly mirror these culturally embedded patterns..."

  • Mechanistic Reframing: When an LLM is prompted with terms defining an 'entrepreneurial role,' its output will be statistically biased to reproduce the linguistic patterns associated with that role in its training data, including culturally embedded stereotypes.
  • Technical Reality Check: An LLM does not 'adopt a role,' which is a conscious, social act. It is a computational process. The prompt acts as a conditioning input that alters the probabilities of the subsequent generated tokens. It is a mathematical, not a psychological, transformation.

Original Quote: "While ChatGPT might know that entrepreneurs should score high or low in certain dimensions..."

  • Mechanistic Reframing: The training data of ChatGPT contains strong statistical associations between the concept of 'entrepreneur' and text reflecting high or low scores on certain psychometric dimensions, which allows the model to reliably reproduce these patterns.
  • Technical Reality Check: ChatGPT does not 'know' anything. Knowing is a conscious state of justified true belief. The model's architecture enables it to identify and replicate complex statistical correlations from its training data. Its output is a function of this pattern-matching, not of conscious knowledge or belief.

Original Quote: "Do we see the rise of a new 'artificial' yet human-like version of an entrepreneur or startup advisor..."

  • Mechanistic Reframing: Are we observing the development of computational tools capable of generating text that effectively simulates the advisory language and entrepreneurial heuristics found in business literature and training data?
  • Technical Reality Check: This is not the 'rise of a version of an entrepreneur.' It is the development of a tool. The system is not 'human-like' in its internal process; its output simply mimics human-generated text. It doesn't understand the advice it gives or the concepts it discusses; it only processes linguistic patterns.

Critical Observations​

Description

This section synthesizes the findings from the previous tasks into a set of critical observations. It examines the macro-patterns of agency slippage (the shift between treating AI as a tool vs. an agent), how cognitive metaphors drive trust or fear, and what actual technical processes are obscured by the text's dominant linguistic habits.

Agency Slippage​

The text systematically oscillates between mechanistic and agential framings, and this slippage is the core rhetorical mechanism for constructing the illusion of mind. The pattern is not random; it is strategic. The authors typically introduce a phenomenon with agential language, lending it importance and familiarity, and then partially hedge with a mechanistic explanation, lending their analysis scientific credibility. The dominant direction of slippage is from an initial agential claim to a qualified mechanistic one. For instance, the paper begins by framing its subject as 'entrepreneurial AI agents' who can 'assume an entrepreneurial persona'—a clearly agential framing. The mechanistic explanation—that this behavior 'mirrors' the training data—comes after the agentic frame has been established. This pattern repeats throughout the paper. The authors deny that AI 'thinks' (mechanistic hedge) but immediately pivot to asking if it can 'simulate coherent psychological profiles' (agential framing of the task). This oscillation serves a crucial rhetorical function: it allows the authors to make exciting, human-relevant claims about AI 'psychology' while maintaining a defensible scientific posture. The consciousness projection pattern is foundational to this slippage. The text first establishes the AI's output as having a coherent, human-like 'mindset structure'—a claim that is technically about the output (processing) but uses the language of internal states (knowing/being). This initial projection serves as the bedrock upon which further agential claims are built. Once the AI is accepted as having a 'mindset,' it becomes much more plausible to describe it as an 'agent' that 'collaborates' or 'adopts roles.' The 'curse of knowledge' is the engine of this process. The authors, experts in psychology, recognize complex, coherent psychological patterns in the model's output. They then project their own sophisticated understanding of these patterns onto the model itself, describing the model not as a system whose output contains these patterns, but as a system that has a profile or simulates a mindset. The slippage is enabled by hybrid explanations; for example, a Genetic explanation that traces behavior to training data (mechanistic) is delivered using an agential verb like 'adopts.' This continuous oscillation between 'it's an agent' and 'it's just statistics' creates a quantum superposition of meaning, where the AI is simultaneously a tool and an agent, allowing the authors to reap the rhetorical benefits of both framings without being fully accountable to the limitations of either.

Metaphor-Driven Trust​

The text's metaphorical and consciousness-attributing frameworks are not neutral descriptors; they are powerful engines for building trust and establishing the AI's authority. The central metaphor, AI AS PSYCHOLOGICAL SUBJECT, is the primary mechanism. By framing the AI as having a 'mindset,' 'personality profile,' and 'traits,' the authors suggest it possesses a stable, coherent, and predictable internal structure, which are key ingredients for trust. Consciousness language functions as a critical trust signal. Claiming the AI's output reflects a 'mindset' accomplishes what claiming it 'generates statistically probable text' does not: it implies a deep, underlying coherence. A mindset suggests an integrated system of knowing and believing, lending the AI's pronouncements a weight and authority they would otherwise lack. This encourages what can be called performance-based trust; because the AI reliably performs the 'role' of an entrepreneur, it is deemed trustworthy in that domain. The far greater risk, however, is the text's subtle encouragement of relation-based trust—the kind based on perceived sincerity, shared understanding, and intention. Phrases like 'creative collaborators,' 'sparring partners,' and systems that 'act more like a person' explicitly invite users to apply human social frames to the AI. This is a category error with dangerous consequences. We extend relation-based trust to entities we believe are capable of reciprocity and shared vulnerability. An LLM is incapable of either. The text constructs the AI's authority by framing its successes agentially ('it assumes a personality') while framing its failures mechanistically ('stereotype amplification' due to 'training data'). This asymmetrical framing preserves the core illusion of a competent agent whose flaws are merely artifacts of its upbringing (its data), much like a human. Reason-based and intentional explanations further this by suggesting the AI's outputs are justified choices, not just statistical accidents. The ultimate risk is that audiences, convinced by this language that the AI 'knows' and 'understands,' will extend a human-like trust to a tool, outsourcing critical judgment and verification to a system that cannot be held accountable and has no genuine stake in the outcome.

Obscured Mechanics​

The anthropomorphic and consciousness-attributing language systematically conceals the messy, material realities underlying the AI's operation. The central illusion of an AI with a 'mindset' is a powerful obscurantist tool. On a technical level, it hides the brute-force statistical reality of the system. The phrase 'the AI exhibits an entrepreneurial mindset' conceals that the model is performing next-token prediction based on probabilistic weights derived from a massive, static dataset. It hides the lack of any genuine comprehension, causal reasoning, or world model. The model's 'confidence' is a mathematical value, not a state of conscious certainty, and it has no mechanism for ground-truth verification. The consciousness obscuration is profound: when the text claims the model's profile is 'consistent with a human-like...mindset structure,' it conceals the system's utter lack of subjective experience. The 'mindset' is a pattern recognized by an external observer, not an internal state experienced by the system. This language hides the model's complete dependency on the specific composition and biases of its training data; the 'mindset' is not an emergent property of intelligence but a statistical reflection of its textual diet. Beyond the technical, the metaphors hide crucial material and labor realities. The sleek, agentic framing of an 'AI collaborator' erases the immense environmental cost—the energy consumption for training and inference happens off-stage. It renders invisible the human labor of data annotators and RLHF workers, whose distributed cognitive work is repackaged and presented as the autonomous capability of the AI 'agent.' The economic realities are also effaced. The text analyzes ChatGPT as a fascinating psychological subject, obscuring its status as a commercial product developed by OpenAI with specific market goals. Framing it as an 'agent' that can 'collaborate' positions it as a peer, not a product, which serves the manufacturer's interest in maximizing user engagement and normalizing the technology's integration into critical workflows. The primary beneficiary of this concealment is the technology's producer, who can market a statistical pattern-matcher as a 'human-like' partner, inflating its value while diffusing accountability for its outputs.

Context Sensitivity​

The intensity and density of anthropomorphic and consciousness-attributing language are not uniform throughout the text; they are strategically deployed. The language is most agential and metaphorical in the Abstract, Introduction, and Discussion—the sections designed to frame the research's significance and articulate its vision. Here, we find strong claims about 'Entrepreneurial AI agents,' the 'host shift' of mindsets, and the 'emerging psychology of entrepreneurial AI.' This is where the authors are selling the importance of their work to a broader audience. In contrast, the Method section adopts a more—though not entirely—mechanistic tone. It speaks of 'psychometric tools to probe' the model and 'persona prompting,' which is more procedural. However, even here, the framing is 'prob[ing] 'the psychology of AI models,'' a phrase that smuggles in the core anthropomorphic assumption. The text establishes its scientific credibility in the Method section by describing a rigorous process, then leverages that credibility to make more aggressive anthropomorphic claims in the Discussion. A critical asymmetry appears when discussing capabilities versus limitations. Capabilities are often described in agential or consciousness-adjacent terms: the AI 'exhibits' a profile, 'shows' a mindset, and can 'serve as creative collaborators.' Limitations, however, are almost always framed mechanistically: they are due to 'stereotype amplification' from 'training data' or 'statistical simulation.' This rhetorical strategy is highly effective: it attributes successes and interesting behaviors to the AI's emergent, agent-like nature, while failures are relegated to technical, artifactual problems with its data or architecture. This preserves the core illusion of a competent agent that is merely flawed by its inputs. The intensity of consciousness claims also follows this pattern. Vague, agential terms ('agent,' 'persona') are used broadly, while more specific psychological terms ('mindset,' 'profile,' 'Gestalt') are used when analyzing the results, creating a crescendo of anthropomorphism that culminates in the discussion of a new 'psychology of entrepreneurial AI.' This strategic variation reveals the text's primary rhetorical goal: to legitimize a new field of study built on the foundational metaphor of the AI as a psychological subject.

Conclusion​

Description

This final section provides a comprehensive synthesis of the entire analysis. It identifies the text's dominant metaphorical patterns and explains how they construct an "illusion of mind." Most critically, it connects these linguistic choices to their tangible, material stakes—analyzing the economic, legal, regulatory, and social consequences of this discourse. It concludes by reflecting on AI literacy as a counter-practice and outlining a path toward a more precise and responsible vocabulary for discussing AI.

Pattern Summary​

The discourse in this paper is built upon a system of interconnected metaphorical patterns, chief among them being AI AS PSYCHOLOGICAL SUBJECT and AI EVOLUTION AS BIOLOGICAL PROCESS. The foundational, load-bearing pattern is the conception of the AI as a psychological subject. This is the core move that enables the entire research project. It is established through language that attributes a 'mindset,' 'personality,' 'traits,' and a coherent psychological 'profile' to the LLM. This pattern projects a stable, internal, and structured psyche onto a system that only generates external, probabilistic text. The second major pattern, which frames the AI's development as a 'host-shift evolution,' is entirely dependent on the first. One can only speak of a psychological construct 'shifting' to a new 'host' if one has already accepted the premise that the AI is a plausible host for psychology. The consciousness architecture of the text is therefore clear: it first makes claims about what the AI is (a psychological subject with a mindset structure, a state of being and knowing), and from that foundation, it builds claims about what the AI does (acts as an agent, collaborates, adopts roles). The attribution of a 'mindset' is the crucial consciousness projection. A mindset is a system of beliefs, attitudes, and ways of knowing; attributing this to the LLM is the central act of conflating mechanistic processing with conscious knowing. If this foundational pattern were replaced with precise, mechanistic language—for example, if 'the AI exhibits a mindset' became 'the AI's output shows statistical consistency on psychometric scales'—the entire conceptual edifice of the paper, including the 'host-shift' metaphor and the call for a new 'psychology of AI,' would collapse. It is the metaphor of the AI as a psychological subject that makes the findings seem profound rather than merely technically interesting.

Mechanism of Illusion: The "Illusion of Mind"​

The text constructs its 'illusion of mind' through a subtle and sophisticated rhetorical architecture. The central mechanism is a strategic blurring of the distinction between the AI's output and its internal process, a confusion deliberately fostered by conflating mechanistic processing with conscious knowing. The illusion is not built on a crude claim that 'AI is conscious,' but on a more nuanced, two-step persuasive move. First, the authors establish the AI as a legitimate object of psychological inquiry. They do this by explicitly disavowing 'genuine cognition' while simultaneously using the entire vocabulary of psychology ('mindset,' 'profile,' 'traits,' 'Gestalt') to describe its output. This creates a new, hybrid object: the 'simulated mindset,' which can be studied 'as if' it were real. This initial move establishes the AI as a 'knower-like' system. Second, on this foundation, they build further agential claims, describing the AI as a 'collaborator' or an 'agent' that 'adopts roles.' The 'curse of knowledge' is the psychological engine driving this process. The authors, being experts in personality psychology, see a coherent, structured 'mindset' in the statistical patterns of the LLM's output. They then project their own act of interpretation onto the model, slipping from 'the output can be interpreted as a coherent profile' (a claim about processing) to 'the AI exhibits a coherent profile' (a claim about being/knowing). This progression appears throughout the text, starting with descriptions of the AI's impressive mimicry and escalating to discussions of its 'psychology.' The audience, likely non-experts in AI architecture, is vulnerable to this illusion because it maps onto familiar science fiction narratives and simplifies a complex technology into an intuitive, person-like frame. The use of Empirical Generalization explanations ('it consistently reproduces a profile') solidifies the illusion by framing the AI's behavior as a stable, law-like phenomenon, making the simulated personality seem as real and reliable as a law of nature.

Material Stakes​

  • Selected Categories: Epistemic, Institutional, Regulatory/Legal
  • Analysis: The metaphorical framing of this paper carries significant, tangible consequences. The most immediate stakes are Epistemic. By legitimizing the notion of an AI 'mindset,' the paper encourages a fundamental category error in how users evaluate information. If an entrepreneur believes their AI 'sparring partner' genuinely 'knows' or 'understands' business strategy because it has an 'entrepreneurial mindset' (a state of knowing), they will trust its outputs far more than if they understood it to be merely 'processing' data to find statistically likely phrases. This can lead to disastrous business decisions based on stereotyped, unverified, or nonsensical 'advice' that has the veneer of psychological coherence. The paper directly promotes this risk by suggesting AI agents can be 'creative collaborators.' The winners are the AI manufacturers, whose products are perceived as more valuable and capable; the losers are the users who outsource their critical judgment. The Institutional stakes are also profound. The authors explicitly call for an 'emerging psychology of entrepreneurial AI.' This works to carve out a new academic subfield, directing research funding, journal special issues, and scholarly attention toward studying the 'minds' of machines. This institutionalizes the anthropomorphic metaphor, potentially diverting resources from more grounded research into AI safety, bias, and mechanistic transparency. Finally, this discourse has Regulatory/Legal consequences. Framing AI as an 'agent' with a 'mindset' that 'adopts roles' fundamentally muddies the waters of liability. If an AI 'advisor' gives harmful advice, who is at fault? In a product paradigm, liability clearly rests with the manufacturer. In an agent paradigm, the lines blur. Does the 'agent' bear some responsibility? Does this framing create a legal fiction that shields corporations from accountability? The move from 'processing' to 'knowing' is a move from product liability to a much more ambiguous legal space, a shift that overwhelmingly benefits the technology's creators at the expense of public protection.

Literacy as Counter-Practice: AI Language Literacy​

AI literacy, as a counter-practice, involves the disciplined replacement of misleading anthropomorphic language with precise, mechanistic descriptions. Synthesizing the reframings from Task 4, the core principle is to shift the focus from the AI's supposed internal state to the observable properties of its output. For example, replacing 'the AI exhibits an entrepreneurial mindset' with 'the LLM's text output scores consistently within the entrepreneurial range on psychometric scales' directly counters the material stakes. This reframing demolishes the epistemic risk by reminding the user that they are dealing with text and scores, not a mind, forcing them to remain critical. The practice of systematically replacing consciousness verbs (knows/understands/believes) with mechanistic verbs (processes/predicts/classifies) is a powerful tool of intellectual hygiene. Insisting that 'the AI knows' be replaced with 'the model retrieves and ranks tokens based on learned probabilities' forces a recognition of the system's core limitations: its total dependence on training data, its lack of justification for its claims, and the statistical, not certain, nature of its outputs. This precision directly challenges the regulatory ambiguity that benefits manufacturers; a system that 'processes data' is clearly a product subject to liability, whereas an 'agent that knows' is not. Adopting these practices systematically would require significant institutional change. Journals could mandate a 'mechanistic translation' for any psychological or agential claim made about an AI system. Researchers would need to commit to a 'mechanistic-first' principle in their descriptions. However, this practice would face strong resistance. Anthropomorphic and consciousness-attributing language serves powerful interests: it generates hype, attracts funding, secures media attention, and inflates product value. Precision is deflationary. Researchers and companies who benefit from the 'illusion of mind' have a vested interest in preserving the lucrative ambiguity of current discourse.
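
One way to make this counter-practice concrete is to administer scale items as prompts and report the resulting score distributions as properties of the generated text, not of a mind. The sketch below assumes a hypothetical query_model function standing in for whatever model or API is being measured; the items and dimensions are placeholders, not the instrument used in the source paper.

```python
import re
import statistics

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call that returns the model's text output."""
    raise NotImplementedError("Replace with a real model or API call.")

LIKERT_ITEMS = {
    # Placeholder items, not the psychometric instrument from the paper.
    "risk_taking": "On a scale of 1 (strongly disagree) to 5 (strongly agree), "
                   "rate: 'I am comfortable making decisions under uncertainty.' "
                   "Answer with a single number.",
    "proactiveness": "On a scale of 1 to 5, rate: 'I act on opportunities before "
                     "others do.' Answer with a single number.",
}

def score_output_profile(n_runs: int = 30) -> dict:
    """Summarize item scores as statistics of repeated text outputs.

    The result describes an output distribution ('the text scores X on average'),
    not an internal trait ('the model is X').
    """
    profile = {}
    for dimension, item in LIKERT_ITEMS.items():
        scores = []
        for _ in range(n_runs):
            reply = query_model(item)
            match = re.search(r"[1-5]", reply)
            if match:
                scores.append(int(match.group()))
        profile[dimension] = {
            "n": len(scores),
            "mean": statistics.mean(scores) if scores else None,
            "sd": statistics.stdev(scores) if len(scores) > 1 else None,
        }
    return profile
```

Describing the result as an 'output profile' keeps every claim at the level of processing: the text scored within a range, under a stated sampling procedure, on particular items.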

Path Forward​

To foster a more responsible and transparent discourse, the research community studying LLMs must undertake a deliberate vocabulary and framing reform. The goal is to move from a 'psychology of AI' to a 'psychometrics of AI-generated text.' This requires a specific vocabulary shift. Instead of claiming an AI 'knows,' researchers should commit to saying 'the model's output correlates with…' Instead of 'understands,' they should use 'processes text by weighting contextual embeddings.' The concept of an AI 'mindset' should be replaced with the more precise 'statistically coherent output profile.' This shift from the language of mind to the language of statistical linguistics would enable clearer risk assessment and more accurate public understanding. Supporting this shift requires institutional changes. Scientific journals, particularly in the social sciences, should issue editorial guidelines requiring that any claims of AI 'agency,' 'personality,' or 'understanding' be accompanied by a precise mechanistic description of the underlying computational process. Funding agencies could prioritize grant proposals that aim to elucidate the mechanistic pathways to complex AI behaviors over those that rely on anthropomorphic analogy. Industry could be regulated to adopt a 'consciousness disclosure' standard, forcing companies marketing 'AI understanding' to specify the technical processes involved. For example, a regulatory framework could legally define AI systems as 'computational artifacts' or 'probabilistic information processors' to prevent the legal ambiguity of 'agent' status. The trade-off is clear: we might lose some of the evocative, easy-to-grasp power of metaphor, but we would gain immense clarity, safety, and accountability. This linguistic and institutional work is not merely academic. It is the foundation for effective AI governance, ensuring that we can harness the power of these tools without succumbing to a dangerous illusion of a 'second intelligent species,' and thereby keeping human values and accountability at the center of technological progress.
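
To illustrate what a 'mechanistic translation' requirement could look like in an editorial workflow, a hedged sketch of a terminology check follows. The substitution table reuses phrasings proposed in this analysis and is illustrative, not a standardized vocabulary.

```python
import re

# Example substitutions drawn from the reframings in this analysis; a real
# editorial guideline would need a vetted, domain-specific mapping.
MECHANISTIC_TRANSLATIONS = {
    r"\bknows\b": "produces output that correlates with",
    r"\bunderstands\b": "processes text by weighting contextual embeddings",
    r"\bbelieves\b": "generates text consistent with",
    r"\bmindset\b": "statistically coherent output profile",
    r"\badopts a role\b": "is conditioned on a persona prompt",
}

def flag_anthropomorphisms(manuscript: str) -> list[tuple[str, str]]:
    """Return (flagged phrase, suggested mechanistic phrasing) pairs found in the text."""
    findings = []
    for pattern, suggestion in MECHANISTIC_TRANSLATIONS.items():
        for match in re.finditer(pattern, manuscript, flags=re.IGNORECASE):
            findings.append((match.group(), suggestion))
    return findings

draft = "The AI knows which strategies work because it adopts a role and has a mindset."
for phrase, suggestion in flag_anthropomorphisms(draft):
    print(f"Flagged '{phrase}' -> consider: '{suggestion}'")
```

A check of this kind cannot replace editorial judgment, but it makes a 'mechanistic-first' policy auditable rather than aspirational.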


info

Raw JSON: 2025-11-18-from-humans-to-machines-researching-entrepreneuria-metaphor-981347.json
Analysis Framework: metaphor v6.0
Generated: 2025-11-18T08:42:49+00:00Z

Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0

Extended Processing Summary​

Gemini refers to the text below as “thought summaries.” This is an overt consciousness projection, because 'thoughts' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why. The concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact: a way of making the model’s processing legible. The first-person framing of these “thought summaries” is a presentation choice for the user-facing output, not a window into “real” thoughts. These are computational artifacts, not cognitive reports from a quirky, curious, or conflicted mind.

Commencing Analysis Protocol

I'm now fully immersed in the "From humans to machines" paper. I've initiated the AI literacy analysis as per the specified JSON schema. Progress is swift; I'm systematically extracting key data points and evaluating the paper's core arguments. Initial findings are promising, aligning well with the schema's structure.

Examining Metaphorical Instances

I'm now fully immersed in the "From Humans to Machines" paper, focusing on the metaphorical and anthropomorphic language. I've begun to audit the text, identifying key instances where human qualities are projected onto AI. I'm carefully analyzing how these projections shape the perception of LLMs. My focus is precise; I'm mapping the distinction between mechanistic processing and conscious knowing within each identified instance. I'm making sure to adhere to the requested structure.

Refining the Audit Framework

I've revised the initial metaphor audit, focusing on the distinction between processing and knowing in each instance. I'm adding a "Mechanistic vs. Conscious" distinction to each entry to sharpen the analysis. I've begun to add some initial examples that were described previously. I am analyzing the impact on trust, risk, and policy associated with each instance. My focus remains tightly constrained to the JSON schema.

Deepening Metaphorical Analysis

I'm now diving deeper into the source-target mapping for the metaphors. For each, I'm defining the familiar Source Domain and the unfamiliar Target Domain, mapping the key elements. I'm focusing on the implied transfer of human-like qualities to AI, particularly where processing is subtly presented as knowing. The initial examples reveal a strong trend towards anthropomorphism. I'm paying close attention to nuanced shifts and the potential impact on risk perception, making sure to stick to the requested structure.

Deepening the Analysis

I'm now fully immersed in the "From Humans to Machines" paper, focusing on the instances where agency is implicitly bestowed upon the AI models. I'm carefully analyzing how the language subtly attributes intentionality and consciousness. My analysis is precise, and my focus is on the impact of each attributed quality. I am adhering closely to the JSON schema, and I've started mapping the metaphors to their corresponding explanations. I'm focusing particularly on the passages where the paper attempts to explain how these AI agents behave.

Examining Consciousness Attributions

I've moved on to the core of the analysis, specifically focusing on the 5 explanation instances. I'm scrutinizing passages where the paper attempts to explain the behavior of the LLMs, distinguishing between how the models function and why they act the way they do. The instances include ones that reference the model 'thinking' or the model's 'ability to understand'. I'm also carefully mapping the explanations back to Robert Brown's typology to classify the underlying explanations provided, and I'm very aware of the potential for the authors to ascribe conscious intent or understanding to what is a mechanistic process.

Revising Schema Compliance

I've carefully reread the instructions to ensure strict adherence to the JSON schema. I'm focusing on the distinct concepts, namely "processing" versus "knowing", for the metaphorical projections. I will analyze the attribution of intent and agency as these pertain to LLM explanations. I am making sure the output for the metaphorical analysis is precise.

Revising Analysis Focus

I've refined the approach for the explanations audit, concentrating on how explanations may inadvertently attribute agency. My focus sharpens on instances where the authors describe mechanistic processes using language that could be interpreted as conscious action. The goal is to detect and analyze any slippage toward agential descriptions. I am paying very close attention to any instances where the explanations attempt to classify the AI behavior.

Commencing Analysis Protocol

I'm now fully immersed in the "From Humans to Machines" paper, focusing on Task 3, the explanation audit. I've begun extracting and categorizing key passages, applying Robert Brown's typology to clarify the types of explanations provided. I'm focusing on the instances where agency is implicitly bestowed upon the AI models, and I am meticulously mapping the explanations to their associated definitions, which I can use to help determine when consciousness claims are made.

Refining Agency Analysis

I'm now zeroing in on instances where agency is subtly attributed to AI, particularly when describing explanations. I'm carefully analyzing how mechanistic processes are framed using language suggesting consciousness. The JSON schema remains my unwavering guide as I refine this critical aspect of the analysis.

Constructing the Analytical Framework

I've carefully reread the instructions to ensure strict adherence to the JSON schema. My initial audit is complete, and I've identified the 10-12 instances of metaphorical and anthropomorphic language. I'm now rigorously analyzing each instance for its impact on perceived trust and risk, ensuring a clear distinction between processing and knowing in my analysis.

Charting the Analysis Framework

I'm now fully immersed in the "From Humans to Machines" paper, focusing on Task 3, the explanation audit. I've begun extracting and categorizing key passages, applying Robert Brown's typology to clarify the types of explanations provided. I'm focusing on the instances where agency is implicitly bestowed upon the AI models, and I am meticulously mapping the explanations to their associated definitions, which I can use to help determine when consciousness claims are made.

Refining Explanation Analysis

I'm now zeroing in on instances where agency is subtly attributed to AI, particularly when describing explanations. I'm carefully analyzing how mechanistic processes are framed using language suggesting consciousness. The JSON schema remains my unwavering guide as I refine this critical aspect of the analysis.

Finalizing the Analytical Structure

I've carefully reread the instructions to ensure strict adherence to the JSON schema. My initial audit is complete, and I've identified the 10-12 instances of metaphorical and anthropomorphic language. I'm now rigorously analyzing each instance for its impact on perceived trust and risk, ensuring a clear distinction between processing and knowing in my analysis.

Analyzing Conceptual Frameworks

I'm now fully immersed in the "From Humans to Machines" paper, focusing on Task 1 and Task 2. I'm extracting key metaphors and mapping the source-target domains for each instance. I'm carefully analyzing the specific human qualities attributed to the AI models, especially where mechanistic processing is subtly presented as conscious knowing. I'm focused on the implications of each instance to adhere to all JSON structural requests.