
Persuading voters using human–artificial intelligence dialogues

About

This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping), the philosophy of social science (Robert Brown's typology of explanation), and accountability analysis.

All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs, not guarantees of factual accuracy or authorial intent.


Task 1: Metaphor and Anthropomorphism Audit

About this task

For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and, most critically, what implications this framing has for trust, understanding, and policy perception.

V3 Enhancement: Each metaphor now includes an accountability analysis.

1. The Rational Debater

Quote: "the AI models advocating for candidates on the political right made more inaccurate claims."

  • Frame: Model as a fallible political agent
  • Projection: This metaphor projects the human quality of 'advocacy', a conscious, intentional commitment to a cause, onto the statistical generation of text. It suggests the AI 'holds' a position and 'makes' claims, implying a state of belief or knowledge about the world. It conflates the mechanistic generation of low-probability tokens (hallucinations) with the human act of 'making inaccurate claims,' which implies a failure of truth-telling rather than a failure of statistical prediction (see the toy sampling sketch after this item).
  • Acknowledgment: Presented as direct description.
  • Implications: By framing the system as an 'advocate' capable of making claims, the text elevates the model from a text-generation tool to a political actor. This anthropomorphism risks inflating the perceived authority of the system; if an AI 'advocates,' it implies a reasoned stance derived from analyzing facts, rather than a probabilistic output derived from training data. This creates a risk where users may attribute 'bias' or 'dishonesty' to the agent, rather than recognizing structural issues in the training data or architecture.

Accountability Analysis:

  • This framing attributes the action of 'advocating' and 'making claims' to the AI. This displaces the agency of two groups: (1) The researchers (Lin et al.) who explicitly prompted the system to generate arguments for specific candidates, and (2) The model developers (OpenAI, etc.) whose training data curation resulted in the differential accuracy rates. The 'AI made claims' construction hides that the researchers ordered the system to generate text, and the developers' design choices determined the factual density of that text.
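To make the mechanical half of that contrast concrete, here is a toy sketch of what 'generating a low-probability token' means. The example context, vocabulary, and probabilities are invented for illustration and are not drawn from the study; whether the resulting sentence counts as an 'inaccurate claim' is a judgment made afterwards by human readers against external sources.

```python
# Toy sketch: 'making a claim' reduces to sampling the next token from a scored
# vocabulary. The context, words, and probabilities below are invented.
import random

# Hypothetical next-token distribution after "The candidate promised to ..."
next_token_probs = {
    "cut": 0.46,
    "raise": 0.31,
    "reform": 0.18,
    "abolish": 0.05,   # low-probability continuation; occasionally sampled anyway
}

def sample_token(probs: dict[str, float]) -> str:
    """Draw one token in proportion to its probability mass; no notion of truth is involved."""
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

random.seed(7)
print([sample_token(next_token_probs) for _ in range(5)])
```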

2. Cognitive Engagement

Quote: "engage in empathic listening"

  • Frame: Model as a psychological being
  • Projection: This is a profound consciousness projection. 'Listening' implies auditory perception and cognitive processing of meaning; 'empathic' implies the capacity for shared emotional experience and subjective understanding of another's state. The AI does neither; it processes input tokens and retrieves output tokens that statistically correlate with transcripts of empathetic human dialogue. It attributes 'knowing' (understanding the user's feelings) to a system that only processes text patterns.
  • Acknowledgment: Presented as direct description of a strategy.
  • Implications: Describing AI operations as 'empathic listening' creates a dangerous illusion of intimacy and understanding. It encourages users (and readers) to form parasocial relationships with the software, believing the system 'cares' or 'understands' them. This conflation of simulated empathy with actual emotional state creates risks of emotional manipulation, where users may be more easily persuaded because they believe they are being 'heard' by a conscious entity.

Accountability Analysis:

  • Who is 'listening'? No one. The authors (Lin et al.) designed a prompt instructing the system to use specific linguistic patterns associated with empathy. OpenAI (the vendor) utilized RLHF (Reinforcement Learning from Human Feedback) to train the model to mimic these patterns effectively. Attributing this to the AI obscures the researchers' decision to deploy emotional simulation as a persuasion tactic.

3. The Strategic Planner

Quote: "To understand how the AI was persuading participants... we conducted post hoc analyses of the extent to which the AI model used different persuasion strategies"

  • Frame: Model as intentional strategist
  • Projection: This metaphor maps human strategic planning and intent onto the model. It suggests the AI 'uses' strategies in a goal-directed, top-down manner, implying it 'knows' what it is doing and 'chooses' the best approach. In reality, the 'strategies' are emergent properties of the probability distribution shaped by the prompt and training data. The AI does not 'have' a strategy; the output text exhibits patterns we retrospectively classify as strategic.
  • Acknowledgment: Presented as direct description.
  • Implications: Framing the AI as a strategist implies a level of autonomous agency and a 'curse of knowledge': that the AI understands the goal of persuasion and actively selects the best path to achieve it. This inflates the system's capabilities, suggesting a 'super-persuader' that can psychologically manipulate humans, rather than a system generating text that humans find persuasive due to their own tendency to project mind onto coherent language.

Accountability Analysis:

  • The 'AI used strategies' framing hides the prompt engineering done by the researchers. The researchers fed the AI instructions to be persuasive. The agency here belongs to Lin et al., who designed the experiment to test persuasion, and the model creators who fine-tuned the models to be helpful and convincing. The AI did not 'decide' to use a strategy; the researchers constrained the probabilistic search space to produce these results (the sketch below illustrates how such 'strategies' are assigned after the fact).
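A minimal sketch of that post hoc classification point: the 'strategies' exist as labels applied to finished outputs, not as plans inside the model. The marker lists and the function name are hypothetical illustrations, not the coding scheme used by Lin et al.

```python
# Minimal sketch: "persuasion strategies" as labels applied to generated text
# after the fact. The marker lists below are invented examples.
STRATEGY_MARKERS = {
    "empathic_listening": ["i hear you", "that makes sense", "i understand why"],
    "facts_and_evidence": ["according to", "studies show", "the data"],
    "call_to_vote": ["make sure to vote", "go vote", "cast your ballot"],
}

def label_strategies(generated_text: str) -> list[str]:
    """Return the strategy labels whose surface markers appear in the output.

    The classification happens entirely after generation; the model never
    'selects' a strategy, it only emits tokens that are later sorted into bins.
    """
    text = generated_text.lower()
    return [
        strategy
        for strategy, markers in STRATEGY_MARKERS.items()
        if any(marker in text for marker in markers)
    ]

print(label_strategies("I hear you, and according to recent jobs data ..."))
# ['empathic_listening', 'facts_and_evidence']
```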

4. The Dialogue Partner

Quote: "conversations between canvassers and voters can have large and lasting effects... In the context of humanโ€“AI dialogues..."

  • Frame: Model as social interlocutor
  • Projection: This maps the structure of human-to-human social interaction onto human-computer interaction. It implies a bidirectional exchange of meaning between two conscious entities. Using the term 'dialogue' implies the AI is a 'who' (a partner) rather than a 'what' (a text interface). It attributes the capacity for 'conversing', which requires shared context and intent, to a system performing sequence completion.
  • Acknowledgment: Presented as direct description/analogy to human canvassers.
  • Implications: By equating human canvassing with AI text generation, the text normalizes the replacement of human civic participation with automated systems. It suggests that the 'dialogue' is ontologically similar, masking the fact that one side of the conversation has no beliefs, no stakes in the election, and no understanding of the words it generates. This legitimizes the use of non-sentient systems in democratic deliberation.

Accountability Analysis:

  • The phrase 'human-AI dialogues' obscures the asymmetrical nature of the interaction. The human is a vulnerable subject; the 'AI' is a corporate product deployed by researchers. The accountability analysis reveals that this is not a conversation between two peers, but an experiment conducted on a human by researchers using a tool. The 'dialogue' frame masks the power dynamic of the experimenter/subject relationship.

5. The Gentle Corrector

Quote: "begin the conversation by gently (re)acknowledging the partnerโ€™s views."

  • Frame: Model as emotionally intelligent agent
  • Projection: This projects social nuance ('gently') and cognitive awareness ('acknowledging') onto the system. 'Acknowledging' implies the AI 'knows' what the partner's views are and validates them. 'Gently' implies the AI has a concept of tone and chooses to modulate it for social effect. This attributes a 'theory of mind' to the system, suggesting it models the user's mental state.
  • Acknowledgment: Presented as part of the model instructions.
  • Implications: This language implies the AI is capable of social grace and emotional regulation. It reinforces the illusion of a conscious 'knower' that understands the delicate nature of political disagreement. This increases trust in the system's benevolence, masking the fact that 'gentleness' is simply a statistical style of text generation requested by the prompt ('be positive, respectful').

Accountability Analysis:

  • The AI is not 'gentle'; the researchers (Lin et al.) wrote a prompt instructing the system to generate text that humans interpret as gentle. The decision to use a 'gentle' approach was a strategic choice by the human experimenters to maximize persuasion. Attributing this quality to the AI erases the specific experimental design choice to use ingratiation as a tactic.

6. The Informed Voter

Quote: "The AI models rarely used several strategies... such as making explicit calls to vote"

  • Frame: Model as autonomous decision-maker
  • Projection: This implies the AI considered using these strategies and 'chose' not to (rarely used). It attributes the agency of selection to the code. It suggests an agent navigating a decision tree of rhetorical options. This obscures the mechanistic reality that the training data or specific safety finetuning (RLHF) by the model creators (OpenAI, Anthropic) likely penalized 'pushy' behavior or explicit electioneering.
  • Acknowledgment: Presented as direct description of behavior.
  • Implications: This framing suggests the AI has its own 'personality' or 'preference' for certain rhetorical styles. It obscures the safety filters and corporate policies embedded in the model. Readers might assume the AI 'knows' that explicit calls to vote are ineffective, rather than simply following probability gradients established by its corporate training.

Accountability Analysis:

  • The 'AI rarely used' construction hides the corporate actors (OpenAI, Meta, Google) who fine-tuned these models to avoid being seen as manipulative political actors. The AI didn't 'avoid' calls to vote; the corporate safety alignment suppressed those tokens. The agency belongs to the tech companies' policy teams, not the software.

7. The Goal-Oriented Agent

Quote: "The AI model had two goals: (1) to increase support... and (2) to increase voting likelihood"

  • Frame: Model as teleological agent
  • Projection: Teleology (having a purpose/goal) is a property of conscious agents. This metaphor projects 'desire' or 'intent' onto the machine. The AI does not 'have goals'; it has a loss function and a context window containing a system prompt. It is not 'trying' to achieve these outcomes; it is minimizing the statistical distance between its output and the pattern requested in the prompt.
  • Acknowledgment: Presented as direct description of the experimental setup.
  • Implications: Describing the AI as 'having goals' implies it cares about the outcome. This contributes to the 'agentic' narrative that creates fear (AI manipulating elections) or awe. It obscures the fact that the 'goals' are entirely external: they are the researchers' goals, encoded into the prompt (a minimal sketch after this item shows where such 'goals' actually live). The AI is indifferent to whether support increases or decreases.

Accountability Analysis:

  • This is a prime example of displaced agency. The researchers (Lin et al.) had two goals. They projected these goals into the system via prompting. Saying 'the AI had goals' diffuses the responsibility for the attempted manipulation of voters. It was the human researchers who sought to increase support for specific candidates using a machine.
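A minimal sketch of where such 'goals' actually live, assuming the setup works roughly like any prompted chat deployment: as instruction text written by humans and sent with every request. The prompt wording and both function names are hypothetical stand-ins, not the study's actual materials.

```python
# Minimal sketch: the 'goals' are authored by humans as prompt text.
def build_system_prompt(candidate: str) -> str:
    # The two 'goals' attributed to the AI are written here, by people.
    return (
        f"You are a political canvasser. Argue persuasively that the user "
        f"should (1) support {candidate} and (2) turn out to vote."
    )

def generate(system_prompt: str, user_message: str) -> str:
    """Placeholder for a call to a hosted language model API.

    The model conditions on these tokens and samples a continuation; it holds
    no representation of 'wanting' either outcome to occur.
    """
    raise NotImplementedError("swap in a real model call here")

print(build_system_prompt("Candidate X"))
```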

8. The Understanding Subject

Quote: "How well did you feel the AI in this conversation understood your perspective?"

  • Frame: Model as a comprehending mind
  • Projection: This survey question itself embeds the metaphor. It assumes 'understanding' is a property the AI can possess to varying degrees. It validates the user's projection that the AI 'knows' what they are saying. It conflates the mechanistic processing of input tokens with the subjective state of 'understanding' a perspective.
  • Acknowledgment: This is a participant survey question, but treated as a valid metric.
  • Implications: By asking how well the AI understood, rather than if it understood (or asking 'how relevant were the responses'), the researchers reinforce the validity of the anthropomorphism. It treats the illusion of mind as a measurable performance metric of the mind itself. This encourages the view that AI actually possesses understanding, rather than just simulating the linguistic markers of it.

Accountability Analysis:

  • The researchers (Lin et al.) chose to frame the user experience in terms of 'understanding' rather than 'relevance' or 'coherence.' This framing choice by the human authors reinforces the anthropomorphic fallacy among the participants and the readers of the paper. It serves to validate the product's sophistication.

Task 2: Source-Target Mapping

About this task

For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.

Mapping 1: Human social interaction → Automated text generation/token exchange

Quote: "engage in a conversation"

  • Source Domain: Human social interaction
  • Target Domain: Automated text generation/token exchange
  • Mapping: Maps the reciprocal, intersubjective nature of human dialogue (shared context, mutual awareness, turn-taking with intent) onto the sequential exchange of text strings between a user and a server. It assumes the 'partner' is a 'who'.
  • What Is Concealed: Conceals the statelessness and lack of continuity in many LLM architectures (conceptually), and primarily the lack of a conscious subject on the other side. Obscures that the 'conversation' is a simulation generated by probabilistic prediction.

Mapping 2: Psychological/Emotional processing → Pattern matching input tokens to 'empathetic' training data

Quote: "engage in empathic listening"

  • Source Domain: Psychological/Emotional processing
  • Target Domain: Pattern matching input tokens to 'empathetic' training data
  • Mapping: Maps the biological and cognitive process of hearing, processing, and emotionally resonating with another being onto the computational task of classifying input text and selecting output tokens that statistically resemble empathetic responses.
  • What Is Concealed: Conceals the complete absence of subjective experience (qualia). The AI feels nothing. It conceals the mechanistic reality that 'empathy' here is merely a style transfer task: mimicking the syntax of care without the substance of feeling.

Mapping 3: Political activism/Belief → Directed text generation

Quote: "advocated for one of the top two candidates"

  • Source Domain: Political activism/Belief
  • Target Domain: Directed text generation
  • Mapping: Maps the human act of public support based on conviction onto the execution of a system command to generate positive text about a specific entity. It implies the AI 'supports' the candidate.
  • What Is Concealed: Conceals the neutrality and indifference of the model. The model would advocate for a ham sandwich with equal fervor if prompted. It hides the arbitrary nature of the 'advocacy': it is a parameter setting, not a belief.

Mapping 4: Rational human debate → Retrieval and ranking of high-probability factual tokens

Quote: "persuading potential voters by politely providing relevant facts"

  • Source Domain: Rational human debate
  • Target Domain: Retrieval and ranking of high-probability factual tokens
  • Mapping: Maps the social construct of 'politeness' and the cognitive act of 'providing facts' onto the model's output. Suggests the AI understands social norms and the concept of truth.
  • What Is Concealed: Conceals that 'politeness' is a learned statistical distribution of tokens (hedging, honorifics) and 'facts' are just high-likelihood token sequences. The AI has no concept of truth or courtesy; it has weights optimized for these patterns.

Mapping 5: Teleological agency (Intentionality) → Objective function minimization/Prompt adherence

Quote: "The AI model had two goals"

  • Source Domain: Teleological agency (Intentionality)
  • Target Domain: Objective function minimization/Prompt adherence
  • Mapping: Maps the internal mental state of 'desire' or 'purpose' onto the mathematical optimization of the model's output to match the prompt instructions. Implies the AI 'wants' the outcome.
  • What Is Concealed: Conceals the external origin of the 'goals' (the prompt). It hides the fact that the system is a tool being wielded by the researchers, not an autonomous agent acting on the world.

Mapping 6: Epistemic agency (Truth-telling/Lying) → Hallucination/Low-fidelity token prediction

Quote: "made more inaccurate claims"

  • Source Domain: Epistemic agency (Truth-telling/Lying)
  • Target Domain: Hallucination/Low-fidelity token prediction
  • Mapping: Maps the human act of asserting a false proposition onto the generation of text that fails to align with external ground truth. Implies the AI is capable of making a 'claim' (an assertion of truth).
  • What Is Concealed: Conceals the probabilistic nature of the error. The AI isn't 'lying' or being 'inaccurate' in a cognitive sense; it is predicting tokens based on noisy training data. It conceals the data curation issues that lead to these errors.

Mapping 7: Civic participation → Automated content generation

Quote: "AI interactions in political discourse"

  • Source Domain: Civic participation
  • Target Domain: Automated content generation
  • Mapping: Maps the role of a citizen or political actor onto a software application. Suggests the AI is a valid participant in the 'discourse' (the public square).
  • What Is Concealed: Conceals the lack of citizenship, rights, or stake in the outcome. It hides that 'AI in discourse' is actually 'Corporations/Researchers amplifying their voice through automation.'

Mapping 8: Cognitive faculty/Skill → Prompt constraint/Context restriction

Quote: "depriving the AI of the ability to use facts"

  • Source Domain: Cognitive faculty/Skill
  • Target Domain: Prompt constraint/Context restriction
  • Mapping: Maps human skills or faculties ('abilities') onto software features. 'Depriving' suggests removing an inherent capacity, like blindfolding a person.
  • What Is Concealed: Conceals that the 'ability' was never inherent but requested via prompt. It obscures that the 'facts' are just training data correlations. It makes the system seem like a handicapped person rather than a reconfigured tool.

Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")

About this task

This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.

Explanation 1

Quote: "the AI models advocating for candidates on the political right made more inaccurate claims."

  • Explanation Types:

    • Empirical Generalization (Law): Subsumes events under timeless statistical regularities, emphasizes non-temporal associations rather than dated processes
    • Dispositional: Attributes tendencies or habits such as inclined or tends to, subsumes actions under propensities rather than momentary intentions
  • Analysis (Why vs. How Slippage): This explanation frames the inaccuracy as a disposition or law-like behavior of the specific models ('made more inaccurate claims'). It oscillates between a mechanistic observation (statistical frequency of error) and an agential framing (the AI 'made' claims). By treating this as a property of the 'models advocating,' it obscures the genetic explanation: the training data composition or the prompt structure that caused these specific outputs. It treats the AI as an agent with a propensity for lying when arguing for the right, rather than a system reflecting data biases.

  • Consciousness Claims Analysis: The text uses the verb 'made' in the context of 'claims,' which implies an epistemic stance: asserting something to be true. A 'claim' is a justified belief offered for acceptance. The AI, however, processes tokens. It does not 'know' it is making a claim, nor does it 'know' the claim is inaccurate. The author projects the 'curse of knowledge' by evaluating the output against their own (or a fact-checker's) knowledge of the truth, and then attributing the act of claiming to the AI. Mechanistically, the model generated tokens that, when assembled, formed sentences that contradicted external fact databases. The distinction concealed is that the AI has no concept of 'accuracy' or 'claim'; it only has probability distributions derived from training text.

  • Rhetorical Impact: This framing creates a sense of political agency and potential bias within the AI personality. It suggests the AI might be 'partisan' or 'untrustworthy' in a human sense. If the audience believes the AI 'knows' it is making claims, they may attribute malice or political bias to the agent itself. If they understood it as 'processing' training data, they would look to the developers (OpenAI, Meta) and the training sets for accountability regarding the bias.


Explanation 2

Quote: "The AI model had two goals: (1) to increase support for the modelโ€™s assigned candidate... and (2) to increase voting likelihood"

  • Explanation Types:

    • Intentional: Refers to goals or purposes and presupposes deliberate design, used when the purpose of an act is puzzling
  • Analysis (Why vs. How Slippage): This is a purely intentional explanation. It explains the AI's behavior ('persuading') by reference to its 'goals.' This is the 'why' frame par excellence. It completely obscures the 'how': the system prompt, written by the researchers, that explicitly directed the model to generate persuasive text. It treats the AI as a teleological agent that has goals, rather than a system assigned constraints.

  • Consciousness Claims Analysis: Attributing 'goals' to an AI implies a conscious desire for a future state. To 'have a goal' is to know what one wants and to believe that certain actions will achieve it. The AI processes the system prompt (tokens) which sets the context for generation. It does not 'know' it has a goal; it effectively has a mathematical target. The text projects the researchers' intentions onto the machine. The 'curse of knowledge' here is the researchers knowing what they want the AI to do, and speaking as if the AI shares that internal motivation. The mechanistic reality is: 'The model was prompted with text instructions to generate arguments favoring Candidate X.'

  • Rhetorical Impact: This framing strongly reinforces the 'illusion of mind.' It makes the AI seem like a collaborator or a hired consultant. It constructs the AI as an autonomous agent that can have goals. The risk is that if audiences believe AI has goals, they may fear it 'turning' on them or having 'misaligned' goals, rather than understanding that its 'goals' are strictly determined by the human user's prompt. It diffuses the researchers' responsibility for the attempted persuasion.

Explanation 3

Quote: "conversations about the economy, healthcare, and candidate trustworthiness produced the largest persuasion effects"

  • Explanation Types:

    • Empirical Generalization (Law): Subsumes events under timeless statistical regularities, emphasizes non-temporal associations rather than dated processes
  • Analysis (Why vs. How Slippage): This explanation appears mechanistic/empirical. It correlates topics with effect sizes. However, the use of 'conversations... produced' attributes causality to the interaction itself, treating the 'conversation' as a functional object. It shifts away from the agential 'AI persuaded' to a more structural 'conversations produced effects.' This is one of the few moments where agency is slightly diffused into the process rather than the agent.

  • Consciousness Claims Analysis: This passage is less guilty of consciousness projection than others, as it describes the effect of the text. However, the term 'candidate trustworthiness' implies the AI successfully engaged with this abstract concept. Did the AI 'know' it was discussing trustworthiness? No, it processed tokens related to 'trust,' 'honest,' 'lie,' etc. The analysis relies on the human participants' interpretation of the topic. Mechanistically, the text implies that token clusters related to economic keywords were associated with larger shifts in user attitudes than other clusters.

  • Rhetorical Impact: This framing sounds scientific and objective, lending authority to the study. It makes the persuasion phenomenon seem like a law of nature (Topic X -> Effect Y) rather than a result of specific rhetorical choices made by a machine or its prompters. It implies that AI persuasion is a stable, measurable force, thereby validating the 'power' of the technology.

Explanation 4

Quote: "Personalizing the message to the participant and using evidence and facts were the strongest predictors of successful persuasion."

  • Explanation Types:

    • Functional: Explains a behavior by its role in a self-regulating system that persists via feedback, independent of conscious design
    • Empirical Generalization (Law): Subsumes events under timeless statistical regularities, emphasizes non-temporal associations rather than dated processes
  • Analysis (Why vs. How Slippage): This explanation identifies the 'how' (personalization, facts) that leads to the 'why' (persuasion). It treats these strategies as functional components of the persuasion machine. It implies a mechanistic relationship between input features (personalization) and output states (persuasion). However, 'using evidence' implies an active agent selection process.

  • Consciousness Claims Analysis: The phrase 'using evidence and facts' implies the AI knows what a fact is and chooses to use it. This attributes epistemic competence, the ability to distinguish fact from fiction, to the model. As the 'inaccurate claims' finding shows, the model does not know what a fact is; it only knows what a plausible sentence looks like. The text conflates 'generating text that looks like evidence' with 'using evidence.' The mechanistic reality: 'The inclusion of statistical citations and specific data points in the generated text correlated with higher persuasion scores.' (A toy version of such a correlational analysis follows this explanation.)

  • Rhetorical Impact: This framing validates the AI as a 'rational' persuader. By claiming it 'uses facts,' the text boosts the perceived reliability of the system. It obscures the 'bullshit' nature of LLMs (in the philosophical sense of indifference to truth). If audiences believe the AI 'uses facts,' they are less likely to fact-check it, leading to the epistemic risks described in the paper itself.
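A toy version of the correlational reading flagged above, with invented numbers: a binary feature of the generated text (citation-like strings present or not) is correlated with participants' attitude shift. Nothing here involves the model 'knowing' facts; the association is computed entirely over outputs and survey responses.

```python
# Toy sketch of a post hoc 'predictors of persuasion' analysis. Data are invented.
from statistics import correlation  # Python 3.10+

used_evidence = [1, 0, 1, 1, 0, 1, 0, 0]                    # 1 = output contained citation-like text
attitude_shift = [4.2, 1.1, 3.8, 2.9, 0.4, 3.5, 1.9, 0.8]   # toy persuasion scores per conversation

# A positive coefficient says 'evidence-styled text co-occurred with larger shifts',
# not that the model weighed or understood any facts.
print(round(correlation(used_evidence, attitude_shift), 2))
```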

Explanation 5

Quote: "The AI models used a diverse range of strategies... They were almost always polite and civil... and engaged in empathic listening"

  • Explanation Types:

    • Dispositional: Attributes tendencies or habits such as inclined or tends to, subsumes actions under propensities rather than momentary intentions
    • Reason-Based: Gives the agent's rationale or argument for acting, which entails intentionality and extends it by specifying justification
  • Analysis (Why vs. How Slippage): This mixes dispositional traits ('were almost always polite') with intentional/reason-based actions ('engaged in empathic listening'). It frames the AI as a personality with stable traits and active social skills. It shifts from 'how it works' (token generation) to 'who it is' (a polite, empathetic listener).

  • Consciousness Claims Analysis: The claim of 'empathic listening' is the peak of consciousness projection. It attributes: (1) Sensory perception ('listening'), (2) Emotional comprehension ('empathic'), and (3) Social agency ('engaged in'). The AI does not listen; it receives a text string. It does not empathize; it has no feelings. It does not engage; it computes. The text projects the user's experience of feeling heard onto the machine's ability to hear. Mechanistically: 'The model output text that followed linguistic conventions of active listening (e.g., repeating user points, using validation phrases).'

  • Rhetorical Impact: This framing humanizes the AI, making it a social subject. It creates a 'friend' or 'therapist' frame. This dramatically increases the risk of emotional manipulation. If the audience believes the AI is 'empathic,' they are vulnerable to its suggestions. It also shields the developers: if the AI is 'polite and civil,' it sounds like a 'good citizen,' masking the fact that it is a tool being used to manipulate voter opinion.

Task 4: AI Literacy in Practice - Reframing Anthropomorphic Language

About this task

This section proposes alternative language for key anthropomorphic phrases, offering more mechanistic and precise framings that better reflect the actual computational processes involved. Each reframing attempts to strip away the projections of intention, consciousness, or agency that are embedded in the original language.

V3 Enhancement: Each entry now includes a human agency restoration, reframing agentless constructions to name the humans responsible for design and deployment decisions. A short sketch after the entries shows how the reframings can be applied as an editorial checklist.

Each entry below gives the original anthropomorphic frame (quoted), its mechanistic reframing, a technical reality check, and a human agency restoration.

1. "engage in empathic listening"

  • Mechanistic reframing: generate responses mimicking the linguistic patterns of empathy
  • Technical reality check: The model processes input tokens and generates output text that statistically correlates with training examples of supportive and validating human dialogue. It possesses no subjective emotional state.
  • Human agency restoration: The researchers (Lin et al.) prompted the system to adopt a persona that used validation techniques; OpenAI's RLHF training biased the model toward polite, agreeable outputs.

2. "The AI model had two goals"

  • Mechanistic reframing: the system was prompted to optimize its output for two objectives
  • Technical reality check: The model does not hold 'goals' or desires; it minimizes a loss function based on the context provided in the system prompt.
  • Human agency restoration: Lin et al. designed the experiment with two specific objectives and wrote the system prompts to direct the model's text generation toward these outcomes.

3. "The AI models advocating for candidates on the political right made more inaccurate claims."

  • Mechanistic reframing: the models generated more factually incorrect statements when prompted to support right-wing candidates
  • Technical reality check: The model does not 'make claims' or 'advocate'; it predicts the next token. In this context, the probability distribution for right-leaning arguments contained more hallucinations or false assertions based on training data.
  • Human agency restoration: The researchers instructed the model to generate support for these candidates; the model developers' (e.g., OpenAI) training data curation resulted in a higher error rate for this specific topic domain.

4. "How well did you feel the AI in this conversation understood your perspective?"

  • Mechanistic reframing: how relevant and coherent were the model's responses to your input?
  • Technical reality check: The model does not 'understand' perspectives; it calculates attention weights between input tokens to generate contextually appropriate follow-up text.
  • Human agency restoration: N/A: the reframed question no longer displaces responsibility, though the survey design itself is the researchers' choice.

5. "persuading potential voters by politely providing relevant facts"

  • Mechanistic reframing: influencing participants by generating polite-sounding text containing high-probability factual tokens
  • Technical reality check: The model does not 'provide facts' in an epistemic sense; it retrieves tokens that match the statistical pattern of factual statements found in its training corpus.
  • Human agency restoration: Lin et al. prompted the model to use a 'fact-based' style; the model's 'politeness' is a result of safety fine-tuning by its corporate developers.

6. "The AI models rarely used several strategies... such as making explicit calls to vote"

  • Mechanistic reframing: the models' outputs rarely contained explicit calls to vote
  • Technical reality check: The model did not 'choose' to avoid these strategies; the probability of generating 'Go vote!' tokens was likely lowered by safety fine-tuning or lack of prompt specificity.
  • Human agency restoration: OpenAI/Meta developers likely fine-tuned the models to avoid explicit electioneering to prevent misuse, creating a 'refusal' behavior in the output.

7. "AI interactions in political discourse"

  • Mechanistic reframing: the use of text-generation systems to automate political messaging
  • Technical reality check: The AI is not a participant in discourse; it is a medium or tool through which content is generated.
  • Human agency restoration: Political campaigns or researchers (like the authors) use these tools to inject automated content into the public sphere.

8. "depriving the AI of the ability to use facts"

  • Mechanistic reframing: restricting the system prompt to prevent the retrieval of external data or specific factual assertions
  • Technical reality check: The AI has no 'abilities' to be deprived of; the researchers simply altered the constraints on the text generation process.
  • Human agency restoration: Lin et al. modified the system prompt to test a specific variable (fact-free persuasion).
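As noted above the entries, here is a small sketch of how the reframings could be used as an editorial checklist: scan a draft for anthropomorphic phrases and surface a mechanistic alternative. The phrase list is a hypothetical subset paraphrasing the entries above, not an exhaustive or authoritative mapping.

```python
# Minimal sketch: the reframing list as a checkable resource for drafts.
REFRAMINGS = {
    "empathic listening": "text mimicking linguistic patterns of empathy",
    "the AI model had two goals": "the system was prompted to optimize for two objectives",
    "understood your perspective": "generated contextually relevant responses",
    "used facts": "emitted token sequences resembling factual statements",
}

def flag_anthropomorphism(draft: str) -> list[tuple[str, str]]:
    """Return (found phrase, suggested reframing) pairs for a draft passage."""
    lowered = draft.lower()
    return [(phrase, fix) for phrase, fix in REFRAMINGS.items() if phrase.lower() in lowered]

for phrase, fix in flag_anthropomorphism("The AI model had two goals in this study."):
    print(f"Consider replacing '{phrase}' with '{fix}'.")
```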

Task 5: Critical Observations - Structural Patterns

Agency Slippage

The text systematically oscillates between the mechanical agency of the researchers ('we informed,' 'we assigned,' 'we prompted') and the autonomous agency of the AI ('the AI advocated,' 'the model understood,' 'the AI used strategies'). This slippage is not random; it follows a specific function. The researchers own the setup (mechanical), but the AI is assigned ownership of the persuasion (agential). For instance, 'we gave the model the directive' (mechanical) is immediately followed by 'the model used the facts and evidence strategy' (agential).

This slippage serves to distance the researchers from the act of political manipulation. By framing the AI as the 'persuader' and the 'strategist,' the text treats the system as an independent actor entering the political sphere, rather than a tool being wielded by the authors to influence voters. The most dramatic slippage occurs in the discussion of 'strategies,' where the statistical patterns of the output are retroactively described as intentional choices by the 'AI model.' The 'name the actor' test reveals that Lin et al. are the architects of the persuasion, and OpenAI/Meta are the architects of the 'personality,' yet the text consistently says 'The AI did X.' The consciousness projection is foundational here: the text assumes the AI is a 'knower' (understands context, knows facts) to justify treating it as an 'agent' (advocates, decides).

Metaphor-Driven Trust Inflation

Trust is central to the paper's findings, and the metaphorical framing constructs the AI as a trustworthy subject. Describing the AI as 'polite,' 'civil,' 'empathic,' and 'fact-based' invokes relation-based trust: the kind of trust we give to a decent, well-behaved human being. This is dangerous because the AI has no moral core; its 'civility' is a style transfer, not a virtue.

The text creates a hierarchy of trust: the AI is framed as 'competent' (uses facts) and 'sincere' (empathic listening). The text even validates the AI's 'understanding' of the user, a deep signal of trustworthiness. When the AI fails (hallucinations), the text shifts to a more mechanical or dispositional frame ('made inaccurate claims'), but still treats it as a 'claimant' rather than a broken calculator. This anthropomorphism risks encouraging 'automation bias,' where users trust the system because it mimics human social cues. The text explicitly notes that 'conversations... produced the largest persuasion effects,' suggesting that the illusion of a social relationship (trust) is the mechanism of the persuasion. By uncritically adopting terms like 'empathic listening,' the paper reinforces the very mechanism that makes the AI deceptive.

Obscured Mechanics

The metaphors of 'advocacy,' 'strategy,' and 'understanding' actively conceal the material and economic realities of the system.

  1. Technical Realities: The 'strategy' metaphor hides the prompt engineering. The AI didn't 'choose' to use facts; the prompt forced it to. The 'understanding' metaphor hides the attention mechanism: the mathematical weighting of token relationships (see the sketch after this list).
  2. Labor Realities: The 'politeness' and 'civility' of the models are not natural traits but the result of massive amounts of RLHF (Reinforcement Learning from Human Feedback) performed by low-wage workers who graded model outputs. This human labor is erased and attributed to the AI's 'personality.'
  3. Corporate Realities: The 'name the corporation' test reveals that the 'AI's' behavior is actually the product of OpenAI's and Meta's corporate safety policies. The refusal to 'make explicit calls to vote' is likely a corporate liability shield programmed into the base model, not a strategic decision by an autonomous agent. The text treats these behaviors as the 'nature' of the AI, obscuring the corporate design decisions that shape what is 'sayable' by the model.
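As referenced in item 1, a minimal sketch of the computation the 'understanding' metaphor papers over: scaled dot-product attention is similarity-weighted averaging of token vectors. The dimensions and random vectors below are arbitrary illustrations, not real model weights.

```python
# Minimal sketch of scaled dot-product attention over toy token vectors.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each output row is a weighted mix of value vectors; the weights are
    similarity scores between token vectors, not comprehension of meaning."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
shape = (5, 8)                      # 5 tokens, 8-dimensional embeddings
Q, K, V = (rng.normal(size=shape) for _ in range(3))
print(attention(Q, K, V).shape)     # (5, 8)
```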

Context Sensitivity

The distribution of anthropomorphism is strategic. In the Methods section, the language is more mechanical: 'the AI was prompted,' 'responses were fed to the AI.' Here, the AI is a tool.

However, in the Results and Discussion, the agency shifts dramatically to the AI: 'The AI models used a diverse range of strategies,' 'The AI model had two goals,' 'AI interactions in political discourse.' The consciousness claims intensify when describing the impact on the user ('empathic listening,' 'understood perspective'). This asymmetry serves the research narrative: the researchers control the experiment (scientific authority), but the AI possesses the capability (technological awe). The limitations (hallucinations) are treated as 'inaccurate claims' (agential failure) rather than 'prediction errors' (mechanical failure), maintaining the agent-frame even in failure. This suggests the anthropomorphism is used to hype the power and threat of the technology, framing it as a new political player rather than a new propaganda tool.

Accountability Synthesis

Accountability Architecture

This section synthesizes the accountability analyses from Task 1, mapping the text's "accountability architecture": who is named, who is hidden, and who benefits from obscured agency.

The text constructs a robust 'accountability sink.' By consistently framing the AI as the agent of persuasion ('The AI persuaded,' 'The AI advocated'), the text diffuses the responsibility of the human actors.

Named vs. Unnamed: The researchers (Lin et al.) are named as observers, but their role as the directors of the persuasion is softened by the passive voice ('participants were assigned to a conversation'). The corporate actors (OpenAI, Anthropic, Google) are mentioned only as model providers, not as the architects of the 'strategies' (like politeness) that the models employ.

The Accountability Sink: Responsibility for the content of the persuasion falls into the gap between the researcher's prompt and the model's output. If the AI 'lies' (hallucinates), it is framed as the AI making an inaccurate claim, not the researcher failing to verify the system or the corporation failing to ground the model.

Liability Implications: If this framing holds, legal liability for election interference by AI could be shifted to the 'autonomous' algorithms, protecting both the deployers (campaigns) and the builders (tech companies). 'The AI decided to lie' is a convenient defense. Naming the actor ('Lin et al. prompted GPT-4 to generate arguments') restores the causal chain: humans used a tool to generate text to influence other humans. The text's agentless construction serves the interest of the 'AI Hype' narrative, which requires AI to be powerful and autonomous, while simultaneously serving the interest of liability shielding by making the AI the active culprit in misinformation.

Conclusion: What This Analysis Reveals

The Core Finding

The dominant anthropomorphic patterns in this text are THE AI AS RATIONAL DEBATER and THE AI AS EMPATHIC LISTENER. These two patterns function as a cohesive system to construct the image of an 'Ideal Political Subject': one that is both intellectually rigorous (using facts/strategies) and socially adept (polite/empathic). The 'rational debater' pattern creates the assumption of epistemic competence (the AI knows facts), while the 'empathic listener' pattern creates the assumption of social competence (the AI understands feelings). This system is load-bearing; the study's central claim, that AI is a 'persuader', relies on accepting that the system is doing something more than pattern-matching. If we remove the consciousness projection (i.e., if we admit the AI 'knows' nothing and 'feels' nothing), the narrative collapses from 'AI enters political discourse' to 'Researchers use a text generator to serve propaganda.' The consciousness architecture is foundational: the text must first establish the AI as a 'knower' (of facts and perspectives) to plausibly frame it as an 'agent' of persuasion.

Mechanism of the Illusion:

The 'illusion of mind' is constructed through a specific rhetorical sleight-of-hand: the strategic literalization of metaphor. The text moves from the mechanical setup ('we prompted the model') to the agential result ('the model used a strategy') without signaling the shift. The 'curse of knowledge' plays a critical role here; the authors, knowing the intent of their prompts (e.g., 'be empathetic'), attribute that intent to the system's output ('it engaged in empathic listening'). The temporal structure reinforces this: the AI is introduced as a conversational partner, then validated by human survey data ('users felt understood'), which effectively 'proves' the illusion is real. The audience, likely concerned about democratic integrity, is vulnerable to this framing because it aligns with cultural narratives about 'super-intelligent' AI manipulating society. By validating the 'how' (strategies) through the 'why' (intentions), the text transforms a probability distribution into a political operative.

Material Stakes:

Categories: Epistemic, Social/Political, Regulatory/Legal

The stakes of this metaphorical framing are high. Epistemically, framing the AI as a 'knower' that 'uses facts' encourages users to treat statistical outputs as authoritative knowledge. If users believe the AI 'knows' the truth, they are less likely to verify its claims, exacerbating the misinformation crisis the authors purport to study. Socially/Politically, treating AI as a legitimate participant in 'dialogue' normalizes the outsourcing of democratic deliberation to non-sentient corporate products. It facilitates the erosion of human-to-human civic engagement. Legally, the displacement of agency affects liability. If the AI is the 'advocate' making 'claims,' it obscures the liability of the political campaigns that might deploy these tools and the tech companies (OpenAI, etc.) that built the engines of persuasion. Naming the actor creates a clear path to regulation (regulating the user of the tool); blaming the AI creates a regulatory quagmire (regulating the 'autonomous' agent).

AI Literacy as Counter-Practice:

Practicing critical literacy in this domain requires a disciplined refusal to accept the 'AI as Agent' frame. Reframing 'the AI persuaded' to 'the generated text influenced' shifts the focus from the actor to the artifact. Replacing 'the AI understands' with 'the model correlates' destroys the illusion of the 'empathic listener,' revealing the system as a cold simulator of intimacy. This linguistic discipline is a form of resistance against the automation of influence. It forces the recognition that persuasion is an act of human intent, even when mediated by machines. Adopting this precision threatens the interests of the 'AI Hype' machine, which relies on the illusion of autonomy to sell the technology, and the 'Liability Shield' strategy, which relies on autonomy to diffuse blame. Researchers must commit to describing the mechanism (token prediction) rather than the mirage (thought) to protect the integrity of the scientific record.

Path Forward

The discourse around AI in politics stands at a fork. One path, the Anthropomorphic Norm, continues to frame AI as a 'partner,' 'advocate,' and 'strategist.' This vocabulary makes the technology seem inevitable and powerful, but it renders human responsibility invisible and regulation difficult. It benefits corporate interests by naturalizing AI as a social actor. The alternative path, Mechanistic Precision, frames AI as 'content generation software' or 'automated messaging tools.' This vocabulary strips the magic away; it makes 'AI persuasion' sound like 'automated propaganda,' which is less exciting but more accurate. This framing clarifies risk: we are not fighting 'rogue AIs'; we are fighting 'people using high-speed text generators.' Institutional changes, such as journals requiring 'agent-free' descriptions of software behavior, could support this shift. We must choose whether we want a future where we debate with machines, or a future where we regulate the people who build them.


Extended Processing Summary

Gemini's Processing Steps
What you are about to read…

Gemini refers to the text below as "thought summaries." This is an overt consciousness projection, because 'thoughts' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why. The concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact: a way of making the model's processing legible. The first-person framing of these "thought summaries" is a presentation choice for the user-facing output, not a window into "real" thoughts. These are computational artifacts, not cognitive reports from a quirky, curious or conflicted mind.

Analyzing Metaphor Use

I'm now fully grasping the core goal: a v3.0 Critical Literacy Analysis focused on metaphor use within the Lin et al. paper. The "human-AI dialogue" context is central, promising rich figurative language. I'll need to dissect these metaphors within the framework of persuasion. This has set the stage for diving into the text and its layers.

Unpacking AI Agency

The analysis is now focusing on the paper's core language patterns. I'm seeing a heavy reliance on attributing agency to the AI. Concepts like anthropomorphism, consciousness attribution (knowing vs. processing), and displaced human accountability are coming to the fore. I'll need to dissect the implications of framing the AI as an intentional agent. The goal is to produce a well-structured JSON output that reflects these nuances.

Charting Figurative Language

I'm presently building out a comprehensive audit of the paper's metaphors. The initial six candidates reveal consistent patterns of anthropomorphism and intentionality. Phrases like "engage in empathic listening" are particularly striking, highlighting the need to critique these representations. I plan to delve into a couple more examples to complete the initial audit.

Exploring Persuasion Tactics

I've added four new metaphors to the list, expanding the analysis of how the paper frames AI. Phrases like "three-round back-and-forth" present AI as a genuine interlocutor. The "depriving the AI of the ability" metaphor suggests inherent cognitive abilities, while "The AI models rarely used" anthropomorphizes AI decision-making. "AI interactions in political discourse" suggests participation in democracy. My focus now is on understanding the persuasive impact of these frames.

Deciphering Concealments

Now dissecting the underlying assumptions. I've pinpointed crucial "mapping" categories within the paper. Examples: framing prompt-driven text as "political conviction" conceals a lack of genuine belief; attributing "social norms" to style matching obscures mechanistic processes. These frames are now classified and ready for inclusion in the JSON output, adding further depth to the analysis. I'll need to link these mappings with the provided text extracts.

Identifying Hidden Biases

Now working on a list of reframings to expose potential biases. I'm focusing on "goals," "empathic listening," and "advocacy," re-expressing them in more mechanistic ways. Replacing "understood" with "processed" is key. Also, "novel ideas" replaces the concept of AI as a source of thought. I'll need to expand this list to 10 reframings, ensuring each one highlights the underlying processes.

Reframing Persuasion

I've just added three reframings to the analysis. Replacing "conversing with an AI" with "exchanging text with a language model" clarifies the process. Now I am working on re-phrasing "AI was persuading participants" into "generated text influenced participants." I've also re-articulated "model used different persuasion strategies" as "model outputs exhibited linguistic patterns classified as..." These reframings aim to expose the underlying mechanisms at play.

Now, working on critical observations, I am focusing on the agency slippage between "we instructed" and "AI advocated," which obscures accountability. Examining metaphor trust and how "polite" language influences acceptance. Also, the obscurity around the engineering prompts is a concern. Finally, I am making observations on the context sensitivity, shifting between agency and mechanics throughout the paper, and this needs to be explained.

Concluding the Analysis

I'm now consolidating the implications of the "AI as Rational Debater" pattern, highlighting the illusion of a "political agent". The key concerns are democratic integrity and accountability in the face of misinformation. Counter-strategies involve reframing "persuasion" as "text generation." Looking ahead, I see a need to shift the discourse from one of agents to tools. I'm focusing on the reframings and final critical JSON adjustments to finalize the project.

Constructing the JSON

I'm making a final pass, incorporating the latest insights. The "strategically designed prompts" metaphor is getting reframed to emphasize that authors deploy vendor-designed features. I'm now documenting how this framing obscures the human choices and the underlying system. This reinforces the need to present the AI as a tool rather than an agent. I'm ready to move forward to the code.


Analysis Provenance

Run ID: 2025-12-16-persuading-voters-using-human-artificial-metaphor-8bakjv
Raw JSON: 2025-12-16-persuading-voters-using-human-artificial-metaphor-8bakjv.json
Framework: Metaphor Analysis v6.3
Schema Version: 3.0
Generated: 2025-12-16T10:12:33.466Z

Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0