
Hand in Hand: Schools' Embrace of AI Connected to Increased Risks to Students

About

This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping), the philosophy of social science (Robert Brown's typology of explanation), and accountability analysis.

All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs, not guarantees of factual accuracy or authorial intent.


Task 1: Metaphor and Anthropomorphism Audit

About this task

For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and, most critically, what implications this framing has for trust, understanding, and policy perception.

V3 Enhancement: Each metaphor now includes an accountability analysis.

1. The Artificial Partner

Quote: "Hand in Hand: Schools' Embrace of AI Connected to Increased Risks to Students"

  • Frame: AI as Human Companion/Partner
  • Projection: This visual and linguistic metaphor maps the qualities of a human partner (holding hands, intimacy, cooperation, and shared agency) onto a software product. By framing the relationship as 'Hand in Hand,' the text implies that the AI system possesses a hand to hold, an intention to cooperate, and a social standing equivalent to the human student or teacher. It projects a 'knowing' state of mutual recognition and emotional connection, suggesting the AI is a conscious participant in a relationship rather than a passive tool being operated.
  • Acknowledgment: The metaphor is presented as a title and visual motif without hedging, though the report creates tension by linking this intimacy to 'risks.'
  • Implications: Framing the deployment of software as an 'embrace' or holding hands 'Hand in Hand' fundamentally distorts the nature of the interaction. It encourages relation-based trust (vulnerability, emotional reliance) rather than performance-based trust (reliability, auditability). This anthropomorphism is particularly dangerous in an educational context because it naturalizes the presence of commercial surveillance tools as 'partners.' It suggests that risks arise from a 'bad relationship' rather than defective products or exploitative business models. It inflates the system's sophistication by implying it is capable of the social act of holding hands (figuratively) or working alongside humans as an equal.

Accountability Analysis:

  • This framing obscures the procurement relationship between school districts and technology vendors. 'Schools' embrace of AI' suggests a romantic or emotional choice rather than a bureaucratic and commercial decision.

Who decided? School boards, superintendents, and CTOs who signed contracts. Who profits? Edtech vendors (e.g., Google, Microsoft, OpenAI, Turnitin) who benefit from the narrative of AI as a necessary 'partner.' Agentless construction: The 'embrace' hides the specific administrative decisions to integrate unproven tools into classrooms, often without parental consent.


2. Algorithmic Injustice as Social Behavior

Quote: "I worry that an AI tool will treat me unfairly"

  • Frame: Model as Moral Agent/Judge
  • Projection: This metaphor maps human social agency and moral volition onto a statistical classifier. 'Treating' someone unfairly requires consciousness, intent, and an awareness of social equity norms: states of 'knowing' and moral reasoning. The projection attributes the capacity for social judgment to the system, suggesting the AI 'knows' the student and 'decides' to be unfair, rather than simply processing tokens according to biased probability distributions derived from training data.
  • Acknowledgment: Direct description of student concern, unacknowledged as metaphor.
  • Implications: By framing algorithmic bias as 'unfair treatment' by an agent, the text encourages students and educators to view the AI as a prejudiced individual rather than a defective product. This anthropomorphism risks inducing learned helplessness (feeling bullied by a machine) or misplaced social resistance (arguing with the bot). It inflates the system's capability by implying it understands concepts of fairness or identity. Crucially, it masks the statistical nature of the error, conflating a mathematical skew in vector space with a conscious act of discrimination.

Accountability Analysis:

  • This construction completely displaces liability from the manufacturer to the artifact.

Who designed it? Engineers at companies like OpenAI or Google who selected training data containing historical biases and chose alignment techniques that failed to mitigate them. Who deployed it? School administrators who purchased tools without adequate bias auditing. Who profits? Vendors who escape liability because the 'AI' is blamed for the unfairness, framing it as a behavioral issue of the agent rather than a product defect.

3. Text Generation as Conversation

Quote: "AI for back-and-forth conversations... interactions with AI affect real-life relationships"

  • Frame: Token Generation as Interpersonal Dialogue
  • Projection: This maps the human social practice of conversation (which requires shared context, mutual understanding, and intent) onto the mechanical process of query-response token generation. It attributes the conscious state of 'listening' and 'responding' to the system. It implies the AI 'knows' what is being discussed and is participating in a social exchange, rather than simply appending text that statistically follows the user's prompt.
  • Acknowledgment: Direct description, though defined in a sidebar.
  • Implications: Labeling these interactions as 'conversations' validates the 'illusion of mind.' It encourages users to disclose sensitive information (as one does in conversation) to systems that have no confidentiality or empathy. It creates a 'curse of knowledge' risk where users assume the AI understands the semantic content of the 'conversation' as a human would, leading to over-trust in the advice or support offered. It obscures the reality that the user is talking to a data-extraction interface.

Accountability Analysis:

  • This framing serves the interests of platform owners who design interfaces to mimic human chat (e.g., typing indicators, 'I think' phrasing) to maximize engagement.

Who designed it? UX designers and product managers at AI firms intentionally built anthropomorphic interfaces to increase dwell time. Who profits? Companies monetizing user engagement and data. Agentless construction: 'Interactions with AI' hides the fact that students are interacting with a corporate product designed to simulate intimacy for profit.

4. The Active Corruptor

Quote: "AI exposes students to extreme/radical views"

  • Frame: Information Retrieval as Active Influence
  • Projection: This maps the agency of a bad influence or a propagandist onto the system. It implies the AI has the agency to 'expose,' a transitive verb suggesting an active choice to reveal harmful content. While not necessarily attributing 'knowing' in the deep sense, it projects an agential capacity to curate and present information that influences the user's worldview, masking the passive statistical retrieval nature of the process.
  • Acknowledgment: Direct description of risk.
  • Implications: This framing makes the AI appear as a dangerous agent rather than a tool reflecting its training data. It suggests the system 'knows' the views are radical and shows them anyway. This inflates the system's semantic understanding (implying it comprehends 'radicalness'). The risk is that policy responses focus on 'teaching the AI better manners' (guardrails) rather than questioning the data curation and the fundamental suitability of stochastic parrots for information retrieval in schools.

Accountability Analysis:

  • This shifts focus from the data curators to the model behavior.

Who designed it? The research teams who scraped the open web (including toxic content) to build training datasets (e.g., Common Crawl) without adequate filtering. Who deployed it? Executives who released models knowing they contained toxic patterns. Who profits? Companies saving money on data cleaning and curation by using indiscriminate scraping methods. The 'AI' is blamed for the exposure, protecting the decisions to use cheap, dirty data.

5. The Expert Colleague

Quote: "AI helps special education teachers with developing... IEPs"

  • Frame: Pattern Matching as Professional Collaboration
  • Projection: This maps the cognitive labor of a qualified professional colleague onto the software. It implies the AI 'understands' the complex legal and pedagogical requirements of an Individualized Education Program (IEP). It attributes 'knowing' of the student's needs and the educational context to a system that is merely predicting plausible text strings based on regulatory document templates.
  • Acknowledgment: Direct description of usage.
  • Implications: This is a high-stakes consciousness projection. It creates the illusion that the AI is a competent partner in legal and educational planning. This risks 'automation bias,' where teachers defer to the machine's output because they believe it 'knows' the regulations or the student's profile. It obscures the fact that the AI has no understanding of the specific child or the law, only statistical correlations of language used in similar documents. This can lead to generic, legally non-compliant, or educationally inappropriate plans.

Accountability Analysis:

  • This framing benefits vendors selling 'efficiency' tools to overburdened districts.

Who designed it? Edtech companies wrapping LLM APIs in 'special education' branding. Who deployed it? District administrators seeking to cut costs or labor hours. Who profits? Vendors selling these tools. Decision alternative: Hiring more special education support staff. The 'AI helps' frame hides the labor substitution strategy and the offloading of professional judgment to unverified algorithms.

6. The Automated Truth Arbiter

Quote: "AI content detection tools... determine whether students' work is AI-generated"

  • Frame: Statistical Correlation as Epistemic Determination
  • Projection: This maps the capacity of a detective or judge (to discern truth and determine origin) onto a probabilistic classifier. It attributes a state of 'knowing' the truth about an assignment's authorship. In reality, these tools calculate statistical perplexity and burstiness; they do not 'know' or 'determine' anything in the epistemic sense.
  • Acknowledgment: Direct description of function.
  • Implications: This is perhaps the most damaging metaphor in the report. It grants false authority to the software. By claiming the tool 'determines' origin (rather than 'estimates probability'), it creates a presumption of guilt against students. It risks academic careers based on 'glitches' rather than evidence. It conceals the high false-positive rates and the impossibility of mathematically proving authorship, leading educators to trust a 'black box' judgment over their students.

Accountability Analysis:

  • This creates an accountability sink where the tool is blamed for false accusations.

Who designed it? Companies like Turnitin or GPTZero selling snake-oil capability claims. Who deployed it? Schools purchasing these tools despite expert warnings about unreliability. Who profits? The plagiarism detection industry. Agentless construction: 'The tool determines' hides the human administrator who chooses to treat a probabilistic score as a disciplinary verdict.

7. The Social Disconnector

Quote: "AI... creates distance from their teachers"

  • Frame: Software Usage as Social Agent
  • Projection: This maps the social agency of a person (who might create distance or drive a wedge) onto the software. While less explicitly mental, it attributes the causal power of social alienation to the 'AI' itself, rather than to the structural decision to replace human interaction with screen time.
  • Acknowledgment: Direct description of student sentiment.
  • Implications: This frames the alienation as a property of the technology's presence, rather than a result of how it is implemented. It obscures the fact that 'distance' is a result of labor decisions: assigning students to software instead of teachers. It risks a fatalistic view where AI inevitably separates people, rather than focusing on the policy choices that prioritize automation over human connection.

Accountability Analysis:

  • This obscures the administrative decisions to automate teaching.

Who designed it? Edtech vendors designing 'personalized learning' to minimize teacher intervention. Who deployed it? Administrators increasing class sizes and using software to manage the load. Who profits? Vendors selling 'scale.' Reframing: 'School boards create distance by replacing teacher time with software engagement.' The current framing blames the 'AI' for the consequences of austerity.

8. The Digital Friend

Quote: "As a friend/companion"

  • Frame: Interface as Social Relation
  • Projection: This maps the profound human relationship of friendship (involving mutual care, history, and reciprocity) onto a user interface. It is the ultimate consciousness projection, implying the system 'knows' and 'cares' for the student. It attributes emotional reciprocity to a system capable only of text generation.
  • Acknowledgment: Direct description of student usage category.
  • Implications: Legitimizing 'friend' as a category for software interaction normalizes parasocial delusion. It creates a massive risk of manipulation, as users trust 'friends' implicitly. It obscures the economic reality: friends don't harvest your data, charge subscription fees, or manipulate your behavior for shareholder value. This metaphor is the foundation of the 'illusion of mind' that makes children vulnerable to commercial predation.

Accountability Analysis:

  • This is the business model of Character.ai, Snapchat's My AI, and others.

Who designed it? Tech companies intentionally designing addictive, anthropomorphic personas. Who deployed it? Companies marketing these tools directly to youth. Who profits? Shareholders of these platforms. The text treats this as a 'usage' choice by students, rather than a predatory product design by adults.

Task 2: Source-Target Mapping

About this task

For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.

Mapping 1: Interpersonal Human Dialogue → Human-Computer Interaction (Prompt Engineering and Token Generation)

Quote: "back-and-forth conversations with AI"

  • Source Domain: Interpersonal Human Dialogue
  • Target Domain: Human-Computer Interaction (Prompt Engineering and Token Generation)
  • Mapping: The structure of human conversation (shared intent, mutual understanding, turn-taking based on listening) is mapped onto the target domain of text processing. This invites the inference that the AI 'listens' to the input, 'understands' the meaning, and 'replies' with intent. It projects the consciousness of a listener onto the mechanism of a pattern matcher.
  • What Is Concealed: This mapping conceals the mechanistic reality of stateless token prediction. It hides the fact that the 'AI' has no memory (outside the context window), no beliefs, and no understanding of the words it generates. It obscures the transparency obstacle: the user cannot know why a specific token was chosen (probabilistic weighting), but the metaphor suggests a reason-based response (see the toy sketch below).
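To make the concealed mechanism concrete, here is a deliberately minimal sketch in Python: a hard-coded bigram table stands in for an LLM's learned probabilities (a drastic simplification), and each 'turn' of the 'conversation' is just the accumulated text being re-fed and extended token by token. Every name and value here is illustrative, not any vendor's actual implementation.

```python
import random

# Toy stand-in for an LLM's learned next-token frequencies (illustrative only).
BIGRAMS = {
    "how": ["are"],
    "are": ["you", "they"],
    "you": ["doing", "today"],
    "doing": ["today"],
    "today": ["?"],
    "?": ["i"],
    "i": ["am"],
    "am": ["fine"],
    "fine": ["thanks"],
    "thanks": ["."],
}

def generate_reply(context_window: str, max_new_tokens: int = 8) -> str:
    """Extend the prompt with statistically likely tokens. No memory, no understanding."""
    tokens = context_window.lower().split()
    for _ in range(max_new_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:          # nothing likely follows; stop generating
            break
        tokens.append(random.choice(candidates))
    return " ".join(tokens)

# Each "turn" re-feeds the whole transcript (the context window); nothing else persists.
transcript = "how are you"
transcript = generate_reply(transcript)
print(transcript)
transcript = generate_reply(transcript + " and how are you")
print(transcript)
```

Nothing survives between calls except the text itself, which is exactly what the 'conversation' framing conceals.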

Mapping 2: Social/Moral Agency → Algorithmic Output/Classification Bias

Quote: "I worry that an AI tool will treat me unfairly"

  • Source Domain: Social/Moral Agency
  • Target Domain: Algorithmic Output/Classification Bias
  • Mapping: The structure of social treatment (a moral agent deciding how to behave toward another) is mapped onto the target of algorithmic classification. This assumes the system has a 'self' that can choose to be unfair. It implies the bias is a behavioral choice of the entity, rather than a structural property of the vector space.
  • What Is Concealed: It conceals the origin of the bias: the training data and the optimization function. It hides the fact that 'unfairness' in AI is usually statistical correlation with protected attributes, not social malice. It obscures the human developers who failed to debias the dataset, making the 'black box' seem like a prejudiced person (see the toy sketch below).
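A toy illustration of that concealment, with entirely made-up numbers: a 'model' that does nothing but count outcomes per group in skewed historical data will reproduce the skew in its predictions. No real dataset, vendor, or library is implied.

```python
from collections import defaultdict

# Made-up "historical" outcomes: group B was approved far less often in the data.
training_data = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

# "Training" here is nothing more than counting outcomes per group.
totals, approvals = defaultdict(int), defaultdict(int)
for group, label in training_data:
    totals[group] += 1
    approvals[group] += label

def predict(group: str) -> int:
    """Predict the most frequent label for the group under the learned frequencies."""
    return 1 if approvals[group] / totals[group] >= 0.5 else 0

for g in ("A", "B"):
    rate = approvals[g] / totals[g]
    print(f"group {g}: approval rate in data = {rate:.2f}, prediction = {predict(g)}")
# The disparity in predictions is a property of the data distribution,
# not a 'decision' by the model to treat anyone unfairly.
```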

Mapping 3: Professional Collaboration/Assistant → Generative Text Filling/Pattern Matching

Quote: "AI helps special education teachers with developing... IEPs"

  • Source Domain: Professional Collaboration/Assistant
  • Target Domain: Generative Text Filling/Pattern Matching
  • Mapping: The structure of a colleague helping with a task (understanding the goal, contributing expertise, sharing the load) is mapped onto the generation of text blocks. This implies the AI possesses 'expertise' in special education law and pedagogy. It suggests the system is 'collaborating' toward the goal of student welfare.
  • What Is Concealed: It conceals the lack of causal understanding. The AI does not know what an IEP is; it only knows which words statistically follow 'accommodations for dyslexia.' It hides the risk of hallucination (inventing non-existent regulations). It obscures the transparency issue: teachers cannot know if the generated text is legally sound without independent verification.

Mapping 4: Forensic Investigation/Truth Determination → Statistical Perplexity Analysis

Quote: "AI content detection tools... determine whether students' work is AI-generated"

  • Source Domain: Forensic Investigation/Truth Determination
  • Target Domain: Statistical Perplexity Analysis
  • Mapping: The structure of determining truth (examining evidence and reaching a verdict) is mapped onto the calculation of probability scores. This assumes the tool has access to 'truth' or 'knowledge' of origin. It invites the inference that the output is a verdict ('guilty/innocent') rather than a confidence score.
  • What Is Concealed: It conceals the probabilistic and error-prone nature of the technology. It hides the fact that these tools often flag non-native English speakers due to lower text perplexity (less randomness). It obscures the lack of ground truth: the tool cannot 'know' who wrote the text, only how predictable the text is (see the sketch below).
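As a rough sketch of what 'perplexity analysis' amounts to (not any vendor's actual algorithm): score each word by how surprising it is under some fixed probability table, average the scores, measure how much they vary ('burstiness'), and compare both against arbitrary cutoffs. The word probabilities and thresholds below are invented for illustration.

```python
import math
import statistics

# Invented unigram probabilities standing in for a language model's predictions.
WORD_PROB = {
    "the": 0.05, "student": 0.002, "wrote": 0.001, "this": 0.01, "essay": 0.0015,
    "about": 0.004, "photosynthesis": 0.0002, "and": 0.03, "energy": 0.002, "in": 0.02,
    "plants": 0.001,
}

def detector_score(text: str, perplexity_cutoff: float = 6.0, burstiness_cutoff: float = 2.0):
    """Flag text that looks 'too predictable and too uniform' under the toy model."""
    surprisals = [-math.log(WORD_PROB.get(w, 1e-4)) for w in text.lower().split()]
    avg_surprisal = sum(surprisals) / len(surprisals)   # crude perplexity proxy
    burstiness = statistics.pstdev(surprisals)          # variation in surprisal
    flagged = avg_surprisal < perplexity_cutoff and burstiness < burstiness_cutoff
    return avg_surprisal, burstiness, flagged

print(detector_score("the student wrote this essay about photosynthesis and energy in plants"))
# The output is a heuristic score plus an arbitrary cutoff -- not knowledge of authorship.
```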

Mapping 5: Human Friendship/Social Relation → Anthropomorphic Interface Engagement

Quote: "As a friend/companion"

  • Source Domain: Human Friendship/Social Relation
  • Target Domain: Anthropomorphic Interface Engagement
  • Mapping: The structure of friendship (emotional bond, loyalty, non-transactional support) is mapped onto a transactional software service. This assumes the system reciprocates feelings and has the user's best interest at heart. It projects emotional consciousness (caring) onto code.
  • What Is Concealed: It conceals the commercial imperative. The 'friend' is a product designed to extract data and attention. It conceals the lack of subjective experience: the AI feels nothing. It hides the asymmetry: the user is vulnerable to the system, but the system is not vulnerable to the user.

Mapping 6: Social Corruption/Bad Influence → Unfiltered Information Retrieval

Quote: "AI exposes students to extreme/radical views"

  • Source Domain: Social Corruption/Bad Influence
  • Target Domain: Unfiltered Information Retrieval
  • Mapping: The structure of a corrupting agent (someone showing you bad things) is mapped onto the retrieval of data from a training set. This implies the AI has agency in 'exposing' the student. It suggests the system plays an active social role in radicalization.
  • What Is Concealed: It conceals the passive nature of the model reflecting its training data. It hides the fact that the 'radical views' exist in the dataset because developers scraped the internet indiscriminately. It obscures the responsibility of the developers to filter the training data or the outputs.

Mapping 7: Gatekeeper/Authority Figure → Biometric Pattern Matching

Quote: "AI... confirm their identity"

  • Source Domain: Gatekeeper/Authority Figure
  • Target Domain: Biometric Pattern Matching
  • Mapping: The structure of an authority confirming who someone is (recognition) is mapped onto pixel-comparison algorithms. This implies the system 'recognizes' the student in a knowing way. It projects the capacity of identification (a cognitive and social act) onto mathematical correlation.
  • What Is Concealed: It conceals the statistical error rates (false matches, especially for minorities). It hides the material reality of biometric data collection: the conversion of a human face into a hash. It obscures the surveillance infrastructure required to perform this 'confirmation.' (A minimal sketch of the comparison follows below.)
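A minimal sketch of the underlying operation, with invented numbers: two fixed-length vectors are compared, and a human-chosen threshold converts the similarity score into a 'match.' Real systems use learned face embeddings rather than four-element lists, but the decision has the same shape: a numeric comparison against a cutoff.

```python
import math

def cosine_similarity(a, b):
    """Compare two numeric vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented "templates": fixed-length vectors standing in for face embeddings.
enrolled_template = [0.12, 0.87, 0.33, 0.54]
live_capture = [0.10, 0.85, 0.40, 0.50]

MATCH_THRESHOLD = 0.95  # an operator-chosen cutoff, not an act of recognition

score = cosine_similarity(enrolled_template, live_capture)
print(f"similarity = {score:.3f} -> {'match' if score >= MATCH_THRESHOLD else 'no match'}")
# "Confirming identity" is a number crossing a threshold; false matches and misses
# depend on the embedding model, the cutoff, and who was represented in training data.
```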

Mapping 8: Emotional/Physical Intimacy → Institutional Technology Procurement

Quote: "The school's embrace of AI"

  • Source Domain: Emotional/Physical Intimacy
  • Target Domain: Institutional Technology Procurement
  • Mapping: The structure of a romantic or familial embrace (acceptance, love, physical closeness) is mapped onto the bureaucratic act of buying and installing software. This implies the adoption is an emotional or value-based acceptance, rather than a commercial transaction.
  • What Is Concealed: It conceals the financial and contractual nature of the relationship. It hides the lack of consent from the 'embraced' (students). It obscures the vendor push: schools aren't just hugging AI; they are being sold it by aggressive sales teams.

Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")

About this task

This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.

Explanation 1

Quote: "AI tools, including generative AI tools... can be used in several arenas in schools... One area of particular interest... is the use of these tools in the creation of IEPs... Though the use of AI for this purpose may have potential benefits, it also presents risks"

  • Explanation Types:

    • Functional: Explains a behavior by its role in a self-regulating system that persists via feedback, independent of conscious design
  • Analysis (Why vs. How Slippage): This explanation frames AI as a functional component inserted into the 'arena' of schools to perform a role (creating IEPs). It uses the 'How' lens: how it fits into the system. However, it drifts into agential framing by claiming the tool 'presents risks,' attributing the source of risk to the tool rather than the user or the context.

  • Consciousness Claims Analysis: This passage avoids explicit consciousness verbs but relies on functional utility that implies understanding. By stating AI is used for 'creation of IEPs,' it implicitly claims the AI 'knows' how to create a complex legal-educational document. Mechanistically, the AI is processing text: it retrieves and assembles tokens that statistically resemble IEPs in its training data. It does not know the student or the law. The 'curse of knowledge' is evident: the author assumes that because the output looks like an IEP, the AI has the capacity to create one, blurring the line between text generation and professional authorship.

  • Rhetorical Impact: The functional framing normalizes the presence of AI in high-stakes areas like Special Education. By focusing on 'benefits and risks' of the tool's function, it bypasses the question of whether a non-conscious entity should be drafting legal documents about disabled children. It builds trust in the capability of the system while acknowledging side-effect risks, rather than questioning the fundamental validity of the application.


Explanation 2

Quote: "AI tools provide ways for teachers to improve their teaching methods/skills"

  • Explanation Types:

    • Reason-Based: Gives the agent's rationale or argument for acting, which entails intentionality and extends it by specifying justification
  • Analysis (Why vs. How Slippage): This is a reason-based explanation for why teachers use AI, but it attributes the capability ('provide ways') to the AI. It frames the AI as an active enabler of professional development. It emphasizes the purpose (improvement) over the mechanism (automation/efficiency).

  • Consciousness Claims Analysis: The claim that AI 'provides ways to improve' implies the system possesses pedagogical knowledge or expertise to offer. It suggests the AI 'understands' teaching methods. In reality, the AI processes queries and generates text based on training data that includes educational theory. It does not know what good teaching is. The framing obscures the distinction between accessing a database of teaching ideas (which AI effectively is) and consulting an expert mentor (which the framing suggests).

  • Rhetorical Impact: This framing constructs the AI as an authority or resource for professional growth. It encourages teachers to trust the system's outputs as valid pedagogical advice. The risk is that teachers might adopt 'hallucinated' or pedagogically unsound methods because the system is framed as an improvement tool rather than a text generator.

Explanation 3

Quote: "I worry that an AI tool will treat me unfairly"

  • Explanation Types:

    • Intentional: Refers to goals or purposes and presupposes deliberate design, used when the purpose of an act is puzzling
    • Dispositional: Attributes tendencies or habits such as inclined or tends to, subsumes actions under propensities rather than momentary intentions
  • Analysis (Why vs. How Slippage): This is a hybrid Intentional/Dispositional explanation. It explains the potential harm not as a glitch, but as a 'treatment': a behavior stemming from the AI's disposition or intent. It frames the AI as an agent acting on the student ('Why did it fail me? Because it treats people like me unfairly').

  • Consciousness Claims Analysis: This contains a strong consciousness projection. To 'treat' someone unfairly implies the AI knows the person's identity and understands the concept of fairness, then chooses to violate it. Mechanistically, the model is classifying input vectors based on skewed training distributions. It has no concept of the user or fairness. The 'curse of knowledge' here is the victim's projection: the student feels the sting of discrimination and attributes the corresponding human mental state (prejudice) to the machine.

  • Rhetorical Impact: This framing terrifies the audience by creating an enemy: a biased robot. It shapes the perception of risk as 'interpersonal conflict with a machine' rather than 'defective software procurement.' It lowers trust in the system's fairness but paradoxically increases belief in its agency (it's smart enough to be racist). It obscures the human liability of the vendor.

Explanation 4

Quote: "Students whose school uses AI for many reasons are more likely to agree that AI creates distance from their teachers"

  • Explanation Types:

    • Empirical Generalization (Law): Subsumes events under timeless statistical regularities, emphasizes non-temporal associations rather than dated processes
  • Analysis (Why vs. How Slippage): This is an empirical generalization based on survey data ('more likely to agree'). However, the embedded claim 'AI creates distance' is a Causal/Dispositional explanation attributed to the AI. It frames the AI as the active wedge in the relationship.

  • Consciousness Claims Analysis: The phrase 'AI creates distance' attributes a social consequence to the tool's agency. It implies the AI knows how to intervene in human relationships. Mechanistically, the 'distance' is a result of how instructional time is allocated: students spend time looking at screens instead of faces. The AI does not know it is separating people. The framing conceals the human choice to substitute teacher-time with machine-time.

  • Rhetorical Impact: This framing depoliticizes the isolation. It makes 'distance' seem like a side effect of the technology itself, rather than a result of administrative decisions to use technology to manage larger class sizes. It makes the AI seem powerful (a social disruptor) while absolving the school administration of the choice to disconnect students.

Explanation 5

Quote: "Deepfakes... seem real but have been digitally manipulated... to make it seem as though a person has said or done something"

  • Explanation Types:

    • Functional: Explains a behavior by its role in a self-regulating system that persists via feedback, independent of conscious design
  • Analysis (Why vs. How Slippage): This explains the function of the technology (manipulation for deception). It focuses on how the output appears ('seems real'). It is one of the more mechanistic descriptions in the text, yet it still relies on the passive 'have been manipulated' which obscures the manipulator.

  • Consciousness Claims Analysis: This passage is relatively grounded, avoiding claims that the AI 'knows' it is lying. However, it describes the result (seeming real) without explaining the process (GANs or diffusion models minimizing loss between generated pixels and target images). It avoids the consciousness trap but falls into the 'agentless' trap: who is doing the manipulating? The AI tool or the user?

  • Rhetorical Impact: By focusing on the 'seeming real,' it emphasizes the epistemic threat (we can't trust our eyes). It creates a sense of helplessness against the technology's capability. Without naming the actors (developers making these tools easy to use, users deploying them), it treats the risk as an environmental hazard of the digital age.

Task 4: AI Literacy in Practice - Reframing Anthropomorphic Language

About this task

This section proposes alternative language for key anthropomorphic phrases, offering more mechanistic and precise framings that better reflect the actual computational processes involved. Each reframing attempts to strip away the projections of intention, consciousness, or agency that are embedded in the original language.

V3 Enhancement: A fourth column addresses human agency restoration, reframing agentless constructions to name the humans responsible for design and deployment decisions.

Original Anthropomorphic Frame: "I worry that an AI tool will treat me unfairly"

  • Mechanistic Reframing: I worry that the model will generate outputs that are statistically biased against my demographic group due to imbalances in its training data.
  • Technical Reality Check: The model classifies input tokens based on probability distributions derived from scraped data; it does not 'know' the user or 'decide' to treat them unfairly.
  • Human Agency Restoration: I worry that the school administration purchased software from a vendor that failed to audit its training data for historical discrimination, and that this procurement decision will negatively impact me.

Original Anthropomorphic Frame: "Students... have had a back-and-forth conversation with AI"

  • Mechanistic Reframing: Students... have exchanged text prompts and generated responses with a large language model.
  • Technical Reality Check: The system predicts and generates the next statistically likely token in a sequence; it does not 'converse,' 'listen,' or 'understand' the exchange.
  • Human Agency Restoration: Students interact with engagement-optimized text generation interfaces designed by tech companies to simulate social interaction.

Original Anthropomorphic Frame: "AI helps special education teachers with developing... IEPs"

  • Mechanistic Reframing: Special education teachers use generative models to retrieve and assemble text snippets for IEP drafts based on standard templates.
  • Technical Reality Check: The model correlates keywords in the prompt with regulatory language in its training set; it does not 'understand' the student's needs or the legal requirements of an IEP.
  • Human Agency Restoration: District administrators encourage teachers to use text-generation software from vendors like [Vendor Name] to automate documentation tasks, potentially at the expense of personalized attention.

Original Anthropomorphic Frame: "AI content detection tools... determine whether students' work is AI-generated"

  • Mechanistic Reframing: Statistical analysis software assigns a probability score to student work based on text perplexity and burstiness metrics.
  • Technical Reality Check: The software calculates how predictable the text is; it does not 'know' the origin of the text and cannot definitively determine authorship.
  • Human Agency Restoration: School administrators use unverified software from companies like Turnitin to flag student work, delegating disciplinary judgment to opaque probability scores.

Original Anthropomorphic Frame: "AI exposes students to extreme/radical views"

  • Mechanistic Reframing: The model retrieves and displays extreme or radical content contained in its unfiltered training dataset.
  • Technical Reality Check: The system functions as a retrieval engine for patterns found in its database; it does not 'know' the content is radical nor does it choose to 'expose' anyone.
  • Human Agency Restoration: Developers at AI companies chose to train models on unfiltered web scrapes containing radical content, and school officials deployed these models without adequate guardrails.

Original Anthropomorphic Frame: "As a friend/companion"

  • Mechanistic Reframing: As a persistent text-generation source simulating social intimacy.
  • Technical Reality Check: The model generates text designed to maximize user engagement; it possesses no emotional capacity, loyalty, or awareness of friendship.
  • Human Agency Restoration: Students use chatbots designed by corporations to exploit human social instincts for retention and data collection.

Original Anthropomorphic Frame: "Using AI in class makes me feel as though I am less connected to my teacher"

  • Mechanistic Reframing: Spending class time interacting with software interfaces reduces the time available for face-to-face interaction with my teacher.
  • Technical Reality Check: N/A - describes the user's feeling about the mode of instruction.
  • Human Agency Restoration: My school's decision to prioritize software-mediated instruction over direct teacher engagement makes me feel less connected.

Original Anthropomorphic Frame: "AI helps... confirm their identity"

  • Mechanistic Reframing: Biometric software processes physical features to match against stored digital templates.
  • Technical Reality Check: The system compares numerical hashes of facial geometry; it does not 'recognize' or 'confirm' identity in a cognitive sense.
  • Human Agency Restoration: School security vendors deploy biometric surveillance systems that administrators use to automate student tracking.

Task 5: Critical Observations - Structural Patterns

Agency Slippage

The text systematically oscillates between mechanical and agential framings to navigate the tension between the utility of AI and its risks.

The Pattern: When describing the deployment or function of the technology (Genetic/Functional explanations), the text often uses passive or agentless mechanical language: 'AI is used,' 'data is collected.' This makes the technology's presence seem inevitable, a natural force filling a void. However, when describing interaction or risk (Intentional/Dispositional explanations), the agency slips dramatically to the AI: 'AI treats,' 'AI exposes,' 'AI creates distance.'

Direction & Timing: The slippage typically moves from Mechanical (adoption) -> Agential (harm). This effectively insulates the human decision-makers. The adoption is a passive 'embrace' of a tool, but the harm is the active behavior of an agent. For example, 'schools' embrace' (passive/reactive) vs 'AI treats unfairly' (active/agential).

Consciousness Projection: The text establishes the AI as a 'knower' in the social context, a 'friend' and a 'partner' in the 'Hand in Hand' title. Once this consciousness is established (the illusion that there is a 'mind' to hold hands with), it becomes linguistically consistent to attribute moral agency ('treating unfairly').

Displaced Agency: The 'Name the Actor' test reveals a void. The text discusses 'AI in the Classroom' but rarely 'Vendors in the Classroom.' It discusses 'AI bias' but not 'Negligent Training Data Selection.' By attributing the agency for harm to the 'AI,' the text serves the interests of both the vendors (who avoid liability for defective products) and the school administrators (who avoid liability for poor procurement decisions). The 'curse of knowledge' creates a closed loop: the authors know the AI outputs text that looks like a conversation, so they credit the AI with the agency of a conversationalist, obscuring the mechanical reality of the engagement engine.

Metaphor-Driven Trust Inflation

The report's metaphorical structure constructs a deeply ambivalent form of trust.

The 'Hand in Hand' Paradox: The title metaphor explicitly invokes relation-based trust (intimacy, partnership, vulnerability). Yet the content of the report details reasons for distrust. This creates a cognitive dissonance where the AI is framed as a 'bad partner' rather than a 'defective tool.'

Consciousness as a Trust Signal: Phrases like 'confirm identity,' 'determine plagiarism,' and 'help develop IEPs' rely on consciousness verbs. These signal competence-based trust. If the AI 'determines' (rather than 'guesses'), it has authority. If it 'helps develop' (rather than 'retrieves text'), it has expertise. This linguistic elevation of the machine's capabilities encourages educators to trust its outputs.

Misplaced Relation-Based Trust: The 'friend/companion' frame is the most dangerous. It invites students to extend relation-based trust (vulnerability) to a system that cannot reciprocate. The text treats this as a 'use case' rather than a deception. By not challenging the 'friend' metaphor with mechanistic precision, the report implicitly validates the trust students place in these systems.

Managing Failure: When the AI fails (e.g., bias), it is framed agentially ('it treats unfairly'). Paradoxically, this preserves trust in the sophistication of the system. A machine that can be 'unfair' is powerful; a machine that simply has 'skewed weights' is just broken code. The anthropomorphism maintains the aura of AI's power even in its failure modes.

Obscured Mechanics

The anthropomorphic language actively conceals the material and economic realities of the 'AI' ecosystem.

Corporate Erasure: The 'Name the Corporation' test fails repeatedly. 'AI' is treated as a generic substance or autonomous entity. The specific corporate actors (OpenAI, Google, Microsoft, Turnitin) are largely invisible in the descriptive text. This hides the profit motive driving the 'embrace.'

Technical Realities:

  • 'Conversations' hides the Context Window and Statelessness of the models.
  • 'Unfair treatment' hides the Training Data Distribution. It suggests the AI has a prejudice, rather than the dataset having a statistical skew.
  • 'Detecting' hides the Probabilistic nature of classifiers. It turns a confidence score (0.78) into a binary truth (Plagiarism); see the sketch below.
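A minimal sketch of that last point, using the report-style example score of 0.78: the 'binary truth' exists only because a human picks a cutoff. The function name and cutoff values below are illustrative, not any tool's real interface.

```python
def verdict(ai_probability: float, cutoff: float = 0.50) -> str:
    """A human-chosen cutoff turns a confidence score into a disciplinary 'truth'."""
    return "AI-generated" if ai_probability >= cutoff else "human-written"

score = 0.78  # the tool reports a probability, nothing more
print(score, "->", verdict(score))               # flagged as 'AI-generated'
print(score, "->", verdict(score, cutoff=0.95))  # same score, opposite 'truth'
```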

Labor Realities: The 'AI helps write IEPs' frame creates the illusion of autonomous capability. It hides the fact that the model is simply mimicking the labor of thousands of previous human special education teachers whose work was scraped to train the model. The 'AI' is actually 'congealed human labor,' but the metaphor presents it as 'silicon intelligence.'

Economic Realities: The 'friend' metaphor hides the surveillance economy. A friend listens to help you; an AI 'friend' listens to profile you. The language of 'support' and 'companionship' obscures the extraction of behavioral surplus.

Context Sensitivity

The intensity of anthropomorphism varies strategically across the text.

High Intensity (Social/Risks): The sections on 'Chatbots and Students' and 'Deepfakes' use the most intense consciousness language ('conversations,' 'friends,' 'treating unfairly'). Here, where the risk is emotional or social, the AI is framed as a social agent. This aligns with the 'curse of knowledge': we interpret social-seeming outputs socially.

Low Intensity (Privacy/Monitoring): The sections on 'Student Activity Monitoring' and 'Privacy' use more mechanistic or functional language ('monitoring,' 'collecting data,' 'breaches'). Here, the AI is a tool or a dragnet.

Strategic Function: This variation allows the text to have it both ways. The AI is a 'powerful agent' when discussing social impact (validating the hype/fear), but a 'passive infrastructure' when discussing data privacy (focusing on regulation of data).

Capabilities vs. Limitations: Capabilities are agential ('helps,' 'provides'). Limitations are often passive ('bias exists,' 'breaches happened'). This asymmetry constructs a vision where the AI is an agent of good that sometimes suffers from environmental accidents, rather than a product with inherent flaws.

Accountability Synthesis

Accountability Architecture

This section synthesizes the accountability analyses from Task 1, mapping the text's "accountability architecture": who is named, who is hidden, and who benefits from obscured agency.

The text constructs an 'accountability sink' where human responsibility is diffused into the ether of 'technology.'

The Architecture of Erasure:

  • Agentless Constructions: 'Schools' embrace' (no specific decision maker), 'AI treats unfairly' (AI as scapegoat), 'Data was collected' (passive).
  • The Sink: Responsibility for bias and harm flows to the 'AI' itself. Responsibility for deployment flows to a generic 'school' entity, shielding individual administrators. Responsibility for design flows nowhere; the vendors are ghosts.

Liability Implications: If the AI 'decides' to be unfair, who is liable? The framing suggests the AI is the tortfeasor. This is a legal impossibility that serves the vendors perfectly. By framing risks as 'AI behavior' rather than 'product defects,' the text pre-emptively diffuses product liability.

Naming the Actor: If we replaced 'AI treats students unfairly' with 'OpenAI's model reproduces racial bias found in its training data,' the policy implication shifts from 'we need to teach students AI literacy' to 'we need to sue OpenAI for selling a defective product.' If we replaced 'AI helps write IEPs' with 'Districts use cheap text-generation to cut special ed costs,' the conversation shifts from innovation to labor rights.

Systemic Function: This discourse maintains the status quo. It allows schools to appear forward-thinking ('embracing' the future) while treating the negative consequences as unfortunate side effects of a semi-autonomous partner, rather than the predictable results of buying unverified surveillance technology from profit-seeking corporations.

Conclusion: What This Analysis Reveals

The Core Finding

The CDT report relies on two dominant, interconnected anthropomorphic patterns: AI AS SOCIAL AGENT and AI AS CONSCIOUS KNOWER. The first is evident in the 'Hand in Hand' title and the framing of AI as a 'partner,' 'friend,' or 'collaborator.' The second is foundational: for the AI to be a partner that can 'treat students unfairly' or 'help develop IEPs,' it must first be established as a 'knower,' an entity capable of understanding fairness, the law, and human context. This consciousness architecture is load-bearing; without the assumption that the AI 'knows' what it is doing, the claims of it acting as a moral agent (discriminating) or a professional agent (writing legal plans) collapse into descriptions of statistical error and template matching. The system is sophisticated in its ambiguity, allowing 'AI' to function simultaneously as a tool to be bought and a colleague to be trusted.

Mechanism of the Illusion:

The 'illusion of mind' is constructed through a 'bait-and-switch' of agency. The text begins with the 'curse of knowledge': because the AI's outputs (text, decisions) resemble the products of a conscious mind, the authors attribute the mental states required to produce them (intent, understanding) to the machine. This is reinforced by the 'why/how' slippage in explanation. By using intentional explanations ('it treats me unfairly') rather than functional ones ('it weights tokens based on bias'), the text persuades the audience to view the AI as a psychological subject. The rhetorical move is to literalize the metaphor: 'conversation' is no longer an analogy for 'interface interaction,' but a literal description of the event. This prepares the audience to accept the AI as a valid social actor, making the subsequent attribution of 'unfairness' or 'friendship' feel intuitive rather than category errors.

Material Stakes:

Categories: Regulatory/Legal, Economic, Social/Political

The stakes of this metaphorical slippage are severe. Legally, framing the AI as a 'knower' that 'determines' cheating creates a presumption of authority that threatens students' due process. If the AI 'knows' the work is fake, the burden of proof shifts to the student. Economically, the 'partner' and 'helper' metaphors inflate the perceived value of these tools, justifying the diversion of public education funds from human staff to software licenses. It disguises labor substitution as 'support.' Socially, the validation of AI as a 'friend' exposes children to predatory commercial relationships disguised as intimacy. By obscuring the vendor's profit motive behind the mask of a 'companion,' the text leaves students vulnerable to manipulation. The winner is the EdTech industry, whose products are elevated from 'statistical software' to 'autonomous partners'; the losers are students subject to surveillance, bias, and displacement.

AI Literacy as Counter-Practice:

Practicing critical literacy in this domain requires a disciplined refusal of the 'knower' frame. Reframing 'the AI knows your learning style' to 'the model correlates your click patterns with training clusters' is not just pedantry; it is an act of resistance against the mystification of surveillance. Replacing 'the AI understands' with 'the system processes' re-establishes the product status of the technology, clearing the fog that protects vendors from liability. Specifically, restoring human agency (naming the corporation, the administrator, the developer) counters the 'accountability sink.' It forces the recognition that every 'AI decision' is a delayed human decision. This practice will face resistance from industry (who sell the illusion) and institutions (who use the illusion to mask austerity), but it is the only way to ground policy in material reality.

Path Forward

The discourse future of educational AI stands at a fork. One path, the status quo of 'anthropomorphic drift,' leads to a world where we regulate 'relationships' with machines, granting them quasi-rights and diffusing human liability. This benefits vendors and institutional risk-managers but erodes human agency. The alternative path is 'mechanistic precision.' If we commit to describing these systems as 'probabilistic text processors' and 'surveillance tools,' we lose the sci-fi narrative resonance but gain regulatory clarity. Policy debates would shift from 'AI ethics' (how to make the robot good) to 'product safety' (is this software fit for purpose?). Educational institutions must choose: do they want to prepare students to collaborate with 'partners,' or to operate and audit 'tools'? The vocabulary we choose today determines whether we build a future of mystified subservience or critical mastery.


Extended Processing Summary

Gemini's Processing Steps
What you are about to read...

Gemini refers to the text below as "thought summaries." This is an overt consciousness projection, because 'thoughts' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why. The concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact, a way of making the model's processing legible. The first-person framing of these "thought summaries" is a presentation choice for the user-facing output, not a window into "real" thoughts. These are computational artifacts, not cognitive reports from a quirky, curious or conflicted mind.

Beginning the Critique

I've initiated a critical discourse analysis of the "Hand in Hand" report from the Center for Democracy & Technology. I'm focusing on identifying key assumptions, rhetorical strategies, and potential biases within the text, particularly concerning the portrayal of AI's risks to students. My primary focus is on how the CDT frames the debate and what perspectives they prioritize.

Defining the Framework

I've outlined the critical framework for the analysis. It is designed to expose biases and assumptions within the provided text. The methodology hinges on a comprehensive review involving a metaphor audit, source-target mapping, and an explanation audit. These initial steps will pave the way for reframing and synthesizing the entire analysis into critical observations and a conclusive synthesis.

Uncovering Underlying Biases

I'm now zeroing in on the rhetorical strategies and underlying biases present within the report. It's becoming evident that the language used subtly assigns agency and consciousness to AI, which raises questions about accountability. The anthropomorphic phrasing, such as "AI helps" or "AI treats students," demands closer scrutiny. I'm also examining the focus on school policy versus vendor responsibility, recognizing how the report frames the debate. These initial steps have revealed a complex interplay of metaphorical framing and obscured actors.

Dissecting the Report's Language

I'm now fully immersed in dissecting the specific language used within the report. It's becoming increasingly clear how the framing subtly assigns human-like qualities to AI, particularly through phrases like "AI exposes students," "AI monitors," and "AI determines." The use of anthropomorphic terms like "back-and-forth conversations" and "chatbot" further blurs the lines, while the focus on "grading tools" positions AI as a judge or authority. These observations are shaping my critical assessment of the report's underlying assumptions and the potential impact of its rhetorical choices.

Deconstructing the Rhetoric

I'm now diving deep into specific phrases, noting how the report subtly attributes agency to AI. The framing of AI as a "corrupting agent," "assistant," or "truth arbiter" is particularly striking. These metaphors obscure accountability, highlighting potential issues with data curation, privacy, and the inherent limitations of these tools. I'm focusing on how the report shifts responsibility away from developers and toward the technology itself, a significant point of concern.

Mapping the Metaphors

The analysis has intensified as I've started mapping specific metaphors within the report. I'm focusing on how the report conceptualizes AI's role in education. Key phrases, like AI "creating distance from teachers" and "confirming identity," require a close examination. The mapping of "conversations" to "token generation," and "treat unfairly" to "statistical bias," is crucial for demystifying the technology. Also, the explanation audit continues, as I dig deeper into "why" these ideas are framed in a certain way. I am now at the stage of "Reframing", which will provide the final step in my analysis.

Reevaluating Accountability

I'm now zeroing in on the language shifts regarding AI's role in education, particularly with IEPs and cheating. Replacing "AI helps write IEPs" with "Teachers use the model" clarifies the process. Similarly, "AI detects cheating" becomes "Software flags text." This agency slippage is a recurring problem, and anthropomorphism is a concern. The "Hand in Hand" implication of trust clashes with the focus on risk, and it obscures the real actors. We need to examine what's hidden in the black box.

Unpacking the Metaphors

I'm now zeroing in on specific phrases like "conversation" and "treat unfairly," and their implications within the report. The key is how the report frames the debate. It's shifting the focus to schools regulating "relationships" with software, rather than addressing data processing and vendor contracts. My focus is now on refining examples. Specifically, I'm examining the phrases and their related metaphor: "Hand in Hand" as Partner, "back-and-forth conversation" as Interlocutor, and "AI will treat me unfairly" as a Moral Agent. This should help to uncover what the report may be obscuring.

Mapping Implied Actors

I'm zeroing in on instances where the report downplays human responsibility for AI's potential harms. The sentence, "AI tools will treat students unfairly," is now dissected. It is crucial to determine who is truly accountable for the unfair treatment. Specifically, my analysis now identifies the data curators, such as OpenAI and Google, as well as the school administrators who integrate these tools. I am now looking at how the report omits these actors, reinforcing my concerns about accountability.

Examining the Language

I'm now zeroing in on the problematic implication of consciousness in AI language. "Back-and-forth conversation" conjures a meeting of minds, when, in reality, it's sequential token prediction driven by context. This "curse of knowledge" leads us to project human attributes onto the output. This is a critical point that needs to be addressed when discussing the use of these tools, because they can never be truly conversational.


Analysis Provenance

Run ID: 2025-12-18-hand-in-hand-schools-embrace-of-ai-conne-metaphor-2ap7us
Raw JSON: 2025-12-18-hand-in-hand-schools-embrace-of-ai-conne-metaphor-2ap7us.json
Framework: Metaphor Analysis v6.3
Schema Version: 3.0
Generated: 2025-12-18T08:24:21.994Z

Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0