
The U.S. Department of Labor’s Artificial Intelligence Literacy Framework

About

This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping), the philosophy of social science (Robert Brown's typology of explanation), and accountability analysis.

All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs—not guarantees of factual accuracy or authorial intent.


Task 1: Metaphor and Anthropomorphism Audit

About this task

For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and—most critically—what implications this framing has for trust, understanding, and policy perception.

V3 Enhancement: Each metaphor now includes an accountability analysis.

1. The Hallucinating Mind

Quote: "AI can produce confident but incorrect outputs... Hallucinations..."

  • Frame: Model as cognitively impaired subject
  • Projection: Maps the biological/psychological state of 'hallucination' (perceptual error in a conscious mind) onto probabilistic error rates. It suggests the system typically 'knows' the truth but is having a temporary episode of madness. It attributes the human quality of 'confidence'—a subjective feeling of certainty—to a mathematical probability score (logit value). This projects a mind that 'believes' its own falsehoods, rather than a calculator that simply outputs the highest-weighted token regardless of truth value. (A short code sketch at the end of this entry illustrates how a 'confidence' score arises.)
  • Acknowledgment: Direct (Unacknowledged) (The text presents 'Hallucinations' as a standard technical term without scare quotes or explanation that the system has no mental states to hallucinate with. It treats 'confident' as a literal property of the output.)
  • Implications: Framing errors as 'hallucinations' implies that truth-telling is the system's default state and errors are anomalies/glitches, rather than acknowledging that all outputs are probabilistically generated fabrications (some of which happen to align with facts). This inflates trust by suggesting a 'mind' that usually understands. It creates liability ambiguity: one cannot easily sue a software vendor for a 'psychological episode,' whereas one can sue for a defective product design.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: WHO designed the temperature settings? WHO optimized the model for fluency over accuracy? The phrasing 'AI can produce' treats the software as the sole agent of the error. It erases the engineers who tuned the RLHF (Reinforcement Learning from Human Feedback) to prioritize confident-sounding answers, and the executives who released a model known to confabulate. Naming the actor would change this to: 'Developers released a product that statistically generates falsehoods.'
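The 'logit value' point above can be made concrete. Below is a minimal, illustrative Python sketch (the tokens and logit numbers are invented; a real model scores every token in a vocabulary of tens of thousands): what the text calls 'confidence' is just the top token's normalized weight after a softmax, and nothing in the computation consults the truth.

```python
import math

def softmax(logits):
    """Normalize raw logit scores into a probability distribution."""
    exps = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented logits for the next token after "The capital of Australia is".
logits = {"Sydney": 9.1, "Canberra": 8.3, "Melbourne": 5.0}

probs = softmax(logits)
top = max(probs, key=probs.get)
# The 'confident' answer is simply the highest-weighted token; here the
# top-scoring continuation is also the factually wrong one.
print(f"output: {top!r}, 'confidence': {probs[top]:.2f}")  # Sydney, ~0.68
```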

2. AI as Autonomous Economic Force

Quote: "Artificial Intelligence (AI) is rapidly reshaping the economy and transforming how work gets done."

  • Frame: Technology as autonomous agent of history
  • Projection: Maps the human capacity for intentional action and political will onto a software category. It suggests 'AI' (the abstract concept) has the agency to 'reshape' an economy. This attributes a god-like or force-of-nature consciousness that acts upon society, rather than being a tool wielded by specific societal actors. It projects intent and inevitability onto a market dynamic.
  • Acknowledgment: Direct (Unacknowledged) (Presented as a statement of fact in the Introduction. No hedging (e.g., 'usage of AI'). AI is the grammatical subject performing the action 'reshaping'.)
  • Implications: This framing breeds fatalism. If 'AI' is doing the reshaping, it feels like a weather eventβ€”inevitable and agentless. This discourages policy intervention (you can't regulate a hurricane). It hides the specific corporate strategies deploying automation to cut labor costs. It inflates the perceived power of the technology itself while masking the human power dynamics driving its adoption.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: WHO is reshaping the economy? 'AI' does not have a bank account or a board of directors. The specific actors are corporations (Amazon, Microsoft, etc.) and employers choosing to replace labor with capital (software). The agentless construction 'AI is reshaping' serves the interests of these corporations by making their profit-driven restructuring of the labor market appear as a neutral, technological inevitability. The text obscures the management decisions behind the 'reshaping.'

3. The Intelligent Assistant

Quote: "Decision-support systems – Using AI tools to generate recommendations... that help inform and augment human decision-making."

  • Frame: Software as junior colleague/consultant
  • Projection: Maps the social role of a consultant or analyst onto a statistical model. Suggests the system 'recommends' (a communicative act implying understanding of a goal and a judgment about how to reach it) rather than 'calculates correlations.' This projects a 'knower' that understands the decision context and offers advice, rather than a processor that retrieves similar data patterns.
  • Acknowledgment: Hedged/Qualified (The text uses 'support systems' and 'augment,' which are slightly more mechanistic, but 'recommendations' remains a strong anthropomorphic projection of judgment.)
  • Implications: Framing outputs as 'recommendations' invites users to treat the AI as a rational agent with valid reasons for its output. This leads to automation biasβ€”where humans defer to the machine's 'judgment.' In high-stakes environments (hiring, healthcare), this creates significant risk if the 'recommendation' is based on biased training data, as the user assumes a level of cognitive deliberation that does not exist.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: WHO defined the optimization function for the recommendation? If an AI recommends firing a worker or denying a loan, it is executing a mathematical policy set by humans. Calling it a 'recommendation' from the AI diffuses responsibility from the policy-makers. If the advice is bad, the 'assistant' was wrong, not the system designer. It obscures the fact that the 'recommendation' is a frozen historical correlation from the training data.

4. Contextual Understanding

Quote: "Providing background information... helps shape the AI’s response to better match the user’s needs"

  • Frame: Model as listener/interlocutor
  • Projection: Attributes the cognitive state of 'understanding context' and 'meeting needs' to the system. It implies the AI 'reads' the context and 'adjusts' its behavior to be helpful, like a human listener. In reality, adding context changes the token distribution in the prompt, altering the mathematical probability of subsequent tokens. The AI does not know the user has 'needs'; it only has weights. (See the sketch at the end of this entry.)
  • Acknowledgment: Direct (Unacknowledged) (Phrases like 'match the user's needs' and 'shape the response' treat the interaction as a communicative exchange of meaning, not a syntactic input-output operation.)
  • Implications: This is the 'ELIZA effect' amplified. Believing the AI 'understands' context leads users to trust it with nuances it cannot comprehend (e.g., legal or ethical subtleties). It creates a false sense of safety that the system is 'trying' to help, obscuring the risk that the system is simply completing a pattern that could be harmful or nonsensical if the statistical correlation dictates it.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: N/A - This specific instance is more about capability overestimation than agency displacement, though it implicitly obscures the developers who designed the attention mechanisms that technically 'handle' the context.
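A minimal sketch of the mechanics at issue, using an invented table of next-word counts: 'providing background information' does not make the system grasp anyone's needs; it selects a different conditional distribution to sample from.

```python
from collections import Counter

# Invented counts of the word observed after 'bank', with and without
# an extra context word. Illustrative numbers only.
following = {
    ("bank",): Counter({"account": 40, "river": 35, "loan": 25}),
    ("fishing", "bank"): Counter({"river": 80, "account": 12, "loan": 8}),
}

def next_word_distribution(context):
    """Convert raw counts for a given context into probabilities."""
    counts = following[context]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# 'Adding context' swaps which conditional distribution is sampled;
# no representation of the user's 'needs' exists anywhere in the code.
print(next_word_distribution(("bank",)))            # 'account' leads
print(next_word_distribution(("fishing", "bank")))  # 'river' leads
```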

5. AI Authority

Quote: "recognizing the limits of AI authority... avoid treating AI responses as final or authoritative"

  • Frame: Software as institutional superior/expert
  • Projection: Attributes 'authority'β€”a social and epistemic status derived from expertise and legitimacyβ€”to a software program. Even when negating it ('limits of'), using the word projects that the system occupies a position in the social hierarchy. It suggests the system could be an authority, but we should be careful, rather than recognizing it as a tool incapable of holding authority.
  • Acknowledgment: Direct (Unacknowledged) (Uses 'AI authority' as a noun phrase. The warning to 'avoid treating' it as such implies the default assumption is that it has authority.)
  • Implications: The very concept of 'AI authority' anthropomorphizes the machine as a holder of truth. This framing shifts the burden of skepticism to the user (the worker), who must 'recognize limits,' rather than on the vendor to prove reliability. It suggests that if a worker follows a bad AI instruction, it was their failure to recognize 'limits,' not the vendor's failure to provide a safe tool.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: The text warns users not to treat AI as authoritative, which paradoxically shifts the blame for errors onto the user. If the AI is 'authoritative' by design (confident tone, declarative syntax), the design is the problem. The text obscures the design choice to make LLMs sound authoritative (high assertiveness, no hedging). WHO programmed the tone? The developers.

6. The Learning Student

Quote: "Training builds the AI model using large datasets... learning how to assess the quality"

  • Frame: Model as pupil/student
  • Projection: Uses the metaphor of 'training' and 'learning' to describe data processing and parameter adjustment. This suggests the AI is acquiring knowledge and concepts like a human student, implying a trajectory toward mastery. It attributes the cognitive act of 'learning' (conceptual restructuring) to the mechanical act of 'optimization' (curve fitting). (A sketch at the end of this entry shows the curve-fitting mechanics.)
  • Acknowledgment: Explicitly Acknowledged (The text defines 'Training and inference' in the principles section (p. I-5), distinguishing it slightly, but the metaphorical load of 'learning' remains heavy.)
  • Implications: If AI is 'learning,' we expect it to eventually 'know' and 'understand.' This justifies deploying unfinished software under the guise that it will 'learn' and get better. It masks the fact that the model is static after training (unless retrained). It also obscures the labor: 'training' implies a teacher. Who taught it? The millions of unpaid humans whose data was scraped.

Accountability Analysis:

  • Actor Visibility: Partial (some attribution)
  • Analysis: The text mentions 'human design and oversight' generally, but regarding 'training,' it obscures the source of the 'large datasets.' WHO collected them? WHO decided to scrape the internet without consent? The passive 'model using large datasets' hides the aggressive data extraction practices of the companies building these models.
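The 'curve fitting' claim above can be shown directly. This is a toy sketch with invented data and a single parameter: gradient descent mechanically reduces error, and the 'learned' result is a frozen number, not an acquired concept.

```python
# Fit y = w * x to invented data by gradient descent on squared error,
# the mechanical procedure that the word 'learning' dresses up.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0             # the single 'weight' being adjusted
learning_rate = 0.02

for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # parameter update; nothing is 'understood'

# After 'training', w is static until someone retrains the model.
print(f"fitted weight: {w:.3f}")  # ~2.036
```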

7. Creative Partner

Quote: "Generating initial drafts... naming ideas... other creative assets that workers can then refine"

  • Frame: Software as muse/collaborator
  • Projection: Projects the human capacity for 'creativity' and 'ideation' onto a stochastic parrot. Suggests the AI is 'thinking up' names or ideas. In reality, it is retrieving high-probability combinations of tokens found in the training data. It attributes the spark of invention to a process of statistical retrieval.
  • Acknowledgment: Direct (Unacknowledged) (Lists 'Creative assistance' as a functional category. Treats 'naming ideas' as an act the AI performs.)
  • Implications: This devalues human creativity by equating it with pattern recombination. It raises copyright risks that the text ignoresβ€”if the 'creative asset' is a near-copy of a training example, who is liable? Framing it as a 'partner' encourages users to anthropomorphize the tool, potentially leading to emotional attachment or over-trust in the originality of the output.

Accountability Analysis:

  • Actor Visibility: Hidden (agency obscured)
  • Analysis: WHO owns the 'creative' output? The text implies the AI generates it and the worker refines it. This obscures the legal reality of copyright (which currently requires human authorship) and the economic reality that the 'creative' output is often derivative of unpaid human artists' work in the training set.

8. Directing the System

Quote: "Directing AI effectively... guide the system toward better outcomes."

  • Frame: Software as subordinate employee
  • Projection: Maps the manager-employee relationship onto the user-tool relationship. 'Directing' and 'guiding' implies the system has some autonomy or momentum that needs steering, rather than being a static function that requires precise syntax. It suggests the AI is 'trying' to go somewhere and needs a nudge.
  • Acknowledgment: Hedged/Qualified (Uses 'guide the system,' which is a mix of mechanical (system) and agential (guide) language.)
  • Implications: This implies that 'prompt engineering' is a soft skill of leadership/management, rather than a technical skill of syntax optimization. It elevates the status of the 'prompter' to a manager of digital workers, which may be a psychological salve for workers whose actual jobs are being devalued. It creates an illusion of control over a black-box system.

Accountability Analysis:

  • Actor Visibility: Ambiguous/Insufficient Evidence
  • Analysis: The text places the onus on the user to 'guide' effectively. If the outcome is bad, the implication is the guidance was poor. This shifts accountability from the tool's capabilities to the user's 'management' skills.

Task 2: Source-Target Mapping

About this task

For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.

Mapping 1: Conscious Mind (Psychopathology) → Probabilistic Token Generation (Statistical Error)

Quote: "AI can produce confident but incorrect outputs... Hallucinations"

  • Source Domain: Conscious Mind (Psychopathology)
  • Target Domain: Probabilistic Token Generation (Statistical Error)
  • Mapping: Maps the concept of a mind perceiving non-existent reality (hallucination) onto the generation of low-probability or factually ungrounded text strings. Invites the assumption that the system has a 'belief' system and a 'perception' mechanism, and that errors are temporary psychological breaks rather than structural features of a probabilistic engine. It implies a binary of Truth/Hallucination that doesn't exist in LLMs (which have no concept of truth).
  • What Is Concealed: Conceals the mechanistic reality that all AI output is 'hallucination' in the sense that it is fabricated without reference to external truth conditions. It hides the lack of ground truth in the training process. It also conceals the technical decision to set 'temperature' (randomness) greater than zero, which engineers choose to make outputs 'creative' at the cost of accuracy. (See the temperature sketch after this mapping.)
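A short sketch of the temperature decision mentioned above, with invented logits for three candidate tokens: raising the temperature flattens the output distribution, shifting probability mass toward lower-scoring tokens.

```python
import math

def temperature_softmax(logits, temperature):
    """Rescale logits by temperature, then normalize to probabilities."""
    exps = [math.exp(score / temperature) for score in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # invented scores for three candidate tokens

for t in (0.2, 1.0, 2.0):
    print(t, [round(p, 3) for p in temperature_softmax(logits, t)])
# T=0.2 -> ~[1.0, 0.0, 0.0]: near-deterministic, top token dominates.
# T=2.0 -> ~[0.63, 0.23, 0.14]: mass spreads to weaker tokens; the
# 'creative at the cost of accuracy' trade-off is a tuned engineering choice.
```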

Mapping 2: Natural Force / Autonomous Agent → Corporate Deployment of Automation Software

Quote: "AI is rapidly reshaping the economy"

  • Source Domain: Natural Force / Autonomous Agent
  • Target Domain: Corporate Deployment of Automation Software
  • Mapping: Maps the agency of economic restructuring onto the technology itself. Invites the assumption that the changes in the labor market are a natural evolution or technological determinism driven by the tool's capability, rather than decisions made by humans. It projects 'intent' or 'momentum' onto the software.
  • What Is Concealed: Conceals the boardroom decisions to cut costs, the policy choices to deregulate AI, and the specific corporations (e.g., Microsoft, Google, OpenAI) that are aggressively selling these tools to employers. It hides the profit motive behind the 'reshaping' by presenting it as a technological inevitability.

Mapping 3: Pedagogy / Child Development → Statistical Optimization / Gradient Descent

Quote: "Training builds the AI model... learning how to assess"

  • Source Domain: Pedagogy / Child Development
  • Target Domain: Statistical Optimization / Gradient Descent
  • Mapping: Maps the human process of education (conceptual understanding, skill acquisition) onto the mathematical process of minimizing a loss function. Invites the assumption that the model 'understands' concepts better over time and can be 'taught' values. It suggests a trajectory toward wisdom.
  • What Is Concealed: Conceals the brute-force nature of the process (calculating billions of correlations). It hides the material reality of the 'curriculum'—stolen data, toxic content, and the exploited labor of data annotators in the Global South who actually provide the 'feedback' for the learning.

Mapping 4: Interpersonal Communication (Listener) → Context Window / Attention Mechanism

Quote: "context... helps shape the AI’s response to better match the user’s needs"

  • Source Domain: Interpersonal Communication (Listener)
  • Target Domain: Context Window / Attention Mechanism
  • Mapping: Maps the social act of listening and understanding intent onto the technical process of weighting tokens within a context window. Invites the assumption that the AI comprehends the user's goal (teleology) rather than just the statistical likelihood of the next word given the previous words.
  • What Is Concealed: Conceals the fact that the 'response' is just a string completion. It hides the mechanical limit of the context window (token limit) and the attention mechanism's inability to actually reason about 'needs.' It masks the lack of shared world-model between user and machine.

Mapping 5: Mechanical Physics (Lever/Amplifier) → Algorithmic Processing

Quote: "AI tools... are amplifiers of human input"

  • Source Domain: Mechanical Physics (Lever/Amplifier)
  • Target Domain: Algorithmic Processing
  • Mapping: Maps the function of a simple machine (lever, microphone) onto a complex non-linear system. Invites the assumption that the output is just a louder/bigger version of the input, maintaining the human's original intent. It suggests a linear relationship between user intent and system output.
  • What Is Concealed: Conceals the transformative and often distortive nature of the 'black box.' Unlike a megaphone, AI introduces its own biases, errors ('hallucinations'), and structural constraints. The input is not just amplified; it is fundamentally processed through a model of the internet's text, which may twist the human's intent in opaque ways.

Mapping 6: Social Hierarchy / Expertise → Model Confidence / Output Assertiveness

Quote: "recognizing the limits of AI authority"

  • Source Domain: Social Hierarchy / Expertise
  • Target Domain: Model Confidence / Output Assertiveness
  • Mapping: Maps the social construct of 'authority' (legitimacy, power, expertise) onto the statistical property of high-confidence token prediction. Invites the assumption that the system has authority, even if limited, and that it occupies a role in the decision-making hierarchy.
  • What Is Concealed: Conceals the design choices that give AI its 'authoritative' voice (declarative syntax, lack of 'I don't know' tokens). It hides the fact that the 'authority' is entirely a user projection (the ELIZA effect) reinforced by the interface design, not an intrinsic property of the code.

Mapping 7: Management / Animal Training → Prompt Engineering / Input Optimization

Quote: "Directing AI effectively... guide the system"

  • Source Domain: Management / Animal Training
  • Target Domain: Prompt Engineering / Input Optimization
  • Mapping: Maps the role of a supervisor directing a subordinate or a handler guiding an animal onto the task of writing text inputs. Invites the assumption that the system has agency/momentum that needs steering. It anthropomorphizes the prompt interaction as a negotiation of meaning.
  • What Is Concealed: Conceals the brittleness of the system. 'Guiding' implies the system can handle vague instructions if nudged; in reality, small syntactic changes can cause massive output failures. It hides the trial-and-error nature of finding the 'magic words' (prompts) that trigger the desired statistical cluster.

Mapping 8: Human Partnership / Collaboration → Human-Computer Interaction

Quote: "partners... joint and collaborative engagement"

  • Source Domain: Human Partnership / Collaboration
  • Target Domain: Human-Computer Interaction
  • Mapping: Maps the mutual obligation, shared goals, and reciprocal understanding of a partnership onto a user-tool relationship. Invites the assumption that the AI shares the user's goals and is 'invested' in the outcome.
  • What Is Concealed: Conceals the asymmetry. The AI has no goals, no stake in the outcome, and no concept of 'joint' effort. It hides the economic reality: the 'partner' is actually a service rented from a third-party vendor (Big Tech) whose interests (data collection, subscription fees) may diverge from the user's.

Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")

About this task

This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.

Explanation 1

Quote: "AI systems generate responses by identifying statistical patterns in data, which can result in different outputs from the same input."

  • Explanation Types:

    • Empirical Generalization: Subsumes events under timeless statistical regularities
    • Functional: Explains behavior by role in self-regulating system with feedback
  • Analysis (Why vs. How Slippage): This is a rare moment of mechanistic precision. It explains 'how' (identifying statistical patterns) rather than 'why' (intent). By focusing on 'statistical patterns' and 'probabilistic outputs,' it strips away the illusion of mind and correctly frames the system as a stochastic generator. However, it sits in tension with the rest of the document. It emphasizes the variability/instability of the system ('different outputs from same input'), which counters the 'authority' frame found elsewhere.

  • Consciousness Claims Analysis: The passage avoids consciousness verbs. 'Identifying' is borderline (can be mechanical or cognitive), but 'statistical patterns' grounds it mechanically. It makes no claim that the AI 'knows' the answer, only that it generates responses based on patterns. This is the most epistemically humble claim in the document. It accurately describes the lack of determinism (temperature settings) without resorting to 'moods' or 'whims.'

  • Rhetorical Impact: This framing reduces trust in the system's reliability (it's just statistics, it varies), which is responsible risk communication. It positions the human as the necessary stabilizer of a chaotic probabilistic process. If audiences believe this explanation, they are less likely to accept AI output as 'truth' and more likely to treat it as a raw material requiring verification. (A sketch after this entry illustrates that variability.)

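The variability described above ('different outputs from the same input') is ordinary sampling behavior. A few lines of Python, with an invented next-token distribution, reproduce it:

```python
import random

# One fixed input, one fixed (invented) next-token distribution.
candidates = ["reshaping", "transforming", "disrupting"]
weights = [0.5, 0.3, 0.2]

for _ in range(3):
    # Each call samples the same distribution anew, so the identical
    # input can yield different outputs: variance, not whim.
    print(random.choices(candidates, weights=weights, k=1)[0])
```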

Explanation 2

Quote: "Contextual framing... helps shape the AI’s response to better match the user’s needs"

  • Explanation Types:

    • Functional: Explains behavior by role in self-regulating system with feedback
    • Intentional: Refers to goals/purposes, presupposes deliberate design
  • Analysis (Why vs. How Slippage): This shifts towards agential framing. While 'helps shape' is functional, 'match the user's needs' implies a teleological understanding within the system. It suggests the AI has a goal (to help the user) and the context helps it achieve that goal. This emphasizes the utility/helpfulness of the agent while obscuring the mechanical reality of token weighting.

  • Consciousness Claims Analysis: This passage projects the 'curse of knowledge.' The author knows what they want (needs), and projects that the AI also holds a representation of those 'needs.' The AI does not know the user has needs; it only processes the additional tokens of the 'context' to narrow the probability distribution of the next token. The verb 'match' suggests a cognitive comparison, not a statistical correlation.

  • Rhetorical Impact: This framing builds relation-based trust. It suggests the AI is 'on your side' and trying to help. It makes the system feel like a responsive partner. This increases the likelihood that users will anthropomorphize the tool and potentially divulge sensitive information to 'help' the AI understand their needs better.

Explanation 3

Quote: "AI can produce confident but incorrect outputs... Hallucinations"

  • Explanation Types:

    • Dispositional: Attributes tendencies or habits
  • Analysis (Why vs. How Slippage): This frames the error as a character flaw or psychological tendency ('hallucination') rather than a mathematical feature. It emphasizes the behavior (being wrong but confident) while obscuring the mechanism (why it is confident). It creates a 'personality' for the AI—the overconfident mansplainer.

  • Consciousness Claims Analysis: Strong consciousness projection. 'Confident' describes a mental state of certainty. 'Incorrect' implies a binary truth condition the AI failed to meet. 'Hallucinations' implies a mind. A mechanistic description would be: 'The model assigns high probability scores to false token sequences due to training data artifacts.' The text attributes a 'knowing' state (confidence) to a processing state (high log-probability).

  • Rhetorical Impact: This framing makes the AI seem dangerous but intelligent (like a brilliant but unstable genius). It warns the user to be vigilant, but preserves the mystique of the machine's intelligence. If framed mechanistically ('software outputting false data'), it would sound like a buggy product. Framed as 'hallucination,' it sounds like a biological quirk, reducing the vendor's accountability for shipping defective code.

Explanation 4

Quote: "Training builds the AI model... inference is how the model generates outputs"

  • Explanation Types:

    • Genetic: Traces origin through dated sequence of events or stages
    • Functional: Explains behavior by role in self-regulating system with feedback
  • Analysis (Why vs. How Slippage): This explanation relies on the 'learning' metaphor (genetic—it grew this way). It frames the system's capabilities as the result of an educational process ('training'). This emphasizes the data-driven nature but obscures the human agency in selecting that data. It treats 'training' as a passive absorption of knowledge.

  • Consciousness Claims Analysis: Uses standard industry jargon ('training', 'inference'), both dead metaphors. 'Builds the model' is comparatively mechanistic—it implies construction—and 'generates outputs' is safer than 'thinks.' The passage is relatively neutral, but it still leans on the 'training' metaphor, which implies the system 'learned' a curriculum and hides the extraction of data.

  • Rhetorical Impact: Establishes the system as an artifact with a history (training). It separates the 'building' phase from the 'using' phase. This helps users understand why the model is static (it was trained in the past). It creates a sense of technical legitimacy.

Explanation 5

Quote: "Users must learn how to... guide the system toward better outcomes."

  • Explanation Types:

    • Reason-Based: Gives agent's rationale, entails intentionality and justification
  • Analysis (Why vs. How Slippage): This is a reason-based explanation for human behavior interacting with the machine. It posits the human as the rational agent who must 'guide' the machine. It frames the AI as a vehicle or subordinate that responds to reason/guidance. It emphasizes user agency but implicitly grants the AI 'responsiveness.'

  • Consciousness Claims Analysis: The human 'knows'; the AI is 'guided.' The epistemic privilege is with the human. However, 'guide' implies the AI has a trajectory. Mechanistically, the user is 'constraining the search space.' 'Guiding' suggests a gentle hand on a steering wheel, obscuring the precise syntactic engineering often required.

  • Rhetorical Impact: Empowers the user (you are the pilot). It makes AI literacy seem like learning to drive or manage. It constructs a hierarchy where the human is in charge, which is comforting but potentially misleading if the system is actually a black box that defies 'guidance' in unpredictable ways.

Task 4: AI Literacy in Practice - Reframing Anthropomorphic Language

About this task

This section proposes alternative language for key anthropomorphic phrases, offering more mechanistic and precise framings that better reflect the actual computational processes involved. Each reframing attempts to strip away the projections of intention, consciousness, or agency that are embedded in the original language.

V3 Enhancement: A fourth field in each entry addresses human agency restoration—reframing agentless constructions to name the humans responsible for design and deployment decisions.

1. Original Anthropomorphic Frame: "AI can produce confident but incorrect outputs... Hallucinations"
  • Mechanistic Reframing: The model generates text sequences that are factually false but have high statistical probability scores. This occurs because the system predicts the next likely word based on training data patterns without any mechanism to verify factual truth.
  • Technical Reality Check: The model does not 'know' facts or feel 'confidence.' It calculates log-probabilities for tokens. A 'confident' output is simply a token sequence with a high probability weight.
  • Human Agency Restoration: Developers at [Company] tuned the model's temperature settings to prioritize fluent, human-like text generation over factual accuracy, creating a trade-off that results in frequent errors.

2. Original Anthropomorphic Frame: "Artificial Intelligence (AI) is rapidly reshaping the economy"
  • Mechanistic Reframing: Automated data processing systems are being deployed to automate tasks previously performed by humans.
  • Technical Reality Check: N/A - This is a claim about economic causality, not cognition.
  • Human Agency Restoration: Major corporations and employers are choosing to deploy automation software to reduce labor costs and restructure workforce requirements, thereby reshaping the economy.

3. Original Anthropomorphic Frame: "Contextual framing... helps shape the AI’s response to better match the user’s needs"
  • Mechanistic Reframing: Adding text to the input prompt alters the statistical distribution of the predicted output tokens. More specific input patterns constrain the model's generation to a narrower set of probable responses.
  • Technical Reality Check: The model does not understand 'context' or user 'needs.' It processes the input tokens through an attention mechanism to calculate weights for the next token prediction.
  • Human Agency Restoration: N/A - describes computational processes.

4. Original Anthropomorphic Frame: "Directing AI effectively... guide the system toward better outcomes"
  • Mechanistic Reframing: Users must optimize their input syntax to trigger the desired pattern completion from the model. Precise phrasing is required to constrain the model's probabilistic output.
  • Technical Reality Check: The system cannot be 'guided' or 'directed' like an agent; it is a function mapping inputs to outputs. 'Better outcomes' are just statistically probable completions given the specific input constraints.
  • Human Agency Restoration: N/A - describes user interaction.

5. Original Anthropomorphic Frame: "recognizing the limits of AI authority"
  • Mechanistic Reframing: Recognizing that software outputs have no inherent truth value or expertise.
  • Technical Reality Check: The system has no social status or authority. It is a text generation engine. Its output is data, not expert testimony.
  • Human Agency Restoration: Users should recognize that developers designed the system to use authoritative, declarative language, creating a false appearance of expertise.

6. Original Anthropomorphic Frame: "Generating initial drafts... naming ideas... creative assets"
  • Mechanistic Reframing: Retrieving and recombining text fragments from the training dataset to form new sequences that resemble drafts or names.
  • Technical Reality Check: The model does not 'create' ideas. It samples from a probability distribution derived from existing human-created texts.
  • Human Agency Restoration: The model outputs derivatives of work created by human authors in the training set, which the user can then edit.

7. Original Anthropomorphic Frame: "Training builds the AI model"
  • Mechanistic Reframing: Computational optimization processes adjust the model's parameters to minimize error rates on a specific dataset.
  • Technical Reality Check: The model does not 'learn' or 'train' like a student; it fits a curve to data points via gradient descent.
  • Human Agency Restoration: Engineers build the model by selecting datasets and defining optimization functions.

8. Original Anthropomorphic Frame: "AI tools... are amplifiers of human input"
  • Mechanistic Reframing: AI tools process human input through complex statistical models to generate expanded outputs.
  • Technical Reality Check: The tool does not linearly 'amplify' input; it transforms it based on correlations in its training data, often introducing biases or deviations not present in the input.
  • Human Agency Restoration: N/A
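As a practical coda to the entries above, here is a hypothetical sketch of the reframing practice as a mechanical pass over draft text (the REFRAMINGS map and mechanize helper are invented for this example, not part of any framework):

```python
# Hypothetical phrase-substitution pass: swap anthropomorphic phrasing
# for mechanistic phrasing, in the spirit of the entries above.
REFRAMINGS = {
    "understands your context": "conditions its output on the prompt tokens",
    "is confident": "assigns a high probability score",
    "hallucinates": "generates statistically probable but false text",
    "learns": "adjusts parameters to fit a dataset",
}

def mechanize(text: str) -> str:
    for anthropomorphic, mechanistic in REFRAMINGS.items():
        text = text.replace(anthropomorphic, mechanistic)
    return text

print(mechanize("The AI understands your context but sometimes hallucinates."))
# -> The AI conditions its output on the prompt tokens but sometimes
#    generates statistically probable but false text.
```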

Task 5: Critical Observations - Structural Patterns

Agency Slippage

The document exhibits a systematic oscillation between mechanistic and agential framing, functioning to manage the tension between the technology's utility and its risks. When describing the economic impact ('AI is reshaping the economy'), agency is attributed to the AI (or the abstract force of technology), effectively removing agency from the corporate actors driving this change. This makes the economic disruption appear inevitable. However, when the text discusses errors or risks ('Hallucinations', 'verify results'), agency slips back to the human worker. The user is tasked with 'oversight' and 'judgment.'

A key moment of slippage occurs in the 'Direct AI Effectively' section. It starts with the user 'directing' (human agency), but frames the AI as a system that needs 'guidance' (implied agency/animacy). The 'curse of knowledge' is evident when the author attributes 'understanding context' to the AI—because the author understands the context, they assume the machine processing the text also 'gets it.' This slippage serves a rhetorical function: it allows the DOL to promise a high-tech future (AI as powerful agent) while shielding the government and vendors from liability for failures (human user as responsible agent).

Metaphor-Driven Trust Inflation

The document constructs authority and trust through the metaphor of 'Literacy' itself. By framing AI usage as 'literacy' (like reading/writing), it naturalizes the technology as a fundamental, neutral skill set that everyone must have, rather than a specific product from private vendors. We don't talk about 'Microsoft Word Literacy'; we talk about 'digital skills.' Elevating proprietary LLM usage to 'Literacy' grants these systems the status of public infrastructure.

Consciousness language ('understands', 'partner', 'assistant') further builds relation-based trust. Users trust a 'partner' differently than they trust a 'calculator.' A partner implies shared goals and mutual care. This creates a dangerous vulnerability: users may extend trust (sincerity, ethical alignment) to a system that only offers performance (statistical probability). The text warns against 'AI authority' explicitly, but implicitly reinforces it by treating the AI as a conversational subject that 'generates ideas' and 'supports decisions.'

Obscured Mechanics

The metaphors of 'partner', 'reshaping', and 'training' systematically obscure the material and economic realities of AI production. Applying the 'name the corporation' test reveals a void: the text never mentions OpenAI, Microsoft, Google, or Anthropic. It treats 'AI' as a generic resource.

Technically, the text hides the 'black box' nature of the models—the fact that even engineers often don't know why a model outputs what it does. By saying the AI 'identifies patterns,' it implies a rational, explainable process. Economically, it obscures the labor theory of value. 'Training' implies the model learned on its own; it erases the billions of words of scraped data from unpaid human creators. 'Reshaping the economy' erases the boardroom decisions to lay off workers. Materially, the environmental cost (energy, water for cooling data centers) is completely absent. The framing benefits the vendors: their products are presented as clean, intelligent, autonomous helpers, stripped of their messy, extractive supply chains.

Context Sensitivity

Anthropomorphism intensifies in the 'Uses' and 'Delivery' sections compared to the 'Principles' section. Section 1 ('Understand AI Principles') is the most grounded, using terms like 'pattern recognition' and 'probabilistic outputs.' Here, the text establishes technical credibility. However, once the text moves to Section 2 ('Explore AI Uses'), the language shifts to 'Creative assistance,' 'Decision-support,' and 'partners.'

This distribution suggests a strategy: admit the mechanism is statistical to satisfy experts/critics, but use agential metaphors to sell the utility to workers. There is also an asymmetry in capabilities vs. limitations. Capabilities are described agentially ('AI can generate ideas'), while limitations are often described passively or mechanistically ('Hallucinations', 'bias in data'). This makes the 'mind' of the AI seem talented but occasionally mentally ill, rather than a fundamentally limited calculator.

Accountability Synthesis

Accountability Architecture

This section synthesizes the accountability analyses from Task 1, mapping the text's "accountability architecture"—who is named, who is hidden, and who benefits from obscured agency.

The document constructs a massive 'accountability sink' located in the human worker. The 'Accountability Architecture' is clear:

  1. Corporations/Developers: Invisible. They are never named. Their design choices (to release hallucinatory models, to scrape data) are presented as natural facts of the 'AI' tool.
  2. The AI System: Presented as a powerful agent ('reshaping economy') but not a responsible one (it 'hallucinates' innocent errors).
  3. The Worker/User: Hyper-visible. The worker must 'direct,' 'guide,' 'verify,' 'oversee,' 'evaluate,' and 'layer in judgment.'

The text explicitly states: 'Workers remain responsible for the decisions and outputs.' This transfers the liability for the machine's failures onto the person least able to understand or fix them. If the AI discriminates or lies, the worker is at fault for not 'evaluating' it correctly. This serves the interests of the tech industry (limiting liability) and the state (placing adaptation burden on individuals rather than regulation). Naming the actors would shift this: 'Employers are responsible for providing tools that do not fabricate data.' Instead, the text creates a regime where the worker is the blast shield for the AI's errors.

Conclusion: What This Analysis Reveals

The Core Finding

The DOL's AI Literacy Framework relies on two dominant, interlocking metaphorical patterns: 'AI as Cognitive Agent' (hallucinating, understanding, creating) and 'AI as Autonomous Force' (reshaping, transforming). These are underpinned by the foundational metaphor of 'Literacy,' which naturalizes a commercial product as essential cultural knowledge. The 'Cognitive Agent' pattern is load-bearing; without it, the text's claims about 'partnership' and 'collaboration' collapse into 'human operating a calculator.' By projecting consciousness (understanding, confidence) onto the system, the text constructs a 'mind' that the worker can relate to, masking the stark reality of the 'Autonomous Force' pattern, where that same system is an economic weapon deployed to devalue their labor.

Mechanism of the Illusion:

The illusion of mind is constructed through a 'bait-and-switch' rhetorical architecture. The text opens with mechanistic concessions ('pattern recognition', 'statistics'), establishing a veneer of technical accuracy. However, it immediately pivots to high-intensity anthropomorphism in the functional sections ('context', 'needs', 'hallucinations'). This exploits the 'ELIZA effect,' where the audience's desire for a communicative partner overrides their knowledge of the mechanism. The 'curse of knowledge' plays a central role: the authors project their own understanding of workforce needs onto the machine, claiming the machine 'understands' those needs. This creates a persuasive feedback loop: because the machine seems to speak fluently (the bait), the user accepts the framing that it thinks fluently (the switch), leading to the acceptance of 'hallucination' as a quirk of genius rather than a failure of product.

Material Stakes:

Categories: Economic, Regulatory/Legal, Social/Political

The metaphors in this document have concrete economic and legal consequences. Economically, framing AI as a 'partner' that 'reshapes' the economy obscures the specific corporate decisions to replace human labor, making job losses feel like inevitable weather events rather than management choices. This disempowers unions and workers from contesting the deployment of these tools. Legally, the 'hallucination' metaphor and the explicit statement that 'workers remain responsible' create a liability shield for vendors. If a medical AI fabricates a diagnosis, this framework suggests the doctor (worker) failed to 'verify,' not that the software was defective. Epistemically, the text degrades the concept of 'truth' to 'verified probability,' forcing workers to spend their days acting as 'content moderators' for machines, a lower-value form of cognitive labor than original creation.

AI Literacy as Counter-Practice:

True AI literacy requires resisting these metaphors through precise, mechanistic language. Reframing 'hallucinations' as 'fabrications' and 'AI reshaping' as 'corporate automation' restores visibility to the actual actors and processes. Replacing consciousness verbs ('knows', 'understands') with mechanistic ones ('processes', 'correlates') forces the user to confront the system's lack of awareness. This practice serves as a form of labor resistance: by refusing to treat the tool as a 'partner,' workers can reject the emotional manipulation that encourages them to train their own replacements. Resistance will come from vendors and policymakers who benefit from the 'magic' of the anthropomorphic frame, as precision threatens the hype cycle driving investment and the liability shields protecting deployment.

Path Forward

The discourse around AI faces a fork in the road. One path creates a 'Mystified Workforce,' where anthropomorphic language deepens, leading to relation-based trust, automation bias, and the successful transfer of liability to users. The alternative is a 'Mechanistic Realist' approach, where training emphasizes the statistical nature of the tools, 'hallucination' is replaced by 'error rate,' and 'partnership' by 'operation.' This would empower workers to treat the system with appropriate skepticism but might dampen the commercial enthusiasm required for 'reindustrialization.' Institutional changes, such as requiring government agencies to audit their own language for anthropomorphism and mandating 'mechanistic disclaimers' in software interfaces, could support the latter path. The choice is between a workforce that serves the machine's illusion or one that masters the machine's reality.


Extended Processing Summary

Gemini's Processing Steps
What you are about to read…

Gemini refers to the text below as "thought summaries." This is an overt consciousness projection: 'thoughts' and stated intentions are hallmarks of a conscious mind that 'knows' what it is doing and why. The concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact—a way of making the model's processing legible. When the model writes "I should double check that...", it is mathematically altering its own probability distribution for the next token to be more accurate. The "I" is a functional gear in the machine, not a self-reference. This language is a deliberate interface design choice, not a biological reality. The developers could have programmed the model to output passive logs. Instead, they chose to simulate a narrator.

Analyzing the Framework

I'm currently focused on grasping the core objective: a CDA of the DOL's "AI Literacy Framework." My efforts have concentrated on fully understanding the framework's intent and how it aims to shape discourse. I'm now moving into outlining key CDA components to apply to the text and have started by defining critical elements of the document.

Deconstructing the Metaphors

I'm now diving deep into the linguistic patterns, specifically the metaphors employed within the DOL's framework. I'm isolating agential verbs and the mechanistic acknowledgements to highlight the contrast. Further, I'm focusing on AI-as-tool metaphors and exploring how the document portrays AI's role in the workforce and economy. I'm also examining the implications of anthropomorphism, contrasting consciousness with mechanistic processing. Displaced agency and accountability are also under review.

Examining Mind Metaphors

I'm presently dissecting metaphors, with a focus on instances like "hallucinations" and "understanding context" within the framework. I'm unpacking the implications of anthropomorphizing AI and how these metaphors shape perceptions of agency. I'm analyzing the linguistic framing of these examples and their potential effects on accountability. Specifically, I'm noting the attribution of cognitive states to AI systems.

Isolating Framing Elements

I'm now identifying and analyzing specific textual examples that illustrate key framing elements within the document. I'm focusing on phrases that implicitly ascribe human-like capabilities to AI, like the example of AI "understanding context." I'm evaluating how these examples project agency and deflect accountability, such as the example of AI "reshaping the economy" where it places AI at the forefront of action, obscuring the human actors involved. I'm noting each item and the specific frame at play.

Identifying Key Framing

I've pinpointed specific instances of framing, specifically "AI Authority," which projects authority onto AI. I'm contrasting this with the "Unlock Potential" metaphor which implies the use of AI is like finding a resource. I'm also examining the education frame, apparent in the context of AI "training." These examples contribute to the larger project of dissecting the framework.

Mapping Metaphors Now

I'm now mapping metaphors to source/target domains to reveal concealed aspects. "Training" projects a human-like learning process onto parameter optimization, obscuring data selection accountability. "Confident Outputs" frames probability scores as subjective feelings, deflecting from the probabilistic nature of the result. Furthermore, "hallucination" conceals that all outputs are generated fabrications, implying a truth mode.

Reframing the Text

I'm now auditing explanations of AI within the framework, identifying passages that characterize AI's function. I've noted examples from the text, ranging from statistical pattern identification to contextual influence, classifying each passage. I'm focusing on contrasting these with mechanistic descriptions and identifying areas ripe for reframing. My aim is to mitigate misleading anthropomorphisms. I'm prioritizing passages that require mechanistic grounding.

Reframing Specific Claims

I'm now focusing on specific statements. For example, I'm reframing "AI generates confident outputs" to the more precise "Model generates high-probability tokens that are factually incorrect." Similarly, I'm shifting "AI is reshaping the economy" to "Corporations are deploying automation software to reduce labor costs." Finally, I'm refining "Directing AI... guide the system" to "Optimizing prompt syntax to constrain token generation."

Synthesizing Accountability Frames

I'm now zeroing in on the document's framing of accountability, particularly how the "AI Literacy" concept places responsibility on the worker. I'm focusing on the effect: the developers' role is downplayed, while the burden of adaptation is placed squarely on the individual. I'm considering the economic implications. I'm building this as part of my final synthesis.

Analyzing Consciousness Terms

I'm now examining how the document defines AI cognition. I am evaluating instances of consciousness-laden verbs, especially "recognizing," "identifying," and "understanding," to clarify the attribution of agency. Further, I'm noting specific passages about "Human design and oversight," to better grasp the accountability frames. I'm focusing on "Hallucination."


Analysis Provenance

Run ID: 2026-02-16-the-u-s-department-of-labor-s-artificial-metaphor-z4ywnm
Raw JSON: 2026-02-16-the-u-s-department-of-labor-s-artificial-metaphor-z4ywnm.json
Framework: Metaphor Analysis v6.4
Schema Version: 3.0
Generated: 2026-02-16T09:14:16.280Z

Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0