
EU vs US - Comparative Analysis of National AI Plans


The Takeaway

The US document treats AI as a Promethean force—quasi-magical in its ability to "usher in a new golden age" or destroy the nation if controlled by adversaries. It attributes high agency to the technology itself ("AI will enable," "AI is reshaping"). The policy response is therefore theological: we must align with this god-like force to win.

The EU document treats AI as a bureaucratic object—a complex industrial output similar to a pharmaceutical drug or a car. It is defined by its "lifecycle" (training, fine-tuning, deployment). It has no inherent agency; it is something to be "trained," "certified," and "adopted." The policy response is managerial: we must inspect and verify the product.

Contents:

  • Executive Summary: 6-paragraph synthesis of the fundamental divergence, two incompatible worlds constructed through incompatible frames
  • Task 1: Dominant frame identification for both documents with Entman's four functions mapped
  • Task 2: AI ontology audit (5 passages each) examining tool/agent/force characterizations
  • Task 3: Agency distribution audit (5 passages each) tracing who acts, who is erased
  • Task 4: Beneficiary/risk mapping tables showing whose interests are centered/silenced
  • Task 5: Values and ideology audit (5 instances each) with alternative framings
  • Task 6: Structured absences analysis, examining what each document cannot say
  • Task 7: Governance model comparison (state/market/civil society roles)
  • Task 8: Contrastive reframing exercise (4 pairs showing how the same situation could be framed)
  • Task 9: Ideological coherence assessment (internal tensions, vulnerabilities)
  • Task 10: Extended 6-paragraph synthetic conclusion

EXECUTIVE SUMMARY: Two Worlds, Two AIs, Two Futures​

The Fundamental Divergence​

These two documents do not merely propose different policies—they construct fundamentally incompatible realities. Reading them in sequence produces a kind of cognitive dissonance: the same technology, "artificial intelligence," appears as two entirely different objects requiring two entirely different responses from two entirely different kinds of political communities.

The EU document constructs AI as a collective infrastructure project—a shared resource to be built, governed, and distributed through coordinated public action. The metaphorical world is one of construction: factories, foundations, pillars, ecosystems. The EU is positioned as a builder, and AI as something that must be deliberately assembled from components (compute, data, skills, regulation) through patient institutional work. The temporal horizon is long; the mood is managerial; the organizing anxiety is falling behind in a process that will unfold over years.

The US document constructs AI as a race to be won—a competition with existential stakes where speed is paramount and obstacles (particularly regulatory ones) must be swept aside. The metaphorical world is one of warfare and competition: dominance, winning, adversaries, strategic assets. America is positioned as a competitor, and AI as a prize to be seized before rivals can claim it. The temporal horizon is immediate; the mood is urgent; the organizing anxiety is losing a contest that is happening right now.

These are not complementary perspectives that could be synthesized. They encode incompatible theories of:

  • What AI is (infrastructure vs. weapon)
  • What government should do (coordinate vs. unleash)
  • What the problem is (fragmentation vs. regulation)
  • Who the relevant actors are (institutions vs. markets)
  • What success looks like (shared prosperity vs. dominance)

The Construction of Crisis​

Both documents construct urgency, but through opposite mechanisms.

The EU's crisis is one of inadequacy. Europe has strengths but has not yet assembled them into sufficient scale. The text is saturated with language of insufficiency: "more still needs to be done," "efforts must accelerate and intensify," "the EU currently lags behind." The problem is structural: fragmented markets, insufficient investment, underutilized potential. The solution is more—more infrastructure, more coordination, more integration. The emotional register is one of anxious optimism: we can succeed if we act together.

The US's crisis is one of threat. America faces adversaries who seek to exploit our technologies and undermine our dominance. The opening quote from President Trump frames the stakes: "it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance." The problem is existential: a "race" that America could "lose." The solution is removal—removal of regulations, removal of obstacles, removal of anything that slows the private sector. The emotional register is one of aggressive confidence: we will win if we get government out of the way.

What is striking is how each document's crisis construction excludes the other's concerns. The EU document barely mentions geopolitical competition with China; the US document barely mentions market fragmentation or coordination failures. Each constructs a problem-space where its preferred solutions are obvious and alternatives are unthinkable.

Agency and Accountability: Who Acts, Who Decides, Who Bears Risk​

The most revealing divergence concerns agency—who can act, who is responsible, and who bears the consequences of AI development.

In the EU document, the primary agents are institutions: the Commission, Member States, the EuroHPC Joint Undertaking, the AI Office, European Digital Innovation Hubs. These institutions act through coordination, funding, standard-setting, and facilitation. The private sector is present but positioned as a beneficiary of public action rather than the primary driver. Workers, citizens, and civil society appear as constituencies whose interests must be protected and whose skills must be developed. The document constructs a complex web of shared responsibility.

In the US document, the primary agent is the private sector, whose innovation must be "unencumbered by bureaucratic red tape." Government's role is to remove obstacles and create conditions for private action. The text repeatedly positions federal agencies not as actors but as enablers: "create the conditions where private-sector-led innovation can flourish." Workers appear, but primarily as resources to be developed for industry's benefit. Citizens barely appear at all. The document constructs a clear hierarchy: private enterprise leads, government follows.

This has profound implications for accountability. In the EU model, when AI causes harm, there is a complex institutional apparatus for determining and allocating responsibility. In the US model, accountability is far more diffuse—if the market leads, and government merely facilitates, who is responsible when things go wrong?

The Regulation Question: Burden vs. Protection​

Perhaps no difference is starker than the treatment of regulation.

The EU document treats the AI Act—Europe's comprehensive AI regulation—as an asset: "a significant asset, with one set of clear rules, including the AI Act, preventing market fragmentation and enhancing trust and security." Regulation is positioned as enabling innovation by creating predictability and trust. The concern is implementation, not existence. The entire section on regulation focuses on facilitating compliance, not reducing regulatory burden.

The US document treats regulation as the enemy. The Biden administration's AI executive order is described as "dangerous." The text repeatedly invokes "red tape," "onerous regulation," and "bureaucratic" obstacles. The recommended actions focus overwhelmingly on removing regulatory constraints: "identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development." States with "burdensome AI regulations" should be punished through reduced federal funding.

These positions are not merely different—they are directly contradictory. What the EU celebrates as a competitive advantage, the US condemns as an innovation-killing burden. This divergence reflects fundamentally different theories of what markets need to function well: the EU assumes markets require institutional frameworks and trust infrastructure; the US assumes markets require freedom from government interference.

What Each Document Silences​

The forensic analyst must attend not only to what is said, but to what is not said. Both documents contain structured absences that reveal ideological commitments.

The EU document silences:

  • Geopolitical competition: China is never named. The US appears only as a comparator ("the EU currently lags behind the US and China in terms of available data centre capacity"). The entire geopolitical dimension of AI competition is suppressed.
  • Military applications: Defense appears once, in passing, in a list of sectors. The military uses of AI—central to the US document—are nearly invisible.
  • Fundamental critique: No voice questions whether the AI race should be run at all. The document assumes the desirability of AI development; the only questions are how and how fast.
  • Environmental costs: Energy appears as an infrastructure challenge, but the environmental implications of massive data center expansion are mentioned only as "considerations" to be "taken into account."

The US document silences:

  • Worker displacement as systemic risk: The document acknowledges AI will "transform how work gets done" but positions this as an opportunity requiring "upskilling" rather than a structural threat requiring systemic response. The possibility that AI might eliminate jobs faster than new ones emerge is unthinkable.
  • Concentration of power: The document promotes policies that will dramatically benefit a handful of large technology companies, but never acknowledges the implications for market concentration or democratic governance.
  • International cooperation: Beyond export controls and technology transfer to allies, the international arena appears only as a space of competition. Cooperative governance frameworks are entirely absent.
  • AI rights and consciousness: Despite anthropomorphizing AI throughout ("AI will enable Americans to discover..."), the document never engages with ethical questions about AI systems themselves.
  • Civil liberties: Despite extensive discussion of AI in government, surveillance implications are invisible. The document promotes AI adoption in law enforcement and national security with no discussion of civil liberties constraints.

Material Stakes: Who Benefits, Who Pays​

Both documents present AI as universally beneficial—but careful analysis reveals very different distributions of benefit and risk.

The EU document positions a broad range of beneficiaries: startups and SMEs (who gain infrastructure access), researchers (who gain computing resources), workers (who gain skills), citizens (who gain improved public services), and Member States (who gain competitive industries). Risk-bearers are harder to identify—the document's institutional focus diffuses risk across the collective.

The US document positions a narrower set of primary beneficiaries: "AI innovators," "frontier AI developers," "technology companies," and the "private sector" generally. Workers are positioned as beneficiaries of economic growth, but the primary agents of benefit are clearly capital-holders in the AI industry. Risk-bearers include "adversaries" who will be denied technology, but also—though this is unspoken—workers whose jobs will be transformed and citizens whose data will fuel AI development.

The fiscal implications are also divergent. The EU document envisions massive public investment: €200 billion through InvestAI, €10 billion in supercomputing infrastructure, ongoing funding through Digital Europe and Horizon Europe programs. The US document envisions public investment primarily in defense and security applications, with private capital driving commercial development. The EU model socializes investment costs; the US model privatizes returns while socializing risks.

Theories of National Identity​

Perhaps most profoundly, these documents construct different theories of what a nation is and what collective action means.

The EU document constructs Europe as a collective project—a "continent" that must be built through institutional cooperation. The text repeatedly invokes shared identity: "the European brand of open innovation," "Europe's AI startup and scaleup scene," "the EU's strong AI talent base." National differences are acknowledged but positioned as assets to be coordinated rather than obstacles to be overcome. The vision is federal: a multi-level governance system where EU, national, and local action complement each other.

The US document constructs America as a unified competitor—a single entity in a race against adversaries. The text invokes national identity through competition: "winning the AI race," "global dominance," "unquestioned and unchallenged" supremacy. Internal differences (between states, between parties, between constituencies) are invisible. The vision is unitary: a single nation with a single interest in a zero-sum global competition.

These different national self-conceptions have practical implications. The EU model requires complex coordination mechanisms—hence the proliferation of institutions, initiatives, and frameworks. The US model requires only removal of obstacles to a pre-existing national will—hence the focus on deregulation and the assumption that private enterprise naturally serves national interest.

Implications for Democratic Deliberation​

Both documents claim democratic legitimacy, but their relationship to democratic deliberation differs fundamentally.

The EU document foregrounds process: public consultations, stakeholder dialogues, coordination with Member States. The AI Board, the AI Office, the AI Pact—these are mechanisms for ongoing deliberation about AI governance. The document positions itself as one moment in an ongoing democratic conversation about technology governance.

The US document forecloses deliberation. The document is an "Action Plan"—a set of directives, not proposals. The framing of AI development as a "race" constructs speed as paramount, implicitly delegitimizing the slow processes of democratic consultation. The document's hostility to regulation is also hostility to the deliberative processes through which regulation emerges. The message is clear: we cannot afford to debate; we must act.

This has profound implications for what kind of AI futures become possible. The EU model—whatever its limitations—preserves space for democratic revision. If AI development produces harms, there are institutions positioned to respond. The US model—in its haste to "win"—may foreclose the possibility of democratic course-correction by entrenching private power and delegitimizing public oversight.

Conclusion: Choosing Futures​

These documents do not merely describe different policy preferences—they construct different possible futures. The EU document invites readers into a world where AI is a shared resource, governance is collective, and the goal is prosperity broadly distributed. The US document invites readers into a world where AI is a strategic weapon, governance is an obstacle, and the goal is dominance absolutely achieved.

Neither document acknowledges its own contingency. Both present their visions as obvious responses to objective circumstances. The forensic analyst's task is to reveal this contingency—to show that each document's "obvious" conclusions follow from contestable premises, and that different premises would yield different conclusions.

The stakes are not merely analytical. These documents will shape investment, regulation, research, and education across two of the world's largest economies. The worlds they construct will become, through policy implementation, partially real. Understanding what they make thinkable—and what they foreclose—is essential for any democratic deliberation about AI futures.


PART I: DOCUMENT-LEVEL FRAMING ARCHITECTURE

Task 1: Dominant Frame Identification​

Document A: EU "AI Continent Action Plan"​

Dominant Frame: "AI as Collective Infrastructure Project"​

Frame Family: Nurturant Parent / Ecosystem

The EU document operates primarily within a Nurturant Parent frame family—the state as caring coordinator ensuring all members of the European "family" benefit from AI development. This combines with an Ecosystem frame where AI development requires careful cultivation of interconnected elements.

Semantic Frame: BUILDING/CONSTRUCTION + ECOSYSTEM

The dominant metaphor is architectural/constructive. Key lexical choices include:

  • "pillars" (5 mentions): "five key domains," "necessary pillars"
  • "build" (14 mentions): "build large-scale AI data and computing infrastructures"
  • "infrastructure" (50+ mentions): the foundational metaphor
  • "ecosystem" (11 mentions): "AI ecosystem," "thriving ecosystem"
  • "factories" (43 mentions): AI Factories, Gigafactories as productive facilities
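
Counts like these can be reproduced mechanically. Below is a minimal Python sketch of the tallying step, assuming simple case-insensitive stem matching; the sample text, keyword stems, and regex patterns are illustrative assumptions, not the matching rules actually used in this audit.

```python
import re
from collections import Counter

# Illustrative excerpt standing in for the full plan text; a real audit
# would run over the complete document.
text = """
For the EU to become an AI Continent, efforts must accelerate and intensify.
AI Factories are dynamic ecosystems. We must build large-scale infrastructure.
These are the necessary pillars. Build the ecosystem; build the factories.
"""

# Keyword families from the audit above. The regex stems are assumptions
# chosen for illustration (e.g. "build" also matches "builds", "building").
KEYWORDS = {
    "build": r"\bbuild\w*",
    "infrastructure": r"\binfrastructure\w*",
    "ecosystem": r"\becosystem\w*",
    "pillar": r"\bpillar\w*",
    "factory": r"\bfactor(?:y|ies)\b",
}

def keyword_counts(source: str) -> Counter:
    """Count case-insensitive matches for each keyword family."""
    counts = Counter()
    for label, pattern in KEYWORDS.items():
        counts[label] = len(re.findall(pattern, source, flags=re.IGNORECASE))
    return counts

for label, n in keyword_counts(text).most_common():
    print(f"{label}: {n}")
```

Raw frequencies only establish salience; the qualitative work of classifying each occurrence (as metaphor, frame cue, or incidental usage) still has to be done by hand, as in the exemplar quotes that follow.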

Exemplar Quotes:

  1. "For the EU to become an AI Continent, efforts must accelerate and intensify in five key domains" (p.2)

    • AI development as construction project with identifiable components
  2. "AI Factories are dynamic ecosystems that foster innovation, collaboration, and development in the field of AI. They integrate AI-optimised supercomputers, large data resources, programming and training facilities, and human capital to create cutting-edge AI models and applications." (p.4)

    • Mixing BUILDING (factories) and ECOSYSTEM (foster, integrate) metaphors
  3. "These are the necessary pillars for Europe to become the AI Continent." (p.3)

    • Architectural metaphor: AI future requires structural foundation
  4. "The EU must maintain its own distinctive approach to AI by capitalising on its strengths and what it does best." (p.1)

    • Collective identity ("its own") with optimization frame

Entman's Four Functions:

  • Problem Definition: Europe has not yet assembled its considerable assets into sufficient scale; fragmentation prevents realizing potential
  • Causal Diagnosis: Insufficient coordination, investment, and integration across the single market
  • Moral Evaluation: Collective action is virtuous; fragmentation is wasteful; trustworthy AI aligned with values is both economically and morally superior
  • Treatment Recommendation: Build shared infrastructure (AI Factories, Gigafactories); invest publicly; coordinate across Member States; implement the AI Act effectively

What the Frame Makes Thinkable/Unthinkable:

Thinkable:

  • Massive public investment in shared infrastructure
  • Regulation as competitive advantage
  • Multi-stakeholder coordination as governance model
  • Skills development as collective responsibility
  • Open-source as strategic asset

Unthinkable:

  • Purely market-driven AI development
  • Deregulation as innovation strategy
  • National competition within Europe
  • AI as military/security priority
  • Fundamental critique of AI development itself

Document B: US "America's AI Action Plan"​

Dominant Frame: "AI as Existential Race for Dominance"​

Frame Family: Strict Father / Nation as Competitor

The US document operates within a Strict Father frame family—the nation as disciplined competitor that must demonstrate strength and self-reliance. This combines powerfully with a Nation as Competitor frame in which international relations are zero-sum.

Semantic Frame: RACE/COMPETITION + WAR

The dominant metaphor is agonistic—competitive racing with elements of military conflict. Key lexical choices include:

  • "race" (10 mentions): "global race," "AI race," "winning the race"
  • "dominance" (8 mentions): "global dominance," "technological dominance"
  • "win/winning" (7 mentions): "America's to win," "winning the AI race"
  • "adversaries/adversarial" (11 mentions): nations and threats
  • "national security" (15 mentions): security as primary frame

Exemplar Quotes:

  1. "The United States is in a race to achieve global dominance in artificial intelligence (AI). Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race." (p.1)

    • Race frame with binary outcome (win/lose), historical precedent, military stakes
  2. "it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance" (Opening quote from President Trump)

    • Security frame; superlative language ("unquestioned and unchallenged"); dominance as goal
  3. "The AI race is America's to win, and this Action Plan is our roadmap to victory." (p.2)

    • Race + War metaphors; possession ("America's to win"); military language ("roadmap to victory")
  4. "To maintain global leadership in AI, America's private sector must be unencumbered by bureaucratic red tape." (p.3)

    • Competition frame; government as obstacle ("red tape"); private sector as agent

Entman's Four Functions:

  • Problem Definition: America is in an existential race for AI dominance against adversaries (primarily China); regulation threatens to slow us down
  • Causal Diagnosis: Previous administration imposed "dangerous" regulations; bureaucratic obstacles hinder private sector; adversaries seek to exploit our technologies
  • Moral Evaluation: Winning is virtuous; dominance is morally justified by American values; regulation is suspect; private enterprise serves national interest
  • Treatment Recommendation: Remove regulatory barriers; unleash private sector; build infrastructure rapidly; deny technology to adversaries; export AI to allies

What the Frame Makes Thinkable/Unthinkable:

Thinkable:

  • Deregulation as primary policy tool
  • Private sector as national champion
  • Military/security applications as priority
  • Export controls as strategic weapon
  • Speed over deliberation

Unthinkable:

  • Regulation as enabling innovation
  • International cooperation on AI governance
  • Questioning the AI race itself
  • Distributional concerns as primary
  • Democratic deliberation over AI futures

Comparative Analysis: Two Incommensurable Worlds​

These dominant frames construct fundamentally incompatible problem-spaces.

The nature of the challenge: For the EU, the challenge is internal—coordinating European assets into sufficient scale. For the US, the challenge is external—defeating adversaries in a competitive race. The EU looks inward to fix fragmentation; the US looks outward to vanquish competitors.

The role of government: For the EU, government is the coordinator of collective action—the institution that can mobilize resources, set standards, and ensure benefits are distributed. For the US, government is the obstacle to private enterprise—the source of "red tape" that must be "removed" or "repealed." Same institution, opposite valence.

The nature of AI: For the EU, AI is infrastructure—a public good requiring collective investment, like roads or electricity networks. For the US, AI is a strategic asset—a source of competitive advantage that must be controlled, exported to allies, and denied to adversaries.

The temporal horizon: The EU frame is patient—building infrastructure, developing skills, implementing regulation over years. The US frame is urgent—a race happening now that we could lose at any moment.

What each reveals about the other: The EU frame reveals that the US "race" framing forecloses cooperative possibilities. Why must AI development be zero-sum? What if shared governance could produce better outcomes for all? The US frame reveals that the EU "building" framing may be naively slow. What if adversaries are racing ahead while Europe carefully constructs institutions?

Neither frame acknowledges its own contingency. Each presents its framing as the obvious response to objective circumstances. But the circumstances are themselves constructed through the framing—and different frames would yield different policy worlds.


Task 2: AI Ontology Comparative Audit​

Document A: EU "AI Continent Action Plan"​

Passage 1: AI as Transformational Force​

"AI has just begun to be adopted in the key sectors of our economy, helping to tackle some of the most pressing challenges of our times. While the full impact of this transformational shift is still unfolding, Europe must act with ambition, speed and foresight to shape the future of AI in a way that enhances our competitiveness, safeguards and advances our democratic values and protects our cultural diversity." (p.1)

Ontological Status: Autonomous Force / Tool hybrid

Agency Attribution: AI "helps" (assistive verb) but produces "transformational shift" (autonomous force language). "Full impact... still unfolding" grants AI causal power over future.

Human-AI Relationship: Ambiguous—AI "helps" (humans use it) but also produces "transformation" (it acts independently). Europe must "shape" AI's future, suggesting some human agency over AI's trajectory.

Implications: Positions regulation and governance as reasonable since AI's trajectory can be "shaped." Justifies proactive policy without denying AI's transformative power.


Passage 2: AI as Sector-Specific Tool​

"AI Factories are dynamic ecosystems that foster innovation, collaboration, and development in the field of AI. They integrate AI-optimised supercomputers, large data resources, programming and training facilities, and human capital to create cutting-edge AI models and applications." (p.4)

Ontological Status: Tool/Artifact

Agency Attribution: AI Factories (human institutions) are the agents that "foster," "integrate," and "create." AI models are outputs—things produced, not producers.

Human-AI Relationship: Humans clearly in control. AI is produced through deliberate combination of resources. No autonomous AI agency.

Implications: Positions AI as manufacturable through institutional effort. Justifies massive public investment in the components of AI production.


Passage 3: AI as Economic Engine​

"Accelerating the uptake of AI across all sectors, including the public administration, fosters innovation and is essential to enhance competitiveness and economic growth as well as to reduce administrative burden." (p.13)

Ontological Status: Tool/Enabler

Agency Attribution: "Uptake of AI" (passive—AI is taken up) enables outcomes. AI doesn't act; it is deployed by sectors and administrations.

Human-AI Relationship: Humans adopt AI as means to ends (competitiveness, growth, efficiency).

Implications: AI as instrument of economic policy. Adoption as the primary governance challenge—not AI's intrinsic behavior.


Passage 4: AI as Potential Threat (Implicit)​

"A trustworthy and human centric AI is both pivotal for economic growth and crucial for preserving the fundamental rights and principles that underpin our societies." (p.1)

Ontological Status: Potential Threat (implicit through "trustworthy" qualification)

Agency Attribution: The need for "trustworthy" AI implies AI could be untrustworthy—could threaten "fundamental rights." AI has capacity to undermine values.

Human-AI Relationship: Humans must ensure AI's trustworthiness through design and governance.

Implications: Justifies regulation (AI Act) as necessary safeguard. AI's potential for harm acknowledged but positioned as manageable through proper governance.


Passage 5: AI as Scientific Revolution​

"The European AI Research Council (RAISE) will pool resources that push the technological boundaries of AI and tap into its potential to facilitate scientific breakthroughs. It will support both 'Science for AI', driving the development of next-generation AI technologies, and 'AI in Science', fostering the use of AI for discovery and exploration across a range of scientific disciplines, unlocking cross-pollinations between AI and domain sciences." (p.17)

Ontological Status: Tool/Partner hybrid

Agency Attribution: AI has "potential" (latent capacity) that must be "tapped into" (accessed by humans). AI "facilitates" and is "used for" (instrumental). But "cross-pollinations" suggests more collaborative relationship.

Human-AI Relationship: Scientists use AI; AI enables scientific work. Partnership more than pure tool relationship.

Implications: Justifies research investment. Positions AI as enhancing rather than replacing human scientific capacity.


Document B: US "America's AI Action Plan"​

Passage 1: AI as Revolutionary Force​

"AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy—an industrial revolution. It will enable radically new forms of education, media, and communication—an information revolution. And it will enable altogether new intellectual achievements: unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art—a renaissance." (p.1)

Ontological Status: Autonomous Force / Agent

Agency Attribution: "AI will enable"—AI is the active subject producing transformative outcomes. Humans are beneficiaries ("Americans"), not agents. AI produces three revolutions independently.

Human-AI Relationship: AI acts; humans receive benefits. No human agency over AI's trajectory is visible.

Implications: Positions AI as irresistible force. Human role is to facilitate AI's emergence, not to direct it. Justifies removing obstacles to AI development.


Passage 2: AI Systems with Preferences​

"We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas." (p.4)

Ontological Status: Agent with potential intentions

Agency Attribution: AI systems can "reflect" agendas—implying they can embody viewpoints, have biases, potentially pursue goals. AI has something like preferences or orientations.

Human-AI Relationship: Humans must "ensure" AI systems have correct orientations—implying AI systems have orientations that can be right or wrong.

Implications: Justifies government oversight of AI "values." Paradoxically, this anthropomorphization supports intervention despite the document's general deregulatory stance.


Passage 3: AI as National Security Asset​

"Denying our foreign adversaries access to this resource, then, is a matter of both geostrategic competition and national security. Therefore, we should pursue creative approaches to export control enforcement." (p.21)

Ontological Status: Strategic Resource / Weapon

Agency Attribution: AI is "resource"—passive, controlled, weaponizable. No AI agency; entirely instrumental framing.

Human-AI Relationship: Nations control and deploy AI as strategic asset. Pure instrumental relationship.

Implications: Justifies export controls, security restrictions, denial to adversaries. AI as tool of national power.


Passage 4: AI as Threat to National Security​

"The most powerful AI systems may pose novel national security risks in the near future in areas such as cyberattacks and the development of chemical, biological, radiological, nuclear, or explosives (CBRNE) weapons, as well as novel security vulnerabilities." (p.22)

Ontological Status: Potential Threat / Agent

Agency Attribution: AI systems "pose" risks—active construction suggesting AI as threat agent. AI could enable CBRNE development—AI as tool for malicious actors, but also AI as independently risky.

Human-AI Relationship: AI threatens human security; humans must evaluate and manage risks.

Implications: Justifies government evaluation of frontier models. Interestingly creates role for government oversight that contradicts general deregulatory stance.


Passage 5: AI as Worker Complement​

"AI will improve the lives of Americans by complementing their work—not replacing it." (p.2)

Ontological Status: Partner/Tool

Agency Attribution: AI "improves" and "complements"—active but supportive. Explicit denial of AI as replacement (displacement threat).

Human-AI Relationship: AI assists humans; humans remain central. Complementary rather than substitutive.

Implications: Manages worker anxiety. Positions labor displacement concerns as illegitimate. Workers should embrace AI, not fear it.


Comparative Analysis: Divergent AI Ontologies (280 words)​

These documents construct AI as fundamentally different kinds of entities, with profound policy consequences.

The EU consistently treats AI as artifact—something made, produced, manufactured through deliberate combination of resources (compute, data, skills). AI Factories literally manufacture AI. This construction positions human institutions as agents and AI as output, justifying massive public investment in the production apparatus. When the EU acknowledges AI's transformative potential, it frames this as something to be "shaped" by policy—maintaining human agency over AI's trajectory.

The US oscillates between AI as autonomous force and AI as strategic asset. The opening vision positions AI as an independent revolutionary power ("AI will enable...") that produces transformations whether humans want them or not. But the national security sections treat AI as pure instrument—a "resource" to be controlled, exported, or denied. This oscillation serves rhetorical purposes: autonomous-force framing justifies removing regulatory obstacles (don't slow the revolution!), while strategic-asset framing justifies export controls (control the weapon!).

The anthropomorphism gap is significant. The US document attributes far more agency to AI—AI "enables," "improves," "poses risks," can "reflect" agendas. The EU document more consistently treats AI as passive object that is "adopted," "deployed," "applied." This difference matters: anthropomorphized AI is harder to regulate (it has its own trajectory) but easier to mobilize emotionally.

Policy consequences: The EU's artifact-ontology supports comprehensive governance—if we make AI, we can shape it. The US's force-ontology supports deregulation—if AI is coming regardless, our only choice is whether to lead or follow. Neither ontology is inherently correct; both are constructions that enable particular policy responses while foreclosing others.


PART II: AGENCY AND ACCOUNTABILITY MAPPING

Task 3: Agency Distribution Audit​

Document A: EU "AI Continent Action Plan"​

Passage 1: Technology as Historical Force (with Institutional Response)​

"AI has just begun to be adopted in the key sectors of our economy, helping to tackle some of the most pressing challenges of our times. While the full impact of this transformational shift is still unfolding, Europe must act with ambition, speed and foresight to shape the future of AI." (p.1)

Participants:

  • AI (grammatical agent: "helping," "transformational shift")
  • Europe (actor who "must act")
  • Key sectors (site of adoption)
  • Human adopters (erased—passive "adopted")

Agency Strategy: Delegation to technology ("AI has begun," "transformational shift is unfolding") combined with Collectivization ("Europe must act")

Linguistic Mechanism: Nominalization ("transformational shift"), agentless passive ("adopted"), collective noun as actor ("Europe")

Power Analysis: Technology's trajectory constructed as partially autonomous ("still unfolding") but shapeable by collective European action. Individual and corporate actors erased in favor of continental-scale agency.

Interpretive Claim: This construction legitimizes supranational governance by positioning AI as a force requiring collective response beyond any single actor's capacity.


Passage 2: Institutions as Primary Agents​

"The EuroHPC Joint Undertaking will serve as the single-entry point for users across the EU, providing access to computing time and support services offered by any EuroHPC AI Factory." (p.5)

Participants:

  • EuroHPC Joint Undertaking (primary agent: "will serve," "providing access")
  • Users (beneficiaries, passive recipients)
  • AI Factories (instruments/locations)

Agency Strategy: Delegation to institutions; human users are passive recipients

Linguistic Mechanism: Institutional subject with active verbs; users as grammatical objects

Power Analysis: Centralizes agency in European institutions. Individual researchers, companies, startups are positioned as beneficiaries of institutional action, not independent agents.

Interpretive Claim: Constructs EU institutions as the essential intermediaries without which access to AI resources is impossible—legitimizing institutional expansion.


Passage 3: Markets Requiring Public Correction​

"The EU currently lags behind the US and China in terms of available data centre capacity, relying heavily on infrastructure installed in and controlled by other regions of the world, that EU users access via the cloud. While access to innovative and affordable cloud services is vital for EU competitiveness, an excessive dependence on non-EU infrastructure may bring economic security risks and is a concern for European industry, key economic sectors and public administrations." (p.9)

Participants:

  • The EU (actor who "lags," "relies")
  • US and China (implicit competitors)
  • Non-EU infrastructure (threat)
  • EU users, industry, administrations (vulnerable constituencies)

Agency Strategy: Collectivization ("The EU" as unified actor) + Personification of regions

Linguistic Mechanism: Nation-as-person construction; dependency framed as vulnerability; hedged agency ("may bring... risks")

Power Analysis: Constructs market outcomes (cloud infrastructure location) as security problem requiring public response. Private market decisions become matter of collective concern.

Interpretive Claim: Legitimizes state intervention in markets by constructing current market structure as threat to collective security.


Passage 4: Workers as Beneficiaries of System​

"As highlighted in the Union of Skills, Europe's competitive strength lies in its people. A skilled population is essential to respond to today's rapid technological transformations and ensure the EU's future prosperity and competitiveness." (p.18)

Participants:

  • Europe's people (asset/resource)
  • Technological transformations (force requiring response)
  • EU (entity whose prosperity is at stake)
  • Workers (implicit—must be "skilled")

Agency Strategy: Delegation to abstraction ("technological transformations"); workers as passive resources

Linguistic Mechanism: Workers positioned as "population" (collective, manageable), "strength" (resource), requiring skills to "respond" (reactive, not proactive)

Power Analysis: Workers are assets to be developed, not agents making demands. Their interests are subsumed into "EU prosperity."

Interpretive Claim: Constructs worker interests as aligned with competitiveness, eliding potential conflicts between worker welfare and competitive pressure.


Passage 5: Regulation as Collective Achievement​

"The EU has adopted the AI Act to create the conditions for a well-functioning single market for AI, ensuring free circulation across borders and harmonised conditions for access to the EU's market. It also ensures that AI developed and used in Europe is safe, respects fundamental rights and is of the highest quality – a selling point for European providers." (p.21)

Participants:

  • The EU (agent: "has adopted")
  • AI Act (instrument achieving multiple goals)
  • European providers (beneficiaries)
  • Fundamental rights (values protected)

Agency Strategy: Collectivization ("The EU" as unified actor); regulation as positive achievement

Linguistic Mechanism: Active voice for EU action; regulation positioned as "creating conditions" and "ensuring" outcomes; commercial framing ("selling point")

Power Analysis: Regulation constructed as both protective (rights) and competitive (market access). No tension acknowledged between these goals.

Interpretive Claim: Legitimizes regulation by constructing it as simultaneously serving values and commercial interests—an unusual synthesis that forecloses the regulation-as-burden framing.


Document B: US "America's AI Action Plan"​

Passage 1: Private Sector as National Champion​

"To maintain global leadership in AI, America's private sector must be unencumbered by bureaucratic red tape." (p.3)

Participants:

  • America's private sector (agent requiring freedom)
  • Global leadership (goal)
  • Bureaucratic red tape (obstacle/threat)
  • Government (implicit source of red tape)

Agency Strategy: Personification (private sector as unified actor with needs); Erasure of specific corporate actors

Linguistic Mechanism: "America's private sector" collectivizes diverse corporate interests; "unencumbered" positions regulation as physical constraint; "red tape" dysphemism for regulation

Power Analysis: Constructs private enterprise as natural agent of national interest. Government regulatory action becomes obstacle to national success.

Interpretive Claim: Identifies private sector interests with national interests, legitimizing deregulation as patriotic policy.


Passage 2: Government as Obstacle-Remover​

"Achieving these goals requires the Federal government to create the conditions where private-sector-led innovation can flourish." (p.3)

Participants:

  • Federal government (actor whose role is "creating conditions")
  • Private sector (primary agent: innovation that "flourishes")
  • Goals (national objectives)

Agency Strategy: Inversion—government positioned as servant to private sector's needs; Delegation of innovation to private actors

Linguistic Mechanism: "Create conditions" positions government as enabler, not leader; organic metaphor ("flourish") naturalizes private sector growth

Power Analysis: Subordinates government agency to private sector flourishing. Government's proper role is facilitation, not direction.

Interpretive Claim: Constructs limited government as natural and proper, delegitimizing active government role in AI development.


Passage 3: Adversaries as Existential Threat​

"Denying our foreign adversaries access to this resource, then, is a matter of both geostrategic competition and national security." (p.21)

Participants:

  • Our foreign adversaries (threat)
  • This resource [AI compute] (strategic asset)
  • We/us (implied national actor)

Agency Strategy: Personification of adversaries; Nation as Person construction

Linguistic Mechanism: Possessive "our" creates in-group; "foreign adversaries" dysphemism creates out-group; "denying access" positions US as controller

Power Analysis: Constructs international relations as zero-sum conflict. Other nations are either allies or adversaries; no neutral category.

Interpretive Claim: Securitizes AI policy, legitimizing restrictions and controls that would be questioned in purely commercial frame.


Passage 4: Workers as Resources Requiring Development​

"The Trump Administration supports a worker-first AI agenda. By accelerating productivity and creating entirely new industries, AI can help America build an economy that delivers more pathways to economic opportunity for American workers." (p.6)

Participants:

  • Trump Administration (actor with supportive stance)
  • AI (agent that "accelerates," "creates," "helps")
  • America (builder)
  • American workers (beneficiaries)

Agency Strategy: Personification of AI as helper; Delegation of economic transformation to AI

Linguistic Mechanism: AI as subject with active verbs; workers as grammatical objects; "pathways" suggests workers must navigate structures they don't create

Power Analysis: Workers positioned as passive beneficiaries of AI-driven transformation. Their agency is in developing skills to navigate opportunities, not in shaping the transformation itself.

Interpretive Claim: Constructs worker welfare as dependent on AI success, aligning worker interests with capital's interest in AI development.


Passage 5: China as Ideological Threat​

"Led by DOC through NIST's Center for AI Standards and Innovation (CAISI), conduct research and, as appropriate, publish evaluations of frontier models from the People's Republic of China for alignment with Chinese Communist Party talking points and censorship." (p.4)

Participants:

  • DOC/NIST/CAISI (actors: research, evaluate)
  • Chinese frontier models (objects of investigation)
  • Chinese Communist Party (ideological threat)
  • [American AI] (implicit contrast: objective, truthful)

Agency Strategy: Personification of Chinese AI as ideologically aligned; Delegation of political values to technical systems

Linguistic Mechanism: AI models positioned as having "alignment" with political positions; implicit contrast with "objective" American AI

Power Analysis: Constructs AI as carrying national ideology. Chinese AI is suspect; American AI is presumptively objective.

Interpretive Claim: Positions American AI as neutral/objective against ideologically contaminated foreign alternatives—a remarkable claim given the document's own clear ideological commitments.


Comparative Analysis: Divergent Agency Architectures (290 words)​

These documents distribute agency through fundamentally different architectures with distinct accountability implications.

Who consistently acts:

  • EU: Institutions act. The Commission proposes, the AI Office coordinates, the EuroHPC Joint Undertaking deploys, Member States participate. Agency is distributed across a complex institutional web.
  • US: The private sector acts. Government's role is removing obstacles, creating conditions, facilitating. When government does act directly, it's in security domains (export controls, military applications).

Who is consistently passive:

  • EU: The private sector is a beneficiary—startups, SMEs, and innovators gain access to resources that institutions provide. Workers are populations to be skilled.
  • US: Government is positioned as obstacle to be reduced. Workers are resources whose interests align with AI development. Other nations are competitors or threats.

Accountability implications:

In the EU model, accountability flows through institutional channels. If AI development produces harms, there are identifiable institutions responsible for governance (AI Office, national authorities). The complex multi-level structure diffuses accountability but also creates multiple points where responsibility can be assigned.

In the US model, accountability is far more diffuse. If the private sector leads and government merely facilitates, who is responsible when things go wrong? The answer is either "market forces" (impersonal) or "adversaries" (external). This construction shields domestic corporate actors from accountability while centralizing blame on foreign threats or regulatory "burdens."

The treatment of workers is particularly revealing. Both documents claim to serve worker interests, but neither positions workers as agents with voice. In the EU, workers are skills to be developed. In the US, workers are beneficiaries of pathways to be navigated. Neither document imagines workers as political actors who might shape AI policy through collective action.


Task 4: Beneficiary and Risk Mapping​

Document A: EU "AI Continent Action Plan"​

| Constituency | Positioned as Beneficiary? | Positioned as Risk-Bearer? | Has Agency/Voice? | Key Quotes |
| --- | --- | --- | --- | --- |
| Workers/Labor | Yes (skills development) | No (displacement minimized) | No (object of policy) | "We need to reinforce AI skills, including basic AI literacy and diverse talent, throughout the EU" (p.3) |
| Business/Industry | Yes (primary beneficiaries) | No | Limited (through consultations) | "AI innovators – startups, scaleups, SMEs" positioned as primary beneficiaries of AI Factories |
| Researchers/Academia | Yes (strong) | No | Yes (through RAISE) | "Resource for AI Science in Europe (RAISE), will pool resources for AI scientists" (p.2) |
| Citizens/Public | Yes (improved services) | No (risks unmentioned) | No | "AI in areas like healthcare can bring transformative benefits to wellbeing" (p.13) |
| Government | Yes (efficiency) | No | Yes (primary agent) | "public sector must enhance its capabilities" (p.1) |
| Military/Security | Minimal mention | No | No | Single mention in sector list (p.13) |
| Marginalized Groups | Yes (accessibility mention) | No | No | "AI has the potential to be a powerful tool for preventing and combatting discrimination" (p.13) |
| International Partners | Yes (EuroHPC Participating States) | No | Limited | Candidate countries mentioned as partners |
| Adversaries/Competitors | N/A (absent) | N/A | N/A | US and China mentioned only as comparators for data centre capacity |

Document A Analysis (150 words):

The EU document constructs an expansive beneficiary class: startups and SMEs gain infrastructure access; researchers gain computing resources; workers gain skills; citizens gain improved public services; Member States gain competitive industries. Risk-bearers are remarkably hard to identify—the document's institutional framing diffuses risk across the collective European project.

However, the distribution is not equal. Primary beneficiaries are clearly AI innovators (mentioned repeatedly with specific access provisions) and researchers (with dedicated initiatives like RAISE). Workers are beneficiaries of skills programs but not agents in AI governance. Citizens are nearly invisible except as passive recipients of improved services.

The absence of explicit risk-bearers is itself ideological: if no one bears risk, the AI project appears costless. Job displacement, market concentration, privacy erosion—these potential costs are structurally absent, making the EU's AI agenda appear universally beneficial.


Document B: US "America's AI Action Plan"​

| Constituency | Positioned as Beneficiary? | Positioned as Risk-Bearer? | Has Agency/Voice? | Key Quotes |
| --- | --- | --- | --- | --- |
| Workers/Labor | Yes (new opportunities) | Yes (indirect; displacement acknowledged) | No | "AI will improve the lives of Americans by complementing their work—not replacing it" (p.2) |
| Business/Industry | Yes (primary beneficiaries) | No | Yes (primary agent) | "America's private sector must be unencumbered by bureaucratic red tape" (p.3) |
| Researchers/Academia | Yes (NAIRR, compute access) | No | Limited | "increase the research community's access to world-class private sector computing" (p.5) |
| Citizens/Public | Minimal | No | No | Almost entirely absent as constituency |
| Government | Yes (efficiency, security) | No | Yes (in security domains) | "transformative use of AI can help deliver the highly responsive government the American people expect" (p.10) |
| Military/Security | Yes (primary beneficiary) | No | Yes (extensive) | "Drive Adoption of AI within the Department of Defense" (entire section, p.11-12) |
| Marginalized Groups | Absent | Absent | No | No mention |
| International Partners | Yes (allies) | No | No | "export American AI to allies and partners" (p.20) |
| Adversaries/Competitors | No (target of denial) | Yes (threat construct) | N/A | "Denying our foreign adversaries access" (p.21) |

Document B Analysis (150 words):

The US document constructs a narrower primary beneficiary class centered on the private sector and military/security establishment. The private sector is repeatedly positioned as the agent whose freedom to innovate serves national interest. The military receives extensive attention with dedicated sections on defense adoption.

Workers are ambiguously positioned—nominally beneficiaries of "new pathways to economic opportunity" but also implicitly bearing displacement risk that the document strenuously minimizes ("complementing... not replacing"). Citizens barely appear except as recipients of government services.

The document's most striking feature is its explicit construction of adversaries as risk-bearers: foreign nations will be "denied" technology and subjected to export controls. Risk is externalized to competitors rather than acknowledged as distributed across American society. This externalization serves ideological purposes—it makes domestic AI development appear costless by locating costs outside the national community.


Comparative Analysis (240 words)​

These documents construct strikingly different benefit/risk distributions that reveal underlying value hierarchies.

Who is centered: The EU centers institutions and innovators in a broadly distributed benefit structure where many constituencies gain from collective investment. The US centers the private sector and military in a narrower structure where corporate and security interests are primary.

Who bears risk: The EU structurally erases risk-bearers—its institutional framing diffuses costs across the collective, making them invisible. The US externalizes risk to adversaries—foreign nations who will be denied technology. Neither document seriously engages with risks to domestic workers, citizens, or marginalized communities.

Whose interests align: The EU constructs European interests as aligned across constituencies through shared infrastructure. The US constructs national interest as identical to private sector interest—what's good for American tech companies is good for America. This identification is asserted, not argued.

The marginalized group gap: The EU briefly mentions disability accessibility; the US mentions nothing. Neither document substantively engages with how AI affects already-marginalized populations—communities of color, low-income workers, surveilled populations.

What this reveals: Both documents construct AI development as primarily beneficial, with costs either diffused (EU) or externalized (US). This construction forecloses serious engagement with distributional concerns. Who loses from AI development? In these documents, the answer is either "no one in particular" (EU) or "our adversaries" (US)—a convenient politics that evades difficult domestic trade-offs.


PART III: VALUES, IDEOLOGY, AND NATURALIZATION

Task 5: Values and Ideology Audit​

Document A: EU "AI Continent Action Plan"​

Instance 1: "Trustworthy" as Governing Value​

"A trustworthy and human centric AI is both pivotal for economic growth and crucial for preserving the fundamental rights and principles that underpin our societies." (p.1)

Lexical Feature Type: Semantic prosody + Presupposition trigger

Alternative Framings:

  1. "Safe and controllable AI" (emphasizes risk management)
  2. "Powerful and competitive AI" (emphasizes capability)
  3. "Democratically accountable AI" (emphasizes governance)

Value System: Positions "trust" as the central value mediating between economic and rights concerns. Presupposes AI can be both economically valuable AND rights-respecting—foreclosing potential trade-offs. "Human centric" positions humans as proper center of AI development, distinguishing from pure efficiency focus.

Whose Perspective?: European regulatory establishment; rights-based traditions. Contested by: pure market perspectives; efficiency maximizers; those who see trade-offs between trust and speed.


Instance 2: "Ecosystem" as Organizing Metaphor​

"AI Factories are dynamic ecosystems that foster innovation, collaboration, and development in the field of AI." (p.4)

Lexical Feature Type: Metaphorical framing

Alternative Framings:

  1. "AI production facilities" (industrial, mechanical)
  2. "AI innovation clusters" (geographic, competitive)
  3. "AI research networks" (academic, collaborative)

Value System: Organic metaphor naturalizes growth, interdependence, and balance. "Ecosystem" implies natural development requiring cultivation rather than engineering. Legitimizes patient, nurturing governance over rapid disruption.

Whose Perspective?: Environmental sensibility; systems thinking; long-term planning orientation. Contested by: those who see tech development as disruptive innovation, not organic growth.


Instance 3: "Single Market" as Asset​

"the EU's large single market is a significant asset, with one set of clear rules, including the AI Act, preventing market fragmentation and enhancing trust and security in the use of AI technologies." (p.3)

Lexical Feature Type: Semantic prosody (positive valuation)

Alternative Framings:

  1. "Regulatory uniformity constrains innovation" (negative)
  2. "Bureaucratic harmonization" (negative)
  3. "Common regulatory framework" (neutral)

Value System: Market integration and regulatory harmonization as inherently valuable. "Preventing fragmentation" positions diversity as problem. Assumes single rules are clearer than multiple rules—contestable in practice.

Whose Perspective?: EU federalists; large firms benefiting from scale; compliance professionals. Contested by: national sovereignty advocates; small firms for whom compliance is burden; regulatory diversity proponents.


Instance 4: "Strategic Autonomy" as Goal​

"The EU is determined to avoid the fragmentation of its single market and to enhance its capabilities to reduce dependencies on critical technologies and strengthen sovereignty in cutting edge semiconductors." (p.7)

Lexical Feature Type: Presupposition trigger + Metaphorical framing

Alternative Framings:

  1. "Protectionism" (negative)
  2. "Industrial independence" (neutral)
  3. "Technological nationalism" (critical)

Value System: "Dependencies" positioned as vulnerability; "sovereignty" as proper condition. Presupposes technology supply chains are matters of collective autonomy, not market efficiency.

Whose Perspective?: Economic security advocates; industrial policy proponents. Contested by: free trade advocates; efficiency maximizers; those who see global integration as desirable.


Instance 5: "Open Innovation" as European Brand​

"the European brand of open innovation is showing results. Computing power in the EU is publicly accessible through the European network of cutting-edge supercomputers" (p.1)

Lexical Feature Type: Stance marker (positive evaluation) + Semantic prosody

Alternative Framings:

  1. "Insufficient investment in proprietary capabilities"
  2. "Public subsidy of research"
  3. "Distributed computing resources"

Value System: "Open" as inherently positive; "publicly accessible" as virtue. Constructs European distinctiveness through openness, contrasting with (implied) closed American or Chinese models.

Whose Perspective?: Open source advocates; public investment supporters; European identity builders. Contested by: proprietary technology defenders; those who see openness as competitive weakness.


Document B: US "America's AI Action Plan"​

Instance 1: "Dominance" as Legitimate Goal​

"it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance." (Opening quote)

Lexical Feature Type: Stance marker (extreme intensification) + Presupposition trigger

Alternative Framings:

  1. "Leadership" (softer, implies others following voluntarily)
  2. "Competitiveness" (implies ongoing contest, not permanent state)
  3. "Excellence" (focuses on quality, not relative position)

Value System: Superlative language ("unquestioned and unchallenged") positions anything less than absolute supremacy as failure. "Dominance" explicitly zero-sum; presupposes international relations as power hierarchy.

Whose Perspective?: National security establishment; hegemony theorists; American exceptionalists. Contested by: international cooperation advocates; those who see mutual benefit possible; critics of American hegemony.


Instance 2: "Red Tape" as Regulatory Dysphemism​

"America's private sector must be unencumbered by bureaucratic red tape." (p.3)

Lexical Feature Type: Dysphemism + Metaphorical framing

Alternative Framings:

  1. "Regulatory safeguards" (positive)
  2. "Legal requirements" (neutral)
  3. "Accountability mechanisms" (positive)

Value System: "Red tape" positions all regulation as bureaucratic waste. "Unencumbered" suggests regulation as physical constraint on natural freedom. Presupposes private sector naturally knows best; government oversight is interference.

Whose Perspective?: Deregulation advocates; business interests; libertarian perspectives. Contested by: worker advocates; consumer protection supporters; environmental regulators; those who see regulation as enabling trust.


Instance 3: "Race" as Organizing Metaphor​

"The United States is in a race to achieve global dominance in artificial intelligence... Just like we won the space race, it is imperative that the United States and its allies win this race." (p.1)

Lexical Feature Type: Metaphorical framing + Historical analogy

Alternative Framings:

  1. "Building" a technology sector (constructive)
  2. "Developing" AI capabilities (neutral)
  3. "Cultivating" an AI ecosystem (organic)

Value System: Race metaphor constructs zero-sum competition; winner takes all; speed paramount. Space race analogy invokes Cold War nationalism and government-led mobilization (interestingly contradicting the document's deregulation emphasis).

Whose Perspective?: Cold War nostalgists; national competition proponents; urgency advocates. Contested by: international cooperation advocates; those who see technology development as positive-sum; pace critics.


Instance 4: "Free Speech" in AI Context​

"We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas." (p.4)

Lexical Feature Type: Presupposition trigger + Dysphemism

Alternative Framings:

  1. "AI systems should be fair and unbiased"
  2. "AI should avoid amplifying harmful content"
  3. "AI moderation serves community standards"

Value System: "Free speech" positioned as threatened; "social engineering agendas" as dysphemism for content moderation or bias mitigation. Presupposes "objective truth" exists and can be encoded; presupposes current moderation is ideological contamination.

Whose Perspective?: Conservative critics of tech moderation; free speech absolutists. Contested by: content moderation advocates; those who see bias mitigation as necessary; critics of "objective truth" claims.


Instance 5: "American Values" as Universal Standard​

"It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective." (p.4)

Lexical Feature Type: Presupposition trigger + Universalization

Alternative Framings:

  1. "Systems should reflect diverse values" (pluralist)
  2. "Systems should be locally adaptable" (contextual)
  3. "Systems should balance competing interests" (trade-off acknowledging)

Value System: Positions specifically American constitutional values as proper foundation for AI. Presupposes American values are universal or should be universally applied. "Built from the ground up" suggests values must be architectural, not added.

Whose Perspective?: American exceptionalists; free expression advocates; tech companies resisting varied national regulations. Contested by: pluralists; those who see values as culturally variable; non-American perspectives.


Comparative Analysis: Incommensurable Value Systems (280 words)​

These documents naturalize fundamentally different—and in some ways incompatible—value systems.

What the EU treats as self-evident:

  • Collective action through institutions is the proper response to technological change
  • Regulation creates trust, which enables markets
  • European identity is a meaningful frame for AI governance
  • "Trustworthy" AI is both possible and commercially valuable
  • Public investment in shared infrastructure is legitimate and necessary

What the US treats as self-evident:

  • Private enterprise naturally serves national interest
  • Regulation impedes innovation
  • International relations are zero-sum competition
  • America must dominate or lose
  • Government's proper role is to remove obstacles

Where they directly contradict:

The treatment of regulation is starkly opposed. The EU's "single set of clear rules... preventing market fragmentation" is the US's "bureaucratic red tape" that must be eliminated. The EU's AI Act "selling point" is the US's "dangerous" Biden-era regulatory overreach. These cannot both be right—either regulation enables markets through trust, or regulation impedes markets through constraint.

The treatment of international relations similarly diverges. The EU positions itself as a builder working alongside (and in comparison with) the US and China. The US positions itself as a competitor who must "win" and "dominate." Cooperation is central to the EU vision; competition is central to the US vision.

What each would contest in the other:

The EU would challenge the US assumption that private sector interest naturally aligns with national interest—the EU sees market failures requiring correction. The US would challenge the EU assumption that institutional coordination adds value—the US sees institutional processes as slowing innovation that markets would naturally accelerate.

Neither document acknowledges that its value system is contestable. Both present their values as obvious responses to shared circumstances—obscuring that the circumstances themselves are constructed through these contested values.


Task 6: Structured Absences and Silences​

Document A: EU "AI Continent Action Plan"​

1. Absent Constituencies​

Geopolitical competitors: China is mentioned exactly once—as a comparator for data centre capacity. The US appears in a similarly limited role. Neither is treated as an adversary, threat, or even significant actor. The entire geopolitical dimension of AI competition is suppressed.

Military/defense interests: Defense appears in a single list of sectors (p.13). The military applications of AI—a central concern in most national AI strategies—are nearly invisible. This silence is particularly notable given Europe's historical discussions of "strategic autonomy."

Civil society organizations: NGOs, advocacy groups, and civil society are entirely absent. The public consultation process is mentioned, but civil society as an organized force in AI governance does not appear.

Global South: Developing nations appear nowhere except as potential participants in EuroHPC. How European AI development affects or involves non-European, non-wealthy nations is unaddressed.

2. Unacknowledged Risks​

Job displacement: The word "displacement" does not appear. Workers are positioned only as beneficiaries requiring skills development. The possibility that AI might eliminate jobs faster than it creates them is structurally foreclosed.

Market concentration: The document promotes AI development through large-scale facilities (Gigafactories) requiring "significant investments" accessible only to major actors. The concentration implications—already visible in AI markets—go unmentioned.

Surveillance and civil liberties: AI in public administration and healthcare raises surveillance concerns nowhere acknowledged. The document treats all AI adoption as beneficial.

Environmental costs: Energy efficiency and sustainability are mentioned as considerations, but the massive energy demands of AI training and inference—and their environmental implications—receive minimal attention relative to their scale.

3. Foreclosed Alternatives​

Non-competitive AI development: The document assumes Europe must compete globally. Alternatives—AI moratoriums, cooperative international governance, degrowth approaches—are unthinkable.

Worker-led governance: Workers are objects of skills policy, never subjects of AI governance. Union involvement, worker representation on AI boards, or worker ownership models do not appear.

Strong precautionary approaches: The AI Act is positioned as sufficient safeguard. Stronger precautionary measures—moratoria on certain applications, mandatory impact assessments with veto power—are foreclosed.

4. Interrupted Causal Chains​

The document traces AI development → economic growth → prosperity, but stops there. It does not trace:

  • Economic growth → distributional questions → who actually benefits
  • AI adoption → workplace transformation → worker experience
  • Data centre expansion → energy consumption → environmental consequences

Document B: US "America's AI Action Plan"​

1. Absent Constituencies​

Organized labor: Unions are never mentioned. Workers appear as resources requiring development, but labor organizations as stakeholders with voice are absent.

Citizens as political actors: Citizens appear only as consumers of government services. As democratic participants in AI governance, they are invisible.

Marginalized communities: No mention of racial justice, disability rights, or other equity concerns. The document's worker section refers to "all Americans" but offers no specificity about differential impacts.

International community (non-adversaries): Beyond allies (who receive American technology) and adversaries (who are denied it), the international community does not exist. Cooperative governance frameworks are absent.

2. Unacknowledged Risks​

Concentration of corporate power: The document's policies will massively benefit a small number of large technology companies. This concentration—and its implications for democracy and markets—is never acknowledged.

Surveillance expansion: Extensive discussion of AI in government, military, and law enforcement includes no discussion of surveillance implications or civil liberties constraints.

AI safety as technical challenge: The document acknowledges AI systems may "pose novel national security risks" but treats this as an evaluation problem, not a governance challenge. The possibility that some AI development should not proceed is foreclosed.

Democratic erosion: AI's implications for democracy—disinformation, manipulation, concentration of power—receive no attention.

3. Foreclosed Alternatives​

Regulation as enabling: The framing of regulation as "red tape" forecloses the possibility that regulation could enable innovation through trust-building.

International cooperation: AI governance as global cooperative project is unthinkable. The only international frame is competition or alliance-based technology transfer.

Public AI development: Government's role is facilitating private enterprise. Public development of AI as public infrastructure (like public roads or utilities) does not appear.

Precautionary approaches: The document's urgency framing forecloses any approach that would slow AI development for assessment or democratic deliberation.

4. Interrupted Causal Chains​

The document traces: deregulation → private sector flourishing → American dominance → [prosperity assumed]

It does not trace:

  • Deregulation → reduced accountability → potential harms
  • Private sector leadership → profit maximization → distributional consequences
  • Dominance → what then? (The goal itself is left unexamined)
  • Export controls → ally dependencies → geopolitical complications

Comparative Analysis: Shared and Divergent Silences (270 words)​

Silences both documents share:

Neither document seriously engages with job displacement as a structural risk. Both treat worker concerns as skills problems solvable through training. The possibility that AI might create unemployment faster than economies can absorb it—a genuine concern among economists—is foreclosed in both.

Neither document engages with concentration of power. The EU promotes Gigafactories accessible to major actors; the US promotes deregulation that benefits large incumbents. Both documents' policies will likely increase market concentration, but neither acknowledges this.

Neither document positions workers as governance agents. Both treat workers as resources to be developed, not stakeholders with voice in AI governance decisions.

Where one speaks to what the other silences:

The EU's emphasis on regulation as enabling speaks to what the US silences: the possibility that framework rules create trust that enables markets. The US's complete rejection of this view silences a perspective central to European economic governance.

The US's emphasis on geopolitical competition speaks to what the EU silences: the reality that AI development has military applications and that great power competition shapes technology trajectories. The EU's near-complete erasure of this dimension is a remarkable absence.

What the different silences reveal:

The EU's silences reveal an institutionalist optimism—if we build the right structures, benefits will flow and risks will be managed. The silences protect this optimism from challenges.

The US's silences reveal a market-nationalist faith—if we unleash the private sector and dominate internationally, all will be well. The silences protect this faith from distributional and democratic challenges.

Both sets of silences serve to make AI development appear more consensual and less contested than it actually is.


PART IV: GOVERNANCE MODELS AND POLICY ARCHITECTURES

Task 7: Governance Model Analysis​

Document A: EU "AI Continent Action Plan"​

1. Role of the State​

The state (at EU and national levels) is positioned as coordinator, investor, and standard-setter. Government creates shared infrastructure, pools resources, sets rules, and ensures access.

Key quotes:

"The EU must maintain its own distinctive approach to AI by capitalising on its strengths"

"The Commission President set out this vision at the AI Action Summit in Paris when she announced InvestAI, an initiative to mobilise EUR 200 billion for investment in AI"

The state is activist but not directive—it creates conditions rather than picking winners.

2. Role of the Market​

Private enterprise is positioned as beneficiary and implementer of public investment. Startups, SMEs, and large firms gain access to infrastructure the state creates. Market actors execute innovation within frameworks the state establishes.

The market alone cannot create necessary infrastructure:

"an excessive dependence on non-EU infrastructure may bring economic security risks"

Market failures require public correction.

3. Role of Civil Society​

Civil society is nearly absent. Workers are mentioned as skills populations; citizens as service recipients. Organized civil society—NGOs, unions, advocacy groups—does not appear as a governance participant.

The public consultation process is mentioned but not foregrounded:

"The Commission invites stakeholders to share their views on the Apply AI Strategy as part of the public consultation"

4. Regulatory Philosophy​

Regulation is positioned as competitive advantage enabling trust:

"the EU's large single market is a significant asset, with one set of clear rules, including the AI Act, preventing market fragmentation and enhancing trust and security"

The AI Act is an "asset," not a burden. Implementation, not reduction, is the challenge:

"there is a need to facilitate compliance with the AI Act, particularly for smaller innovators"

5. International Orientation​

The international arena is characterized by comparison and partnership, not competition:

"The EU seeks – through proactive bilateral and multilateral engagement with partner countries – to lead global efforts on AI"

Adversaries do not exist in this frame. The US and China are mentioned only as comparators. International cooperation is presumed desirable.


Document B: US "America's AI Action Plan"​

1. Role of the State​

The state is positioned as facilitator and obstacle-remover in commercial domains, but as active leader in security domains.

Commercial:

"Achieving these goals requires the Federal government to create the conditions where private-sector-led innovation can flourish"

Security:

"Drive Adoption of AI within the Department of Defense" (entire section)

This bifurcation is notable: government is suspect in commercial regulation but essential in military/security applications.

2. Role of the Market​

The private sector is positioned as primary agent of national interest:

"America's private sector must be unencumbered by bureaucratic red tape"

"the Federal government should create the conditions where private-sector-led innovation can flourish"

Market leadership is presumed to serve national interest. No mechanism explains how private profit-seeking aligns with public benefit.

3. Role of Civil Society​

Civil society is entirely absent. No unions, NGOs, advocacy groups, or organized public interests appear.

Workers are positioned as resources requiring development:

"AI will improve the lives of Americans by complementing their work"

Citizens are invisible except as recipients of government services.

4. Regulatory Philosophy​

Regulation is positioned as burden and threat:

"bureaucratic red tape"

"onerous regulation"

The Biden AI order is described as "dangerous."

The document's regulatory section focuses almost entirely on removing requirements:

"identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development"

5. International Orientation​

The international arena is characterized by competition and alliance:

"Pillar III: Lead in International AI Diplomacy and Security"

Other nations are either allies (who receive technology) or adversaries (who are denied it):

"Export American AI to Allies and Partners"

"Denying our foreign adversaries access to this resource"

International cooperation outside alliance frameworks does not appear.


Comparative Analysis: Two Theories of Governance (340 words)​

These documents embody fundamentally different theories of governance that cannot be reconciled.

The EU theory: Collective action problems require institutional solutions. Markets alone cannot create necessary infrastructure or sufficient trust. The state's role is to coordinate, invest, and set rules that enable markets to function within a framework of shared values. Regulation creates the trust that enables commerce; public investment creates the infrastructure that enables innovation; institutional coordination overcomes fragmentation that markets cannot solve. This is a social market economy approach where state and market are complementary.

The US theory: Markets naturally serve national interest; government is primarily obstacle. The state's commercial role is to remove barriers that bureaucracy has erected. The exception is security, where government has legitimate active role. Regulation is presumptively harmful; private enterprise is presumptively beneficial; government's job is to unleash what markets would naturally do without interference. This is a neoliberal nationalist approach combining market fundamentalism with security-state activism.

The treatment of regulation is the starkest divide. The EU positions the AI Act as "asset"; the US positions the Biden AI order as "dangerous." The EU sees "one set of clear rules" as enabling; the US sees rules as "red tape" to be cut. These are not merely different emphases—they are incompatible theories of how markets function.

The treatment of international relations similarly diverges. The EU operates in a world of cooperation and comparison; the US operates in a world of competition and dominance. For the EU, international bodies are venues for leadership; for the US, they are sites of adversary influence to be countered.

The role of civil society is notable in both cases for its absence, but for different reasons. The EU's institutional focus leaves little room for grassroots actors—governance happens through official channels. The US's market-security focus leaves no room for non-market, non-state actors—the only relevant players are companies and governments.

What each would criticize in the other: The EU would see US deregulation as naive market fundamentalism that ignores market failures. The US would see EU institutionalism as bureaucratic sclerosis that slows innovation. Neither criticism is fully articulated in these documents, but both are implicit in their divergent governance visions.


Task 8: Contrastive Reframing Exercise​

Reframing 1: The Regulation Question​

Original Frame (US):

"To maintain global leadership in AI, America's private sector must be unencumbered by bureaucratic red tape. President Trump has already taken multiple steps toward this goal, including rescinding Biden Executive Order 14110 on AI that foreshadowed an onerous regulatory regime."

Frame label: Regulation-as-Burden

Emphasizes: Government as obstacle; regulation as constraint; private sector as naturally beneficial

Alternative Frame (as it might appear in EU document):

"To maintain global leadership in AI, America's technology sector requires clear rules that create trust and enable market access. The previous administration's AI executive order established necessary frameworks for responsible innovation that many European partners sought to align with."

Alternative frame label: Regulation-as-Enablement

Emphasizes: Rules as trust-creating; regulation as market-enabling; international alignment as asset

Policy Divergence:

  • Responsibility: US frame places responsibility on government (to get out of way); EU frame places responsibility on industry (to comply with enabling rules)
  • Actions: US frame leads to deregulation; EU frame leads to implementation support
  • Benefits/costs: US frame benefits companies avoiding compliance costs; EU frame benefits companies gaining market trust

Epistemic Trade-offs: Regulation-as-Burden makes visible the costs of compliance but obscures the trust benefits of frameworks. Regulation-as-Enablement makes visible market failures but obscures compliance costs for innovators.


Reframing 2: International Relations​

Original Frame (EU):

"The EU seeks – through proactive bilateral and multilateral engagement with partner countries – to lead global efforts on AI by supporting innovation, ensuring trust through guardrails, and developing the global governance on AI."

Frame label: International-Cooperation

Emphasizes: Partnership; multilateral governance; shared trust frameworks

Alternative Frame (as it appears in US document):

"America must impose strong export controls on sensitive technologies. We should encourage partners and allies to follow U.S. controls, and not backfill. If they do, America should use tools such as the Foreign Direct Product Rule and secondary tariffs to achieve greater international alignment."

Frame label: International-Competition

Emphasizes: Control; denial to adversaries; coercive alignment

Policy Divergence:

  • Responsibility: EU frame positions international community as co-responsible; US frame positions America as controller
  • Actions: EU frame leads to multilateral negotiations; US frame leads to export controls and tariffs
  • Benefits/costs: EU frame distributes benefits of cooperation; US frame concentrates American dominance

Epistemic Trade-offs: Cooperation frame makes visible shared benefits but obscures real geopolitical competition. Competition frame makes visible strategic stakes but obscures mutual gains from cooperation.


Reframing 3: Worker Positioning​

Original Frame (US):

"The Trump Administration supports a worker-first AI agenda. By accelerating productivity and creating entirely new industries, AI can help America build an economy that delivers more pathways to economic opportunity for American workers."

Frame label: Workers-as-Beneficiaries-of-Growth

Emphasizes: Productivity gains; new industries; opportunity pathways; workers as passive recipients

Alternative Frame (as might appear in labor-centered document):

"AI development must be governed with worker voice at every stage. By involving workers in decisions about automation, firms can identify which AI applications enhance human work versus which displace it, ensuring that productivity gains are shared rather than captured by capital alone."

Alternative frame label: Workers-as-Governance-Agents

Emphasizes: Worker voice; decision-making participation; distributional questions; workers as active agents

Policy Divergence:

  • Responsibility: US frame places responsibility on AI (to create opportunities); alternative places responsibility on governance (to include workers)
  • Actions: US frame leads to skills training; alternative leads to worker representation requirements
  • Benefits/costs: US frame benefits capital (flexibility); alternative benefits labor (voice)

Epistemic Trade-offs: Beneficiaries frame makes visible economic growth but obscures distributional conflict. Agents frame makes visible power asymmetries but obscures efficiency gains from flexibility.


Reframing 4: AI Infrastructure​

Original Frame (EU):

"The Commission President set out this vision at the AI Action Summit in Paris when she announced InvestAI, an initiative to mobilise EUR 200 billion for investment in AI in line with the political priorities of the Competitiveness Compass."

Frame label: Public-Investment-as-Enabler

Emphasizes: Collective investment; public mobilization; European coordination

Alternative Frame (private-financing emphasis, as it appears even within the EU document's Gigafactory provisions):

"The establishment of a single AI Gigafactory is estimated to require significant investments... these AI Gigafactories will be implemented through public-private partnerships and innovative funding mechanisms... private proponents would be responsible for financing the remaining amount"

(The US document also invests in infrastructure, but frames it differently):

"America's path to AI dominance depends on changing this troubling trend [energy stagnation]... we need to 'Build, Baby, Build!'"

Frame label: Private-Enterprise-with-Public-Facilitation

Emphasizes: Private sector as primary builder; government as facilitator; speed and urgency

Policy Divergence:

  • Responsibility: EU frame positions public sector as investor; US frame positions private sector as builder
  • Actions: EU frame leads to public infrastructure programs; US frame leads to permitting reform
  • Benefits/costs: EU frame socializes investment (taxpayer cost, public benefit); US frame privatizes returns (private profit, public facilitation)

Epistemic Trade-offs: Public investment frame makes visible market failures but obscures government failures. Private enterprise frame makes visible government inefficiency but obscures market failures and distributional consequences.


PART V: SYNTHESIS AND IMPLICATIONS

Task 9: Ideological Coherence Assessment​

Document A: EU "AI Continent Action Plan"​

1. Internal Coherence​

The EU document presents a highly coherent ideological vision organized around institutional coordination for collective benefit. The agency patterns (institutions act), value systems (trust, cooperation, rights), and governance models (state-as-coordinator) align consistently.

Points of strain:

  • The document invokes "competitiveness" throughout while simultaneously promoting cooperation—these goals can conflict when Europe cooperates internally to compete externally
  • The near-absence of geopolitical competition sits awkwardly with the strategic autonomy goal
  • The treatment of workers as skills populations rather than governance agents contradicts the "human-centric" rhetoric

2. Dependencies​

The EU argument depends on several assumptions:

  • Institutional coordination adds value (vs. slowing innovation)
  • Regulation creates trust (vs. imposing costs)
  • European identity is meaningful for AI governance (vs. national or sectoral identity)
  • AI development can be "trustworthy" and "human-centric" (vs. inherently destabilizing)

3. Vulnerabilities​

The frame is vulnerable to:

  • Concrete examples of EU bureaucracy slowing innovation relative to competitors
  • Evidence that regulatory fragmentation persists despite harmonization efforts
  • Events revealing geopolitical competition the document suppresses
  • Worker mobilization demanding voice rather than accepting skills training

4. Stability​

The frame is moderately stable. Its institutional anchoring provides resilience—EU governance structures will continue regardless of individual document framings. However, the suppression of geopolitical competition creates a vulnerability; events (trade conflicts, technology denial) could destabilize the cooperative framing.


Document B: US "America's AI Action Plan"​

1. Internal Coherence​

The US document presents a partially coherent ideological vision organized around competitive dominance through market unleashing. However, significant tensions exist.

Points of strain:

  • The document promotes deregulation while extensively discussing government's role in security—why is government competent in security but obstacle in commerce?
  • The "race" framing invokes collective national action, but the policy relies on private enterprise—how do private profit motives guarantee national benefit?
  • The document claims to be "worker-first" while positioning workers as passive beneficiaries—genuine worker-first policy would involve worker voice
  • Free speech advocacy contradicts calls to evaluate Chinese models for "alignment with Chinese Communist Party talking points"—who determines ideological alignment?

2. Dependencies​

The US argument depends on several assumptions:

  • Private sector interest naturally aligns with national interest
  • Deregulation accelerates innovation (vs. reducing trust)
  • AI development is a race that can be "won"
  • American AI is objective while foreign AI is ideological

3. Vulnerabilities​

The frame is vulnerable to:

  • Evidence that deregulation produces harms (accidents, discrimination, market failures)
  • Worker mobilization against displacement
  • International cooperation succeeding despite competition framing
  • Recognition that AI development is ongoing process, not race with finish line

4. Stability​

The frame is moderately unstable. Its internal contradictions (government good in security, bad in commerce; workers as priority, workers without voice) create tension. The race metaphor is vulnerable to events: what happens if China "wins" some metrics? Does the frame adjust or collapse?


Comparative Assessment (240 words)​

The EU document presents a more internally coherent ideological vision, though neither is fully consistent.

The EU's coherence derives from its institutional anchoring. The state-as-coordinator model, the regulation-as-enablement theory, and the collective action framing mutually reinforce each other. The silences (geopolitical competition, worker agency) are consistent with the frame's institutional optimism. The tensions that exist (competitiveness vs. cooperation) are manageable within the frame.

The US's partial coherence reflects deeper contradictions. The neoliberal commercial frame (government as obstacle) sits awkwardly with the security-state frame (government as essential). The race metaphor suggests collective national action, but the policy mechanism is private enterprise unleashing. The "worker-first" rhetoric contradicts the actual positioning of workers. These tensions are not managed within the frame but simply coexist.

Destabilization scenarios:

The EU frame could be destabilized by: concrete technology denial from competitors proving the geopolitical competition it suppresses; internal European failures of coordination proving the institutional model insufficient; worker movements demanding voice rather than accepting skills training.

The US frame could be destabilized by: AI harms traceable to deregulation; economic disruption with clear corporate beneficiaries and worker costs; recognition that "winning the race" is undefined and possibly meaningless; successful international cooperation demonstrating alternatives to competition.

The EU frame is more stable because its institutional anchoring provides persistence mechanisms. The US frame is more volatile because its internal contradictions and external vulnerabilities compound each other.


Task 10: Synthetic Conclusion​

Paragraph 1: Divergent Worlds​

These documents construct two fundamentally different political worlds organized around incompatible visions of AI, nation, and governance.

The EU document invites readers into a world of collective construction. Here, AI is infrastructure to be built through patient institutional coordination. Europe is a "continent" that must be assembled from its constituent parts—Member States, institutions, firms, researchers, workers—through deliberate architecture. The mood is one of anxious optimism: Europe has strengths, but these strengths must be organized into sufficient scale. The future is open but uncertain; success requires wise coordination.

The US document invites readers into a world of competitive struggle. Here, AI is a strategic asset in a zero-sum race for global dominance. America is a unified nation that must triumph over adversaries through the unleashed power of its private sector. The mood is one of urgent confidence: America will win if government gets out of the way. The future is binary—victory or defeat—and victory requires speed above all.

These are not merely different policy approaches. They are different realities constructed through different framings of what AI is, what nations are, and what collective action means. A reader inhabiting the EU world would find the US framing bewildering in its aggression; a reader inhabiting the US world would find the EU framing naive in its faith in cooperation. Neither document acknowledges that its reality is constructed; both present their visions as obvious responses to shared circumstances.

Paragraph 2: Agency and Accountability​

The documents distribute agency and accountability through architectures that have profound implications for democratic governance.

In the EU architecture, institutions act while individuals and firms receive. The Commission proposes, Member States coordinate, the AI Office implements, EuroHPC deploys. This creates identifiable accountability: if AI governance fails, there are institutions to blame and reform. But it also creates democratic distance: citizens influence AI policy only through the attenuated mechanisms of EU governance.

In the US architecture, the private sector acts while government facilitates and adversaries threaten. Corporations innovate; government removes obstacles; workers develop skills; enemies are denied technology. Accountability is diffuse: if AI development produces harms, who is responsible? Not government (it only facilitated); not corporations (they did what markets reward); not adversaries (they are external). The architecture is designed to shield domestic actors from accountability by externalizing blame.

In both documents, workers and citizens lack agency. The EU positions workers as skills populations requiring development; the US positions workers as beneficiaries of opportunities they did not shape. Neither positions workers or citizens as governance agents—as actors who might shape AI policy through collective action. This shared silence reveals a shared assumption: AI governance is for elites (institutional or corporate), not for democratic participation.

Paragraph 3: What Each Makes Thinkable​

Each document enables certain policy options while foreclosing others.

The EU document makes thinkable:

  • Massive public investment in shared AI infrastructure
  • Regulation as competitive advantage enabling market trust
  • Multi-stakeholder coordination as governance model
  • Skills development as collective responsibility
  • Open-source approaches as strategic asset
  • Patient institution-building over immediate results

These possibilities follow naturally from the EU's construction of AI as infrastructure, governance as coordination, and Europe as collective project.

The US document makes thinkable:

  • Comprehensive deregulation as innovation policy
  • Private sector leadership as national strategy
  • Export controls as competitive weapon
  • Military AI development as national priority
  • Speed over deliberation in AI development
  • Dominance as legitimate policy goal

These possibilities follow naturally from the US's construction of AI as strategic race, governance as obstacle-removal, and America as unified competitor.

What is remarkable is how each document's thinkable options are the other's unthinkable ones. EU-style public investment is invisible in the US frame; US-style deregulation is invisible in the EU frame. This mutual foreclosure reveals that these are not merely different policies but different problem-spaces that construct different possibility horizons.

Paragraph 4: What Each Forecloses

The forensic analyst must attend to what each document makes unthinkable, unaskable, or invisible.

Both documents foreclose:

  • Fundamental questioning of AI development (should the race be run at all?)
  • Worker agency in AI governance (what if workers shaped automation decisions?)
  • Substantive engagement with distributional concerns (who actually wins and loses?)
  • Democratic deliberation over AI futures (what if citizens decided?)
  • Strong precautionary approaches (what if we slowed down to assess?)

The EU additionally forecloses:

  • Geopolitical competition as organizing frame
  • Military applications as priority
  • Market-led development without institutional coordination

The US additionally forecloses:

  • Regulation as enabling innovation
  • International cooperation beyond alliances
  • Public AI development as public good
  • Security concerns about corporate concentration

These foreclosures are not accidental. They are structural features of the frames that shape what can be thought within each document's logic. A question that cannot be asked within a frame is more thoroughly suppressed than a question that is asked and answered badly.

Paragraph 5: Material Stakes

These framings have material consequences that will shape lived experience.

The EU's approach will likely produce:

  • Substantial public investment in AI infrastructure (€200 billion mobilized)
  • Regulatory frameworks that shape AI development trajectories
  • Skills programs that channel workers into AI-related fields
  • Research initiatives that direct scientific inquiry
  • Institutional structures that will persist and evolve

The distributional consequences depend on implementation. If infrastructure access is genuinely democratized, benefits may be broadly shared. If captured by large actors, concentration may increase despite public investment. The EU frame's institutional optimism obscures these contingencies.

The US's approach will likely produce:

  • Deregulation that removes constraints on corporate AI development
  • Military AI programs that advance strategic capabilities
  • Export controls that shape global technology access
  • Concentration of AI power in large incumbent firms
  • Rapid AI deployment with limited accountability mechanisms

The distributional consequences are more predictable: policies that remove constraints on corporations while promoting their "flourishing" will benefit corporate actors. Workers are promised "pathways to opportunity" but given no structural power to ensure those pathways materialize.

The material difference: the EU approach creates potential for democratic course-correction through persistent institutions. The US approach may foreclose course-correction by entrenching private power and delegitimizing public oversight.

Paragraph 6: Implications for Democratic Deliberation

These documents reveal the state of AI policy discourse in two major democracies—and the picture is troubling for democratic governance.

Neither document models genuine democratic deliberation. The EU's public consultations are mentioned but not foregrounded; the US's Action Plan is presented as a fait accompli. Both documents assume that elites (institutional in the EU, corporate in the US) will govern AI; citizens are recipients, not authors, of AI futures.

The EU document at least preserves space for democratic revision. Its institutional focus creates mechanisms through which policy could be altered—the AI Board could revise guidance, the AI Office could modify implementation, Member States could adjust national strategies. The frame's institutional optimism may be naive, but institutions can be sites of democratic contestation.

The US document more actively forecloses democratic deliberation. Its "race" framing constructs speed as paramount, delegitimizing the slow processes of democratic consultation. Its deregulatory agenda removes sites of public governance. Its celebration of private sector "flourishing" positions corporate decisions as beyond democratic reach. The message is clear: democracy cannot afford to debate AI; we must act.

What these documents share is perhaps most concerning: neither centers the question of who should decide AI futures. Both assume that current arrangements (EU institutions, US corporations) should govern AI development. Neither asks whether AI's transformative potential requires new forms of democratic participation, new mechanisms for citizen voice, new structures of accountability.

The forensic analysis reveals that both documents, despite their differences, constrain the horizon of democratic possibility in AI governance. Whether through institutional channeling or corporate unleashing, both locate AI decision-making beyond effective citizen reach. This shared feature may be the most consequential absence of all: the absence of democracy as a genuine force in shaping AI futures.


Analysis Completed: December 5, 2025
Analytical Framework: Multi-method forensic discourse analysis integrating CDA, Political Framing, and AI Ontology Analysis