
CDA-Spicy-Lite: Our agreement with the Department of War

Title: Our agreement with the Department of War

Author: Unknown

Type: press release

Published: 2026-02-28


Task 1: Agency and Accountability Audit

About this task

This task examines how agency (the power to act, decide, and be held accountable) is linguistically engineered within the text. It identifies mechanisms (passive constructions, nominalizations, personified abstractions) that manage perception of who acts and who is acted upon, classifying strategies as erasure, delegation, diffusion, inversion, collectivization, or personification.

Adverbial Erasure of Culpability

Quote: "the AI system shall not be intentionally used for domestic surveillance of U.S. persons"

  • Participant Analysis: The 'AI system' is the grammatical subject (the thing not being used), while the actual human operators (DoW personnel) are deleted from the sentence. The 'U.S. persons' are the passive targets.
  • Agency Assignment: Obscured through adverbial modification. The word 'intentionally' creates a loophole where 'accidental' or 'systemic' surveillance lacks an agent to blame.
  • Linguistic Mechanism: Passive construction combined with modal/adverbial qualification.
  • Agency Strategy: Erasure
  • Power Analysis: This benefits the Department of War by setting a standard of proof (intent) that is nearly impossible to meet in complex algorithmic systems, shielding officials from accountability for 'collateral' data harvesting.
  • Counter-Voice: Department of War personnel shall not use the AI system to surveil U.S. persons, and the Department accepts full liability for any data captured.

Nominalization of Data Extraction

Quote: "including through the procurement or use of commercially acquired personal or identifiable information."

  • Participant Analysis: The participants (the companies selling the data and the government agents buying it) are replaced by the nouns 'procurement' and 'use'.
  • Agency Assignment: Diffused. By turning actions into abstract nouns, the text hides the transactional relationship between the surveillance state and the data brokerage industry.
  • Linguistic Mechanism: Nominalization
  • Agency Strategy: Diffusion
  • Power Analysis: This naturalizes the existence of a 'commercial' market for private lives, framing it as a resource to be 'procured' rather than a series of privacy violations by specific actors.
  • Counter-Voice: The Department will not buy personal data from private brokers who have harvested it from citizens.

Personification of the 'Safety Stack'

Quote: "Our deployment architecture will enable us to independently verify that these red lines are not crossed"

  • Participant Analysis: The 'architecture' (a set of code) is the agent that 'enables'. The humans (OpenAI) are positioned as passive recipients of this enablement.
  • Agency Assignment: Delegated. Agency is shifted from human ethical oversight to a technical 'architecture'.
  • Linguistic Mechanism: Abstract actor/Personification
  • Agency Strategy: Delegation
  • Power Analysis: This protects OpenAI from moral scrutiny by suggesting that safety is an automated, objective byproduct of 'architecture' rather than a subjective human decision.
  • Counter-Voice: We will personally monitor the Department's activities and stop them if they violate our ethics.

Inversion of Defensive Agency

Quote: "we want the people defending the United States to have the best tools."

  • Participant Analysis: The military is framed as 'people defending' (active protectors), while the tools (AI) are just neutral objects.
  • Agency Assignment: Inverted. The text frames the deployment of war-making AI as a defensive necessity for the safety of the 'defenders' rather than an offensive capability of the state.
  • Linguistic Mechanism: Functionalization (framing actors by their supposed benevolent function).
  • Agency Strategy: Inversion
  • Power Analysis: This frames any opposition to the contract as an attack on the safety of 'the people defending' us, effectively weaponizing patriotism to silence dissent.
  • Counter-Voice: We want to provide the Pentagon with advanced AI to increase their lethality and strategic dominance.

Collectivized Responsibility for Risks

Quote: "our technology is going to introduce new risks in the world"

  • Participant Analysis: The 'technology' is the agent of 'risk'. The 'world' is the passive victim. OpenAI's decision to release the technology is erased.
  • Agency Assignment: Personification of technology. It is not OpenAI creating risks; the technology is simply 'going to introduce' them, as if it were a natural phenomenon.
  • Linguistic Mechanism: Abstract actor/Inanimate agent
  • Agency Strategy: Personification
  • Power Analysis: This strategy absolves the corporation of responsibility for the 'risks' they profit from, framing them as inevitable consequences of progress rather than choices.
  • Counter-Voice: We are choosing to release a technology that we know will endanger global stability.

Task 2: Ideology and Common Sense Audit

About this task

This task audits lexical choices, identifying where seemingly neutral words smuggle in contested values, assumptions, or hierarchies. It examines what worldview a phrase wants the reader to accept as "common sense" and explores alternative framings.

The 'Frontier' as Manifest Destiny

Quote: "leaders from the frontier AI labs"

  • Lexical Feature Type: Metaphorical framing

Ideological Work: The term 'frontier' invokes the myth of the American West: a lawless space waiting to be 'tamed' or 'conquered' by 'pioneers'. It naturalizes expansion and frames resistance as being 'behind the times'.

Inclusion/Exclusion: Positioned as rational/normal: The labs as explorers. Marginalized: Those whose lives are the 'territory' being explored/disrupted.

Alternative Framings

| Phrasing | Worldview Centered | Makes Visible |
| --- | --- | --- |
| "Unregulated high-scale AI companies" | Regulatory/Public interest | The lack of oversight and the experimental nature of the tech. |
| "Global compute-monopolies" | Anti-trust/Economic justice | The massive concentration of power and resources. |

Technicalizing Ethics as a 'Safety Stack'

Quote: "we retain full discretion over our safety stack"

  • Lexical Feature Type: Metaphorical framing/Euphemism

Ideological Work: By calling ethics a 'stack', it turns political and moral choices into a technical engineering problem. It suggests that 'safety' is a software component that can be 'run' like a program.

Inclusion/Exclusion: Normalizes the 'expert' engineer; erases the 'citizen' or 'ethicist'.

Alternative Framings

| Phrasing | Worldview Centered | Makes Visible |
| --- | --- | --- |
| "Our internal censorship and control filters" | Civil liberties | The proprietary and opaque nature of the rules. |
| "Corporate veto power" | Political/Democratic | The fact that a private company decides what is 'safe' in war. |

Nationalist Exception for Surveillance

Quote: "shall not be intentionally used for domestic surveillance of U.S. persons"

  • Lexical Feature Type: Semantic prosody/Exclusionary framing

Ideological Work: The term 'U.S. persons' creates a moral boundary. It naturalizes the idea that surveillance is only a problem when it happens to Americans, making the surveillance of the rest of the world 'common sense'.

Inclusion/Exclusion: Inclusion: U.S. citizens as the only subjects with rights. Exclusion: The entire global population.

Alternative Framings

| Phrasing | Worldview Centered | Makes Visible |
| --- | --- | --- |
| "Global mass surveillance" | International human rights | The fact that everyone not in the US is a legitimate target. |
| "Non-citizen data harvesting" | Global South/Migrant rights | The hierarchical valuation of human privacy based on citizenship. |

The 'Red Line' as Arbitrary Boundary

Quote: "We have three main red lines that guide our work"

  • Lexical Feature Type: Metaphorical framing

Ideological Work: The 'red line' metaphor suggests a firm, courageous moral stance. It hides the fact that these lines are chosen by the company and can be redefined as 'operational requirements' shift.

Inclusion/Exclusion: Positions the company as a moral arbiter; excludes the victims of 'lawful' AI warfare.

Alternative Framings

| Phrasing | Worldview Centered | Makes Visible |
| --- | --- | --- |
| "The current limits of our branding risk" | Critical/Economic | That these lines are moved based on public relations. |
| "Minimum legal compliance thresholds" | Legalistic | The fact that these aren't 'moral' but 'legal' floors. |

Task 3: Positioning and Solidarity Audit

About this task

This task analyzes how texts construct social positions and relationships between speaker and audience, power-holders and the powerless. It examines the implicit "we" and "they": who is positioned as authority, who as complicit, who is erased.

The 'Inclusive We' of National Interest

Quote: "We believe strongly in democracy. ... we want the people defending the United States to have the best tools."

  • Positioning Mechanism: Pronoun strategies/Inclusive 'We'
  • Relationship Constructed: Creates a false alliance between a multi-billion dollar corporation and the 'average' citizen. It positions OpenAI as a patriotic servant rather than a profit-seeking entity.
  • Whose Reality Wins: The corporate reality that 'what's good for OpenAI is good for democracy' wins.
  • Power Consequences: It makes questioning the OpenAI-Pentagon partnership seem 'anti-democratic' or 'unpatriotic'.

Constructing the 'Naïve' Competitor

Quote: "Other AI labs have reduced or removed their safety guardrails... We think our approach better protects"

  • Positioning Mechanism: Social distance/Comparative evaluation
  • Relationship Constructed: Positions OpenAI as the 'mature', 'responsible' adult in the room, while competitors are reckless. It builds a relationship of trust with the government based on superior 'safeguards'.
  • Whose Reality Wins: OpenAI's definition of 'safety' (technical filters) is naturalized as the only legitimate version.
  • Power Consequences: Reinforces OpenAI's bid for a monopoly on 'responsible' government contracts.

The Department of War as a Moral Peer

Quote: "the Department made clear it shares our commitment"

  • Positioning Mechanism: Presupposition of shared values
  • Relationship Constructed: Constructs the Department of War (a military agency) as a peer organization with the same moral 'commitments' as a tech lab. It erases the inherent conflict between military 'efficiency' and 'privacy'.
  • Whose Reality Wins: The Department's stated intentions are treated as objective reality, ignoring historical precedents of mission creep.
  • Power Consequences: Validates the military's self-image, making critical oversight appear redundant because they 'already share the commitment'.

The FAQ as Paternalistic Reassurance

Quote: "Why are you doing this? ... we think the US military absolutely needs strong AI models"

  • Positioning Mechanism: Register/Formality (FAQ format)
  • Relationship Constructed: Positions the company as an authority answering 'questions' from a concerned but less-informed public. It mimics transparency while actually delivering a scripted marketing message.
  • Whose Reality Wins: The necessity of AI-militarization is presented as an objective fact ('absolutely needs').
  • Power Consequences: Forecloses debate on whether the military should use AI at all by shifting the focus to how it will be 'safely' implemented.

Task 4: Discourse Strategies

About this task

This task identifies overarching strategic patternsβ€”the key moves that the text makes to accomplish its ideological work. Each strategy must cite instances from Tasks 1-3 and articulate material consequences.

Techno-Moral Exceptionalism

Cited Instances: Technicalizing Ethics as a 'Safety Stack', Constructing the 'Naïve' Competitor

Linguistic Patterns: Combining technical jargon ('safety stack', 'deployment architecture') with moral imperatives ('believe in democracy', 'red lines'). References to 'safety-trained models' treat safety as an inherent quality of the tech.

Ideological Function: It constructs a reality where ethical problems are 'solved' through better code. It protects the power of the tech company to set the terms of war and surveillance by claiming their tech is 'inherently' safer than others.

Material Consequences: Justifies the funneling of massive public funds to specific 'safety-oriented' firms while bypassing traditional democratic oversight of weapon systems.

Counter-Discourse: A discourse of 'Political Accountability' that rejects 'safety' as a technical metric and demands public control over military AI use cases.

The Domestic/Foreign Moral Binary

Cited Instances: Nationalist Exception for Surveillance, Adverbial Erasure of Culpability

Linguistic Patterns: Repeated use of 'U.S. persons', 'domestic surveillance', and 'US military mission'. Contrast between the 'Fourth Amendment' and 'foreign intelligence purpose'.

Ideological Function: It constructs 'safety' and 'privacy' as national commodities rather than human rights. It protects the state's power to act violently or invasively abroad as long as it maintains a thin veneer of legality at home.

Material Consequences: Enables the development of global surveillance infrastructures that target non-US nationals, fueling international instability and digital colonialism.

Counter-Discourse: A 'Global Human Rights' discourse that evaluates AI impact on all human beings regardless of citizenship or 'operational requirements'.

Task 5: Structural Mystification Audit

About this task

This task applies three Critical Theory concepts:

  • Reification (Lukács): Social relations appear as natural objects
  • Social Amnesia (Jacoby): Historical struggles are systematically forgotten
  • False Separation (Adorno): Structural issues framed as individual problems

Part A: Reification Analysis

Naturalizing the 'AI Arms Race'

Quote: "in the face of growing threats from potential adversaries who are increasingly integrating AI technologies"

  • Reification Mechanism: Geopolitical competition is presented as an autonomous force ('growing threats', 'increasingly integrating') rather than a series of escalation choices by state actors.
  • What's Obscured: The role of the US and OpenAI in initiating and accelerating this very arms race. It hides the fact that 'threats' are often reactions to one's own military expansion.
  • De-Reification: If we recognize this 'race' as a human-made policy choice, we can imagine de-escalation, treaties, and international bans on autonomous weapons.

Part B: Social Amnesia Analysis

Erasure of the Surveillance State's History

Quote: "The Department also affirmed that our services will not be used by Department of War intelligence agencies like the NSA."

  • What's Forgotten: The history of the NSA's illegal mass surveillance (e.g., PRISM, Snowden revelations). The text 'forgets' that the NSA's entire mandate is foreign intelligence that frequently sweeps up domestic data.
  • Function of Amnesia: Forgetting past abuses allows the text to present the 'new agreement' as a clean slate. Remembering would make the 'affirmation' from the DoW seem meaningless.

Part C: False Separation Analysis

The 'Safety' vs. 'Mission' Divide

Quote: "We wereβ€”and remainβ€”unwilling to remove key technical safeguards to enhance performance on national security work."

  • False Separation: It frames 'safety' and 'national security performance' (lethality/efficiency) as two separate dials that a technician can adjust.
  • What's Actually Structural: The 'safety' of an AI system is inseparable from its use in a war machine. A 'safe' system that accurately identifies a target for a missile is still a tool of violence.
  • Ideological Function: It allows OpenAI to claim moral purity (via 'safeguards') while still profiting from the 'performance' of war. It privatizes the 'ethics' of the tool while socializing the 'violence' it enables.

Synthesis: How These Mechanisms Work Together

Reification, amnesia, and false separation work together to create a 'closed' logic where the militarization of AI appears both inevitable and morally managed. Reification frames the AI arms race as an external 'weather' pattern OpenAI must help us 'survive'. Social amnesia erases the track record of the 'Department of War', a name that is itself a chilling return to pre-1947 terminology, yet one presented as belonging to a neutral partner. False separation then chops the resulting violence into 'technical segments' (the safety stack), ensuring that no one is responsible for the 'totality' of the machine. This prevents a collective structural imagination where the public might reject the marriage of Silicon Valley and the Pentagon entirely. It hides the totality of a 'Permanent War Economy' by focusing the reader on 'classifiers' and 'cloud deployments'.

Conclusion: Stakes and Counter-Discourse

About this section

This section synthesizes the analysis: naming the ideology, tracing material stakes, and providing counter-discourse examples.

Ideology and Material Stakes

The core worldview constructed here is 'Technological Realism': the belief that technology dictates the course of history and that the only moral choice is to ensure 'democratic' corporations (like OpenAI) are the ones building the tools of state violence. This serves the political project of 'Military-Industrial Integration 2.0', where the boundary between private software labs and the state's kill-chain is dissolved. Materially, this benefits OpenAI through massive government contracts and 'supply chain' dominance, and it benefits the Department of War by granting them 'classified' AI capabilities with a 'progressive' marketing shield. The losers are global civil liberties and non-US citizens, who are explicitly excluded from the 'red lines'. Collective action becomes difficult because the discourse frames the partnership as a defensive necessity for 'democracy' itself, positioning dissent as a threat to national security.

Counter-Discourse Principles

A discourse of resistance would start with De-reification: naming OpenAI and the Pentagon as active co-conspirators in a new arms race, not victims of a 'growing threat'. It would employ Historical Memory, reminding readers that 'contractual protections' have never stopped the NSA or the Pentagon from mission creep. It would use Structural Thinking to show that a 'cloud-only deployment' of a war-tool is still a war-tool, regardless of where the servers sit. Finally, it would Center the Marginalized by asking how this technology will be used in the Global South, where 'U.S. persons' protections do not apply, thereby exposing the nationalist hypocrisy of the 'safety' framework.

Counter-Discourse Examples

| Original | Reframed | Stakes Shift |
| --- | --- | --- |
| "the AI system shall not be intentionally used for domestic surveillance of U.S. persons" | We are providing the government with a mass-surveillance engine, while including legal loopholes ('intent') that allow for continued data harvesting of citizens and total surveillance of non-citizens. | Shifts from 'legal protection' to 'intentional loophole', making the systemic nature of surveillance visible. |
| "we want the people defending the United States to have the best tools." | We are choosing to increase the lethality and automated reach of the U.S. military to secure our position as a primary defense contractor. | Replaces the 'defensive' euphemism with a 'profit/lethality' reality, highlighting corporate agency. |
| "Our deployment architecture will enable us to independently verify that these red lines are not crossed" | We are hoping that our software filters can catch the abuses of a military agency that has a documented history of bypassing oversight. | Exposes the 'safety stack' as a fragile and likely ineffective check on state power. |

Extended Processing Summary

Gemini's Processing Steps
What you are about to read…

Gemini refers to the text below as "thought summaries." This is an overt consciousness projection because "intentions" are hallmarks of a conscious mind that "knows" what it is doing and why. The concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact: a way of making the model's processing legible. When the model writes "I should double check that...", it is mathematically altering its own probability distribution for the next token to be more accurate. The "I" is a functional gear in the machine, not a self-reference. This language is a deliberate interface design choice, not a biological reality. The developers could have programmed the model to output passive logs. Instead, they chose to simulate a narrator.

Analyzing the Announcement

I'm currently dissecting a hypothetical 2026 announcement concerning an OpenAI agreement with the Department of War. I'm focusing on critical discourse analysis. The language used, the implied power dynamics, and potential implications are being scrutinized. My primary aim is to identify any hidden agendas or biases within this fictional scenario.

Deconstructing the Discourse

I'm now applying a Critical Discourse Analysis schema to the hypothetical OpenAI announcement. I've pinpointed the tone as a blend of professionalism and defensiveness, particularly in its comparisons with Anthropic. I'm focusing on key players such as OpenAI ("we"), the Department of War, and U.S. persons, alongside key themes like safety, legality, and national security. I've noted a significant linguistic shift back to "Department of War". Furthermore, I've begun identifying critical instances of nominalization, passive voice, and diffused agency.

Identifying Key Rhetoric

I'm now identifying rhetorical mechanisms within the OpenAI announcement. I've noted the delegation of agency to the "system," framing it as the subject rather than the general/operator. Key features, like "guardrails" versus "surveillance", "U.S. persons", "frontier AI labs", and the "safety stack," are under scrutiny. Inclusive "we," partnerships with the DoW, and competitive marginalization also stand out. Expert positioning is also a critical mechanism.

Evaluating Expert Positioning

I'm now examining how the OpenAI announcement frames "AI expert involvement." It excludes public oversight by suggesting that only "cleared" experts can understand and monitor the technology. My analysis has led me to evaluate how the agreement performs "moral superiority through technicalization", using terms such as "safety stack" to suppress ethical debate, and how it deploys American Exceptionalism as an ethical boundary, focusing on "U.S. persons" while ignoring global impact. I will use the "ideological detective" to write the synthesis and conclusion sections.



Raw JSON: 2026-03-03-our-agreement-with-the-department-of-war-cda-spicy-lite-vwuot7.json Framework: CDA-Spicy-Lite v1.0 Generated: 2026-03-03T09:45:03.237Z

Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0