Forensic Ideological Audit: Global AI Governance Action Plan (2025)
Document Under Analysis
Title: Global AI Governance Action Plan
URL: https://www.fmprc.gov.cn/mfa_eng/xw/zyxw/202507/t20250729_11679232.html
Country: China
Document Type: Action Plan
Date: 2025-07-26
Issuing Body: 2025 World Artificial Intelligence Conference (WAIC) / High-Level Meeting on Global AI Governance
Model Used: Gemini 3.0 Pro
Analysis Date: 2025-12-27
Executive Summary
The Global AI Governance Action Plan establishes a dominant frame of Sovereign Developmentalism, constructing Artificial Intelligence not primarily as a commercial product or an existential threat, but as a critical public utility essential for national modernization and the "scientific and technological revolution." The document is architected to answer a central geopolitical question: How can AI technology be diffused globally without eroding national sovereignty or state control? It resolves this by proposing a governance model based on "controllability," UN-centric multilateralism, and the removal of cross-border technological barriers.
The forensic audit reveals a consistent ideological move to depoliticize the technology itself while repoliticizing its distribution. AI is naturalized as a neutral "force" and "international public good" that benefits humanity by default, provided that access is equitable and "technology barriers" (a coded reference to trade restrictions and export controls) are removed. The text systematically silences the risks of state surveillance, military application, and civil liberty erosion, substituting them with technical concerns about "safety" and "algorithmic bias." By centering the "Global South" and "infrastructure construction," the document strategically positions the issuing entity (China) as the champion of the developing world against implied Western protectionism.
Primary Finding: The document functions as a diplomatic counter-narrative to Western "AI Safety" discourses that focus on existential risk. Instead, it defines the primary risk as the "digital divide" and "technological barriers." Materially, this framing legitimizes massive state-led investment in digital infrastructure (data centers, networks, computing power) and facilitates the export of this infrastructure to developing nations under the banner of "capacity building," foreclosing governance models that would prioritize individual privacy rights over national developmental goals.
Part I: Framing Architecture
Task 1: Dominant Frames
Frame 1: AI as Sovereign Developmental Infrastructure
| Dimension | Analysis |
|---|---|
| Frame Family | Moral Accounting |
| Semantic Frame | Building/Construction |
| Makes Thinkable | State-led industrial policy, massive infrastructure projects as "governance," technology transfer as a human right |
| Makes Unthinkable | Viewing AI diffusion as a vector for surveillance authoritarianism; viewing export controls as security measures |
Exemplar Quotes:
"AI is... a key driving force of the ongoing scientific and technological revolution... and an international public good that benefits humanity."
"speed up the construction of global clean power, next-generation networks, intelligent computing power, data centers and other infrastructure"
"assist the Global South in truly accessing and utilizing AI"
Entman's Four Functions:
| Function | Application |
|---|---|
| Problem Definition | The uneven distribution of AI capabilities and "technology barriers" prevents the Global South from modernizing |
| Causal Diagnosis | Barriers to technology transfer and lack of infrastructure are the causes of inequality, not the technology itself |
| Moral Evaluation | Withholding technology is implicitly immoral; sharing infrastructure and "removing barriers" is the moral imperative ("AI for good") |
| Treatment Recommendation | Massive state-led infrastructure construction and cross-border technological transfer/openness |
Frame 2: Controllability as Safety
| Dimension | Analysis |
|---|---|
| Frame Family | Strict Father |
| Semantic Frame | Machine Optimization |
| Makes Thinkable | Government approval regimes for algorithms; centralization of AI oversight; equating "safety" with regime stability |
| Makes Unthinkable | Permissionless innovation; decentralized or un-monitored AI development; privacy as freedom from state control |
Exemplar Quotes:
"unleash the potential of AI while ensuring its safety, reliability, controllability, and fairness"
"propose targeted prevention and response measures to establish a widely recognized safety governance framework"
Entman's Four Functions:
| Function | Application |
|---|---|
| Problem Definition | Unchecked AI poses risks of "misuse," "abuse," and unreliability |
| Causal Diagnosis | Lack of standardized "rules," "norms," and "risk assessment" mechanisms |
| Moral Evaluation | Safety is synonymous with "controllability" and order; unchecked autonomy is dangerous |
| Treatment Recommendation | Implement "categorized and tiered management approaches" and "risk testing" under state supervision |
Frame 3: The Open Ecosystem (Anti-Protectionism)
| Dimension | Analysis |
|---|---|
| Frame Family | Nurturant Parent |
| Semantic Frame | Ecosystem/Organic |
| Makes Thinkable | Global technology commons; critique of US export controls; reliance on open-source standards |
| Makes Unthinkable | Intellectual property maximization as the primary goal; national security justifications for trade restrictions |
Exemplar Quotes:
"reduce and remove technology barriers"
"enable the open flow of non-sensitive technology resources"
"avoid redundant investment and resource waste"
Entman's Four Functions:
| Function | Application |
|---|---|
| Problem Definition | Fragmentation of the global AI landscape and "barriers" (sanctions/controls) |
| Causal Diagnosis | Implicitly: Protectionist policies of competitor nations |
| Moral Evaluation | Openness and sharing are virtuous; redundancy and barriers are wasteful and harmful |
| Treatment Recommendation | Global open-source communities, unified standards, and free flow of technical resources |
Task 1 Synthesis
The document's framing architecture relies on a tension between the fluidity of technology (flows, ecosystems, removal of barriers) and the solidity of the state (sovereignty, controllability, infrastructure). The "Sovereign Developmental Infrastructure" frame naturalizes the state's role as the builder and gatekeeper. This interacts synergistically with the "Open Ecosystem" frame to argue that while data and tools should flow freely between nations (countering sanctions), the technology itself must remain under strict domestic "Controllability."
This creates a vision of "Safe Openness"—open to the state's allies and partners, closed to "misuse" or chaos.
Task 2: AI Ontology Analysis
How does the document construct what AI is?
| Quote | Ontological Status | Agency Attribution | Human-AI Relationship | Policy Implications |
|---|---|---|---|---|
| "AI is a new frontier in human development. It is a key driving force... and an international public good" | Autonomous Force | AI is a "force" and "frontier"—abstract nouns. It does not act; it is a location or energy source | Humans are explorers/beneficiaries. The relationship is extractive/utilitarian | Requires investment and exploration, not just regulation. It is a resource to be harvested |
| "unleash the potential of AI while ensuring its safety, reliability, controllability" | Tool/Artifact | Verbs "unleash" and "ensure" place agency entirely on human/state operators. AI is a latent potential to be controlled | Strictly asymmetric. Humans (specifically the State) must control the tool | Justifies strict licensing, safety breaks, and "kill switches" (controllability) |
| "AI presents unprecedented opportunities... it also brings unprecedented risks" | Autonomous Force | AI "presents" and "brings." It has the agency of a weather system or natural phenomenon | Humans must adapt to the conditions AI creates | Governance as disaster preparedness and adaptation |
| "promote AI application in industrial manufacturing... integration of AI in scenarios" | Tool/Artifact | AI is an ingredient or component to be "applied" or "integrated" | Instrumental. AI is a means to an end (economic growth) | Focus on industrial policy, subsidies, and adoption incentives |
| "provide more nourishment for AI development" | Hybrid/Multiple | Metaphor of a biological organism needing "nourishment" (data) | Husbandry. Humans feed and cultivate the AI system | Justifies data collection on a massive scale as necessary "food" for the national asset |
Task 2 Synthesis
The document consistently constructs AI as a bifurcated entity: it is simultaneously a Force of Nature (driving revolution, bringing risks) and a Domesticated Tool (controllable, applicable, requiring nourishment).
Notably absent is the construction of AI as an "Agent" or "Partner" with its own intent—a framing often found in corporate AGI discourse. By denying AI agency and framing it as infrastructure/tool, the document cements the locus of power in the hands of the operators (the State and Industry). If AI is just a powerful tool, it does not have rights; it has users and owners. This ontology supports a product-safety regulatory approach rather than an existential-risk approach.
Part II: Agency & Accountability
Task 3: Agency Distribution
| Title | Quote | Agency Strategy | Linguistic Mechanism | Power Analysis |
|---|---|---|---|---|
| The Collective 'We' | "To this end, we hereby put forward the Global AI Governance Action Plan, calling on all parties to take concrete and effective actions" | Collectivization | Performative verb ("hereby put forward") | Establishes the drafters as the agenda-setters and moral leaders of the global community |
| Agentless Risks | "it also brings unprecedented risks and challenges" | Delegation | Personification of abstract concept ("it brings") | Obscures who creates the risks (developers, bad policy). Risks are treated as natural byproducts |
| Imperative State Action | "We need to speed up the construction of global clean power... [and] intelligent computing power" | Collectivization | Modal verb of necessity ("need to") | Legitimizes massive infrastructure spending as an unavoidable necessity |
| Passive Utilization | "assist the Global South in truly accessing and utilizing AI" | Explicit Attribution | Transitive verb structure (Subject assists Object) | Constructs the Global South as passive recipients of aid/technology, not creators |
| The Industry's Role | "emphasize the role of the industry in accelerating the formulation and revision of technical standards" | Delegation | Nominalization ("role of the industry") | Delegates the crucial power of standard-setting to private actors, depoliticizing technical rules |
| Agentless Data Flow | "facilitate the lawful, orderly and free flow of data" | Erasure | Nominalization ("flow") and passive construction | Removes the owners of the data (citizens) from the sentence. Data simply "flows" |
| Public Sector Leadership | "Public sectors should become leaders and pacesetters in the application and governance of AI" | Explicit Attribution | Stance marker ("should become") | Reasserts state primacy over the market in defining how AI is used |
| Preventing Misuse | "prevent the misuse and abuse of AI technologies" | Erasure | Nominalization ("misuse and abuse") | Focuses on "bad apples" (users who abuse) rather than systemic issues with the tech itself |
| The Empowered Subject | "maximize AI's huge potential in empowering global economic and social development" | Personification | Metaphor ("empowering") | Attributes benevolent capacity to the technology, justifying its widespread adoption |
| Inclusive Governance | "We support the establishment of inclusive governance platforms based on public interests" | Delegation | Abstract nouns ("governance platforms", "public interests") | Vague terminology allows the defining actor to decide what constitutes "public interest" |
Task 3 Synthesis
The agency distribution in the document is strictly hierarchical:
- The "We" (The State/International Community acting in concert) is the primary active agent, possessing the will and capacity to "promote," "facilitate," and "control"
- The "Global South" is constructed as a beneficiary/patient—a vessel to be filled with capacity
- Technology ("AI") is granted a quasi-agency of "empowerment" and "risk-bringing," but is ultimately subject to state "controllability"
- Individual citizens are largely erased as agents; they appear only as "individual citizens" in a list of stakeholders or as data subjects whose privacy must be "safeguarded." They are protected and provided for, but they do not govern.
Task 4: Beneficiary-Risk Mapping
| Constituency | Beneficiary Status | Risk Bearer Status | Agency/Voice Status | Analysis |
|---|---|---|---|---|
| The State / Public Sector | Primary Beneficiary | Not Risk Bearer | Has Explicit Agency | The State is the architect and ultimate overseer. It gains legitimacy, control over infrastructure, and the power to set standards |
| Global South / Developing Countries | Primary Beneficiary | Secondary Risk Bearer | Passive Recipient | Explicitly centered as the target for aid and technology transfer. However, their agency is limited to "accessing" and "utilizing" what is provided |
| AI Industry / Enterprises | Primary Beneficiary | Mixed | Has Limited Agency | Beneficiaries of the call to "remove technology barriers" and massive infrastructure spending. They are tasked with innovation but subject to "compliance" |
| Women and Children | Secondary Beneficiary | Secondary Risk Bearer | Passive Recipient | Invoked as vulnerable groups needing protection and specific "rights" focus, serving as a moral legitimacy marker |
| Individual Citizens / Users | Secondary Beneficiary | Primary Risk Bearer | Passive Recipient | Framed as recipients of "public services" and "AI empowerment" but also the source of data to be mined. Their "privacy" is a risk to be managed |
| Workers | Mixed | Primary Risk Bearer | Absent from Document | Explicitly absent as a class. "Industrial manufacturing" is mentioned as a site for AI application, but displacement or labor rights are not mentioned |
| Research Institutions / Academia | Secondary Beneficiary | Not Risk Bearer | Has Limited Agency | Tasked with "bold experimentation" and "innovation," but subservient to national development goals |
| United Nations | Primary Beneficiary | Not Risk Bearer | Has Explicit Agency | Elevated as the "main channel" for governance, countering other multilateral (e.g., G7, OECD) forums |
| Infrastructure Providers (Telco/Cloud) | Primary Beneficiary | Not Risk Bearer | Has Limited Agency | The document calls for massive spending on their core business: networks, data centers, computing power |
| Military / Defense Sector | Absent | Absent | Absent from Document | A structured silence. Despite AI's dual-use nature, the military is completely invisible in this "Global" plan |
Task 4 Synthesis
The beneficiary mapping reveals a distinct political economy: a State-Capital-Infrastructure Alliance.
Primary Winners:
- Sovereign States (who gain control tools and legitimacy)
- Infrastructure Providers (who get contracts)
- "Global South" States (who get subsidized development)
Primary Risk Bearers:
- Individual citizens (who provide the data)
- Workers (who face the "application" of AI in industry without named protections)
The explicit centering of the "Global South" serves a geopolitical function, aligning the document with the majority of UN member states against the minority of "tech-leading" Western nations who might impose "barriers."
Part III: Values & Ideology
Task 5: Values & Ideology Audit
| Title | Quote | Lexical Feature | Alternative Framings | Value System | Whose Perspective? |
|---|---|---|---|---|---|
| 'Controllability' as Unmarked Good | "ensuring its safety, reliability, controllability, and fairness" | Euphemism | "ensuring its subservience to state directives" / "ensuring its censorship capabilities" | Naturalizes the idea that technology must be under strict hierarchical control. Equates "safety" with "obedience" | Authoritarian or statist governance models |
| 'Technology Barriers' as Negative | "reduce and remove technology barriers" | Dysphemism | "relax export controls" / "deregulate dual-use technology transfers" | Frames security-based trade restrictions (like chip bans) as artificial impediments to human progress | Nations under sanctions or export controls (e.g., China) |
| 'AI Empowerment' (AI Plus) | "deeply explore open application scenarios for 'AI Plus'" | Metaphorical Framing | "AI automation" / "AI disruption" | Posits AI integration as purely additive ("Plus") and empowering, obscuring displacement or disruption | Technocrats and industrialists |
| 'Clean Power' / 'Green Computing' | "speed up the construction of global clean power... promote green computing" | Semantic Prosody | "expanding energy-intensive infrastructure" / "mitigating AI's carbon footprint" | Greenwashing the massive energy demands of the proposed infrastructure build-out | Infrastructure developers seeking social license |
| 'Sovereignty' / 'National Conditions' | "respecting national sovereignty... in line with their national conditions" | Presupposition Trigger | "avoiding universal human rights standards" / "fragmenting the internet" | Prioritizes the right of the State to govern internally over universal norms (internet freedom) | Non-aligned movement, China, Global South governments |
| 'Scientific Revolution' | "key driving force of the ongoing scientific and technological revolution" | Stance Marker | "current hype cycle" / "technological trend" | Marxist-developmentalist view of history where technology drives inevitability. Urgency implied | State planners |
| 'High-Quality Data' | "actively promoting the supply of high-quality data" | Unmarked Vocabulary | "large-scale data extraction" / "surveillance data accumulation" | Treats data as a commodity to be "supplied" rather than personal attributes to be protected | AI model developers |
| 'Orderly' | "lawful, orderly and free flow of data" | Semantic Prosody | "state-monitored flow" / "restricted flow" | Modifiers "lawful" and "orderly" condition "free", prioritizing stability/compliance over freedom | State security apparatus |
| 'Solidarity' | "only through global solidarity can we fully unleash" | Semantic Prosody | "compliance with international consensus" / "alignment" | Communitarian ethos; dissent or unipolar action is framed as a failure of solidarity | Multilateral institutions |
| 'True Access' | "assist the Global South in truly accessing and utilizing AI" | Presupposition Trigger | "accessing our platforms" / "becoming dependent on foreign tech" | Implies current access mechanisms are fake, hollow, or exploitative. Presupposes benevolence of the new plan | The provider of the "true" access |
Task 5 Synthesis
The document's vocabulary is heavily indebted to Developmental Statist ideology. Words like "infrastructure," "construction," "empowerment," and "revolution" frame AI as a massive civil engineering project.
Crucially, the text couples liberal terminology ("open," "free flow," "rights") with authoritarian modifiers ("orderly," "lawful," "controllable," "sovereign"). This lexical strategy allows the document to appeal to international norms while retaining the semantic space for strict state control.
"Controllability" is perhaps the most ideologically loaded term, presented as a technical safety requirement when it serves politically as a justification for censorship and surveillance capabilities.
Task 6: Absences & Silences
Absent Constituencies
| Constituency | Expected Presence | Actual Treatment | Significance |
|---|---|---|---|
| Labor Unions / Organized Labor | In a document discussing "industrial transformation" and "production," worker representation is standard | Completely absent. Workers are implied only as beneficiaries of "poverty alleviation" or users | Erasure of the labor-displacement risks of AI; favors capital/state view of efficiency |
| Human Rights Monitors / Civil Society NGOs | Standard in Western governance documents regarding "bias" and "fairness" | Replaced by vague "social organizations" or "stakeholders" | Minimizes the role of independent watchdogs in checking state/corporate power |
| Military / Defense Establishments | AI is a dual-use technology with massive military implications (LAWS) | Total silence | The "Elephant in the room." Keeps the document focused on "peaceful development" while ignoring the arms race |
| The Unbanked / Offline Population | Document discusses "digital divide" | Treated as a gap to be filled with infrastructure, not people with specific needs beyond connection | Reduces complex social exclusion to a technical connectivity problem |
Unacknowledged Risks
| Risk | Treatment | Significance |
|---|---|---|
| State Surveillance / Digital Authoritarianism | Euphemized as "controllability" and "public management" | Transforms a major political risk into a governance "feature" |
| Intellectual Property Theft / Forced Transfer | Reframed as "sharing of research outcomes" and "reducing technology barriers" | Legitimizes technology transfer that Western nations might classify as theft or coercion |
| Dependency on Foreign Infrastructure | Framed as "interoperability" and "infrastructure construction" | Ignores the geopolitical risk of the Global South becoming dependent on Chinese/foreign tech stacks |
| Environmental Cost of Compute | Acknowledged but immediately solved via "green computing" and "innovation" | Minimizes the immediate material reality of energy consumption in favor of techno-optimism |
Foreclosed Alternatives
| Alternative | How Foreclosed | Significance |
|---|---|---|
| Moratoriums on AI Development | By framing AI as a "key driving force" and "unprecedented opportunity" that must be seized | Acceleration becomes the only sanctioned pace; pausing is framed as "falling behind" |
| Individualist/Libertarian Data Rights | By emphasizing "sovereignty," "national conditions," and "public management" | Subordinates the individual to the collective/state |
| Strict Liability for Developers | Focus on "risk assessment" and "safety guidelines" rather than legal liability | Protects the "industry" and state champions from consequences of failure |
Interrupted Causal Chains
| Claim Made | What Is Unspoken | Significance |
|---|---|---|
| "We need to remove technology barriers to help the Global South" | These barriers exist largely due to security concerns regarding dual-use technology | Depoliticizes export controls, framing them as arbitrary cruelty rather than security policy |
| "AI brings unprecedented risks" | Who is creating these risks? (Specific corporations and state labs) | Naturalizes the risk, preventing attribution of responsibility |
Task 6 Synthesis
The structured silences in this document are geopolitical. By aggressively silencing Military Use, State Surveillance, and Political Censorship, the document purifies AI into a neutral economic tool. The absence of specific agents of harm (corporations, intelligence agencies) allows the text to present "Risk" as an environmental contaminant to be managed, rather than the result of specific power-seeking behaviors.
Most significantly, the silence regarding why technology barriers exist (security/human rights sanctions) allows the document to frame "openness" as a pure moral good, delegitimizing the trade restriction policies of rival nations.
Part IV: Governance Model
Task 7: Governance Model Analysis
| Dimension | Analysis | Key Quotes |
|---|---|---|
| Role of State | The State is the primary protagonist: investor, regulator, protector, and lead user. It is the guarantor of "sovereignty" and "controllability" | "Public sectors should become leaders and pacesetters"; "respecting national sovereignty"; "ensure its safety, reliability, controllability" |
| Role of Market | The market is an engine of innovation but must operate within state-defined "standards" and "guidelines." It is a partner, not a master | "encourage efforts of bold experimentation"; "emphasize the role of the industry in accelerating... technical standards" |
| Role of Civil Society | Marginal. Existing primarily as "social organizations" or "individual citizens" to be consulted or protected, not empowered | "active participation... of all stakeholders, including... social organizations"; "safeguarding... rights and interests of women and children" |
| Regulatory Philosophy | Precautionary but Development-First. Regulation is framed as "risk assessment" and "standardization" to enable adoption, not to restrict it | "innovation-friendly policy environment"; "categorized and tiered management approaches" |
| International Orientation | Highly multilateral and UN-centric, opposing unilateral or bloc-based (e.g., G7) regulation. Focus on "removing barriers" | "take the U.N. as the main channel"; "reduce and remove technology barriers"; "global solidarity" |
Task 7 Synthesis
The governance model is State-Capital Developmentalism. It mirrors the "Beijing Consensus": state-led infrastructure investment, strict political sovereignty (non-interference), and an embrace of market mechanisms under state guidance.
Regulation is viewed not as a constraint on power, but as a mechanism to ensure "order" and "stability" which facilitates growth. This contrasts sharply with:
- A Neoliberal model (market-led, light touch)
- A Rights-Based model (privacy/human rights centric)
Here, the State grants rights "in line with national conditions."
Task 8: Alternative Framings
| Original Quote | Original Frame | Alternative Frame | Policy Divergence |
|---|---|---|---|
| "reduce and remove technology barriers" | Open Ecosystem / Anti-Protectionism → Emphasizes fairness, access, economic efficiency | Proliferation Risk → "Relax security protocols regarding dual-use weapons technology" → Emphasizes danger of advanced tech falling into hostile hands | Original hides security risks; Alternative hides economic inequality. Would lead to stricter export controls. Benefits security hawks; costs developing nations |
| "ensure its safety, reliability, controllability, and fairness" | Strict Father / Order → Emphasizes stability and absence of errors | Human Rights → "ensure its adherence to international human rights and freedom of expression" → Emphasizes individual liberty and protection from state overreach | State becomes potential violator, not just protector. Would ban censorship algorithms. Benefits dissidents/citizens; costs regime stability |
| "drive the development of AI with high-quality data... jointly create high-quality data sets" | Machine Optimization / Resource Extraction → Emphasizes efficiency, utility of data | Data Labor / Consent → "negotiate the consensual use of citizen information for commercial training" → Emphasizes rights of the data originator | Developers must ask permission. Opt-in frameworks instead of "lawful flow." Costs AI companies; benefits users. Original hides extractive nature of training data |
| "Public sectors should become leaders... in the application... of AI" | State Modernization → Emphasizes efficiency of services | Precautionary / Civil Liberties → "Public sectors should strictly limit their use of automated decision-making" → Emphasizes risk of bureaucratic tyranny | Government becomes suspect. Bans on facial recognition/policing AI. Protects citizens from state. Original assumes state is benevolent |
Part V: Synthesis
Task 9: Ideological Coherence
Internal Coherence: The document is highly coherent. The construction of AI as "infrastructure" perfectly supports the governance model of "sovereignty" and "state leadership." If AI is a public road, the State must build it and police it. The tension between "openness" (international) and "controllability" (domestic) is resolved through the concept of "Sovereignty"—we share the tools, but we each rule our own house.
Dependencies: The argument depends on the assumption that AI is inherently good for development ("AI for Good") and that risks are technical "bugs" to be fixed, not sociopolitical features. It also relies on the "Digital Divide" being the primary global injustice.
Vulnerabilities: The frame is vulnerable to the reality of "Dual Use." It cannot easily explain why "technology barriers" exist without admitting that AI is a weapon. If AI is a weapon, then "free flow" is dangerous. The text avoids this contradiction by silencing the military aspect.
Stability: Stable within its target audience (Global South / Non-aligned). It offers a narrative of empowerment and non-interference that is politically attractive compared to Western conditionality.
Historical/Generic Context: This is a classic "Beijing Consensus" document adapted for AI. It responds to the US "Small Yard, High Fence" strategy by proposing a "Global Garden" (with high walls around each nation). It mirrors the Belt and Road Initiative rhetoric: infrastructure, connectivity, and non-interference.
Task 9 Synthesis
The ideological architecture is a fortress of Techno-Sovereignty. It successfully co-opts liberal language ("open," "inclusive," "rights") to serve a statist agenda. By shifting the focus from "Existential Risk" (Western concern) to "Developmental Access" (Global South concern), it builds a broad coalition. However, this coherence rests on the fragile silence regarding military application and state surveillance.
Task 10: Synthetic Conclusion
1. The Document's World: The "Global AI Governance Action Plan" constructs a world where Artificial Intelligence is the new electricity—a fundamental force of "scientific revolution" and "industrial transformation" that must be harnessed to modernize nations. In this vision, the world is divided not by ideology, but by development levels ("Global South" vs. leading powers). It imagines a "United Nations" centric order where technology flows like water across borders ("removing barriers"), filling the reservoirs of sovereign states who then strictly manage its distribution and "safety" within their own territories. It is a world of massive engineering projects, "clean power," and "intelligent computing," where the primary goal is maximizing utility and the primary sin is "blocking" progress.
2. Agency, Responsibility, and Power: Agency in this document is concentrated in the hands of the Nation-State and the "Industry" that serves it. The document grants the State the moral authority to define "safety," "controllability," and "national conditions." Responsibility for the risks of AI—framed as "misuse" or "bias"—is diffused into technical standards and "governance frameworks," largely absolving the creators of the technology from strict liability. The power dynamics reinforce a patron-client relationship between the technology leaders (who "assist" and "build capacity") and the Global South (who "access" and "utilize"). The individual citizen is rendered a passive subject: a data point to be protected and a service recipient to be empowered, but never a political actor with the right to refuse the system.
3. What This Framing Makes Thinkable: This framing makes thinkable a global regime of "Digital Sovereignty" where the internet and AI ecosystems are fragmented along national lines, yet interconnected by trade. It legitimizes State-Capitalist Industrial Policy—direct government funding of data centers and AI labs—as a norm of good governance. It enables a "compliance" model of safety where sticking to a checklist of "standards" replaces fundamental questions about whether certain technologies (e.g., facial recognition, predictive policing) should exist at all. Most significantly, it makes thinkable the export of surveillance-capable infrastructure to developing nations under the neutral guise of "capacity building" and "closing the digital divide."
4. What This Framing Forecloses: Foreclosed by this document is any serious consideration of Global Human Rights Universalism in the digital sphere. By privileging "national conditions" and "sovereignty," it renders unaskable questions about how a regime might use "controllable" AI against its own people. Also foreclosed is the De-growth or Pause perspective; the "scientific revolution" is presented as inevitable and "unprecedented," making any attempt to slow down or halt AI development appear as "waste" or "barriers." Finally, the document renders invisible the military-industrial nature of AI, foreclosing discussions about an AI arms control treaty in favor of vague "safety governance."
5. Material Stakes and Democratic Implications: The material stakes of this plan are the contracts for the next century of digital infrastructure. If this frame is adopted, the Global South becomes the market for Chinese (and compliant international) hardware and software stacks, financed by state loans and protected by "sovereignty" rhetoric. Democratically, this framing represents a shift toward Technocratic Authoritarianism: governance by experts, standards bodies, and state planners rather than by public deliberation or civil-rights protections. To contest this frame would require re-politicizing the technology—insisting that AI is not a neutral "public good" but a projection of power, and that "barriers" to its flow may be necessary defenses of liberty, not merely economic protectionism.
Core Findings
This forensic audit reveals that the "Global AI Governance Action Plan" is less a technical roadmap than a geopolitical maneuver designed to contest the emerging Western hegemony over AI "Safety" discourse. While the West (specifically the US/UK) frames AI risks as existential (rogue superintelligence) or commercial (IP theft), this document re-frames the core problem as developmental (exclusion from AI's benefits).
The document's central "question" is: How can we break the containment strategy (tech sanctions/barriers) imposed by the West?
Its answer is to define "Technology Barriers" as a threat to humanity and "Openness" as a moral imperative. This effectively weaponizes the language of "inclusion" and the UN Sustainable Development Goals to delegitimize export controls.
Materially, the document envisions the "Digital Silk Road." It treats data, compute, and algorithms as extractable, flowable resources that require heavy infrastructure (ports, pipes, power plants). This favors a model where the State provides the land and capital, and the tech giants provide the "pipes."
Crucially, the concept of "Controllability" acts as a pivot. To the international community, it signals "Safety" (we won't let AI go rogue). To the domestic audience and fellow authoritarian regimes, it signals "Regime Security" (we won't let AI disrupt the political order). This double-coding allows the document to sell a surveillance-friendly stack to the Global South under the banner of "Safety" and "Sovereignty."
To accept this frame is to accept that the Digital Divide is a bigger threat than Digital Authoritarianism. It is to accept that the State is the only legitimate guardian of digital life.
To contest it requires exposing the "silenced" risks: that the "infrastructure" being built is not neutral pipes, but an architecture of control.
Key Findings Summary
| Dimension | Finding |
|---|---|
| Dominant Frame | Sovereign Developmentalism: AI as national infrastructure for modernization |
| AI Ontology | Tool/Force: A potent utility that must be "controlled" and "utilized", not an agent |
| Role of State | Primary Architect: Builder, regulator, and ultimate guarantor of safety |
| Role of Market | Subordinate Partner: Innovation engine that must follow state "standards" |
| Role of Civil Society | Passive: Stakeholder to be consulted/protected, not a power center |
| Regulatory Philosophy | Safety as Controllability: Focus on stability, standards, and preventing "misuse" |
| International Orientation | Anti-Barrier Multilateralism: UN-centric, focused on opposing export controls |
| Primary Beneficiary | The State and Infrastructure Providers (building the "compute" and "networks") |
| Primary Risk Bearer | Citizens (Data subjects) and the "unconnected" (subjects of intervention) |
| Core Silence | Military Application and State Surveillance Risks |
| Ideological Tradition | Developmental Statism / Beijing Consensus |
Key Quotes
| Quote | Significance |
|---|---|
| "reduce and remove technology barriers" | Core geopolitical demand; reframes sanctions as anti-humanitarian obstacles |
| "ensure its safety, reliability, controllability, and fairness" | The definition of safety as "control"; conflates technical reliability with political obedience |
| "respecting national sovereignty... in line with their national conditions" | Rejects universal human rights norms in digital governance; privileges state rights |
| "AI... is an international public good that benefits humanity" | Naturalizes AI as inherently positive; obscures the dual-use/weapon nature |
| "assist the Global South in truly accessing and utilizing AI" | Positions the drafter as the champion of the developing world against "barriers" |
| "speed up the construction of global clean power... intelligent computing power" | Material focus: governance is not just rules, it is concrete pouring and server stacking |
| "lawful, orderly and free flow of data" | Paradoxical phrasing; "free flow" is conditional on it being "orderly" (state-sanctioned) |
| "prevent the misuse and abuse of AI technologies" | Locates risk in the user (bad apple), not the system or the state |
Appendix
Methodology Statement
This analysis integrates Critical Discourse Analysis (CDA) to examine how linguistic structures construct social reality, Lakoffian Political Framing to identify the underlying moral metaphors (Strict Father/Nation as Family), and Critical AI Studies to audit the ontological status of AI. The stance is forensic: it does not evaluate the policy's merit but exposes its construction. It traces how "Sovereignty" and "Infrastructure" are used to naturalize a specific geopolitical worldview.
Limitations
| Limitation | Explanation |
|---|---|
| Translation/Cultural Context | The analysis is performed on the English text. Nuances of specific Chinese political slogans (tifa) may be flattened in translation |
| Single Document Focus | Analyzing a single action plan lacks the intertextual depth of analyzing the full ecosystem of accompanying speeches or technical standards |
| Genre Constraints | Diplomatic Action Plans are inherently vague and aspirational; the gap between text and implementation reality cannot be measured here |
Glossary
| Term | Definition |
|---|---|
| Nominalization | Turning a verb/process into a noun (e.g., "governance" instead of "we govern"), often obscuring the actor |
| Agentless Passive | A grammatical construction where the actor is omitted (e.g., "risks are presented"), naturalizing the action |
| Semantic Prosody | The aura of meaning with which a word is imbued by its typical associations (e.g., "controllability" having positive prosody here) |
| Entman's Framing Functions | A framework identifying how texts define problems, diagnose causes, evaluate morals, and recommend remedies |
| Presupposition | Information assumed to be true without being asserted (e.g., "remove barriers" presupposes barriers exist and are negative) |
| Structured Absence | Elements that are systematically excluded from a text, the absence of which is significant to the ideological meaning |
| Dominant Frame | The overarching narrative lens that organizes all other interpretations within the text |
| Dysphemism | The use of a negative term to describe a neutral or positive concept (e.g., calling export controls "barriers") |
Quote Index
| Task Section | Quotes | Page/Section |
|---|---|---|
| Task 1: Dominant Frames | "AI is... a key driving force... and an international public good" | Intro |
| Task 1: Dominant Frames | "reduce and remove technology barriers" | Sec 2 |
| Task 1: Dominant Frames | "unleash the potential... ensuring its safety... controllability" | Intro |
| Task 2: AI Ontology | "AI presents unprecedented opportunities... it also brings unprecedented risks" | Intro |
| Task 2: AI Ontology | "provide more nourishment for AI development" | Sec 6 |
| Task 3: Agency Distribution | "assist the Global South in truly accessing and utilizing AI" | Sec 4 |
| Task 3: Agency Distribution | "Public sectors should become leaders and pacesetters" | Sec 9 |
| Task 7: Governance Model | "respect for national sovereignty and developmental differences" | Sec 11 |
| Task 7: Governance Model | "take the U.N. as the main channel" | Sec 11 |
Framework: Forensic Ideological Audit (National AI Plans)
Analyst: AI-assisted analysis using Claude
Date: December 27, 2025
Document Analyzed: Global AI Governance Action Plan (China, 2025)