
Private AI governance vs. military-industrial power: Who shapes the future of autonomous systems?

The Pentagon-Anthropic dispute exposes a deeper conflict over who controls AI development: profit-driven corporations or state security apparatuses. Mainstream coverage frames this as a corporate-state rivalry, but that framing obscures how both entities prioritize centralized power over democratic oversight. The real test is whether society can assert collective control over technologies that will define governance, labor, and warfare for decades. Without structural reforms, this dispute will entrench a duopoly in which neither transparency nor the public interest prevails.

⚡ Power-Knowledge Audit

The narrative is produced by the Financial Times, a publication historically aligned with elite financial and defense interests, for an audience of policymakers, investors, and corporate elites. The framing naturalizes the idea that AI governance is a zero-sum game between state and corporate actors, obscuring the role of civil society, labor movements, and Global South communities in shaping equitable tech futures. It also distracts from how both the Pentagon and Anthropic rely on extractive data practices and military-industrial funding chains that perpetuate inequality.

📐 Analysis Dimensions

Eight knowledge lenses applied to this story by the Cogniosynthetic Corrective Engine.

🔍 What's Missing

The original framing omits the historical role of military-industrial complexes in shaping AI (e.g., DARPA’s Cold War funding), indigenous data sovereignty movements, and Global South perspectives on AI militarization. It also ignores the labor exploitation behind AI training (e.g., Kenyan content moderators) and the lack of democratic participation in defining 'control' over AI systems. Additionally, it fails to contextualize this dispute within broader patterns of tech colonialism and the privatization of public infrastructure.

An ACST audit of what the original framing omits. Eligible for cross-reference under the ACST vocabulary.

🛠️ Solution Pathways

  1. Establish a Global AI Governance Assembly

    Create a UN-backed body with equal representation from Global North and South, Indigenous groups, labor unions, and civil society to co-design AI regulations. This assembly would operate under a 'precautionary principle,' requiring proof of safety and equity before deployment, rather than post-hoc audits. It would also mandate transparency in military-corporate AI contracts and establish a fund for reparations to communities harmed by AI systems.

  2. Democratize AI Infrastructure via Public Data Trusts

    Model solutions after municipal data trusts (e.g., Barcelona’s Data Commons) to decentralize control over AI training data, ensuring communities retain ownership and consent rights. These trusts would be funded by a tax on corporate AI profits and overseen by citizen assemblies. By shifting data from extractive platforms to communal stewardship, this approach aligns with Indigenous data sovereignty principles and reduces Pentagon-Anthropic leverage over critical infrastructure.

  3. Enforce a 'Military-Industrial Divestment' Clause for AI

    Ban defense contractors from profiting from AI systems used in domestic surveillance or warfare, echoing the post-Vietnam Church Committee reforms of the 1970s. Require tech companies to divest from military contracts within five years, redirecting R&D toward civilian applications. This would break the feedback loop in which Pentagon funding shapes corporate AI priorities, as seen with Anthropic’s $3.2B DOD contract in 2023.

  4. Mandate Participatory AI Impact Assessments

    Require all high-risk AI systems (e.g., autonomous weapons, predictive policing) to undergo third-party impact assessments with mandatory input from affected communities. These assessments would build on frameworks like the Algorithmic Justice League’s scoring system but expand them to include cultural, spiritual, and historical dimensions. Failure to meet equity standards would result in bans, as with the EU AI Act’s high-risk classification.

🧬 Integrated Synthesis

The Pentagon-Anthropic dispute is not merely a corporate-state rivalry but a microcosm of a 70-year-old crisis: the militarization of innovation and its capture by unaccountable elites. The Pentagon’s demand for control over Anthropic’s AI reflects a long-standing pattern where defense budgets (e.g., $1.8B for AI in 2024) shape corporate R&D, while Anthropic’s corporate model prioritizes scalability over safety—a dynamic reminiscent of the Cold War’s 'dual-use' tech paradigm. Yet this conflict obscures deeper structural forces: the erasure of Indigenous data sovereignty, the extractive data practices underpinning both entities, and the absence of Global South voices in defining 'control.' Cross-culturally, alternatives exist—from China’s state-led AI to Europe’s rights-based regulation—but they are sidelined by a narrative that frames governance as a zero-sum game between elites. The path forward requires dismantling this duopoly through democratic infrastructure, reparative governance, and a commitment to futures where AI serves communal flourishing, not power accumulation. Without such interventions, the dispute will merely entrench a dystopian status quo where technology is weaponized against the very societies it claims to serve.
