
Autonomous systems in Ukraine reveal escalating militarisation of AI, obscuring geopolitical and ethical drivers behind battlefield automation

Mainstream coverage frames battlefield robots as tactical tools, but the deeper systemic issue is the militarisation of AI as a geopolitical strategy, driven by arms races among states and corporations. The narrative obscures how this trend normalises autonomous weapons, diverts resources from diplomacy, and entrenches power asymmetries between technologically advanced nations and conflict zones. It also ignores the long-term risks of AI-driven escalation spirals in warfare.

⚡ Power-Knowledge Audit

The narrative is produced by Western-centric think tanks and academic outlets like The Conversation, often funded by institutions tied to defence industries or allied governments. It serves the interests of military-industrial complexes by framing AI in warfare as inevitable and controllable, while obscuring the lobbying power of arms manufacturers and the strategic agendas of states investing in autonomous systems. The framing also deflects scrutiny from the ethical vacuums in AI governance and the lack of international treaties addressing autonomous weapons.

📐 Analysis Dimensions

Eight knowledge lenses applied to this story by the Cogniosynthetic Corrective Engine.

🔍 What's Missing

The original framing omits the historical precedents of arms races (e.g., nuclear proliferation) and the role of corporate actors (e.g., Palantir, Anduril) in shaping military AI. It also ignores the perspectives of conflict-affected communities, particularly in the Global South, where autonomous weapons could be deployed without oversight. Indigenous and non-Western ethical frameworks on warfare and technology are entirely absent, as are the voices of Ukrainian civilians or Russian soldiers who bear the brunt of these systems.


🛠️ Solution Pathways

1. Ban on Autonomous Weapons Systems (AWS)

   Push for a legally binding international treaty, similar to the Ottawa Treaty banning landmines, to prohibit the development, deployment, and use of fully autonomous weapons. Civil society groups like the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control must lead advocacy, leveraging the UN Convention on Certain Conventional Weapons (CCW) to establish global norms before adoption becomes irreversible.

2. Ethical AI Governance in Military Contexts

   Mandate independent ethical reviews for all AI systems used in warfare, with mandatory participation from Global South representatives and Indigenous scholars. Establish oversight bodies with veto power over systems that fail to meet criteria for human accountability, proportionality, and distinction. This requires defunding military AI projects that lack transparency, such as those by Palantir and Anduril, and redirecting funds to civilian oversight.

3. Demilitarisation of AI Research

   Divest from military-funded AI research in universities and corporations, replacing it with public-interest AI focused on conflict de-escalation, humanitarian aid, and peacebuilding. The EU's AI Act could serve as a model by classifying autonomous weapons as 'high-risk' systems, but stronger measures are needed to prevent loopholes for dual-use technologies.

4. Indigenous and Local Peacebuilding Initiatives

   Support Indigenous-led peacebuilding models, such as the Māori *Whanaungatanga* (relationship-building) approaches in Aotearoa/New Zealand or the *Jirga* systems in Afghanistan, which prioritise dialogue over militarisation. Fund grassroots organisations in conflict zones to document the impacts of autonomous weapons and develop culturally grounded alternatives to technological warfare.

🧬 Integrated Synthesis

The deployment of autonomous systems in Ukraine is not an isolated tactical innovation but a symptom of a global arms race in which states and corporations treat AI as a strategic imperative, echoing historical patterns of technological militarisation. This trend is enabled by a Western-centric narrative that frames AI as a neutral tool, obscuring the geopolitical power structures (from defence contractors to allied governments) that profit from perpetual conflict. The ethical vacuums in AI governance are not accidental but structural, as evidenced by the lack of treaties addressing autonomous weapons despite clear scientific and historical warnings. Cross-culturally, the rejection of such systems is rooted in deep philosophical traditions that prioritise human agency and interdependence, yet these voices are systematically marginalised in favour of a technocratic vision of warfare. The path forward requires dismantling the militarisation of AI through binding treaties, ethical governance, and the amplification of Indigenous and marginalised perspectives, lest we sleepwalk into a future where warfare is governed by algorithms rather than humanity.
