Anthropic’s Mythos AI: How corporate secrecy and unchecked tech development fuel systemic cybersecurity risks

Mainstream coverage frames Mythos AI as a threat due to unauthorized access, but misses how Anthropic’s profit-driven secrecy and the tech industry’s race-to-market culture prioritize shareholder returns over public safety. The incident reflects broader systemic failures: underregulated AI development, lack of transparency in model deployment, and the erosion of democratic oversight in critical infrastructure. Without structural accountability, incidents like this will proliferate, normalizing high-risk AI systems as 'inevitable' rather than preventable.

⚡ Power-Knowledge Audit

The narrative is produced by *The Guardian*’s tech desk, amplifying Anthropic’s framing of Mythos as a 'threat' while obscuring the company’s role in creating the conditions for that threat. The framing serves Silicon Valley’s interests by positioning AI risks as external to corporate responsibility, deflecting scrutiny from profit motives, regulatory capture, and the concentration of AI power in a handful of US-based firms. It also reinforces a Western-centric view of cybersecurity, ignoring how global power imbalances shape access to—and control over—AI technologies.

🔍 What's Missing

The original framing omits the role of historical US tech dominance in shaping global AI governance, leaves out Indigenous and Global South perspectives on cybersecurity risks, and passes over the structural causes of unauthorized access (e.g., underfunded public cybersecurity infrastructure, corporate lobbying against regulation). It also sidelines marginalized voices such as gig workers and data annotators, whose labor fuels AI systems but who face exploitation. Finally, it neglects historical parallels like the 2017 WannaCry ransomware attack, which exploited vulnerabilities in outdated systems and showed how corporate negligence and state-level cybersecurity failures intersect.

🛠️ Solution Pathways

1. Mandate Open-Source Audits for High-Risk AI Models

   Require companies like Anthropic to submit their models to independent, third-party audits before deployment, with results made public. Audits should include red-teaming for adversarial attacks, bias assessments, and environmental impact evaluations. This approach, modeled on the EU AI Act’s risk-based framework, would shift oversight from corporate self-regulation to democratic institutions. Countries like Canada and Singapore have already piloted such audit models, showing that transparency does not stifle innovation but improves it.

2. Establish Global South-Led AI Governance Bodies

   Create regional governance councils in Africa, Latin America, and Southeast Asia to set AI standards tailored to local needs, rather than imposing Western-centric models. These bodies could draw on Indigenous knowledge systems, such as the *Ubuntu* philosophy in Southern Africa, which emphasizes collective well-being. Initiatives like the *African Union’s AI Policy* provide a blueprint for decentralized, culturally grounded governance that prioritizes equity over extraction.

3. Invest in Public Cybersecurity Infrastructure

   Redirect a portion of AI industry revenues (e.g., a 1% levy) to fund public cybersecurity initiatives, including open-source tools and community training programs. Models like the US’s *Cybersecurity and Infrastructure Security Agency (CISA)* could be expanded globally, with a focus on protecting critical infrastructure from AI-driven threats. This would address a root cause of unauthorized access: underfunded public systems vulnerable to exploitation by both criminals and corporations.

4. Enforce Worker Cooperative Models in AI Development

   Legislate that AI companies operating in high-risk sectors must adopt worker cooperative structures, ensuring data annotators, engineers, and ethicists have decision-making power. This aligns with the *Mondragon Corporation* model in Spain, where worker ownership has driven innovation while reducing exploitation. Such structures would give marginalized voices direct influence over AI risk decisions, shifting the power imbalance that currently enables secrecy and abuse.

🧬 Integrated Synthesis

The Mythos AI incident is not an isolated failure but a symptom of a global techno-colonial system in which profit, speed, and control eclipse safety, equity, and accountability. Anthropic’s secrecy mirrors historical patterns of corporate negligence, from the 1984 Bhopal disaster to the 2008 financial crisis, in which risks were externalized until they became catastrophes.

The absence of Indigenous and Global South perspectives in the debate reveals how Western techno-utopianism frames cybersecurity as a technical problem solvable by proprietary tools rather than a political one requiring democratic governance. Meanwhile, marginalized workers, such as data labelers in the Global South, are treated as expendable inputs in a system that prioritizes shareholder returns over their well-being. If current trajectories hold, AI-driven cyberattacks will increasingly disrupt the very systems that sustain life (food, energy, healthcare) while the architects of these risks remain insulated from consequences.

The solution lies in rebalancing power through open audits, worker cooperatives, and decentralized governance that centers the Global South and Indigenous knowledge. Only then can AI be developed in service of humanity rather than as a tool of control.
