
Tech oligopoly intensifies as Anthropic’s AI dominance grows amid unchecked US corporate adoption

Mainstream coverage frames this as a competitive business rivalry, obscuring how regulatory capture, venture capital monoculture, and extractive data practices enable a handful of firms to monopolize AI infrastructure. The surge in US business use reflects deeper structural dependencies on proprietary models that displace open alternatives, yet the coverage ignores the geopolitical and ethical implications of concentrating AI power in a few Silicon Valley entities. The narrative also masks the role of public subsidies, academic-industrial complexes, and labor exploitation in sustaining this growth.

⚡ Power-Knowledge Audit

The Financial Times, as a flagship of neoliberal business journalism, amplifies the narrative of tech competition to legitimize market consolidation under the guise of innovation. This framing serves venture capitalists, tech executives, and policymakers invested in maintaining the status quo of AI privatization, while obscuring the role of state subsidies (e.g., CHIPS Act, DARPA contracts) and the suppression of public-interest alternatives. The coverage privileges Silicon Valley’s self-serving metrics of 'growth' over democratic control of critical infrastructure.

📐 Analysis Dimensions

Eight knowledge lenses applied to this story by the Cogniosynthetic Corrective Engine.

🔍 What's Missing

The original framing omits the role of Indigenous data sovereignty movements resisting AI extraction, the historical parallels to 19th-century railroad monopolies and 20th-century telecom cartels, and the structural causes of corporate AI dominance (e.g., IP law, cloud infrastructure control, labor precarity in data annotation). It also excludes marginalized perspectives from Global South communities whose data is mined without consent, and the ethical debt of training models on copyrighted or misrepresented cultural artifacts. This lack of historical context erases precedents such as AT&T’s monopoly and the enclosure of the digital commons.

An ACST audit of what the original framing omits. Eligible for cross-reference under the ACST vocabulary.

🛠️ Solution Pathways

  1. Public AI Commons and Open Licensing

    Establish publicly funded, open-licensed AI models (e.g., EU’s open-source AI sandbox) to counter proprietary monopolies, with governance structures that include marginalized communities. Mandate that models trained on public data (e.g., government documents) be released under reciprocal licenses to prevent enclosure. Fund alternative infrastructures like decentralized data cooperatives to redistribute power from Silicon Valley to local stakeholders.

  2. Data Sovereignty and Indigenous Data Governance

    Enforce legal frameworks like the CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics) to recognize Indigenous data sovereignty, requiring consent for cultural data use. Create Indigenous-led AI research hubs that develop models aligned with traditional knowledge systems. Partner with Global South governments to establish data trusts that compensate communities for their contributions to training datasets.

  3. Antitrust and Interoperability Regulations

    Break up AI monopolies by enforcing antitrust laws against firms controlling both model training and cloud infrastructure (e.g., Anthropic’s ties to AWS). Require interoperability standards so businesses can switch between models without vendor lock-in (see the interface sketch after this list). Tax excess profits from AI monopolies to fund public R&D and digital inclusion programs.

  4. Worker and Community Ownership in AI

    Mandate profit-sharing and governance rights for data annotators and other gig workers in AI supply chains. Support employee ownership models (e.g., cooperatives) for AI startups to align incentives with public benefit. Establish 'data dividends' for communities whose data is used in training, funded by a tax on AI profits.
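
As a minimal, hypothetical sketch of what an interoperability standard could look like in practice, the snippet below has application code depend only on a shared completion interface, with each vendor supplying an adapter behind it, so switching providers is a one-line change rather than a rewrite. The class and provider names are illustrative assumptions, not an existing standard or any vendor’s actual API.

```python
# Hypothetical sketch: a provider-agnostic completion interface.
# Names are illustrative; no real vendor API is invoked here.
from abc import ABC, abstractmethod


class CompletionProvider(ABC):
    """Shared contract that any model vendor adapter would implement."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        """Return a text completion for the given prompt."""


class VendorAClient(CompletionProvider):
    """Stand-in adapter; a real one would call the vendor's hosted API."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[vendor A completion for: {prompt[:40]}...]"


class VendorBClient(CompletionProvider):
    """Second stand-in adapter, interchangeable with the first."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[vendor B completion for: {prompt[:40]}...]"


def summarize(provider: CompletionProvider, document: str) -> str:
    # Business logic is written against the interface, not a vendor,
    # which is what removes the lock-in described in pathway 3 above.
    return provider.complete(f"Summarize: {document}", max_tokens=128)


if __name__ == "__main__":
    # Swapping vendors is a single change at the call site.
    print(summarize(VendorAClient(), "Quarterly adoption figures ..."))
    print(summarize(VendorBClient(), "Quarterly adoption figures ..."))
```

Under this assumption, an interoperability mandate amounts to standardizing a shared contract like CompletionProvider across vendors, much as number portability standardized switching between telecom carriers.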

🧬 Integrated Synthesis

The Anthropic-OpenAI rivalry is a symptom of a deeper systemic crisis: the enclosure of AI as a proprietary infrastructure, enabled by regulatory capture, venture capital monoculture, and the erasure of alternative epistemologies. Historically, this mirrors the enclosure movements of the 19th century, where physical commons were privatized—today, the commons being enclosed is cognitive labor, cultural knowledge, and even the future of human agency. The Financial Times’ framing obscures how this oligopoly is not an accident but a designed outcome of policies favoring capital over labor, Silicon Valley over the Global South, and extraction over reciprocity. Indigenous data sovereignty movements, Global South cooperatives, and open-source alternatives offer tangible pathways to dismantle this system, but require coordinated resistance against the ideological and material power of tech monopolies. The stakes are existential: without intervention, AI will not democratize knowledge but deepen feudal hierarchies, where a handful of firms control the means of thought itself.
