OpenAI’s closed-access biology LLM entrenches corporate control over life sciences data, deepening extractive AI monopolies

Mainstream coverage frames GPT-Rosalind as an innovation in biological research, obscuring how it centralises proprietary control over biological knowledge and entrenches OpenAI’s monopoly over AI-driven science. The closed-access model risks privatising publicly funded biological data while reinforcing Silicon Valley’s extractive approach to the life sciences. This reflects broader patterns of techno-feudalism in AI, where corporate entities gatekeep critical tools behind paywalls, limiting global collaboration and stalling the democratisation of scientific progress.

⚡ Power-Knowledge Audit

The narrative is produced by Ars Technica, a tech-focused outlet that often amplifies Silicon Valley’s framing of AI as a neutral, progressive force. This framing serves the interests of OpenAI and its investors by normalising closed-access AI models as inevitable and beneficial, while obscuring the power asymmetries they create. The framing also aligns with the broader tech industry’s push to position itself as the sole arbiter of scientific progress, marginalising public institutions and open science movements.

📐 Analysis Dimensions

Eight knowledge lenses applied to this story by the Cogniosynthetic Corrective Engine.

🔍 What's Missing

The original framing omits the historical context of corporate enclosure of scientific knowledge, such as the privatisation of genetic data through patents and the legacy of colonial biopiracy. It also ignores the role of indigenous knowledge systems in biological research, which have long contributed to biodiversity conservation and medicinal breakthroughs. Additionally, the framing neglects the structural inequities in AI development, where Global South researchers are often excluded from access to cutting-edge tools due to cost and licensing barriers.

🛠️ Solution Pathways

  1. Open-Source Alternatives for Biological AI

    Develop and fund open-source AI models for biology, such as those led by the European Bioinformatics Institute or the African Centre of Excellence for Genomics. These models prioritise transparency, reproducibility, and global collaboration, ensuring that scientific progress is not gatekept by corporate entities. Open-access frameworks also enable researchers from low-resource settings to contribute to and benefit from AI-driven discoveries.

  2. Data Sovereignty and Indigenous Data Governance

    Implement frameworks for Indigenous data sovereignty, such as the CARE Principles for Indigenous Data Governance, to ensure that biological data derived from Indigenous communities is controlled by those communities. This includes mechanisms for benefit-sharing, informed consent, and the right to withdraw data. Collaborative projects, such as the Global Indigenous Data Alliance, provide models for how this can be operationalised in practice.

  3. Public Funding for Open Science Infrastructure

    Redirect public funding from proprietary AI models to open science infrastructure, such as cloud-based research platforms and open-access journals. Governments and philanthropic organisations can incentivise this shift by requiring open-access publication and data sharing in grant agreements. Initiatives like the Chan Zuckerberg Initiative’s support for open-source tools demonstrate how public funding can drive systemic change.

  4. Global South-Led AI Research Consortia

    Establish consortia led by researchers from the Global South to develop AI tools tailored to local health and environmental challenges. These consortia can leverage open-source models and prioritise data sovereignty, ensuring that solutions are contextually relevant and equitable. Partnerships with institutions like the African Academy of Sciences or the Latin American Council of Social Sciences can provide the necessary infrastructure and support.

🧬 Integrated Synthesis

The launch of GPT-Rosalind exemplifies the techno-feudalisation of science, where a Silicon Valley corporation centralises control over biological knowledge under the guise of innovation. This closed-access model perpetuates historical patterns of enclosure, from colonial biopiracy to the patenting of genetic resources, while sidelining Indigenous and Global South perspectives that have long championed collaborative and communal approaches to knowledge. Scientifically, the proprietary framework risks limiting reproducibility and transparency, undermining the very progress it claims to accelerate. Future pathways must prioritise open-access alternatives, Indigenous data governance, and public funding for equitable infrastructure to prevent a future where a handful of corporations dictate the terms of biological innovation. The systemic insight is clear: without structural intervention, AI-driven science will deepen inequities rather than democratise access, echoing the failures of past enclosure movements.