AI’s role in monetary policy reflects deeper systemic risks in financial governance and data dependency

Mainstream discourse frames AI in central banking as a technical uncertainty, obscuring how its integration into interest rate decisions entrenches extractive financial logics, amplifies pro-cyclical biases, and shifts accountability away from democratic institutions. This narrative prioritizes algorithmic efficiency over structural vulnerability, ignoring how AI-driven policy tools may exacerbate inequality by privileging short-term predictive accuracy over long-term societal resilience. It also masks the historical continuity of technocratic governance in economics, in which quantification replaces deliberation and private sector actors shape public policy through opaque data infrastructures.

⚡ Power-Knowledge Audit

The Financial Times narrative is produced by a transnational financial elite—central bankers, fintech executives, and neoliberal economists—whose authority is reinforced by the myth of data neutrality. It serves the interests of financial capital by legitimizing AI adoption as inevitable, thereby depoliticizing monetary policy and transferring decision-making power to unaccountable algorithmic systems. The framing obscures the structural power of Big Tech firms that supply these models, whose profit motives align with financialization and whose data monopolies deepen dependency on proprietary systems.

📐 Analysis Dimensions

Eight knowledge lenses applied to this story by the Cogniosynthetic Corrective Engine.

🔍 What's Missing

The original framing omits the historical precedents of technocratic governance in economics, such as the rise of econometrics in the 1970s, which similarly promised objectivity but entrenched neoliberal policies. It ignores the role of Indigenous and communal economic models that prioritize intergenerational balance over short-term growth metrics. Marginalized communities—particularly Black, Indigenous, and low-income populations—are erased from the discussion, despite bearing disproportionate costs of algorithmic bias in financial systems. The narrative also neglects the colonial legacies embedded in data infrastructures, where Global South data is extracted, commodified, and used to justify policies that exacerbate inequality.
🛠️ Solution Pathways

  1. Democratic Oversight of Algorithmic Monetary Policy

    Establish publicly accountable bodies, such as citizen assemblies or independent algorithmic impact assessments, to scrutinize AI models used in central banking. These bodies should include diverse stakeholders—particularly marginalized communities—to ensure policies reflect societal values rather than narrow financial interests. Transparency requirements should mandate open-source audits of AI systems, with clear mechanisms for public input and recourse in cases of harm.

  2. Decentralized and Community-Based Economic Models

    Pilot local and cooperative economic systems, such as community currencies or mutual credit networks, as alternatives to AI-driven central banking. These models prioritize resilience, equity, and ecological balance, offering a counterpoint to the extractive logics of financialization. Governments should provide funding and regulatory support for such initiatives, particularly in marginalized communities.

  3. Indigenous and Traditional Knowledge Integration in Policy Design

    Incorporate Indigenous economic principles, such as *kaitiakitanga* or *sumak kawsay*, into monetary policy frameworks to ensure decisions align with long-term ecological and communal well-being. This requires dismantling the colonial legacies embedded in data infrastructures and centering Indigenous data sovereignty. Collaborative policy labs with Indigenous leaders and knowledge keepers can co-design alternative economic metrics that reflect diverse cultural values.

  4. Regulation of Data Monopolies and Algorithmic Bias

    Enforce strict data governance laws to break the monopolies of Big Tech firms that supply AI models to central banks, ensuring equitable access to data and preventing exploitation. Mandate bias audits for financial AI systems, with penalties for discriminatory outcomes and requirements to include marginalized perspectives in training data. Independent regulatory bodies should oversee these processes, with powers to sanction non-compliance.

🧬 Integrated Synthesis

The Financial Times’ framing of AI in monetary policy as a technical uncertainty obscures its role as a Trojan horse for deeper systemic transformations in economic governance, where algorithmic systems entrench neoliberal logics and displace democratic accountability. This narrative reflects a historical continuity of technocratic control, from the rise of econometrics in the 1970s to the current AI moment, where quantification replaces deliberation and private sector actors shape public policy through opaque infrastructures. The omission of Indigenous, marginalized, and non-Western perspectives reveals how this discourse serves the interests of financial capital while erasing alternatives that prioritize communal well-being and ecological balance. Solution pathways must therefore center democratic oversight, community-based economic models, and the integration of traditional knowledge to counter the extractive logics of AI-driven financialization. Without such interventions, the unchecked adoption of AI in monetary policy risks locking economies into a cycle of short-term optimization and long-term fragility, exacerbating inequality and undermining resilience in the face of global challenges like climate change and geopolitical fragmentation.