
AI augments instructor feedback in economics education but systemic inequities persist without structural reform

Mainstream coverage frames AI as a tool to enhance instructor efficiency, obscuring how its deployment entrenches existing hierarchies in higher education. The trial shows that AI-mediated feedback works only when human graders retain control, yet neither the study nor the article interrogates why instructors are overburdened or how AI adoption might exacerbate disparities in teaching labor. The framing also ignores the broader political economy of education, in which adjunctification and underfunding create the conditions under which AI is even considered a solution.

⚡ Power-Knowledge Audit

The narrative originates in a University of Michigan Engineering study, a bastion of techno-solutionism that frames education as a problem to be optimized rather than a public good to be protected. This framing serves the interests of ed-tech corporations and university administrators seeking to reduce labor costs while maintaining the illusion of innovation. It obscures the power structures that prioritize STEM disciplines over the humanities, as well as the ways AI entrenches existing inequalities in access to quality education.

🔍 What's Missing

The original framing omits the historical devaluation of teaching labor, particularly the rise of adjunctification and the gig economy in academia. It ignores the racial and gendered dimensions of grading labor, where women and people of color are disproportionately tasked with emotional and pedagogical labor. Indigenous and Global South pedagogical traditions, which emphasize relational and holistic feedback, are entirely absent. The article also fails to address how AI systems perpetuate biases present in training data, particularly in economics where neoliberal frameworks dominate.

🛠️ Solution Pathways

  1. Redesign grading labor through cooperative models

    Implement peer-led feedback systems, where students are trained to provide high-quality, culturally responsive critiques to one another, reducing instructor burden while fostering collaborative learning. Pair this with a living wage for all instructors, including adjuncts, to address the root causes of overwork that make AI an attractive ‘solution.’ Models like Brazil’s ‘pedagogia da autonomia’ or the Danish ‘free school’ system demonstrate how shared responsibility can improve education without relying on technology.

  2. Develop AI systems grounded in anti-racist and decolonial pedagogies

    Train AI feedback tools on datasets that include diverse cultural contexts and non-Western pedagogical traditions, ensuring that the feedback provided is not biased toward dominant epistemologies. Collaborate with Indigenous scholars and educators from the Global South to co-design these systems, ensuring they align with values of relational learning and communal accountability. This approach would require shifting funding from tech corporations to public universities and community-led initiatives.

  3. Institute democratic governance of educational technology

    Create faculty-student-administrator committees to oversee the adoption of AI tools in education, ensuring that decisions are made collectively rather than imposed top-down. These committees should include representatives from marginalized groups, such as students with disabilities or first-generation college students, whose needs are often overlooked. Transparency in how AI systems are trained and deployed is essential to prevent the entrenchment of existing power structures.

  4. Invest in human-centered faculty development

    Shift institutional priorities from ‘efficiency’ to ‘effectiveness,’ investing in faculty training that emphasizes relational teaching methods over algorithmic optimization. Programs like the University of Michigan’s ‘Teaching for Equity and Inclusion’ initiative could be scaled nationally, with a focus on addressing the racial and gendered dimensions of grading labor. This would reduce reliance on AI while improving educational outcomes for all students.

🧬 Integrated Synthesis

The University of Michigan trial reveals a paradox: AI can enhance feedback when used as a tool, but its integration into education is shaped by deeper systemic failures, particularly the adjunctification of faculty labor and the neoliberal redefinition of education as a marketable service. The study’s narrow focus on technical efficiency obscures how AI adoption perpetuates inequalities, from the racialized and gendered distribution of grading labor to the erasure of Indigenous and Global South pedagogies. A systemic solution requires reimagining education as a public good, where technology serves—not replaces—human relationships, and where marginalized voices shape the systems that govern their learning. This demands not just technical fixes but a cultural shift: from viewing AI as a silver bullet to embracing it as one tool among many in a broader movement for educational justice, rooted in the wisdom of diverse traditions and the collective labor of students and educators alike.
