This article is part of the special theme on Algorithmic Reparation. The full list of articles in the theme is available at: https://journals.sagepub.com/page/bds/collections/Algorithmic%20Reparation
Introduction
At the time of this writing, two movements arise in tandem: algorithmic justice and justice through reparations. The former targets embedded inequities baked into algorithmic systems. The latter seeks redress for historical wrongs and their enduring effects.
First introduced in 2021, algorithmic reparation is a structural and historical orientation toward algorithms in society, committed to correcting inequities through material and symbolic redress (Davis et al., 2021). It is an alternative to algorithmic fairness and its application via fair machine learning (FML), a paradigm that strives for objectivity by removing, bracketing, and otherwise neutralizing the effects of social categories (Corbett-Davies et al., 2023). Algorithmic fairness and algorithmic reparation share a critique of computing as a vector of injustice; how to address that injustice is the point of departure. Algorithmic fairness assumes a basic meritocracy that has been distorted by deficient computer programs, a problem to be resolved through better data, better models, and better math. From a reparative perspective, however, meritocracy is a myth, one that protects, conceals, and sustains endemic stratification.
The debate between algorithmic fairness and algorithmic reparation is more than a conceptual disagreement. These are competing frames with implications for both thought and action. As a framing device, algorithmic fairness seeks neutrality by erasing differences and treating everyone the same. In contrast, algorithmic reparation pays attention to difference, surging resources to the margins. While fairness is firmly located in the present, reparation is temporally fluid—recognizing a present that is shaped by the past and devising interventions that alter the future.
Algorithmic reparation is thus a political project of worldbuilding. By worldbuilding, we refer to active efforts that (re)shape the background conditions of social life—those that scaffold interpersonal, organizational, and institutional arrangements in ways that affect experiences, relationships, resources, and opportunities. This embodies the basic sociological premise that social order is never predetermined or inherent, but collectively constituted and achieved.
The time for a worldbuilding project is now, and algorithmic reparation is well-suited to the task. New developments in generative AI have captured public and policy attention across international bodies, resulting in both targeted and sweeping legislation such as the EU AI Act, Biden's White House Blueprint for an AI Bill of Rights, its active displacement by the AI policies of a new Trump administration, and the UN's Resolution on Artificial Intelligence—which together place boundaries around and set standards for AI as tools of both industry and the state. Simultaneously, bills for reparation circulate through US state legislatures (Brooks, 2024; Carol Ammons and Illinois General Assembly, 2023; Sanders, 2023). These sit in contest and tension with aggressive efforts to enshrine colorblind logics by, for example, restricting lessons on structural racism within K-12 schools (Alexander et al., 2023) and dismantling university programs for diversity, equity, and inclusion (Flannery, 2024). Meanwhile, wars rage in which AI figures prominently, aiding and justifying violent acts by powerful nations that threaten the existence of their foes (Suchman, 2024). These are “unsettled times” (Swidler, 1986), in which norms and practices are uncertain and ill-formed, such that futures are malleable and subject to reimagining. We submit algorithmic reparation as a mechanism of future-making, integrating movements that are long-standing and robust.
Algorithmic justice and reparative justice
Algorithms are political, as are the AI applications and machine learning models through which algorithms operate. The combined work of investigative journalists, academic researchers, and advocacy groups shows an inextricable tie between societal patterns and technological developments. An entire genre of “harms” literature has emerged, revealing social problems, especially social inequities, perpetuated and intensified through algorithmic systems (Shelby et al., 2023). Welfare recipients are penalized by automated decision tools (Eubanks, 2018), machine vision can’t read dark skin (Buolamwini, 2023), Black women are algorithmically filtered out of online dating markets (Williams, 2024), people with disabilities are rendered suspect through remote test proctoring software (Swauger, 2020), and environmental damages from resource-intensive computing concentrate in places with existing economic and environmental stressors, while benefits stream toward regions that are already well-off (The AI Threats to Climate Change Report, 2024).
That algorithmic systems pose a social justice problem is now axiomatic, motivating various efforts toward mitigation. The field of FML has been leading the charge, developing models to reduce algorithmic bias (Corbett-Davies et al., 2023). Those efforts have been largely ineffective, hampered by techno-solutionism and ahistorical foundations, along with false assumptions of basic meritocracy (which we have elsewhere termed
Reparation is a broad term that refers to recompense, or “repair” for damages borne by individuals and groups at the hands of other individuals and groups. Historian Olúfémi Táíwò distinguishes between two broad takes on reparation:
A constructivist take on reparations is messier than its direct counterpart. This perspective recognizes the ways interwoven social currents accumulate and compound, with causal factors that are difficult to isolate, quantify, and count (Táíwò, 2022). Evoking and expanding a health-disparities metaphor, Benjamin describes structural inequity in terms of harsh weather—an atmospheric effect that soaks, blisters, and wears (Benjamin, 2022). The constructivist approach to reparation is expansive, resisting and exceeding direct lines of blame and recompense, focused instead on improving the climate. Such improvements cannot be achieved by critique alone but require positive proposals for social betterment. Encapsulating both the expansive nature of constructivist reparation and its imperative for deliberate change, Táíwò (2022) challenges us to consider
Heeding Táíwò's call, algorithmic reparation adopts a primarily constructivist position. Through the prism of algorithmic systems, we consider how worlds can be remade at the intersection of computing and society—and how algorithms entangle with laws, institutions, social movements, and revolutions. Within this, direct reparations also hold purpose, situated as one part of a multifaceted whole.
A research program and political project
In 2021, we introduced algorithmic reparation as a counter to the fairness paradigm, building from, and in solidarity with, burgeoning critiques of fairness from various corners of critical data studies (Hanna et al., 2020; Hoffmann, 2019; Mohamed et al., 2020). Algorithmic reparation was our positive proposal, one way to turn critique into action. Such a turn is never accomplished in one shot, nor done alone. In the spirit of developing the concept into something greater, we convened a workshop in 2022 with colleagues from academia, industry, and advocacy groups. Through a series of panels, discussions, and design exercises, we asked participants to scrutinize, interrogate, play with, apply, expand, and refine algorithmic reparation. We then invited those participants to use the workshop as inspiration for scholarly work, bringing their particular expertise to algorithmic reparation across theoretical models and empirical domains. We collect those pieces here, setting the terms and foundations for an ongoing agenda.
A program in progress: overview of the special theme articles
The issue opens with work by Eglash et al. (2024), writing about “artisanal labor” as a site of “generative justice.” In
Two papers in the issue address racial disparities in home ownership as a pernicious and enduring problem now enmeshed with algorithmic systems. Zhang's (2023)
Tracey and Garcia (2024) also focus on housing but do so through the lens of public services for people experiencing homelessness.
Two articles in the collection focus on health and healthcare, though the methodological, empirical, and theoretical approaches vary. In
Big Tech is another topic of concern within this collection. Rakova et al. (2023) focus on enthusiastic and meaningful consent in their paper
Many of the ideas that feature in this collection of articles implicitly or explicitly require legal interventions to support and enforce sociotechnical change. Taking the legal question head on, Doyle et al. (2024) probe the conditions under which algorithmic reparation can be effectively deployed. In
Onward
The overlaps and tensions that emerge within and across papers in this collection highlight the multifaceted nature of algorithmic reparation as both a program of research and a political project. They also demonstrate gaps, frictions, and a need for theoretical refinement. We are equally energized by the advances generated through the collection and by the new challenges those advances sow. The next step is to mold a comprehensive conceptual structure, carving out its parameters and exploring modes of application. That work is in progress, with a book on the way that depends, fundamentally, on the body of work represented herein (Davis and Williams, Forthcoming).
