Benjamin Eidelson, Patterned Inequality, Compounding Injustice, and Algorithmic Prediction, 1 Am. J.L. & Equality 252 (2021).
Abstract: If whatever counts as merit for some purpose is unevenly distributed, a decision procedure that accurately sorts people on that basis will “pick up” and reproduce the pre-existing pattern in ways that more random, less merit-tracking procedures would not. This dynamic is an important cause for concern about the use of predictive models to allocate goods and opportunities. In this article, I distinguish two objections that give voice to that concern in different ways. First, decision procedures may contribute to future social injustice and other social ills by sustaining or aggravating patterns that undermine equality of status and opportunity. Second, the same decision procedures may wrong particular individuals by compounding prior injustices that explain those persons’ predicted or actual characteristics. I argue for the importance of the first idea and raise doubts about the second. In normative assessments and legal regulation of algorithmic decisionmaking, as in our thinking about anti-discrimination norms more broadly, a central concern ought to be the prospect of entrenching harmful and unjust patterns—quite apart from any personal wrong done to the individuals about whom predictions are made.