
POSTPONED: Artificial Intelligence, Criminal Justice, and Risk Assessment

November 14, 2022

5:00 pm - 6:30 pm


Zoom

This event has been postponed and will no longer take place on Monday, November 14.

Join the HLS Law & Philosophy Society for a fascinating talk with Professor David Boonin on “Artificial Intelligence, Criminal Justice, and Risk Assessment: The Right to an Explanation Objection to Opaque Recidivism Prediction Algorithms.” Professor Boonin is a Professor of Philosophy and Chair of the Philosophy Department at the University of Colorado Boulder, and Director of the Colorado Summer Seminar in Philosophy. A prolific author, he has published widely in ethics, the history of ethics, and applied ethics, including books on race, abortion, punishment, and posthumous harm. His talk will take place on Monday, November 14, from 5:00 to 6:30 p.m. on Zoom. Please use this link to register for the event. If you encounter any problems with registration, please reach out to us at lawphilosophy@mail.law.harvard.edu.

Here is the talk’s abstract:

Risk assessment tools like COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) use sophisticated algorithms to calculate the probability that an offender will commit an additional offense within a certain number of years of the date of the assessment. These calculations are based on information about the offender and their past conduct. Courts and parole boards frequently use such algorithms when making decisions about parole, probation, bail, and even sentencing. When the algorithms in question belong to a private corporation (as is the case with COMPAS) or are driven by advanced forms of artificial intelligence that generate unfathomably complex predictive models, the inner workings of these tools remain inaccessibly opaque to the defendants whose fate is in part determined by them. In such cases, it seems plausible to conclude that using these algorithms in these ways is morally objectionable: defendants, it seems plausible to suppose, have a moral right to receive an explanation of the reasoning that led to the decisions made in their cases, and the opacity of such algorithms prevents them from receiving one.

In this talk, I will discuss two arguments that have been offered in defense of this right to an explanation objection to using opaque risk assessment tools in these ways. The first maintains that using them for these purposes is analogous to other practices that clearly violate a defendant’s due process rights. The second maintains that using them in these ways violates a requirement of state transparency that is a necessary condition for political legitimacy. I will try to show that both arguments are unsuccessful. In addition, I will offer what I believe to be a novel argument in defense of the claim that defendants do not have a right to an explanation of the reasoning that led to the decisions made in their cases about parole, probation, bail, and sentencing. The argument is based on the claim that offenders do not have a right to have jurors explain the reasoning that led to their decision to vote to convict, and that if this is so, then they also lack the right to have courts and parole boards explain the reasoning that led to their decisions about parole, probation, bail, and sentencing. This conclusion about the merits of the right to an explanation objection may prove disturbing, but I will argue that the implications of the alternative position are even harder to accept.

Finally, Professor Boonin has kindly shared with us a few optional readings in case you’d like to learn more about this issue:
Alternatively, he has proposed two shorter pieces:
No preparation is needed to attend the talk, and Professor Boonin will not presume that you’ve read this material.
