Boris Babic and I. Glenn Cohen, The Algorithmic Explainability 'Bait and Switch', 108 Minn. L. Rev. (Forthcoming 2023).
Abstract: Explainability in artificial intelligence and machine learning ("AI/ML") is emerging as a leading area of academic research and a topic of significant regulatory concern. Indeed, a near-consensus exists in favor of explainable AI/ML among academics, governments, and civil society groups. In this project, we challenge this prevailing trend. We argue that for explainability to be a moral requirement — and even more so for it to be a legal requirement — it should satisfy certain desiderata which it currently does not, and possibly cannot. In particular, we will argue that the currently prevailing approaches to explainable AI/ML are (1) incapable of guiding our action and planning, (2) incapable of making transparent the actual reasons underlying an automated decision, and (3) incapable of underwriting normative (moral/legal) judgments, such as blame and resentment. This stems from the post hoc nature of the explanations offered by prevailing explainability algorithms. As we explain, these algorithms are "insincere-by-design," so to speak. This renders them of very little value to legislators or policymakers who are interested in (the laudable goal of) transparency in automated decision making. There is, however, an alternative — interpretable AI/ML — which we will distinguish from explainable AI/ML. Interpretable AI/ML can be useful where it is appropriate, but it involves real trade-offs in algorithmic performance, and in some instances (in medicine and elsewhere) adopting an interpretable AI/ML model may mean adopting a less accurate one. We argue that it is better to face those trade-offs head-on, rather than embrace the fool's gold of explainable AI/ML.