I. Glenn Cohen, Informed Consent and Medical Artificial Intelligence: What to Tell the Patient?, 108 Geo. L.J. 1425 (2020).

Abstract: Imagine you are a patient who has been diagnosed with prostate cancer. The two main approaches to treating it in the United States are active surveillance and the surgical option of radical prostatectomy. Your physician recommends the surgical option and spends considerable time explaining the steps in the surgery, the benefits of (among other things) eliminating the tumor, and the risks of (among other things) erectile dysfunction and urinary incontinence after the surgery. What your physician does not tell you is that she has arrived at her recommendation of prostatectomy over active surveillance based on the analysis of an Artificial Intelligence (AI)/Machine Learning (ML) system, which recommended this treatment plan based on your age, tumor size, and other personal characteristics found in your electronic health record. Has the doctor secured informed consent from a legal perspective? From an ethical perspective? If the doctor actually chose to “overrule” the AI system and failed to tell you that, has she violated your legal or ethical right to informed consent? If you were to find out that an AI/ML system was used to make recommendations about your care and no one told you, how would you feel? Well, come to think of it, do you know whether an AI/ML system was used the last time you saw a physician? This Article, part of a Symposium in the Georgetown Law Journal, is the first to examine in depth how medical AI/ML interfaces with our concept of informed consent. Part I provides a brief primer on medical Artificial Intelligence and Machine Learning. Part II sets out the core and penumbra of U.S. informed consent law and then seeks to determine to what extent AI/ML involvement in a patient’s health care should be disclosed under the current doctrine. Part III examines whether the current doctrine “has it right,” approaching the question through more openly empirical and normative lenses. To state my conclusions up front: while there is some play in the joints, my best reading of the existing legal doctrine is that, in general, liability will not lie for failing to inform patients about the use of medical AI/ML to help formulate treatment recommendations. There are a few situations where the doctrine may be more capacious, which I try to draw out (such as when patients inquire, when the medical AI/ML is more opaque, when it is given an outsized role in the final decision-making, or when it is used to reduce costs rather than improve patient health), though extending the doctrine even to these situations is not certain. I also offer some thoughts on a further question: if there is room in the doctrine (whether via common law or legislative action), what would it be desirable for the doctrine to look like when it comes to medical AI/ML? Finally, I briefly touch on how the doctrine of informed consent should interact with concerns about biased training data for AI/ML.