Harvard Law School Statement on Use of AI Large Language Models (like ChatGPT, Google Bard, and Casetext’s CoCounsel) in Academic Work, including Exams

Section V, Academic Honesty, of Harvard Law School’s Handbook of Academic Policies states: “Students are expected to abide by the highest standards of honesty and originality in their academic work and related communications and representations.”

Section V.A, Violation of Examination Rules; Dishonesty in Examinations further states:

No student is permitted to use any books, notes, papers, or electronic devices during an in-class examination except with the express permission of the instructor. Sharing of study materials, exchange of information, collaboration or communication of any kind during an in-class examination is not permitted and unless otherwise stated clearly in the examination instructions, is not permitted during a take-home examination.

Section V.B, Preparation of Papers and Other Work—Plagiarism and Collaboration states as follows:

All work submitted by a student for any academic or nonacademic exercise is expected to be the student’s own work. In the preparation of their work, students should always take great care to distinguish their own ideas and knowledge from information derived from sources. The term “sources” includes not only published or computer-accessed primary and secondary material, but also information and opinions gained directly from other people.

Section V provides notice that students violating the School’s expectations regarding academic honesty in exams, papers, or other work will be subject to disciplinary action.

Pursuant to these policies, the use of AI large language models (such as ChatGPT) in preparing to write, or writing, academic work for courses, including papers and reaction papers, or in preparing to write, or writing, exams is prohibited unless expressly identified in writing by the instructor as an appropriate resource for the academic work or exam in the instructor’s course. Instructors permitting use of generative AI outputs may require students to disclose the generative AI outputs relied upon, and further to show exactly how and where they were used. Absent express written permission from the instructor, any use of AI large language models will be considered academic dishonesty, will not be treated as the student’s own work, and will subject the student to disciplinary action and sanctions in accordance with the Law School’s Administrative Board procedures and the Statement of the Administrative Board Concerning Sanctions for Academic Dishonesty.

The Law School’s Administrative Board has provided further elaboration of the School’s academic honesty principles in its Statement of the Administrative Board Concerning Sanctions for Academic Dishonesty. The Board’s statement gives notice that “after weighing these considerations in light of the facts of each individual case, the Board has concluded in the overwhelming majority of academic dishonesty cases that the appropriate sanction is a suspension, usually for one semester.”

Please be aware that the School reserves the right to check individual exams to confirm that use of ChatGPT or other generative AI outputs is not reflected in exam answers.