Dharma Frederick (’06) and Barbara Taylor lead this session on Ethical AI, built around a new CLE requirement for lawyers at DLA Piper and designed in collaboration with Casetext. In this workshop you will learn about real, trustworthy applications of generative AI, including legal research, document review, and contract analysis.

In this second session of the TechReg in AI series with Alan Raul (see April 9th), we address how frontier AI companies assure human control and safety. AI is a potentially transformative technology that is developing substantially outside the government’s direct control. Because, under the Administration’s current AI framework, major tech companies will be largely responsible for directing and controlling the progress and governance of frontier AI, we survey how these corporate entities have set up their governance structures, instituted compliance measures (legal conformity and safety assessments, risk management frameworks), built in technical measures (evaluations, red-teaming, monitoring), and established organizational measures (risk committees, responsible scaling policies, incident response).

In this third and final session of the TechReg in AI series with Professor Alan Raul, we consider what constitutes an “AI incident” for policy and governance purposes. Who is monitoring and reporting such incidents? How does the concept account for foreseeable harms, near misses, and distinctions between systems performing as intended and those that are malfunctioning, maliciously compromised, or acting in novel or unexpected ways? As we dig into today’s incident-monitoring ecosystem, we’ll discuss challenges such as underreporting, selection bias, confidentiality, and reproducibility, and how to translate scattered, anecdotal events into meaningful evidence for risk management and harm prevention.