HLS Beyond & BKC present: Where do Things Stand With the White House AI Action Plan?
Professor Alan Raul will lead three sessions this spring on TechReg in AI under the Trump Administration (see the March 12th and April 9th events). This first session will examine the current U.S. federal AI governance landscape under the Administration’s July 2025 AI Action Plan and December 2025 Executive Order 14365 (“Ensuring a National Policy Framework for Artificial Intelligence”), including the Administration’s posture toward the emerging web of state AI laws.
Interfaith Engagement at HLS
The Office of Equal Opportunity (OEO) is bringing staff and students together to hear from Rabbi Getzel Davis and Abby McElroy, who lead the Interfaith Engagement Initiative in the Office of the President. Getzel and Abby will discuss how best to engage across faiths and how to create meaningful opportunities for constructive interfaith dialogue […]
Copyright’s Afterlife: Law, Legacy, and Ownership
This expert roundtable will address important aspects of the post-mortem afterlife of copyright, exploring tools and remedies for identifying and rectifying misuses of authors’ creative legacies and for ensuring that those legacies are managed in accordance with the authors’ wishes. Invited participants will include playwrights and other theater professionals, as well as scholars […]
HLS Beyond and BKC present: AI Governance and Human Alignment
In this second session of the TechReg in AI series with Alan Raul (see the April 9th event), we address how frontier AI companies assure human control and safety. AI is a potentially transformative technology that is developing substantially outside the government’s direct control. Because, under the Administration’s current AI framework, major tech companies will be largely responsible for directing and controlling the progress and governance of frontier AI, we survey how these corporate entities have set up their governance structures, instituted compliance measures (legal conformity and safety assessments, risk management frameworks), built in technical measures (evaluations, red-teaming, monitoring), and established organizational measures (risk committees, responsible scaling policies, incident response).
HLS Beyond and BKC present: Evidence-Based AI Policy
In this third and final session of the TechReg in AI series with Professor Alan Raul, we consider what constitutes an “AI incident” for policy and governance purposes. Who is monitoring and reporting such incidents? How does the concept account for foreseeable harms, near misses, and distinctions between systems performing as intended and those that are malfunctioning, maliciously compromised, or behaving in novel or unexpected ways? As we dig into today’s incident-monitoring ecosystem, we’ll discuss challenges such as underreporting, selection bias, confidentiality, and reproducibility, and how to translate scattered, anecdotal events into meaningful evidence for risk management and harm prevention.

