BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Harvard Law School//NONSGML Events//EN
CALSCALE:GREGORIAN
X-WR-CALNAME:Harvard Law School - Events
X-ORIGINAL-URL:https://hls.harvard.edu/calendar/
X-WR-CALDESC:Harvard Law School - Events
BEGIN:VEVENT
UID:20260309T1342Z-1773063747.7534-EO-742932-1@10.73.7.214
STATUS:CONFIRMED
DTSTAMP:20260501T141147Z
CREATED:20260309T124422Z
LAST-MODIFIED:20260309T124422Z
DTSTART;TZID=America/New_York:20260312T122000
DTEND;TZID=America/New_York:20260312T132000
SUMMARY:A Conversation with Nate Soares on the Case for Treating AI as Exi
 stential Risk
DESCRIPTION:A Conversation with Nate Soares\, President of the Machine Inte
 lligence Research Institute and Author of If Anyone Builds It\, Everyone D
 ies. If anyone builds superintelligent AI\, everyone on Earth will die. Th
 at’s the thesis of the New York Times bestseller by Eliezer Yudkowsky an
 d Nate Soares. Named to the New Yorker’s and Guardian’s best book
 s […]
X-ALT-DESC;FMTTYPE=text/html:<p><b>A Conversation with Nate Soares\, Presi
 dent of the Machine Intelligence Research Institute and Author of </b><b><i
 >If Anyone Builds It\, Everyone Dies</i></b></p><p><i><span style="font-wei
 ght: 400\;">If anyone builds superintelligent AI\, everyone on Earth will d
 ie.</span></i><span style="font-weight: 400\;"> That's the thesis of </span
 ><span style="font-weight: 400\;">the New York Times bestseller by Eliezer 
 Yudkowsky and Nate Soares. Named to the New Yorker's and Guardian's best bo
 oks of 2025\, endorsed by figures from Stephen Fry to the former senior dir
 ector of the White House NSC\, and cited in the U.S. Congress and House of L
 ords\, the book has had a significant impact on the policy conversation aro
 und AI.</span></p><p><span style="font-weight: 400\;">But how compelling is
  the argument? In this moderated discussion\, Soares will lay out the case 
 for why the current trajectory of AI development poses an existential threa
 t — and then\, we'll have an opportunity to push back. We'll try to identif
 y the key assumptions\, surface the strongest counterarguments\, raise the 
 objections that skeptics and proponents alike are grappling with\, and stre
 ss-test the reasoning in a way that goes deeper than you'd get from just re
 ading the book. The goal is to come away with a more informed and nuanced u
 nderstanding of one of the most important questions of our time.</span></p>
 <ul><li style="font-weight: 400\;"><b>When:</b><span style="font-weight: 40
 0\;"> Thursday\, March 12\, 2026 | 12:20–1:20 PM</span></li><li style="font
 -weight: 400\;"><b>Where:</b><span style="font-weight: 400\;"> 105 Jackson 
 Meeting Room\, Hauser Hall (1st floor)</span></li><li style="font-weight: 4
 00\;"><b>Lunch:</b> Provided</li></ul>
CATEGORIES:Speaker/Panel
LOCATION:Hauser Hall\; 105 Jackson Meeting Room
ORGANIZER;CN="Hillel Ehrenreich":MAILTO:dehrenreich@jd27.law.harvard.edu
URL;VALUE=URI:https://hls.harvard.edu/events/a-conversation-with-nate-soare
 s-on-the-case-for-treating-ai-as-existential-risk/
ATTACH;FMTTYPE=image/png:https://hls.harvard.edu/wp-content/uploads/2026/03
 /Nate-Soares-event-banner-scaled.png
END:VEVENT
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20260308T020000
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20261101T020000
TZNAME:EST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR
