BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Harvard Law School//NONSGML Events//EN
CALSCALE:GREGORIAN
X-WR-CALNAME:Harvard Law School - Events
X-ORIGINAL-URL:https://hls.harvard.edu/calendar/
X-WR-CALDESC:Harvard Law School - Events
BEGIN:VEVENT
UID:20250922T1914Z-1758568497.9808-EO-724120-1@10.73.9.45
STATUS:CONFIRMED
DTSTAMP:20260424T034928Z
CREATED:20250922T185151Z
LAST-MODIFIED:20250922T185151Z
DTSTART;TZID=America/New_York:20251001T123000
DTEND;TZID=America/New_York:20251001T133000
SUMMARY:Belief\, Uncertainty\, and Truth in Language Models
DESCRIPTION:What does it mean for a language model to “know” something—and
  how should it communicate uncertainty to the people who use it? In this ta
 lk\, Jacob Andreas\, Associate Professor of Electrical Engineering and Comp
 uter Science at MIT\, will explore new approaches to building language mode
 ls that not only model the world but also model themselves. […]
X-ALT-DESC;FMTTYPE=text/html:<p>What does it mean for a language model to
  “know” something—and how should it communicate uncertainty to the people wh
 o use it? In this talk\, Jacob Andreas\, Associate Professor of Electrical 
 Engineering and Computer Science at MIT\, will explore new approaches to bu
 ilding language models that not only model the world but also model themsel
 ves.</p><p>Andreas will show how optimizing for coherence and calibration—b
 eyond accuracy alone—can produce models that are both more factually consis
 tent and more reliable in expressing confidence. These advances raise press
 ing questions for governance: How should large models present information? 
 What standards should guide their expression of reliability or doubt?</p><p
 >Moderated by Josh Joseph\, the Berkman Klein Center’s Chief AI Scientist\,
  this conversation will situate cutting-edge technical work on belief and u
 ncertainty in language models within the wider debates about interpretabili
 ty\, trust\, and the responsible use of AI. Bridging computer science and
  the broader societal questions it raises\, the conversation will conclude
  by examining the policy implications of these complex issues.</p>
 <h3>Speakers</h3><p><strong>Jacob Andreas</strong> is an associate professo
 r at MIT in EECS and CSAIL. He did his PhD work at Berkeley\, where he was 
 a member of the Berkeley NLP Group and the Berkeley AI Research Lab. His re
 search aims to understand the computational foundations of language learnin
 g\, and to build general-purpose intelligent systems that can communicate e
 ffectively with humans and learn from human guidance.</p><p><strong>Josh Jo
 seph</strong> is Berkman Klein’s Chief AI Scientist\, the first role of
  its kind at BKC. Josh leads teams that explore measuring and controlling
  the agency of AI systems\, benchmark these systems beyond measures of “in
 telligence\,” and build the infrastructure for in-house AI research. In ad
 dition to being BKC’s Chief AI Scientist\, Josh currently holds appointmen
 ts as a Visiting Scientist at MIT and a Lecturer on Law at Harvard Law Sch
 ool.</p>
CATEGORIES:Speaker/Panel
LOCATION:Berkman Klein Multipurpose Room (Room 515)
ORGANIZER;CN="Jessica Weaver":MAILTO:jweaver@law.harvard.edu
URL;VALUE=URI:https://hls.harvard.edu/events/belief-uncertainty-and-truth-i
 n-language-models/
ATTACH;FMTTYPE=image/png:https://hls.harvard.edu/wp-content/uploads/2025/0
 9/1Speaker_16x9.png
END:VEVENT
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20250309T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
TZNAME:EDT
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20251102T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
TZNAME:EST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR
