One evening in early November 1988, a Ph.D. student from Cornell let loose a computer worm on ARPANET, a precursor to the internet.
The malware, which had been programmed to replicate itself, infected thousands of computers on the fledgling network — an estimated 10% of those connected at the time — crashing or disabling machines and prompting many other users and institutions to go offline to prevent further attacks.
The worm’s creator would later tell prosecutors that his intent had not been malicious, but rather experimental, and that his goal had been to probe security weaknesses of the burgeoning internet. For his efforts, he received three years of probation, several hundred hours of community service, and a fine. But the misguided test is remembered for another reason: It jolted early internet proponents into recognizing the need for organized cybersecurity defenses and stronger laws against computer crimes.
Nearly four decades later, as companies and organizations introduce a new technology with similar transformative power, artificial intelligence, into the rhythms of everyday life — at the checkout counter, at the doctor’s office, in the courtroom, and even on the battlefield — experts are wondering what fresh dangers AI might pose, and whether American law is fully prepared to reckon with them.
“We need to understand how best to assure that AI remains a safe and productive technology, one that we don’t lose control of, or is not deployed in ways that are catastrophic,” says Lawrence Lessig, the Roy L. Furman Professor of Law and Leadership at Harvard.
Those concerns are at the heart of a course Lessig created with Jack Goldsmith, the Learned Hand Professor of Law. Lessig says that the aim of their fall 2025 semester course, Legal Architecture for AI National Security Contingencies, was to explore the capacity of existing and future law to address problems caused by AI, without stifling innovation or running afoul of free speech and other protections.
Lessig sees several possible areas of national concern related to AI, a category that includes everything from chatbots such as ChatGPT to algorithms that control e-commerce, self-driving technology, medical imaging analysis, financial fraud detection, cybersecurity systems, and much more.
“There are threats we could call ‘bad man’ threats, which are those posed by people who gain access to the technology to deploy it for terrible, catastrophic purposes, such as bioweapons, or to disrupt economies, or to conduct massive fraud efforts,” he says. “But the other big concern is what technologists refer to as the ‘runaway threat’ — the idea that as technologies become super intelligent, our capacity to control them can’t be taken for granted.”
Jonathan Zittrain ’95, the George Bemis Professor of International Law, who worked with Goldsmith and Lessig on planning the course, says the time to start thinking about these issues is now.
“There is no agreement among experts on AI’s current capabilities or its near-term trajectory, much less its longer-term developments and impacts,” Zittrain says. “Many of the most difficult and important questions about AI aren’t owned by anyone right now: how risky certain implementations and courses of development are; who should bear those risks, when they can affect everyone; how much we should anticipate and try to forestall problems versus seeing what develops and responding as more certainty settles in.”
As China, the U.S., and other countries around the world jockey to develop the most advanced technologies, Lessig sees parallels with the nuclear arms race of the previous century. But he worries that nonproliferation might be even more difficult to achieve with AI, given that it is being advanced not only by nation-states, but also by private actors.
“With nuclear weapons, we understood what the threat was, and we understood how to identify who had them, and what it looked like to protect against unintended launches,” Lessig says. “With this technology, we don’t know what the risks are, and we don’t know how to control them — or even if we can control them.”
Zittrain says that above these questions hover meta concerns about how governments can and should respond. But he is confident that Harvard Law students are well positioned to help find the answers.
“Our students are among many who could play a vital role in weighing the risks and benefits and in coming up with creative ways to seek the best of the technology while avoiding regrettable surprises,” says Zittrain.
Lessig and Goldsmith built their course around three key areas of law: cybersecurity, lawful governmental surveillance, and preparedness and response authorities in times of crisis. Harvard Kennedy School Professor Jake Sullivan also helped teach several classes, bringing an invaluable perspective from his tenure as national security adviser under President Joe Biden.
For students Rivka Kosowsky ’27 and Sophia Heimowitz ’27, the course was an exciting opportunity to study under Lessig and Goldsmith — “both big institutional thinkers,” says Kosowsky — and a way to learn more about a rapidly emerging field.
“One of the reasons that I’m excited to be in law school at this specific point in time is because so much is changing in the law and in society, and I’m trying to take every opportunity that I can to learn about these changes as they actively unfold,” Kosowsky says.
Heimowitz says that she appreciated the variety of perspectives shared by other students, the instructors, and the guest speakers who joined each class.
While some classmates were intimately familiar with AI, others, like her, were relative novices before taking the class. She says she came away with a better idea of how AI works, what laws already apply, and where there might be serious gaps in regulation.
“I am particularly interested in issues of authenticity and authentication, and what happens when AI is able to create forgeries that are indistinguishable,” Heimowitz says. “What can you preemptively do to protect people’s trust in institutions, when they can’t tell if something is real?”
While AI may feel as though it is one easily identifiable thing, Heimowitz says that the course made clear that the technology, and the problems it could unleash, are much more heterogeneous than many assume.
“I can see now that [the] AI problem is very disparate and diverse, and we will need to balance civil liberties with new and existing regulations if we want to tackle the problems ahead,” she says. “I came out motivated that this is an issue that people need to care about.”
Perhaps that is why Kosowsky believes that it is not a question of if, but how, AI will factor into her future career — and that of most of her law school peers.
“AI is everywhere, and will continue to be everywhere, and so I don’t think that we will be able to avoid working in this space in the future,” she says.
Lessig says he and Goldsmith, who plan to offer the course again, hope that students come away with a “context in which they can absorb what they need to learn to be able to attack the problems that they’ll need to attack.”
These problems might come sooner than we’d like, Lessig warns. “And right now, we’re not ready to deal with the threat from this generation of technology.”