What if an architecture emerges that permits constant monitoring; an architecture that facilitates the constant tracking of behavior and movement. What if an architecture emerged that would costlessly collect data about individuals, about their behavior, about who they wanted to become. And what if the architecture could do that invisibly, without interfering with an individual’s daily life at all? … This architecture is the world that the net is becoming. This is the picture of control it is growing into. As in real space, we will have passports in cyberspace. As in real space, these passports can be used to track our behavior. But in cyberspace, unlike real space, this monitoring, this tracking, this control of behavior, will all be much less expensive. This control will occur in the background, effectively and invisibly.

Lawrence Lessig, “The Laws of Cyberspace,” 1998

It’s been two decades since Harvard Law School Professor Lawrence Lessig published “The Laws of Cyberspace,” which, in the words of Professor Jonathan Zittrain, “imposed some structure over the creative chaos of what maybe was a field that we’d call cyberlaw.” Lessig’s groundbreaking paper describes four types of constraints that together regulate behavior – law, social norms, the market, and architecture – and argues that because of its distinctive architecture, cyberspace differs from “real” space and is thus subject to new possibilities for control by governments and other centers of power. “The world we are entering is not a world where freedom is assured,” Lessig wrote in 1998; instead, it “has the potential to be the most fully, and extensively, regulated space in our history.”

On April 16, the Berkman Klein Center for Internet & Society hosted a special event commemorating the 20th anniversary of the publication of “The Laws of Cyberspace,” with Lessig, Harvard Law School Professors Ruth Okediji and Jonathan Zittrain, and Dr. Laura DeNardis of American University. The panelists reflected on the paper and on where the field of cyberlaw has taken us over the last two decades, and considered how some of the concerns raised in 1998 might apply today.

“I was sitting on that bench outside the Lewis building,” recollected Okediji of the day 20 years ago when she first read the paper, “and I will never forget both my sense of sheer terror that we were launching something that we had no idea where it would lead us, and then this sense of skepticism: ‘Well, how does he know he’s right?’” She explained that “The Laws of Cyberspace” led to her own work thinking about internet governance, social interaction on the net and the law. “It’s been 20 years, and Larry was right,” she said.

Lessig told the audience that the paper came in part out of a feeling of frustration. He feared that many internet enthusiasts were taking for granted that the freedom the internet allowed in 1998 was the freedom it would always allow, and he wanted to make the point that the regulability of a space is a function of its architecture and thus not guaranteed. Without deliberate interventions, the lack of regulation that so many cherished in the early days of the internet could slip away.

“The architecture of the internet as it originally was made it really hard to regulate, but you might imagine the technology evolving to persistently watch everything you’re doing and enable simple traceability,” he said. “All of these evolutions in the architecture increase the regulability of the space, and then we’d need to decide, ‘Do we like that? Do we want that?’”

Lessig explained that even in 1998, governments and private markets seemed to be interested in increasing regulability and the ability to track what people were doing for the purposes of commerce and control.

“Arrangements of technical architecture are arrangements of power,” explained DeNardis. “This often has nothing to do with governments whatsoever.” For example, the World Wide Web Consortium designs accessibility for disabled people into its protocols, she said, which is an example of how technical architecture determines public interest issues. DeNardis said that it is often hard for people without a technical background to be involved in decisions like these, but that there is currently a surge of people from beyond the technical sphere showing interest in participating in the decisions that shape our experience online and affect issues like identity and privacy. However, she said, this increase in public participation coincides with the proliferation of proprietary standards emerging from closed environments such as the Internet of Things and social media platforms.

Lessig added that as the space of innovation moves into “islands of innovation,” such as the large tech platforms like Google and Facebook, the generativity of innovations becomes contingent on each platform’s permission, creating the potential scenario where someone would choose not to create something for fear that the company would “pull the rug out.” This is an example of “how technical change and legal ownership work together to change the basic opportunity to innovate,” he said.

DeNardis made the point that while certain platforms might be islands in terms of interoperability, they are tied together in the backend by the third parties that collect and aggregate data about us. It’s important to look below the surface, she said. “That’s where a lot of the power is. The power to do things like censor LGBT people, the power to restrict people based on architecture-embedded intellectual property rights, and the power to monetize us through big data that’s aggregated with companies we’ve never even signed terms of service with.”

Okediji noted that there’s been little innovation in contract law when it comes to technology. “It’s not just that we’re missing the mark in the area of cyberspace. The regimes that surround cyberspace also have not received the attention they should,” she said, suggesting that the rules and norms around what makes a contract and the practice of “signing away all these rights with a click” might not be ideal.

“What troubles me quite significantly is that we have this 911 mentality when it comes to policy,” said Okediji. “Avoiding something in the future requires us to be thinking about it today, not tomorrow when the problem occurs.” Rather than dealing with problems only as they come up, she said, we need to ask ourselves, “What’s the vision for what cyberspace should look like 20 years from now?”