Should YouTube be held to account if its algorithms recommend videos that promote violent and radical content, leading to a deadly terrorist attack? Must Twitter act more quickly to remove extremist posts on its platform — or be held liable in some way for killings inspired by the content?

Since 1996, a law known as Section 230 of the Communications Decency Act has allowed tech companies to avoid liability for content that users create and post on their platforms. Now, Section 230 might be on the chopping block in a pair of cases being heard by the Supreme Court, Gonzalez v. Google and Twitter v. Taamneh. Both involve lawsuits against tech companies for hosting — and even, thanks to display algorithms, promoting — content that plaintiffs claim radicalized people and led them to commit acts of terrorism that resulted in the deaths of their loved ones. In response, the tech companies argue they are shielded from liability under Section 230.

In advance of oral arguments next week, Harvard Law Today spoke with John Palfrey ’01, a former executive director of the Berkman Klein Center for Internet & Society and visiting professor of law at Harvard. Palfrey, who is the President of the John D. and Catherine T. MacArthur Foundation, has long studied technology and its impact on society, and has written critically about Section 230 and its potential to facilitate harm to children and others.

He says that the pair of cases, along with two others the Supreme Court may opt to hear next term, NetChoice v. Paxton and Moody v. NetChoice, could be “the most consequential Supreme Court cases related to the internet in the technology’s history.”


Harvard Law Today: Before we talk about the cases currently before the Supreme Court, could you tell me why Congress felt Section 230 was necessary?

John Palfrey: This is a debate that has been going on for more than 25 years — it’s kind of amazing that it’s been more than a quarter century. It dates back to the discussions in and around the 1996 Telecommunications Act, and the discussion was in part about including a Good Samaritan provision in the law that asked, ‘How can we ensure that technology platforms can do the right thing when it comes to harmful and potentially harmful content on their services, and how can we do that without creating a liability regime that would make it too hard for them to operate?’ That same balance we have been seeking, between innovation on the one hand and the ability to help individuals and to vindicate a wide range of rights on the other, has been at the heart of the debate since before this became law.

HLT: More than a decade ago, you advocated for modifications to Section 230 to better protect minors in your book with Urs Gasser, “Born Digital: Understanding the First Generation of Digital Natives.” Why did you think that exception was necessary?

Palfrey: From the start, I have felt that Section 230 immunity for tech platforms is just too broad. I understand the impulse of wanting to ensure that innovation can go forward, and that technology platforms are in a hard spot when it comes to the very large scale that they operate on. I’m not unsympathetic to their concerns, but I also think that the law was drafted way too broadly.

Specifically in the book “Born Digital,” we looked at the topic of child protection. And there were some cases around that time that I think made the issue plain to us: if a child had been abducted, and the platform had been the place where the encounter had been set up, there was no way for law enforcement to get access to information they needed from the platforms. It seemed like child protection, just as one example, would be an area in which you wouldn’t want the law to preclude the important work of the government from going forward, or to keep families from getting redress after something bad had happened. You see a version of that, of course, in one of the two cases before the Supreme Court in this instance.

“I understand the impulse of wanting to ensure that innovation can go forward, and that technology platforms are in a hard spot … but I also think that the law was drafted way too broadly.”

HLT: What are the platforms’ strongest arguments against liability in the current cases before the Court?

Palfrey: I think the strongest overall argument has been a policy argument — that society benefits more from the innovation that has come from these technology companies than from making them liable. That is what I call the “cyberlibertarian” argument, and there are strains of that in the amicus briefs filed by some of the think tanks in these cases. It’s essentially a liberty argument, and that’s a serious and legitimate argument in United States law and jurisprudence.

I think there’s also a practical argument, which is to say, it’s very hard for a tech platform to have actual knowledge of things happening, given how big its universe is and the amount of content it would have to process and so forth. So, holding the technology companies liable for things that they plausibly don’t know about at that moment — that feels like a hard challenge for them.

HLT: What about those who are seeking to hold them liable?

Palfrey: In very simple terms, I think the technology platforms have been held to a different standard than lots of other kinds of companies when it comes to tort liability. And it’s unclear to me at this point in history why we would let the most powerful, wealthiest companies have a free ride, when we have background law, tort liability being just one example, that applies to everybody else. It does not make sense that just by virtue of being a particular type of company — in some cases, enormous, wealthy, powerful entities — they have a safe harbor, and they’re not subject to the same kind of scrutiny everyone else is.

For instance, when the law was passed, someone could place the exact same ad in a physical newspaper and an online platform — say, an ad for sex trafficking — and the newspaper would be liable for it, while the online service would not be liable for it. To my mind, that distinction no longer holds up well.

“It does not make sense that just by virtue of being a particular type of company — in some cases, enormous, wealthy, powerful entities — they have a safe harbor, and they’re not subject to the same kind of scrutiny everyone else is.”

HLT: So, what you’re saying is that even if Section 230 were modified or went away completely, one would still have to prove the elements of a tort before holding a platform liable. Is that right?

Palfrey: Right. All the background law still attaches. If you were to talk to somebody who teaches torts and ask, ‘Why should this company not be subject to the tort regime? Does that make sense?’ I think the answer is ‘Not really anymore.’ And yes, the plaintiff still would have to prove all the elements of the tort claim — we are simply saying that the claim would not be barred at the courtroom door, which is, of course, where claims against technology platform companies conclude today.

HLT: How will the Supreme Court think through these two cases?

Palfrey: Just stepping back a bit, if you combine these two cases with the two the Supreme Court may decide to take up [Moody v. NetChoice and NetChoice v. Paxton], which are looking at some interesting matters of content moderation — these are possibly the most consequential Supreme Court cases related to the internet in the technology’s history. This is an enormously important term, or potentially two terms, depending on how these cases play out. The Court is really going to be asking itself the key hard questions about the extent to which technology companies can and should be regulated. And, of course, that’s done through statutory interpretation, but it will have enormous consequences going forward.

HLT: From a legal standpoint, does it matter if the platforms’ algorithms actively promote harmful content, versus users just stumbling onto the content themselves?

Palfrey: You’re pressing on exactly the question that the Court will have to grapple with, which is the extent to which technology that is actively putting things forward for individuals to see relates to the human moderation that has long been assessed in these cases.

HLT: Would it matter to the analysis if a platform has mechanisms in place to moderate content but fails to do so? Or would that only matter to an eventual tort case, if one were allowed to go forward?

Palfrey: Cases that have taken up Section 230 challenges before have looked at this — to what degree content moderation is permissible while still ensuring that the platform can remain in the safe harbor. You’ll certainly see an assessment of that question. And then, as you suggest, if the plaintiffs were allowed to bring the claim, you might see similar questions in the subsequent tort case, which is ultimately a second-order question to the ones that the Court will be asking here.

“The Court is really going to be asking itself the key hard questions about the extent to which technology companies can and should be regulated. … It will have enormous consequences going forward.”

HLT: What impact could these cases have if the justices decide against the tech platforms? Does it depend on whether the Court narrowly tailors its decisions to the specific cases, or instead issues a broader decision on Section 230?

Palfrey: I see what’s going on here as not one case, but potentially as many as four cases, if you consider Gonzalez, Taamneh, and the two NetChoice cases. The range of possibilities includes the law remaining as it stands today, which I think is unlikely given the complexity of these cases. At the far other end of the spectrum, the Court could open the floodgates to massive regulation of tech platforms, including perhaps more of a European-style regulatory regime. Again, that’s probably unlikely. Almost certainly, it will be something in between. I would imagine that Congress may then be compelled to act to fill in some of the blanks, because the Court itself is not a legislative or administrative body. You also might see subsequent action from administrative agencies, whether that’s the Federal Communications Commission, the Federal Trade Commission, the U.S. Department of Commerce, or others with jurisdiction over the internet. But my hunch is that no matter what, this will be a watershed moment in terms of regulation of the internet.

