On March 6, HLS Professor John Palfrey ’01, vice dean for library and information resources at HLS, and Adam Thierer, a senior fellow at The Progress & Freedom Foundation and director of its Center for Digital Media Freedom, participated in an online debate on Ars Technica about the Communications Decency Act and whether ISPs and social networking sites should bear greater liability for what their users post. The debate, “The Future of Online Obscenity and Social Networks,” is included below.

When the Communications Decency Act (CDA) was enshrined into law with the passage of the historic Telecommunications Act of 1996, it contained a number of controversial provisions covering “obscene or indecent” online content. But at the behest of ISPs and others concerned about the potentially stifling effect of obscenity suits on the still-young network, the CDA also included 47 U.S.C. Sec. 230, commonly known as Section 230, which shielded “interactive computer service providers” from liability for information posted or published by users of their systems.

Although the censorial elements of the CDA were later struck down by the courts, Section 230’s protections were preserved, and even enhanced, through subsequent legal challenges. Other child-safety-oriented laws passed by Congress, such as the Child Online Protection Act of 1998 (COPA), were also struck down as unconstitutional. Currently, therefore, “interactive computer service providers”—a term the courts have interpreted broadly to cover almost all types of online services, from ISPs to social networking sites—are largely free from liability for speech or content that some deem objectionable (e.g., indecent, harassing, defamatory, biased, etc.).

With Section 230 sheltering online intermediaries from liability for such content or communications, and with other laws like COPA struck down by the courts, there has been ongoing debate about how the law, and Section 230 in particular, might be changed to encourage more self-policing of online networks for objectionable content. In the exchange that follows, two prominent participants in this debate discuss the idea of altering Section 230 in order to narrow its protections for ISPs. Should Congress and the courts try to strike a new balance between children’s welfare and online innovation?

John Palfrey is a Professor of Law and Vice Dean, Library and Information Resources, at Harvard Law School, where he also serves as Faculty Co-Director of Harvard’s Berkman Center for Internet & Society. Adam Thierer is a Senior Fellow at The Progress & Freedom Foundation (PFF) and the Director of PFF’s Center for Digital Media Freedom.

Adam Thierer: John, as you know, I very much enjoyed your new book with Urs Gasser, Born Digital: Understanding the First Generation of Digital Natives, and found much to agree with in its pages. That said, I was troubled by one particular proposal that you and Urs set forth in your otherwise excellent chapter on online child safety concerns. Specifically, I am concerned about your proposal to revise or scale back the immunity provided under Section 230 of the Communications Decency Act (CDA) of 1996 (47 U.S.C. Sec. 230), which shields online service providers from liability for information posted or published by users of their sites or systems.

In your book, you argue that “the scope of the immunity the CDA provides for online service providers is too broad” and that the law “should not preclude parents from bringing a claim of negligence against [a social networking site] for failing to protect the safety of its users.” And you suggest that “there is no reason why a social network should be protected from liability related to the safety of young people simply because its business operates online.” Specifically, you call for “strengthening private causes of action by clarifying that tort claims may be brought against online service providers when safety is at stake,” although you do not define those instances.

I’m troubled by your proposals because I believe Section 230 has been crucial to the success of the Internet and the robust marketplace of online freedom of speech and expression. In many ways—whether intentional or not—Section 230 was the legal cornerstone that gave rise to many of the online freedoms we enjoy today. I fear that the proposal you have set forth could reverse that. It could lead to crushing liability for many online operators—and not just giants like MySpace or Facebook—that might not be able to absorb the litigation costs.

Could you elaborate a bit more about your proposal and explain why you think the time has come to alter Section 230 and online liability norms?

John Palfrey: Thanks, Adam, both for your kind words about the book in general and for your leadership on issues of online freedom. I think you and I agree, very substantially, on the general approach to technology and policy.

I certainly credit CDA 230 as a cornerstone of the legal framework that has enabled the information technology sector to thrive over the past decade. It has also played a crucial part in ensuring that the Internet has become a place where free expression, like innovation, thrives. Both economic growth (promoted through technological innovation) and free expression are much to be celebrated and supported through careful policymaking. Those who drafted and fought to sustain CDA 230 deserve our thanks.

But I believe that it is time to re-examine how far CDA 230’s immunity extends. A lot has happened over the past decade, and I’m not so sure that those who drafted this provision could have anticipated quite how broadly this immunity would extend over time. To be clear, I think that courts that have extended the immunity fairly broadly (which is the general posture of most courts that have taken up cases at the edges of this area of doctrine) have been right—on the law as it stands—to do so. I think that the law, as written, does extend to shield from liability MySpace, for instance, in the Julie Doe case in Texas, or Craigslist in the cases associated with Section 8 housing, and so forth. I suspect you continue to agree with me up to this point—that the caselaw, by and large, has been rightly decided.

In my view, though, these types of cases are wrongly decided from the perspective of what the law ought to protect. It is here that I suspect you and I part ways.

Take the issue of online safety, on which you and I have been working closely over the past year through the Internet Safety Technical Task Force. Let’s take the hypothetical case of a young person who is physically harmed after meeting someone in an online environment. The young person (or his parents, more likely, I suppose) seeks to bring suit against the service provider involved. In my view, the service provider should not have special protection from such a tort claim. Such a claim should be decided on the merits. Was the service provider negligent or not? I don’t think that the fact that the service provider is offering an Internet-based service, rather than a physically based service, should result in a shield to liability.

My view is that most major social networks, in the United States anyway, would still not be liable in such terrible instances. From what I know, the social networks are taking more and more affirmative steps to make their online environments safer for kids—such that a negligence claim would not reach them. But the claim should not be barred at the courthouse door, in my view. The opposite incentive should be at work: to encourage them to continue their innovation to protect kids.

Imagining a reasonable test

Thierer: I appreciate your clarification, but let me raise a few quibbles and additional questions.

First, I agree with your assertion that “A lot has happened over the past decade,” and that “those who drafted this provision could [not] have anticipated quite how broadly this immunity would extend over time.” Of course, one could say the same thing about many laws or constitutional provisions. It could be argued, for example, that the protections afforded by the First Amendment have been read quite broadly in recent decades, and in ways that its drafters might not have envisioned.

But I’m all right with that—both the broad reading of the First Amendment and of Section 230 immunities. Indeed, precisely because there is an intimate relationship between these two principles (speech rights and intermediary immunity), it is my hope that they both continue to be read broadly going forward. That also explains why I’m concerned about tinkering with Section 230 along the lines you suggest.

Let’s use your hypothetical example. The tricky part of it comes down to the unwritten test or standard regarding exactly how much affirmative policing of user behavior we should expect sites to do before they would be free from negligence claims. As you correctly note, “social networks are taking more and more affirmative steps to make their online environments safer for kids.” I think it is reasonable for site users and parents to expect social networking sites to take some steps to provide various safeguards to improve online privacy and safety. But I’m dealing in the realm of social norms and self-regulation here. I don’t want to see that become a bright-line legal test; indeed, I have trouble imagining what that test would look like.

Thus, while it comforts me somewhat to hear you say that negligence claims would not likely reach most site operators, I would need more details about your preferred legal standard for site negligence before I could say more. It sounds like you want the test to be narrowly construed, which is good to hear, but I fear others might read the test much more broadly. For example, as you know from our work together on the Internet Safety Technical Task Force, some lawmakers have suggested that liability should be imposed on social networks that fail to adopt age verification requirements for all users. Other policymakers have suggested that sites should comply with various filtering mandates or else face liability. More recently, there has been talk of requiring sites to screen user comments for harassing or defamatory remarks.

So, what’s the test? How far must a site operator go to guarantee “the safety of young people” and avoid having a negligence claim filed against it?

Importantly, let’s not forget that not all social networking sites are alike or serve the same interests. Some sites will layer on protections because they aim to make parents or privacy-sensitive users more comfortable with their online community. Other sites will take a more hands-off approach and encourage a vibrant exchange of views and expression. What I worry about, therefore, is that a new liability standard might not leave sufficient room for flexibility or experimentation. If Congress altered Section 230 (or the courts tipped the balance) such that negligence claims could be brought too easily, I think that could have a chilling effect on a great deal of legitimate online speech, especially for many smaller social networking sites and up-and-coming operators.

Palfrey: My proposal would be to leave the question of negligence on the part of service providers in such situations to the tort regime. The standard would change over time as risks change and as the best practices for protecting kids change. Take the Julie Doe v. MySpace case from Texas. In that instance, my view would be that the judge (in a revised CDA 230 world) would have said, “yes, plaintiffs, you can bring this claim under the tort regime, but then we will look at whether MySpace takes reasonable steps to protect its users from harm.” The test would be for negligence.

From what we’ve learned from MySpace’s submission to the Task Force, there would not be liability in such a fact pattern given what MySpace is doing today—no chance, in fact, in my view. If a given social network, situated as the defendant in a similar case, were taking no steps to protect kids, or, worse, doing things affirmatively to encourage dangerous behavior, it would be found liable—on a sliding scale—for harm done to the child.

Discovery could be very interesting in cases like these, and could lead to increased understanding about what’s really going on at some of these companies on the safety front, too. (I should note, also, that I disagree with a key line that the judge in the Julie Doe case wrote, in relevant part: “… the Court finds Plaintiffs have failed to state a claim for negligence or gross negligence because MySpace had no duty to protect Julie Doe from Pete Solis’s criminal acts nor to institute reasonable safety measures on its website.”)

Why would one not wish to go the route I propose? I’m quite sure you are right that one demerit of my proposal is that it will chill speech and innovation to some degree. I don’t say that lightly; such chilling is absolutely not a good thing. My argument is just that we can accept it in exchange for greater safety for children online and offline. This is the sort of trade-off, of course, that we make all the time as we tinker with the law. You and I may simply disagree as to where the pendulum ought to come to rest.

Another demerit: this idea is scary, some people would say, because the plaintiffs’ lawyers will rush in and start suing every online service provider. It may lead to greater litigation, true enough. Litigation brings with it costs. I don’t think that’s a reason not to make such a change in the law, though.

The benefits of such a regime would be primarily two-fold. First, social networks would have greater incentive to take more ambitious steps, on an ongoing and dynamic basis, to protect kids from harm that comes to them as a result of their activities online. Second, we might see greater innovation, not less, in terms of technical safety measures to protect kids, as the market is driven by competition among social networks. (I suspect you disagree on this second score, and I submit that you may be right.) Most important, kids would benefit from a safer online experience.

I hasten to add that no single approach, and surely not my proposal, will alone make kids safe online. We need constant vigilance, attention to research into the risks kids face, and continuous innovation. And we need a range of community-based solutions that put parents, teachers, coaches, mentors, kids themselves, law enforcement, social workers, technologists, and online service providers to work. No one should be off the hook for doing the right thing for kids’ safety online.

Intermediaries, off-shoring, and child safety

Thierer: John, in closing, let me first thank you for engaging in this friendly exchange about a challenging issue. It has been very helpful in focusing my thinking as I wrap up a longer paper on the future of Section 230 as well as the next installment of my book on Parental Controls and Online Child Protection.

Your last response does a nice job envisioning, and preemptively addressing, some of my continuing concerns about reopening and tweaking the current 230 regime. I suppose our primary disagreement comes down to how much faith we have that the courts would actually read the tweaked liability regime you suggest fairly narrowly. The potential for a litigation explosion here troubles me greatly, and I’m still hung up on how “negligence” would be defined in this regard. Moreover, I continue to believe there are other ways to improve online safety without messing with liability norms.

Regardless, perhaps you could clarify a few things in your closing remarks.

First, is this just about changing liability norms for social networking sites, or for all online intermediaries? From what you’ve said above and in Born Digital, I’m not clear on this. I’ll assume, however, that your tweaked liability regime would open up other online intermediaries to identical negligence claims. To reiterate what I already said above, I worry that the increased threat of litigation will diminish opportunities for smaller online operators to develop new sites and services if they lack sufficient capital or insurance to indemnify themselves from such lawsuits. Stated differently, a new liability regime could entrench existing market leaders or drive greater consolidation, as only those operators with deep enough pockets would be able to assume the risks associated with new ventures that potentially cater to youth.

Second, I wonder how any new liability regime would cover offshore sites, if at all. How should the law deal with players outside the reach of your tweaked liability regime? I’m not saying all social networking activity would be driven offshore as a result of your recommendation, but it’s worth considering whether we are better off sticking with the existing regime since it keeps most mainstream sites onshore and subject to indirect pressures, industry best practices, and other social norms. By contrast, we would not have as much leverage over online intermediaries if more of them moved offshore as a result of changes to our liability regime.

Third, we’ve been talking here about tweaking Sec. 230 primarily to address online child safety concerns. Would you advocate this same approach as a remedy to other online pathologies? In other words, what’s the applicability to defamation claims against a blog or discussion board? Malware or spam attacks missed by an ISP? Adult content accidentally retrieved via a search engine? And so on. Because, as you know, some lawmakers and academics have suggested that liability norms might need to be revised to address a wide variety of other concerns, including those.

Which leads to my final question: What surprises might be in store if Congress reconsiders Sec. 230? I worry that any congressional reopening of Sec. 230 will involve more than a simple tweaking of the law along the lines you’ve suggested. In fact, I fear that the very existence of Sec. 230 would be at stake and that a significant push for expanded middleman deputization (via a reversal of liability norms) could be the result. Obviously, my hope is that we can avoid that outcome.

With that, I turn it back to you for the final word. Thanks again for taking the time to engage in this exchange with me.

Palfrey: Thanks, Adam. I much appreciate the chance to engage with you on this topic. It’s not an easy matter; there are clear, strong arguments on both sides of the debate as to whether or not we ought to tinker, at this point in history, with CDA 230’s immunity for Internet intermediaries.

To respond to a few of your final critiques/questions:

On your first question, yes, I think the change in the liability regime would apply beyond just social network sites. Which intermediaries should be affected? Most of them, I’d argue. It’s quite right that many small providers of Internet services would bear a greater proportional brunt of the costs of such a change; it’s true, too, that we might see a corresponding drop in innovation or service offerings by small players. I think that this is regrettable, but I’d put it in the category of “acceptable” as demerits go.

(As a side note: browse through the US Code, and you’ll find more than 40 definitions of what an “Internet service provider” or similar intermediary is. We could do with a bit of reconciliation on this score as a general matter in our federal law. Add in applicable state and international laws and the patchwork grows yet more complex.)

On your off-shoring question: well, I’d imagine that the standard rules of jurisdiction would apply. If there are harms done to our citizens, and we can get personal jurisdiction over the relevant parties, I’d expect our law to apply. I recognize that this is often easier said than done, especially at Internet scale. But it seems to me a problem of our networked, globalized economy more than a problem of this particular solution. I sincerely doubt that many of our services would move to other jurisdictions because of such a change in our liability regime, but I could be wrong.

In terms of the carve-outs for this reform, I’m not yet sure. I think there are at least three ways to adjust CDA 230. One would be simply to adjust the scope so that it doesn’t apply in certain safety matters that we care about particularly (just as copyright and criminal matters are already carved out). Another would be to treat online intermediaries under a single, rationalized safe harbor, as Mark Lemley, among others, has proposed; that might be based on the trademark regime, or on the DMCA Section 512 notice-and-takedown regime, and so forth. A third, lighter-weight reform, which might help address other concerns about CDA 230, would be to require service providers to assist in legal proceedings, and perhaps to retain log files for a limited period, rather than simply ignoring such requests (as in the AutoAdmit case). Each of these reforms would have demerits. We’d have to proceed with great care.

What problems might Congress create if we go down this road? You are the one based in Washington, DC; I have no idea, but point taken. I certainly wouldn’t want the liability regime turned completely on its head. CDA 230 has been a good and important provision during this period of expansion of the networked public sphere. We’d have to be careful in tinkering with it. But in my view, the rethink is important at this stage.