When it comes to the First Amendment, should social media platforms be treated like newspapers, whose editorial teams can select which views they wish to feature? Or more like electronic media that have been subject to the fairness doctrine or must-carry requirements?
At last week’s Harvard Law School Rappaport Forum, titled “Censorship, Content Moderation, and the First Amendment,” two legal scholars who direct prominent programs on speech at Columbia and Stanford debated how to balance the need for an inclusive and informative internet against respect for free speech rights.
In 1946, Jerome “Jerry” Rappaport ’49 M.P.A. ’63 helped to launch the Harvard Law School Forum, a storied speaker series. More than seven decades later, in 2020, Rappaport and his wife Phyllis supported the establishment of the Rappaport Forum at Harvard Law to promote and model rigorous, open, and respectful discussion of vital issues facing the world.
According to the Pew Research Center, more than 70 percent of Americans are active on at least one social media platform. And even as concerns about mis- and disinformation and hateful content on these apps grow, so do worries about ceding moderation decisions entirely to the vast and powerful companies that own the platforms.
The event, moderated by Noah Feldman, Harvard’s Felix Frankfurter Professor of Law, featured panelists Jameel Jaffer, adjunct professor of law and journalism at Columbia Law School and executive director of the Knight First Amendment Institute at Columbia University; and Daphne Keller, lecturer on law at Stanford Law School and director of the Program on Platform Regulation at the Stanford Cyber Policy Center.
Feldman began by describing two sets of cases that would shape the day’s discussion. The first involved laws passed by Florida and Texas that regulate how social media companies may moderate content on their platforms. “The standards that a private company and the social media platforms are ordinarily held to are not First Amendment standards because the First Amendment in the first instance only regulates the government,” said Feldman.
However, by imposing First Amendment standards on private companies, the laws in question, while different in some ways, would require the companies to engage in “far, far less moderation of things like hate speech and misinformation, and possibly even ordinary everyday offensiveness than [social media companies] practice under current circumstances.”
The second case at issue is one in which the Supreme Court will consider whether the Biden administration’s encouragement or pressure on social media platforms to remove COVID misinformation constituted a violation of posters’ free speech rights. “We sometimes call it ‘jawboning,’” Feldman said. “What is meant by this are circumstances where government officials use persuasion, and persuasion that may go up to the line or cross the line of coercive persuasion, to the point where the decision to remove the speech becomes, in law, the speech of the government, and by becoming the speech of the government, is regulated by the First Amendment.”
Keller, who had been a lawyer at Google before joining Stanford, began by acknowledging the complexity of the problem. “It is quite understandable that people want to be able to talk in some of the most important public forums of our age, and they don’t like it when a giant corporation stops them from doing that,” she said, adding that concerns cross the political divide.
It is natural to be apprehensive about platforms’ concentration of power over both private and public discourse, she said. “The things that we once would have said to each other in a church or a bar or a note passed in class are instead passed through these private companies and transmitted digitally, and that introduces a greater capacity for control … because it’s a centralized power, and because they can have tools that automatically detect what words you use, and automatically, if inaccurately, suppress things.”
But Keller disagreed with the impulse of some on both the left and the right to dictate outright how companies deal with speech.
Laws like those in Texas and Florida “actually introduce some pretty significant state preferences about speech,” she said. “They are not content neutral, they’re not speaker neutral, and they incentivize platforms to do things that will suppress speech, as well as maybe carrying more speech.”
As an example, she pointed to the Texas law’s requirement that platforms remain viewpoint-neutral when considering which speech to take down. “If they want to take down pro-anorexia content aimed at teenagers,” for example, “they might also have to take out anti-anorexia content.”
“What that does for [users] … is, as the cost of accessing the information you want … you have to also put up with this state-mandated inclusion of the stuff that you didn’t want,” she said. “It is very much changing what it is that users can see and read online at state behest in a way that raises questions not just about platforms’ rights to decide what to do, but about users’ rights to … access information online.”
Keller warned that the end result might be less speech overall: “If a platform is trying to decide how to comply with the viewpoint neutrality mandate, they’ll say ‘you know what, I’d rather have no one talking about racism at all than have to carry both the pro-racist and anti-racist viewpoints. I’m just going to take down a whole lot more speech than I used to.’”
To deal with what Keller called “lawful but awful” online speech, the kind of material most people find abhorrent, the response cannot simply be banning more speech, she insisted. Instead, the better solution would be to deconcentrate power and find “ways to make it so that internet users can decide for themselves what speech rules they want to be subjected to, and have a competitive marketplace of different providers letting you select the Disney flavor of YouTube or the version of Twitter that is curated by a Black Lives Matter affiliated group … or from your church.”
‘Government efforts to rig public discourse’
“I totally disagree with everything that Daphne said,” joked Jaffer as he began his remarks.
“I think it hardly needs to be said that [the aforementioned] cases are going to have an immense effect on the character of the digital public sphere, and therefore on our democracy as well,” Jaffer said, adding that he would also put in the same category a lawsuit challenging Montana’s ban on TikTok.
Jaffer said he believed that the plaintiffs would prevail in many of those cases. “And in my view, the plaintiffs probably should prevail in most of those cases, because most of them involve what I think can fairly be described as government efforts to rig public discourse.”
Yet despite that strong view, Jaffer said he was concerned about the rationale courts would ultimately use in finding for the plaintiffs. “I’m worried that the courts are constructing a First Amendment that sees every regulatory intervention in this sphere as a form of censorship. And I don’t think that that version of the First Amendment would serve free speech or democracy very well.”
He suggested two categories of interventions: those that promote the values of the First Amendment, such as accountability, self-government, and tolerance, and those that undermine those values. “I think it’s hugely important that First Amendment doctrine continue to be attentive to the possibility that any regulation in this sphere [could have] that intent or that effect,” he said. “But I do think it would be a sad thing, and something terrible for our democracy, if the courts constructed a First Amendment that was indiscriminately deregulatory.”
Jaffer then outlined the main arguments deployed by social media companies in some of the current lawsuits: that regulatory interventions implicating editorial judgment should be subject to the highest level of judicial review, or that any regulation that would be unconstitutional if applied to newspapers should be unconstitutional if applied to their platforms. If courts buy those arguments, he said, “It’s not just the bad laws that we have already identified that will be struck down. It’s also good laws… Those kinds of arguments will preempt legislators from passing laws that I think most of us, no matter what our political views are, would agree make sense.”
These good laws could include statutes promoting privacy, interoperability, and even transparency laws that allow the public to better understand how the platforms make decisions. “A First Amendment that precluded any and all regulation of social media platforms would make the First Amendment the enemy of the values that we need the First Amendment to protect,” said Jaffer.
So why even have the First Amendment, Feldman wondered, if people are unable to speak freely in the digital places where they now gather? Why can’t the government impose the same standard of free speech on social media platforms that it must follow in other venues?
For one thing, most people would hate that, said Keller. “Kids and grandparents … would go on YouTube and suddenly see a bunch of extreme porn, or go on TikTok and see a bunch of pro-suicide videos… that is not something people would be happy with.”
But regardless of whether such a law would be popular, Keller said, there would be a free speech issue, this time from the perspective of the platforms themselves. “It’s taking away their ability to set any editorial policy at all, which I think is clearly a First Amendment problem.”
Turning to Jaffer, Feldman asked how social media platforms differ from newspapers, which the Supreme Court has said are not required to publish opinions they disagree with.
Like newspapers, social media companies are exercising editorial judgment, acknowledged Jaffer. “But the fact that they are exercising editorial judgment isn’t the end of the analysis,” he continued. “Then there’s the question of, is the public justification for overriding that editorial judgment strong enough to justify overriding it?”
As an example, he sketched a hypothetical in which platforms might choose to censor certain political candidates in the run-up to an election. “I think you could make a strong case … that in the weeks before an election, the public’s interest in hearing from political candidates should prevail over the interests of Facebook or TikTok in promoting the political candidates that they might prefer at that particular moment in time.”
“You say there needs to be a compelling governmental interest. What about the compelling governmental interest in having the next generation of people, who communicate only on social media for the most part, having free speech?” asked Feldman. In other words, he said, why invent new free speech rules instead of using the ones the Supreme Court already applies to government in other contexts?
“Because there are competing First Amendment rights here,” replied Jaffer. “And number two, overriding those rights in the way that you describe, essentially imposing the First Amendment on private platforms, would result in a public sphere that works for nobody.”
Feldman countered that the rules seem to have functioned well for the public sphere until now. What’s so different about social media?
“Platforms are functioning as a substitute for the public square, and also the nightly news broadcast, and also passing a note in class, and also this long list of means of communication, some of which had those ‘free for all speech’ rules, and some of which didn’t at all,” replied Keller. “I think sacrificing the value that people get from platforms in those other roles in order to turn everything into the free-for-all isn’t justified by the historical analogies.”
She turned again to her point about creating more options for users as the right solution, and one supported by Supreme Court precedent. “We have repeated First Amendment cases saying if the technology permits a resolution where the government can meet its goals by putting more autonomy in the hands of individual users and listeners, then it should do that instead of having a top-down rule,” she said. “I want to make the argument that lawmakers don’t get to just ignore remedies that would increase user autonomy … I think they’re faced with having to look at better ways to give users power over what discourse they choose to participate in.”
What about when the government demands that social media companies take down content it dislikes or believes is misinformation, asked Feldman. Should the First Amendment protect against even mild encouragement by government officials to remove or alter posts?
The panelists agreed that coercion, particularly of smaller platforms, was a serious concern. But Jaffer said that it would also be strange if the government could not use social media to share, for example, research by the Centers for Disease Control with the public. Similarly, “I think it would be a crazy world if the CDC weren’t entitled to reach out to public platforms that already have policies relating to public health misinformation, and say to the platforms, ‘we think those posts are misinformation,’ and then leave it to the platforms to make the final call,” he said.
Finally, in response to an audience question about where third-party harms fit into the speakers’ analyses, Keller said that there are typically three interests at stake in these types of cases: those of the online speaker, those of the people harmed by the speech, and those of the platform itself. And unfortunately, she added, “There’s this systematic problem where one of the interests is always missing in the way that we litigate these questions.”
For Jaffer, the problem was even bigger. “The fourth actor that’s not there is the democratic public,” he said. “If we want a First Amendment that actually works for democracy, we have to find some way to make the First Amendment care more about the implications of these questions for democracy.”