evelyn douek

  • Judge Refuses To Reinstate Parler After Amazon Shut It Down

    January 22, 2021

    A federal judge has refused to restore the social media site Parler after Amazon kicked the company off its web-hosting services over content seen as inciting violence. The decision is a blow to Parler, an upstart that has won over Trump loyalists for its relatively hands-off approach to moderating content. The company sued Amazon over its ban, demanding reinstatement. U.S. District Judge Barbara Rothstein sided with Amazon, which argued that Parler would not take down posts threatening public safety even in the wake of the attack on the U.S. Capitol and that it is within Amazon's rights to punish the company over its refusal...Evelyn Douek, a lecturer at Harvard Law School, predicts more battles over online speech will erupt between sites that choose a hands-off approach and web hosts that demand a more aggressive stance. And that troubles her. "Is that the right place for content moderation to be occurring?" Douek asked. "It's harder to bring accountability to those choices when we don't even know who's making them or why they're being made." In other words, when a web host has a problem with content on a client's site, these discussions are usually hashed out between the two parties, far from public view. And web hosts, unlike social media platforms, are not used to explaining these decisions publicly.

  • Facebook outsources its decision to ban Trump to oversight board

    January 22, 2021

    Facebook said it would refer its decision to indefinitely suspend former president Donald Trump’s Facebook and Instagram accounts to the oversight board, an independent Facebook-funded body composed of international experts that has the power to review — and potentially overturn — the company’s content decisions. The referral would amount to the first major case for the board, which took the first slate of cases it would consider in December. It will also be a key test for a model of external governance for Silicon Valley’s decision-making, one that is being closely watched by experts and policymakers around the world...Facebook said it would continue to suspend Trump’s accounts pending the board’s decision. The social network also asked the board to review its policies on world leaders, whose speech Facebook and other social media companies consider to be newsworthy or in the public interest, and who are therefore given more latitude than everyday users to make inflammatory comments. “This is clearly the right decision,” said Evelyn Douek, a lecturer at Harvard Law School who had argued, along with other academics, that the oversight board should take up the case. “Had Facebook failed to send the case to the board, it would have suggested that even it wasn’t taking its oversight board seriously. Just waiting to see how [Zuckerberg] happened to feel about whether the Trump suspension should be made permanent is not a good model for how these kinds of important decisions should get made.”

  • Inside Twitter’s Decision to Cut Off Trump

    January 19, 2021

    Jack Dorsey, Twitter’s chief executive, was working remotely on a private island in French Polynesia frequented by celebrities escaping the paparazzi when a phone call interrupted him on Jan. 6. On the line was Vijaya Gadde, Twitter’s top lawyer and safety expert, with an update from the real world. She said she and other company executives had decided to lock President Trump’s account, temporarily, to prevent him from posting statements that might provoke more violence after a mob stormed the U.S. Capitol that day. Mr. Dorsey was concerned about the move, said two people with knowledge of the call...But he had delegated moderation decisions to Ms. Gadde, 46, and usually deferred to her — and he did so again...Since Mr. Trump was barred, many of Mr. Dorsey’s concerns about the move have been realized. Twitter has been embroiled in a furious debate over tech power and the companies’ lack of accountability. Lawmakers such as Representative Devin Nunes, a Republican from California, have railed against Twitter, while Silicon Valley venture capitalists, First Amendment scholars and the American Civil Liberties Union have also criticized the company. At the same time, activists around the world have accused Twitter of following a double standard by cutting off Mr. Trump but not autocrats elsewhere who use the platform to bully opponents. “This is a phenomenal exercise of power to de-platform the president of the United States,” said Evelyn Douek, a lecturer at Harvard Law School who focuses on online speech. “It should set off a broader reckoning.”

  • The Moderation War Is Coming to Spotify, Substack, and Clubhouse

    January 15, 2021

    Glenn Greenwald was pissed. The Columbia Journalism Review had just asked whether Substack should remove the writer Andrew Sullivan from its service. And having recently joined the email newsletter platform himself, Greenwald attacked. “It was only a matter of time before people started demanding Substack be censored,” he said, taking it a step further than the CJR. Last October, Greenwald left The Intercept, a publication he founded, claiming its editors, who previously hadn’t touched his work, “censored” him ahead of the 2020 election. So he moved to Substack, which advertises itself as a home for independent writing...The current perspective of Substack, Spotify, and Clubhouse on content moderation mirrors how Twitter, Facebook, and Google once viewed the practice. Twitter executives initially called themselves “the free speech wing of the free speech party.” Facebook insisted it had no business touching political content. YouTube allowed Alex Jones and other wingnuts to build misinformation empires on its service. Now, Substack CEO Chris Best — reflecting the smaller platforms’ attitude on moderation — told CJR that if you’re looking for him to take an “editorial position” you should find another service. After initially resisting aggressive content moderation (aside from no-brainers like child porn), the bigger platforms have slowly relented. “They are agreeing that they need to moderate more aggressively and in more ways than they used to,” Evelyn Douek, a Harvard Law School lecturer who studies content moderation, told OneZero. And if past is prologue, their path to the current state is worth revisiting.

  • The Lawfare Podcast: Jonathan Zittrain on the Great Deplatforming

    January 14, 2021

    Yesterday, January 13, the House of Representatives impeached President Trump a second time for encouraging the violent riot in the Capitol Building on January 6. And yet, the impeachment is probably less of a crushing blow to the president than something else that’s happened in recent days: the loss of his Twitter account. After a few very eventful weeks, Lawfare's Arbiters of Truth series on disinformation is back. Evelyn Douek and Quinta Jurecic spoke with Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School, about the decision by Twitter, Facebook and a whole host of other platforms to ban the president in the wake of the Capitol riot. Jonathan, Evelyn and Quinta took a step back to situate what’s happening within the broader story of internet governance. They talked about how to understand the bans in the context of the internet’s now not-so-brief history, how platforms make these decisions and, of course, Section 230 of the Communications Decency Act. Listeners might also be interested in Zittrain's February 2020 Tanner Lecture, "Between Suffocation and Abdication: Three Eras of Governing Digital Platforms," which touches on some of the same ideas discussed in the podcast.

  • YouTube suspends Trump, days after Twitter and Facebook

    January 14, 2021

    YouTube suspended President Trump from uploading new videos to his official account for at least a week, making the decision days after fellow social media giants Twitter and Facebook shut the president out of his accounts because of concerns his posts would incite violence. The Google-owned video site was the last of the major social media networks to suspend Trump after the attack on the U.S. Capitol...The resistance to removing videos completely has helped YouTube fly under the radar as other social media sites take the heat for allowing misinformation to proliferate on their sites, said Harvard Law School lecturer Evelyn Douek...Researchers tend to focus on text-based Twitter and Facebook, Douek said, because video can be more time-consuming and labor-intensive to sift through. That doesn’t mean there is less misinformation floating around on YouTube, and, in fact, the company has been accused of allowing people to become radicalized on the site by promoting conspiracy theory videos, she added. YouTube’s policy of laying out rules and using a strike system to enforce them is better than ad hoc decision-making by executives, Douek said. “My view of content moderation is companies should have really clear rules they set out in advance and stick to, regardless of political or public pressure,” she said. “We don’t just want these platforms to be operating as content cartels and moving in lockstep and doing what everyone else is doing.”

  • Blocking the president

    January 13, 2021

    Harvard Law experts Yochai Benkler and evelyn douek weigh in on the suspension of President Trump’s social media accounts and potential First Amendment implications.

  • Parler, a Platform Favored by Trump Fans, Struggles for Survival

    January 12, 2021

    Parler launched in 2018 as a freewheeling social-media site for users fed up with the rules on Facebook and Twitter, and it quickly won fans from supporters of President Trump. On Monday, it went dark, felled by blowback over its more permissive approach. Amazon.com Inc. abruptly ended web-hosting services to the company, effectively halting its operations, prompting Parler to sue Amazon in Seattle federal court. Other tech partners also acted, crippling its operations. Driving the decision was last week’s mob attack on the U.S. Capitol...Amazon explained its decision to shut off services to Parler by citing 98 instances of violent content on the platform that it said violated its rules. Parler said it removed all 98 items, in some instances before Amazon reported them. Determining whether discourse on Parler or the platform’s moderation of it was fundamentally worse than on other forums is “kind of an impossible question, empirically and philosophically,” said Evelyn Douek, a Harvard Law School lecturer who studies content moderation. While a case can be made for app stores and other internet infrastructure providers taking action against platforms that don’t take moderation requirements seriously, Ms. Douek said, the speed of tech companies’ action against Parler raised questions. “If 98 is the threshold, has AWS looked at the rest of the internet?” she said. “Or at Amazon?”

  • The Facebook Oversight Board Should Review Trump’s Suspension

    January 11, 2021

    An op-ed by Evelyn Douek: While Congress works out what form of accountability it will impose on President Trump for inciting insurrection at the U.S. Capitol last week, the president has faced a swift and brutal reckoning online. Snapchat, Twitch, Shopify, email providers and payment processors, amongst others, have all cut ties with Trump or his campaign. And after years of resisting calls to do so, both Facebook and Twitter have suspended Trump’s accounts. This Great Deplatforming has ignited a raucous debate about free speech and censorship, and prompted questions about the true reasons behind the bans—including whether they stem from political bias or commercial self-interest rather than any kind of principle. Luckily, at least one platform has built a mechanism that is intended to allay exactly these concerns. Facebook should refer its decision to suspend Donald Trump’s account to the Oversight Board for review. Ever since Facebook announced the launch of its Oversight Board in May, there have been constant questions about the board’s activities—or lack thereof. Researchers and journalists wanted to know why the board wasn’t weighing in on Facebook’s most controversial decisions and why the board wouldn’t be operational in time for the U.S. 2020 election.

  • Trump Is Banned. Who Is Next?

    January 11, 2021

    An op-ed by Evelyn Douek: It happened slowly, and then all at once. After years of sparring, the internet’s most powerful moderators deplatformed their most famous troll: the president of the United States. Facebook has blocked Donald Trump’s account indefinitely. So have Snapchat, Twitch, and Shopify; even one of the Trump campaign’s email providers has cut it off. At the time of writing, Trump still has his YouTube channel, but the company says it is accelerating its enforcement action. It was a Friday Night Massacre of platform bans. But one ban outstrips all others in its symbolism: @realDonaldTrump has been suspended from Twitter, the platform that has defined this president more than any other. The story of the past week in content moderation can be told in two ways. The first is the formalistic myth that platforms want us to believe. In this telling, platforms have policies and principles they hew to; their decisions based on them are neutral, carefully considered evaluations of the rules and the facts. The second is the realist take, in which the posts and tweets of platform executives and spokespeople can be seen as fig leaves, trying to hide that these were, at bottom, arbitrary and suddenly convenient decisions made possible by a changed political landscape and new business imperatives.

  • ‘Free Speech’ Platforms Are Emerging as Facebook and Twitter Suspend Trump

    January 8, 2021

    Four years ago, startup founder Andrew Torba emailed me about a social network he was building, one that offered users near-absolute free speech. A pro-Trump conservative, Torba saw an opening for his venture after Twitter started removing people for harassment. Wary of moderation, he created an alternative. “What makes the entirely left-leaning Big Social monopoly qualified to tell us what is ‘news’ and what is ‘trending’ and to define what ‘harassment’ means?” he told me. “It didn’t feel right to me, and I wanted to change it.” Torba’s network, Gab, debuted a few months ahead of Trump’s 2016 election. Soon after, it grew into an established hangout for right-wing types online...Unlike mainstream social networks, Gab and its counterparts are unresponsive to criticism as a policy, which could create some serious issues down the line. “A lot of the organizing happened on these platforms that explicitly make it a point of pride not to moderate,” Evelyn Douek, a lecturer at Harvard Law School who studies content moderation, told me. “So what lever is there to pull? We’re going to need to work that out.” Facebook and Twitter were the big social media story these past four years. But look for Parler, Gab, and their counterparts to take their place. The story doesn’t end with a ban.

  • The Year That Changed the Internet

    January 5, 2021

    An op-ed by Evelyn Douek: For years, social-media platforms had held firm: Just because a post was false didn’t mean it was their place to do anything about it. But 2020 changed their minds. At the end of May, Twitter for the first time labeled a tweet from the president of the United States as potentially misleading. After Donald Trump falsely insisted that mail-in voting would rig the November election, the platform added a message telling users to “get the facts.” Within a day, Mark Zuckerberg, Facebook’s founder and CEO, had appeared on Fox News to reassure viewers that Facebook had “a different policy” and believed strongly that tech companies shouldn’t be arbiters of truth of what people say online. But come November, between the time polls closed and the race was called for Biden, much of Trump’s Facebook page, as well as more than a third of Trump’s Twitter feed, was plastered with warning labels and fact-checks, a striking visual manifestation of the way that 2020 has transformed the internet. Seven months ago, that first label on a Trump tweet was a watershed event. Now it’s entirely unremarkable. Among the many facets of life transformed by the coronavirus pandemic was the internet itself. In the face of a public-health crisis unprecedented in the social-media age, platforms were unusually bold in taking down COVID-19 misinformation.

  • Better Than Nothing: A Look at Content Moderation in 2020

    January 4, 2021

    For more than a decade, the attitude of the biggest social media companies toward policing misinformation on their platforms was best summed up by Mark Zuckerberg’s oft-repeated warning: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” Even after the 2016 election, as Facebook, Twitter, and YouTube faced growing backlash for their role in the dissemination of conspiracy theories and lies, the companies remained reluctant to take action against it. Then came 2020. Under pressure from politicians, activists, and media, Facebook, Twitter, and YouTube all made policy changes and enforcement decisions this year that they had long resisted—from labeling false information from prominent accounts to attempting to thwart viral spread to taking down posts by the president of the United States. It’s hard to say how successful these changes were, or even how to define success. But the fact that they took the steps at all marks a dramatic shift. “I think we’ll look back on 2020 as the year when they finally accepted that they have some responsibility for the content on their platforms,” said Evelyn Douek, an affiliate at Harvard’s Berkman Klein Center for Internet and Society. “They could have gone farther, there’s a lot more that they could do, but we should celebrate that they’re at least in the ballgame now.”

  • We hardly ever talk about YouTube and disinformation. Not anymore.

    December 17, 2020

    We talk a lot on this show about how social media platforms have been slow to react to disinformation over the years, especially around elections — and now around the coronavirus and the coronavirus vaccine. But perhaps the slowest to take a stand is YouTube. The video platform waited until Dec. 9 — more than a full month after the presidential election — before it started to remove videos falsely claiming election fraud or rigging. Researchers have worried about its radicalizing algorithm for years, and the company has basically no interest in working with them. I spoke with Evelyn Douek, an affiliate at Harvard’s Berkman Klein Center for Internet and Society. She said YouTube is flying firmly under the radar. The following is an edited transcript of our conversation.

  • Sinister sounds: podcasts are becoming the new medium of misinformation

    December 14, 2020

    In the drawn-out aftermath of the US election, Amelia’s* dad was losing faith in Fox News. Why wasn’t it covering more allegations of voter fraud, he asked. The network was “a joke”. And so he turned to alternative sources of information: podcasts like Bannon’s War Room, hosted by alt-right figure Steve Bannon, which regularly broadcasts baseless claims about ballot dumps and illegal voters. And an old favourite of his, the rightwing Catholic podcast The Taylor Marshall Show. In the US, Australia and across the Anglosphere, people regularly spend hours with strangers talking directly into their ears. Around one third of Australian news consumers are reported to be podcast listeners, and indications are that numbers have grown during the pandemic. Yet the role of podcasts in the information ecosystem has gone largely unexamined...Apps such as Apple Podcasts and Google Podcasts “are significant gatekeepers” of what kind of audio content reaches our ears, says Evelyn Douek, a lecturer at Harvard Law School, although they function more as directories for organising and discovering shows than as social networking platforms, and have varying degrees of oversight and control. A Google spokesperson said Google Podcasts indexes audio available on the web much like Google Search. “This can include topics and ideas that may be controversial,” she said. “Google Podcasts ... only removes podcasts from its index in very rare circumstances, largely guided by local law.”

  • Why Isn’t Susan Wojcicki Getting Grilled By Congress?

    November 18, 2020

    An op-ed by Evelyn Douek: There are many important questions that could be asked at the Senate Judiciary Committee hearing with tech CEOs today regarding their handling of the US 2020 election. Foremost among them should be “Where is Susan Wojcicki, YouTube’s CEO?” The election was billed as a major test for social media platforms, but it’s one that YouTube failed weeks before election day. The platform is playing host to, and is an important vector for, spreading false claims of election victory and attempts to delegitimize Biden’s win. YouTube had to have seen it all coming, and it shrugged. That’s YouTube’s fault—but it’s also a result of the success of its broader strategy to keep its head down and let other platforms be the face of the content moderation wars. In general, the media, researchers, and lawmakers have let this strategy work. YouTube is one of the biggest social media platforms in the country—indeed, a Pew Research Center survey last year found it was the most widely used. Over a quarter of Americans get news from the platform. There have been millions of views of videos with false claims of election results or voter fraud on YouTube since election day, with nothing more than a small, uniform label about election results attached beneath them. And yet, the platform often escapes scrutiny. Judging by much of the press coverage and public outrage about the role that social media platforms play in the modern information ecosystem, one could be forgiven for thinking that Facebook and Twitter were the only major sources of online information. This disproportionate focus goes beyond public narratives: While Facebook and Twitter CEOs Mark Zuckerberg and Jack Dorsey have been repeatedly hauled before Congress in the past few years, YouTube CEO Susan Wojcicki has escaped summons.

  • Facebook Has A Rule To Stop Calls To Arms. Moderators Didn’t Enforce It Ahead Of The Kenosha Shootings.

    November 17, 2020

    In August, following a Facebook event at which two protesters were shot and killed in Kenosha, Wisconsin, Mark Zuckerberg called the company’s failure to take down the event page asking militant attendees to bring weapons “an operational mistake.” There had been a new policy established earlier that month “to restrict” the ability of right-wing militants to post or organize in groups, Facebook’s CEO said, and under that rule, the event page should have been removed. BuzzFeed News has learned, however, that Facebook also failed to enforce a separate year-old call to arms policy that specifically prohibited event pages from encouraging people to bring weapons to intimidate and harass vulnerable individuals...Harvard lecturer and content moderation researcher Evelyn Douek commended the senators’ letter for attempting to highlight the gap between Facebook’s policy announcements and its applications of those policies. “We've gotten to a place where for many of the major platforms, their policies on paper are mostly fine and good,” she said. “But the question is always, always ‘Can and will they enforce them effectively?’”

  • Facebook Rejected Employee Push to Throttle Misleading Political Posts

    November 17, 2020

    On Nov. 3, Facebook added an advisory under a post from President Donald Trump that included the unfounded claim that his opponents were trying to “steal the election.” Followers of Trump shared the post 26,000 times as he made his Facebook account a key part of his arsenal in raising doubts about the outcome of U.S. elections. Months earlier, dozens of Facebook employees from the company’s Civic Integrity unit, which is in charge of determining how the company handles election-related content, had anticipated and tried to prevent the spread of such posts. Shocked into action by an incendiary Trump post targeting Black Lives Matter protesters, they proposed stricter penalties for politicians and public figures on Facebook, such as blocking the ability to comment or easily share posts if they ran afoul of the company’s policies...While Facebook officials have touted the company’s preparedness in combating election interference this year, researchers have criticized its labeling approach as too mild to meaningfully slow the proliferation of misinformation on its service. Members of the campaign for President-elect Joseph Biden have critiqued Facebook for failing to stop the spread of posts by Trump and his allies seeking to undermine the election process. “The labels they went with are pretty weak and unobtrusive, and fundamentally fail to reckon with the key role that Facebook plays in spreading and amplifying disinformation,” said Evelyn Douek, a lecturer at Harvard Law School who studies the content moderation practices of tech companies.

  • Twitter says it labeled 0.2% of all election-related tweets as disputed.

    November 13, 2020

    Twitter said on Thursday that it had labeled as disputed 300,000 tweets related to the presidential election, or 0.2 percent of the total on the subject, even as some users continued sharing misleading information about the outcome. The disclosure made Twitter the first major social media platform to publicly evaluate its performance during the 2020 election. Its revelations followed intense criticism by President Trump and other Republicans, who have said Twitter’s fact-checking efforts silenced conservative voices...The high-profile changes showed that social media companies are still evolving their content moderation policies and that more changes could come. Misinformation researchers praised Twitter for its transparency but said more data was needed to determine how content moderation should adapt in future elections. “Nothing about the design of these platforms is natural, inevitable or immutable. Everything is up for grabs,” said Evelyn Douek, a lecturer at Harvard Law School who focuses on online speech. “This was the first really big experiment in content moderation outside of the ‘take down or leave up’ paradigm of content moderation,” she added. “I think that’s the future of content moderation. These early results are promising for that kind of avenue. They don’t need to completely censor things in order to stop the spread and add context.” Facebook, which initially said it would not fact-check political figures, also added several labels to Mr. Trump’s posts on its platform. Although Twitter was more aggressive, other social platforms may copy its approach to labeling disputed content, Ms. Douek said.

  • From Steve Bannon To Millennial Millie: Facebook, YouTube Struggle With Live Video

    November 11, 2020

    Last week, millions of Americans turned to cable news to watch election returns pour in. Some refreshed their Twitter feeds to get the latest tallies. And nearly 300,000 others kept an eye on the YouTube channel of 29-year-old Millie Weaver, a former correspondent for the conspiracy theory website Infowars, who offered right-wing analysis to her followers in a live-stream that carried on for almost seven hours the day after the election. At times, her pro-Trump commentary veered into something else: misinformation. First she aired footage of a man pulling a red wagon into a ballot-counting center in Detroit. That image has been spread widely online by conservatives who contend, without evidence, that it is proof of illegal ballot stuffing. It was, in fact, a TV cameraman pulling his equipment. Then Weaver raised questions with a guest about the integrity of the election, stoking the false theme that the election was rife with fraud...Experts say the streams often occupy an ambiguous gray zone, where it's difficult for the platform's automated detection systems or human moderators to quickly flag this type of content. "That's in part because it's harder to search video content as opposed to text," said Evelyn Douek, a Harvard Law School lecturer who studies the different ways platforms approach content moderation. "It's a lot harder to scrutinize what's going on, and it's a lot more time consuming." ... "Taking a platform-by-platform view of these things is inherently limited," Douek, the Harvard Law School lecturer, said. "What one individual platform can do in the whole of the Internet ecosystem will always be somewhat limited." Yet, she said, platforms have the responsibility "to think about exactly what they can do to help mitigate the harm that their platform can cause."

  • YouTube Election Loophole Lets Some False Trump-Win Videos Spread

    November 10, 2020

    On Monday, cable outlet One America News Network posted two videos to its YouTube account titled “Trump won.” The clips echoed several others telling viewers, falsely, that U.S. President Donald Trump was re-elected and that the vote was marred by fraud. YouTube added a label noting that the Associated Press called the election for Joe Biden. But the world’s largest online video service didn’t block or remove the content. That approach differs from Twitter Inc., which has hidden conspiratorial election posts behind warnings. A few months ago, YouTube released a detailed policy prohibiting manipulated media and voter suppression, but left one gap: Expressing views on the election is OK. The result has been an onslaught of videos aiming to undermine the legitimacy of the election, according to online media and political researchers. Some of this material has spread on other social networks. And several clips, like the two OANN videos on Monday, ran advertisements, profiting from a Google policy that lets content framed as news reporting or talk shows cash in. “YouTube saw the inevitable writing on the wall that its platform would be used to spread false claims of election victory and it shrugged,” said Evelyn Douek, a lecturer at Harvard Law School who studies content moderation and the regulation of online speech. One YouTube video claiming evidence of voter fraud in Michigan has more than five million views. Another posted by Trump was selectively edited to appear as if Biden is endorsing voter fraud. That has over 1.6 million views. One of the OANN clips was watched 142,000 times in seven hours on Monday, while the other got 92,000 hits in that time.