

evelyn douek

  • The Lawfare Podcast: Darius Kazemi on The Great Bot Panic

    July 6, 2020

    On this episode of Lawfare's Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Darius Kazemi, an internet artist and bot-maker extraordinaire. Recently, there have been a lot of ominous headlines about bots—including an NPR article stating that nearly 50 percent of all Twitter commentary about the pandemic has been driven by bots rather than human users. That sounds bad—but Darius thinks that we shouldn’t be so worried about bots. In fact, he argues, a great deal of reporting and research on bots is often wrong and actually causes harm by drumming up needless worry and limiting online conversations. So, what is a bot, anyway? Do they unfairly take the blame for the state of things online? And if weeding out bot activity isn’t a simple way to cultivate healthier online spaces, what other options are there for building a less unpleasant internet?

  • Advertisers retreating from Facebook amid backlash over hate speech on social media

    July 1, 2020

    Australian advertisers say they are considering pulling their advertising from Facebook amid a growing global backlash over hate speech on social media. Starbucks, Coca-Cola and consumer goods giant Unilever are among the big names who've joined the boycott, prompting Facebook's share price to tumble by more than 8 per cent. The company's founder and chief executive Mark Zuckerberg has since announced updates to the company's advertising standards. Guest: Evelyn Douek, lecturer on law and doctoral candidate at Harvard Law School, and an expert in the regulation of online speech.

  • A Tweet Filled With Porn Noises Demonstrates Twitter Is Unprepared for Audio

    June 23, 2020

    “You can Tweet a Tweet. But now you can Tweet your voice!” This was how Twitter introduced its new audio-tweet option last week. In the replies to the announcement, however, lingered a warning. “Is this what y’all want?” asked one person, reposting another user’s audio tweet, which used the new feature to record the sounds of… porn. The porn audio tweet is still up without a content warning as of this writing (Update: Twitter labeled the Tweet Monday afternoon). The company’s lack of response could be a harbinger of what’s to come in this new, chatty Twitter. Audio brings with it a new way for pesky trolls and bad actors to spread content, and one that is more difficult to moderate than traditional tweets. The potential solution — voice-to-text transcription — is not ideal...Content moderation researchers told OneZero that while the feature is not inherently good or bad, Twitter — a platform that already struggles with curbing harmful content — doesn’t seem to be prepared for its consequences. “Like any new platform for content on the internet it is going to have all of the bad things that come along with — it’s going to have hate speech, disinformation, threats, bullying,” said evelyn douek, a lecturer at Harvard Law School who studies regulation of online speech. (Evelyn spells her name using lowercase letters.) “We know that now that’s a part of the internet. And so when you’re rolling out a product you need to think about your plans for dealing with it and [Twitter’s] just didn’t seem to be a very good one.”

  • Australia warned to not ignore domestic misinformation in social media crackdown

    June 23, 2020

    The Select Committee on Foreign Interference through Social Media has been tasked with probing the risk posed to the nation's democracy by foreign actors online, but it's been warned against ignoring the power of domestic influence in spreading misinformation. It's also been cautioned against simply enforcing content blocking and leaving the responsibility to a handful of mostly US-based tech companies. The committee on Monday heard from evelyn douek from the Berkman Klein Center for Internet & Society and Alex Stamos from the Stanford Internet Observatory, who both agree it's a battle best fought with transparency, not one about setting guidelines for what is right or wrong information...Meanwhile, douek said a centrepiece for any regulation or policy is getting greater transparency from platforms, telling the committee on Monday, "we cannot fix problems that we don't understand". She said the idea that platforms can and should do more is oversimplifying the problem. Touching on what Australia in particular is facing, douek said overt influence campaigns and homegrown conspiracy theories often receive far higher levels of engagement than covert ones from overseas actors. "Overhyping and securitising the discourse around disinformation campaigns only furthers the aim of such campaigns by increasing the levels of distrust in and apathy towards public discourse more generally," she said. "These second order effects will be, in the long term, far more [of a] panacea than any individual information operation." To that end, douek said the Australian government's response must be grounded in democratic values, including respect for free speech.

  • Why We Should Care That Facebook Accidentally Deplatformed Hundreds of Users

    June 15, 2020

    This week, as part of the company’s efforts to cull “bad actors,” Facebook accidentally deplatformed hundreds of accounts. The victims? Anti-racist skinheads and members of ska, punk, and reggae communities—including artists of color. Some users even believed their accounts were suspended just for “liking” nonracist skinhead pages and punk fan pages. While Facebook has kept mum on the reasons behind the mistake, it seems likely, as OneZero reported, that the platform confused these subcultures with far-right, neo-Nazi skinheads. It’s not exactly a hard mistake to make. The skinhead aesthetic has long been associated with white supremacist groups...One of the main reasons for such mistakes is the increased reliance on artificial intelligence. Social media companies have used A.I. for years to monitor content, but at the start of the pandemic, they said they would rely on A.I. even more as human moderators were sent home, admitting that they “expect to make more mistakes” as a result. It was a rare moment of candor: “For years, these platforms have been touting A.I. tools as the panacea that’s going to fix all of content moderation,” said Evelyn Douek, a doctoral student at Harvard Law School and affiliate at Harvard’s Berkman Klein Center for Internet and Society...The problem, of course, is that A.I. is here to stay—and that, partly as a consequence of this, we should expect to see many more mistakes on these platforms. But that doesn’t mean we should look at content moderation from a defeatist standpoint. “We need to start thinking about what kinds of mistakes we want platforms to make,” said Douek, who also mentioned that the conversation has been slow in catching up to this point “because people get uneasy talking about that kind of calculus in the context of speech rights.”

  • Facebook in turmoil over refusal to police Trump’s posts

    June 2, 2020

    The clash between Twitter and Donald Trump has thrust rival Facebook into turmoil, with employees rebelling against CEO Mark Zuckerberg's refusal to sanction false or inflammatory posts by the US president. Some Facebook employees put out word of a "virtual walkout" to take place Monday in protest, according to tweeted messages. "As allies we must stand in the way of danger, not behind. I will be participating in today's virtual walkout in solidarity with the black community," tweeted Sara Zhang, one of the Facebook employees taking part in the action. Nearly all Facebook employees are working remotely due to the pandemic. "We recognize the pain many of our people are feeling right now, especially our Black community," Facebook said in response to an AFP request for comment. "We encourage employees to speak openly when they disagree with leadership." Facebook was aware some workers planned the virtual walkout and did not plan to dock their pay...To make matters worse, US media revealed Sunday that Zuckerberg and Trump spoke by telephone on Friday. The conversation was "productive," unnamed sources told the Axios news outlet and CNBC. Facebook would neither confirm nor deny the reports. The call "destroys" the idea that Facebook is a "neutral arbiter," said Evelyn Douek, a researcher at Harvard Law School. Like other experts, she questioned whether Facebook's new oversight board, formed last month to render independent judgments on content, will have the clout to intervene. On Saturday, the board offered assurances it was aware there were "many significant issues related to online content" that people want it to consider.

  • Trump’s Tweets Force Twitter Into a High-Wire Act

    June 1, 2020

    The feud between Twitter and Donald Trump keeps escalating. Days after Twitter drew the president’s ire by applying a fact-checking label to one of his tweets—prompting a retaliatory executive order from Trump—the platform went even further. On Friday morning, it flagged a Trump tweet for violating its rules and implemented measures to keep it from going viral, while keeping the tweet up in the name of public interest. It’s a move that attempts to strike a thoughtful balance. But it also gets Twitter deeper into a messy conflict that there may be no easy way out of. The tweet that finally crossed Twitter’s line came just after midnight on Friday morning, in response to the escalating riots in Minneapolis following the apparent murder of George Floyd, an unarmed black man, by a white police officer. Trump suggested that he might deploy the National Guard and warned that “when the looting starts, the shooting starts,” a phrase attributed to Walter Headley, a Miami police chief in the 1960s who bragged about using “police brutality” against rioters. Twitter soon covered up Trump’s tweet with a label warning that it violated a rule against glorifying violence...There’s no perfect answer here, but Twitter may have found the least bad approach to a nearly impossible situation. “This is the most effective way of Twitter balancing the public interest of constituents knowing what their president says and believes, versus reducing the harm where that speech is potentially dangerous,” says Evelyn Douek, an affiliate at Harvard’s Berkman Klein Center for Internet and Society. Douek cautioned against expecting a platform like Twitter to completely solve the problems of political discourse. “There’s a real democratic tension in a private company that has no democratic accountability or legitimacy deciding what a duly elected public official can or cannot say.”

  • Gabrielle Lim on the Life and Death of Malaysia’s Anti-Fake News Act

    May 29, 2020

    In this episode of Lawfare's Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Gabrielle Lim, a researcher with the Technology and Social Change Research Project at Harvard Kennedy School’s Shorenstein Center and a fellow with Citizen Lab. Lim just released a new report with Data & Society on the fascinating story of a Malaysian law ostensibly aimed at stamping out disinformation. The Anti-Fake News Act, passed in 2018, criminalized the creation and dissemination of what the Malaysian government referred to as “fake news.” After a new government came into power following the country’s 2018 elections, the law was quickly repealed. But the story of how Malaysia’s ruling party passed the act, and how Malaysian civil society pushed back against it, is a useful case study on how illiberal governments can use the language of countering disinformation to clamp down on free expression, and how the way democratic governments talk about disinformation has global effects.

  • Twitter Can’t Change Who the President Is

    May 27, 2020

    An article by Evelyn Douek. Donald Trump’s tweets pose a special problem for Twitter. Absolutely no one can be surprised that the president is using the platform to tweet false and inflammatory claims in the middle of a global pandemic and the lead-up to an election: This is the president’s signature style. His recent tweets have promoted baseless conspiracy theories about the death of Lori Klausutis, a former staffer for Republican congressman–turned–MSNBC host Joe Scarborough, and falsely claimed that an expansion of mail-in voting would rig the 2020 election. When Twitter took the unprecedented step of adding a fact-check link to Trump’s tweets about voting, many critics of the decision thought that CEO Jack Dorsey still had not gone far enough—they maintained that the offending tweets should come down, or that the company should kick Trump off its platform altogether. The problem is that Trump’s critics are looking to Dorsey to solve a problem that Twitter did not create. What the president says and does is inherently newsworthy. As The Atlantic’s Adam Serwer tweeted yesterday, “You can’t deplatform the president of the United States.” At the moment, the duly elected president is someone who deliberately puts out divisive misinformation on social media. Twitter can surely do a better job of enforcing its own rules and flagging Trump’s worst statements—this morning, for instance, he repeated his casual insinuation that Scarborough was involved in Klausutis’s death and his allegations that mail-in voting would lead to election cheating, and so far no warning labels or fact-checks are attached. But a tech company can’t change who the president is.

  • Facebook’s ‘oversight board’ is proof that it wants to be regulated – by itself

    May 18, 2020

    Here we go again. Facebook, a tech company that suffers from the delusion that it’s a nation state, has had another go at pretending that it is one...On the grounds that Facebook is the world’s largest information-exchange autocracy (population 2.6 billion), he [Zuckerberg] thinks that it should have its own supreme court...So it’s now just an “oversight board for content decisions”, complete with its own charter and a 40-strong board of big shots who will, it seems, have the power “to reverse Facebook’s decisions about whether to allow or remove certain posts on the platform”. Sounds impressive, doesn’t it? But it looks rather less so when you realise what it will actually be doing...One big surprise (for me, anyway) was that Alan Rusbridger, the former editor of the Guardian, should have lent his name and reputation to this circus. In an essay on Medium he’s offered a less than convincing justification. “In the eyes of some,” he writes, “the oversight board is one of the most significant projects of the digital age, ‘a pivotal moment’ in the words of Evelyn Douek, a young scholar at Harvard, ‘when new constitutional forms can emerge that will shape the future of online discourse.’” “Others are unconvinced,” continues Rusbridger. “Some, inevitably, will see it as a fig leaf.” I’m in the fig leaf camp, but even those like the aforementioned Douek – who evidently takes the FOB seriously – seem to have serious doubts about its viability. The most important question, she writes, is about the FOB’s jurisdiction, which of Facebook’s decisions the board will be able to review. “The board’s ‘bylaws’ contemplate a potentially vast jurisdiction, including the power to hear disputes about groups, pages, events, ads and fact-checking,” says Douek. “But the bylaws only promise this jurisdiction at some unspecified time ‘in the future’ and initially the board’s jurisdiction is limited to referrals from Facebook and ‘content that has been removed for violations of content policies’ from Facebook or Instagram.”

  • Craig Silverman on Real Reporting on Fake News

    May 15, 2020

    On this week's episode of Lawfare's Arbiters of Truth series on disinformation, Evelyn Douek spoke with Craig Silverman, the media editor for BuzzFeed News and one of the leading journalists covering the disinformation beat. Craig is credited with coining the phrase “fake news.” Evelyn spoke with him about how he feels about that, especially now that the phrase has taken on a life of its own. They also talked about a book Craig edited, the second edition of the "Verification Handbook,” available online now, that equips journalists with the tools they need to verify the things they see online. Journalism and reporting on disinformation has never been so important—but the internet has never been so chaotic, and journalists are not only observers of disinformation, but also targets of it.

  • Evelyn Douek talks about the Facebook Oversight Board

    May 11, 2020

    After two years of discussion and planning, Facebook finally announced the first members of its Oversight Board, the so-called "Supreme Court" that will adjudicate problematic content cases for the social network. The 20 initial members are an impressive group, with a Nobel Peace Prize winner, multiple experts in constitutional law, former judges, etc. But there are still plenty of problematic questions surrounding the board, including: How much power will they actually have? And is their existence just an elaborate fig leaf to redirect blame for Facebook's content decisions and make it look like they care?...Our first guest is Evelyn Douek, who is an S.J.D. candidate at Harvard Law School and an affiliate at the Berkman Klein Center for Internet and Society. Evelyn studies international and transnational regulation of online speech and the institutional design of content moderation. Prior to coming to HLS, she was a clerk for the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She graduated with First Class Honours from the University of New South Wales with a Bachelor of Commerce/Laws in 2013, and is the host of Leading Questions, an interview podcast featuring professors at Harvard Law School. She also blogs at Lawfare.

  • Aric Toler on How Not to Report on Disinformation

    May 8, 2020

    For this week's episode of our Arbiters of Truth series on disinformation, Evelyn Douek and Alina Polyakova talked to Aric Toler of Bellingcat, a collective that has quickly become the gold standard for open source and social media investigations. Aric recently published a blog post in response to a New York Times article on Russian influence campaigns—one retweeted by former President Barack Obama, no less—that Aric called “How Not to Report on Disinformation.” Evelyn and Alina asked him about the article and what exactly he thought was wrong with it, as a case study in the challenges facing reporters writing about disinformation operations. When are reporters helping to uncover threats to democracy, and when are they giving oxygen to fringe actors?

  • Why I’m Joining Facebook’s Oversight Board

    May 7, 2020

    Almost exactly a year ago, back in the days when near strangers could strike up random conversations in Italian bars, I found myself learning about a new initiative on which Facebook was embarking — a kind of independent Supreme Court to help the company rule on the deluge of moral, ethical, editorial, and legal challenges it was facing...I asked lots of questions about this Facebook Oversight Board, an idea Mark Zuckerberg had announced the previous November. It seemed a promising move by a company which was exasperating and alienating so many people by its apparent unwillingness, or inability, to get to grips with the torrent of lousy, malign content it was enabling and amplifying. As well as all the good stuff...The idea of the alternative — some form of independent, external oversight — apparently grew out of multiple conversations and a thousand op-eds. One such discussion, in January 2018, involved a Harvard Law Professor, Noah Feldman, who had struck up a dialogue with Mark Zuckerberg. Both men agreed that, whoever should be making some hugely consequential decisions about the information which half the connected people on the planet were plugged into, it probably shouldn’t be Mark Zuckerberg. In the eyes of some, the fruit of those deliberations — the Oversight Board — is one of the most significant projects of the digital age, “a pivotal moment” in the words of Evelyn Douek, a young scholar at Harvard, “when new constitutional forms can emerge that will shape the future of online discourse.” Others are unconvinced. Some, inevitably, will see it as a fig leaf.

  • Podcast: Camille François on COVID-19 and the ABCs of disinformation

    April 29, 2020

    Camille François is a leading investigator of disinformation campaigns and author of the well-known “ABC” or “Actor-Behavior-Content” disinformation framework, which has informed how many of the biggest tech companies tackle disinformation on their platforms. Here, she speaks with Lawfare’s Quinta Jurecic and Evelyn Douek for that site’s series on disinformation, “Arbiters of Truth.”

  • Facebook Is Removing Protest Pages. That’s a Terrible Precedent.

    April 27, 2020

    Last week, images of MAGA-hat-wearing protesters, unmasked and tightly packed together on street corners, ricocheted across the internet...Depending on your politics — and perhaps your trust in epidemiologists — the attendees were either brave freedom-fighters resisting government overreach or reckless ideologues, risking public health to produce a moment of media spectacle. On Monday, Recode reported that Facebook, after consulting with state governments, had removed certain event pages for in-person rallies against coronavirus lockdowns in California, New Jersey, and Nebraska. The decision was met with immediate backlash...The move raises serious questions about the role of social platforms during the pandemic — and not just among those sympathetic to anti-quarantine rallies. Social distancing has eliminated many of the traditional methods for effectively leveraging political energy against decision-makers...Due to a dearth of human content moderators, Facebook is relying more heavily on its AI algorithms to flag unacceptable speech. “The platforms are churning out new rules by the day and being unapologetic about it,” said Evelyn Douek, a doctoral student at Harvard Law School who focuses on social platforms and digital constitutionalism. To an extent, Douek said, that makes sense. “Emergency powers are good when there’s an emergency.” False or deliberately harmful information about Covid-19 could endanger millions of lives.

  • The Internet’s Titans Make a Power Grab

    April 20, 2020

    An article by Evelyn Douek. The ordinary laws no longer govern. Every day, new rules are being written to deal with the crisis. Freedoms are curtailed. Enforcement is heavy-handed. Usual civil-liberties protections, such as rights of appeal, are suspended. By act, if not by word, a state of emergency has been declared. This is not a description of the United States, or even Hungary. It’s the internet during the coronavirus pandemic. We are living under an emergency constitution invoked by Facebook, Google, and other major tech platforms. In normal times, these companies are loath to pass judgment about what’s true and what’s false. But lately they have been taking unusually bold steps to keep misinformation about COVID-19 from circulating. As a matter of public health, these moves are entirely prudent. But as a matter of free speech, the platforms’ unconstrained power to change the rules virtually overnight is deeply disconcerting.

  • Who is right about political ads, Twitter or Facebook?

    January 16, 2020

    As the 2020 federal election draws closer, the issue of online political advertising is becoming more important, and the differences in how the platforms are approaching it more obvious. Twitter has chosen to ban political advertising, but questions remain about how it plans to define that term, and whether banning ads will do more harm than good. Meanwhile, Facebook has gone in the opposite direction, saying it will not even fact-check political ads. So whose strategy is the best, Twitter’s or Facebook’s? To answer this and other questions, we convened a virtual panel of experts...Harvard Law student and Berkman Klein affiliate Evelyn Douek, however, said in her view neither company is 100 percent right. “The best path is somewhere in the grey area in between,” she said. “It’s not obvious that a ban improves the quality of democratic debate. Facebook’s position, on the other hand, seems to rest on a notion of free expression that is nice in theory, but just doesn’t match reality.”

  • Cloudflare Wanted to Be a Boring Infrastructure Company. A Brave Choice and a $525 Million IPO Proved It’s Anything But

    December 18, 2019

    Firing a customer isn't uncommon. But few can say they've done it under as intense a media spotlight, and with as much personal internal struggle, as Cloudflare co-founder and CEO Matthew Prince. This past August, the man accused of a mass shooting at a Walmart store in El Paso, Texas, posted his manifesto on the online message board 8Chan. Cloudflare, a web infrastructure and security firm, counted 8Chan among its thousands of customers. Weeks before taking his company public, Prince suddenly found himself in the middle of a national debate about free speech, wrestling with the decision to pull Cloudflare's services, which would practically assure 8Chan's removal from the internet. Prince didn't think his company should decide who can and can't publish on the web. Still, after debating the issue for 24 hours with his team, he ultimately decided 8Chan wasn't following the rule of law. Cloudflare would no longer offer the site its support services...his thoughtful decision-making has been met with respect by others in the industry. Evelyn Douek, an affiliate at Harvard's Berkman Klein Center for Internet and Society, commended Prince for "embracing the storm," and noted "he is committed to the hard task of defining a policy that Cloudflare can enforce transparently and consistently going forward."