evelyn douek

  • Julie Owono and Evelyn Douek

    ‘Be the Twitter that you want to see in the world’

    November 7, 2020

    Ahead of the 2020 presidential election in the United States, experts from the Berkman Klein Center for Internet & Society convened to discuss how platforms are approaching mis- and disinformation and what they can improve going forward.

  • How viral videos helped blast voting lies across the Web

    November 6, 2020

    The news-style YouTube video posted Wednesday by the pro-Trump One America News network was loaded with bogus claims: “Boldly cheating” Democrats had stolen a “decisive victory” for President Trump by “tossing Republican ballots, harvesting fake ballots” and calling their “antifa buddies to cause chaos … so that Americans stop focusing on the election and start fearing for their own safety.” Officials at YouTube, the world’s largest video site, said the clip showcased “demonstrably false content that undermines trust in the democratic process,” and they blocked it from pulling in advertising money. But they kept it viewable, attaching only a small disclaimer saying election “results may not be final,” because they said it did not directly break rules against videos that “materially discourage voting.” It has been viewed more than 400,000 times...Evelyn Douek, a lecturer at Harvard Law School who researches online speech, said video sites like YouTube too often get a pass from the discussions over hate speech, conspiracy theories and viral misinformation that have defined Facebook’s and Twitter’s last few years. Part of that is technical: Videos are harder to track and more time-consuming to check than simple, searchable text. But Douek also said it’s because regulators and journalists underappreciate the major pull videos have on our information ecosystem: YouTube’s parent Google says more than a billion hours of video are watched there every day. Unlike other social media sites, Douek said, YouTube had no policy addressing false claims of victory heading into the election, and the site didn’t publish a policy banning medical misinformation about covid-19 until May 20, when more than 90,000 people in the U.S. had already died.

  • Facebook removes pro-Trump Stop the Steal group over ‘calls for violence’

    November 6, 2020

    Facebook removed a viral group falsely claiming that “Democrats are scheming to disenfranchise and nullify Republican votes” after it gained more than 350,000 members in a single day. The hasty enforcement action against a political group was unusual for Facebook and raised questions about the consistency and transparency of the company’s content moderation. The group, “Stop the Steal”, was established by a rightwing not-for-profit group, Women for America First, and run by a team of moderators and administrators that included the longtime Tea Party activist Amy Kremer. Members were encouraged to provide their email addresses to a website calling for “boots on the ground to protect the integrity of the vote”, as well as to donate money...Facebook’s hasty action on the Stop the Steal group stands in marked contrast to its handling of other domestic groups that have organized on its platform. The company dragged its heels for months before taking action against the anti-government “boogaloo” movement, which has been linked to multiple murders, and against the antisemitic conspiracy theory QAnon, which has also been linked to violence and identified as a potential domestic terrorism threat. The inconsistency and lack of transparency around Facebook’s approach to content moderation drew quick criticism from experts in the field and digital rights advocates. “It really matters that platforms should be as clear in advance about their policies and consistent in their application,” said Evelyn Douek, a lecturer at Harvard law school who studies online speech regulation. “That helps fend off charges that any decisions are politically motivated or biased, and gives us a lever to pull for accountability that isn’t purely about who can get the most public attention or generate public outrage.”

  • Can Dan Bongino Make Rumble The Right’s New Platform?

    November 3, 2020

    As the CEOs of Twitter, Google, and Facebook testified before the US Senate just weeks before the election, the Facebook page of right-wing commentator Dan Bongino, which boasts more than 3.7 million followers, gleefully shared clips of Republican senators grilling the execs. But instead of sending people to watch the videos on Bongino’s popular YouTube channel, his page referred them to rumble.com, a relatively unknown video site that has become the darling of right-wing figures including Bongino, conservative author Dinesh D’Souza, writer John Solomon, Rep. Devin Nunes, and pro-Trump commentators Diamond and Silk...Prior to its influx of conservatives, Rumble's partners included news organization Reuters, venerable viral video outfit America’s Funniest Home Videos, television station owner E.W. Scripps, and fact-checking site Snopes. Its new content creators are more controversial. Solomon’s previous work for the Hill resulted in a damning internal inquiry, while Diamond and Silk have at times been flagged by Facebook’s third-party fact-checkers. The New York Times also noted that Bongino was a proponent of “Spygate,” which it called “a dubious conspiracy theory about an illegal Democratic plot to spy on Mr. Trump’s 2016 campaign.” That means Rumble will soon have more difficult content moderation decisions to make, according to evelyn douek, a lecturer at Harvard Law School. “If Rumble doesn't want to be a haven for hate speech or harmful health misinformation or upset its user base and genuinely wants to moderate responsibly, it needs to think ahead about how it's going to draw lines and, just as importantly, how it's going to clearly communicate those lines to its users,” she told BuzzFeed News. “These decisions can be genuinely hard.”

  • It’s the End of an Era for the Media, No Matter Who Wins the Election

    November 2, 2020

    There’s a media phenomenon the old-time blogger Mickey Kaus calls “overism”: articles in the week before the election whose premise is that even before the votes are counted, we know the winner — in this case, Joe Biden. I plead guilty to writing a column with that tacit premise. I spent last week asking leading figures in media to indulge in the accursed practice of speculating about the consequences of an election that isn’t over yet. They all read the same polls as you do and think that President Trump will probably lose. But many leaders in news and media have been holding their breaths for the election — and planning everything from retirements to significant shifts in strategy for the months to come, whoever wins. President Trump, after all, succeeded in making the old media great again, in part through his obsession with it. His riveting show allowed much of the television news business, in particular, to put off reckoning with the technological shifts — toward mobile devices and on-demand consumption —  that have changed all of our lives. But now, change is in the air across a news landscape that has revolved around the president... The battles over speech and censorship, the sociologist Zeynep Tufekci tweeted recently, are becoming “attention wars.” As recently as last week, senators were dragging in tech executives to complain about individual tweets, but the arguments are about to turn more consequential. The platforms are increasingly being pushed to disclose how content travels and why — not just what they leave up and what they take down. “We’re in this brave new world of content moderation that’s outside the take-down/leave-up false binary,” said Evelyn Douek, an expert on the subject and a lecturer at Harvard Law School. In practice, Twitter, Facebook and the other big platforms are facing two sources of pressure.

  • Social media’s struggle with self-censorship

    October 23, 2020

    Within hours of the publication of a New York Post article on October 14th, Twitter users began receiving strange messages. If they tried to share the story—a dubious “exposé” of emails supposedly from the laptop of Hunter Biden, son of the Democratic presidential nominee—they were told that their tweet could not be sent, as the link had been identified as harmful. Many Facebook users were not seeing the story at all: the social network had demoted it in the news feed of its 2.7bn users while its fact-checkers reviewed it. If the companies had hoped that by burying or blocking the story they would stop people from reading it, the bet did not pay off. The article ended up being the most-discussed story of the week on both platforms—and the second-most talked-about story was the fact that the social networks had tried to block it...For now, the social networks have to get through perhaps the hardest fortnight in their short history. They face the possibility of having to deploy content-moderation tools developed for fragile, emerging democracies in their home country. Facebook removed 120,000 pieces of content aimed at voter suppression in America in the past quarter. The New York Post affair does not bode well for how the companies might handle the fallout from a contested election. “When they appeared to depart from their policies they opened themselves up to the very charges of bias that followed,” says Evelyn Douek of Harvard Law School. As the election approaches, they need to “tie themselves to a mast” of clear rules, she says. A storm is coming.

  • Facebook Flips on Holocaust Denial

    October 19, 2020

    Two years ago, Mark Zuckerberg held up Holocaust denial as an example of the type of speech that would be protected on Facebook. The company wouldn’t take down content simply because it was incorrect. This week, Facebook reversed that stance. Is this decision the first step toward a new way of policing speech on the social network? Guest: Evelyn Douek, lecturer at Harvard Law School and affiliate at the Berkman Klein Center for Internet and Society.

  • Facebook Has Made Lots of New Rules This Year. It Doesn’t Always Enforce Them.

    October 16, 2020

    Facebook Inc. this year has made a flurry of new rules designed to improve the discourse on its platforms. When users report content that breaks those rules, a test by The Wall Street Journal found, the company often fails to enforce them. Facebook allows all users to flag content for review if they think it doesn’t belong on the platform. When the Journal reported more than 150 pieces of content that Facebook later confirmed violated its rules, the company’s review system allowed the material—some depicting or praising grisly violence—to stand more than three-quarters of the time...Facebook’s content moderation gained renewed attention Wednesday when the company limited online sharing of New York Post articles about the son of Democratic presidential nominee Joe Biden, saying it needed guidance from third-party fact-checkers who routinely vet content on the platform. On a platform with 1.8 billion daily users, however, making a rule banning content doesn’t mean that content always disappears from Facebook. “Facebook announces a lot of policy statements that sound great on paper, but there are serious concerns with their ability or willingness to enforce the rules as written,” said Evelyn Douek, a Harvard University lecturer and researcher at the Berkman Klein Center for Internet and Society who studies social-media companies’ efforts to regulate their users’ behavior.

  • Twitter Changes Course After Republicans Claim ‘Election Interference’

    October 16, 2020

    President Trump called Facebook and Twitter “terrible” and “a monster” and said he would go after them. Senators Ted Cruz and Marsha Blackburn said they would subpoena the chief executives of the companies for their actions. And on Fox News, prominent conservative hosts blasted the social media platforms as “monopolies” and accused them of “censorship” and election interference. On Thursday, simmering discontent among Republicans over the power that Facebook and Twitter wield over public discourse erupted into open acrimony. Republicans slammed the companies and baited them a day after the sites limited or blocked the distribution of an unsubstantiated New York Post article about Hunter Biden, the son of the Democratic presidential nominee, Joseph R. Biden Jr...Late Thursday, under pressure, Twitter said it was changing the policy that it had used to block the New York Post article and would now allow similar content to be shared, along with a label to provide context about the source of the information. Twitter said it was concerned that the earlier policy was leading to unintended consequences. Even so, the actions brought the already frosty relationship between conservatives and the companies to a new low point, less than three weeks before the Nov. 3 presidential election, in which the social networks are expected to play a significant role. It offered a glimpse at how online conversations could go awry on Election Day. And Twitter’s bob-and-weave in particular underlined how the companies have little handle on how to consistently enforce what they will allow on their sites. “There will be battles for control of the narrative again and again over coming weeks,” said Evelyn Douek, a lecturer at Harvard Law School who studies social media companies. “The way the platforms handled it is not a good harbinger of what’s to come.”

  • Facebook And Twitter Limit Sharing ‘New York Post’ Story About Joe Biden

    October 15, 2020

    Facebook and Twitter took action on Wednesday to limit the distribution of New York Post reporting with unconfirmed claims about Democratic presidential nominee Joe Biden, leading President Trump's campaign and allies to charge the companies with censorship. Both social media companies said the moves were aimed at slowing the spread of potentially false information. But they gave few details about how they reached their decisions, sparking criticism about the lack of clarity and consistency with which they apply their rules. The New York Post published a series of stories on Wednesday citing emails, purportedly sent by Biden's son Hunter, that the news outlet says it got from Trump's private lawyer, Rudy Giuliani, and former Trump adviser Steve Bannon...Facebook has been warning about the possibility of "hack and leak" operations, where stolen documents or other sensitive materials are strategically leaked — as happened in 2016 with hacked emails from the Democratic National Committee and Hillary Clinton's campaign. But the companies' moves on Wednesday drew criticism from some experts, who said Facebook and Twitter needed to more clearly explain their policies and how often they apply them. "This story is a microcosm of something that I think we can expect to happen a lot over the next few weeks and, I think, demonstrates why platforms having clear policies that they are prepared to stick to is really important," said Evelyn Douek, a Harvard Law School lecturer who studies the regulation of online speech. "It's really unclear if they have stepped in exceptionally in this case and, if they have, why they've done so," she said. "That inevitably leads to exactly the kind of outcry that we've seen, which is that they're doing it for political reasons and because they're biased."

  • Twitter’s answer to election misinformation: Make it harder to retweet

    October 13, 2020

    Twitter announced on Friday — less than 30 days ahead of the US election — that it’s enacting a series of significant changes in order to make it harder to spread election misinformation on its platform. It’s one of the most aggressive series of actions any social media company has taken yet to stop the spread of misinformation on their platforms... “As always, the big question for both platforms is around enforcement,” wrote Evelyn Douek, a researcher at Harvard Law School studying the regulation of online speech, in a message to Recode. “Will they be able to work quickly enough on November 3 and in the days following? So far, signs aren’t promising.” Twitter already has a policy of adding labels to misleading content that “may suppress participation or mislead people” about how to vote. But in recent cases when President Trump has tweeted misleading information about voting, it’s taken the platform several hours to add such labels. Facebook has similarly been criticized for its response time...Douek said that platforms “need to be moving much quicker and more comprehensively on actually applying their rules.” But, she added, if “introducing more friction is the only way to keep up with the content, then that’s what they should do.” The concept of “friction” to which Douek is referring is the idea of slowing down the spread of misinformation on social media to give fact-checkers more time to correct it. It’s also an ideal that many misinformation experts have long advocated. Overall, misinformation experts, including Douek, lauded Twitter for introducing friction by nudging users to think twice before sharing misleading content.

  • Yochai Benkler on Mass-Media Disinformation Campaigns

    October 8, 2020

    On this episode of Lawfare's Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Yochai Benkler, a professor at Harvard Law School and co-director of the Berkman Klein Center for Internet and Society. With only weeks until Election Day in the United States, there’s a lot of mis- and disinformation flying around on the subject of mail-in ballots. Discussions about addressing that disinformation often focus on platforms like Facebook or Twitter. But a new study by the Berkman Klein Center suggests that social media isn’t the most important part of mail-in ballot disinformation campaigns—rather, traditional mass media like news outlets and cable news are the main vector by which the Republican Party and the president have spread these ideas. So what’s the research behind this counterintuitive finding? And what are the implications for how we think about disinformation and the media ecosystem?

  • Should Big Tech Be Setting the Terms of Political Speech?

    October 5, 2020

    In the run-up to the US presidential election on November 3, digital platforms are releasing a number of new or updated policies related to disinformation, election advertising and content moderation. We asked five experts if big tech should be setting the terms of political speech. And if it does, how might this ad hoc and disjointed approach to platform governance impact democracy? ... Evelyn Douek, the Berkman Klein Center: “We are now firmly in a world of second or third or fourth bests. No one’s ideal plan is the current patchwork of hurriedly drafted policies written and enforced by unaccountable private actors with very little transparency or oversight. Nevertheless, here we are. So platforms should be as clear and open as possible about what they will do in the coming weeks and tie themselves to a mast. Comprehensive and detailed policies should not only be the basis for platform action but a shield for it, when inevitable charges of bias arise. Platforms have been talking tough on the need to remove misinformation about election integrity, and rightly so — it’s an area where relying on democratic accountability for false claims is especially inadequate, because the misinformation itself interferes with those accountability mechanisms. You can’t vote someone out if you’re scared or misled out of voting at all.” ... Dipayan Ghosh, the Berkman Klein Center: “The political discourse is increasingly moving online, and particularly to dominant digital platforms like Facebook and YouTube — we know that. Internet companies have variously enforced new policies — such as Facebook’s new restrictions against certain hateful ads, and Google’s limitations on the micro-targeting of political ads. These are half-measures: they are not enough. Dominant digital platforms should be liable for facilitating the dissemination of political advertising at segmented voting audiences. In the absence of such a policy, we will never diminish the disinformation problem — let alone the slate of related negative externalities that have been generated by the business models at the core of the consumer internet.”

  • Facebook’s long-awaited oversight board to launch before US election

    September 24, 2020

    The long-awaited Facebook Oversight Board, empowered to overrule some of the platform’s content moderation decisions, plans to launch in October, just in time for the US election. The board will be ready to hear appeals from Facebook users as well as cases referred by the company itself “as soon as mid- or late-October at the very latest, unless there are some major technical issues that come up”, said Julie Owono, one of the 20 initial members of the committee who were named in May, in an interview on Wednesday...The limits of the oversight board’s mandate have been a key point of controversy since the independent institution was proposed by Facebook’s chief executive, Mark Zuckerberg, in 2018. The board’s initial bylaws only allowed it to consider appeals from users who believe that individual pieces of content were unfairly removed, prompting criticism from experts, including Evelyn Douek, a lecturer at Harvard Law School who studies online speech regulation. “We were told this was going to be the supreme court of Facebook, but then it came out more like a local district court, and now it’s more of a traffic court,” Douek told the Guardian. “It’s just been steadily narrowed over time.” Crucial areas where Facebook exercises editorial control include the algorithms that shape what content receives the most distribution; decisions to take down or leave up Facebook groups, pages and events; and decisions to leave certain pieces of content up. The board would be considering “leave up” decisions as soon as it launched, Owono said, but only if Facebook referred a case to it. She said technical and privacy challenges had delayed the launch of a system for Facebook users to appeal “leave up” decisions, but that one would be available “as soon as possible”.

  • Mark Zuckerberg Says Facebook Doesn’t Want To Be The “Arbiter Of Truth.” Its Fact-Checkers And Employees Say It Already Is.

    August 13, 2020

    On May 8, Prager University, a nonprofit conservative media outlet, published a video on Facebook that incorrectly claimed “there is no evidence that CO2 emissions are the dominant factor” in climate change. Within days, Climate Feedback, a nonpartisan network of scientists and a member of Facebook’s global fact-checking partnership, rated the content as false — a designation that was supposed to result in serious consequences. It was PragerU’s second strike for false content that month, which under Facebook’s own policies should have triggered “repeat offender” penalties including the revocation of advertising privileges and the specter of possible deletion. But it didn't. As first reported by BuzzFeed News last week, a Facebook employee intervened on PragerU’s behalf and asked for a reexamination of the judgment, citing “partner sensitivity” and the amount of money the organization had spent on ads. Eventually, while the false labels on PragerU’s posts remained, Facebook disappeared the strikes from its internal record and no one — not the public, the fact-checkers, or Facebook’s own employees — was informed of the decision...Evelyn Douek, a lecturer at Harvard Law School, said that even though Facebook doesn’t want to be in the business of declaring what is true and false, it still makes a lot of choices in how it structures its policies and fact-checking program that leave it “in the driver seat.” “There will be a pretty big reckoning around fact-checking,” she said. “People don’t really understand it either and they see it as a panacea for problems on social media platforms.”

  • The Future of Free Speech Online May Depend on This Database

    August 13, 2020

    Last October, a neo-Nazi livestreamed his attack on a synagogue in Halle, Germany. The video of the shooting, which killed two people, stayed on Twitch for more than an hour before it was removed. That’s long enough for a recording to go viral—but it never did. While users downloaded it and passed it around less moderated platforms, such as Telegram, the recording was stopped in its tracks on the major platforms: Facebook, Twitter, YouTube. The reason, Vice reported, is that Twitch was quick to share digital fingerprints, or hashes, of the video and its copies with these platforms and others. All Twitch had to do was upload the hashes to the database of the Global Internet Forum to Counter Terrorism, or, as it’s been called, “the most underrated project in the future of free speech.” The GIFCT has gone largely unnoticed by the public since it was established in 2017 by Facebook, Microsoft, Twitter, and YouTube...The GIFCT’s structure typifies what Evelyn Douek, a doctoral student at Harvard Law School and affiliate at Harvard’s Berkman Klein Center for Internet and Society, has termed content cartels, or “arrangements between platforms to work together to remove content or actors from their services without adequate oversight.” Many of the problems with the GIFCT’s arrangement lie in its opacity. None of the content decisions are transparent, and researchers don’t have access to the hash database. As the legal scholar Daphne Keller recently laid out, the GIFCT sets the rules for “violent extremist” speech in private, so it defines what is and isn’t terrorist content without accountability. That’s a serious problem, in part because content moderation mistakes and biases are inevitable. The GIFCT may very well be blocking satire, reporting on terrorism, and documentation of human rights abuses.

  • Twitter Finally Cracked Down on QAnon—but There’s a Catch

    July 27, 2020

    The narrative of content moderation, especially over the past few months, goes something like this: Extremists and conspiracy theorists peddle misinformation and dangerous content, Twitter (or Facebook, or Reddit) cracks down on said content by removing the offending posts and accounts, onlookers largely commend the platform, and it’s on to the next group of baddies. This week, that target became QAnon, a group of pro-Trump conspiracy theorists who push fabrications about Satanist “deep state” elites who run a child sex trafficking ring while also plotting to overthrow the current administration...Yet the problem here is that Twitter’s plans—at least the ones available to the public—are rather vague, leaving the door open for confusion, inconsistent enforcement, and future content moderation debacles. “I get concerned when there’s sort of unquestioning praise for Twitter’s actions here, and it earns itself a good news cycle,” said Evelyn Douek, a doctoral student at Harvard Law School and affiliate at Harvard’s Berkman Klein Center for Internet and Society. She worries the move in the long term is detrimental to the project of pushing Twitter to become “more accountable and consistent in the way that they exercise their power.” There are two main places Twitter’s plans fall short. The first is that the platform, as a Twitter spokesperson told NBC News, has decided to classify QAnon behavior with a new, undefined designation: “coordinated harmful activity.” Twitter has yet to provide any information on what this term means or explain how it differs from its preexisting standards on harassment, abusive behavior, and violent groups. “We’re going to see a lot of things, I think, on Twitter that look coordinated and harmful, and we’re going to ask: Is this an example of this new designation?” said Douek. “And we don’t know—Twitter can just decide in the moment whether it is, and we can’t hold onto anything because we have absolutely no details.”

  • Twitter Brings Down the Banhammer on QAnon

    July 27, 2020

    An article by Evelyn Douek. Are the days of the Wild Wild Web over? In recent weeks, social media platforms have unveiled a series of high-profile enforcement actions and deplatformings. All the major platforms rolled out hardline policies against pandemic-related misinformation. Facebook banned hundreds of accounts, groups and pages associated with the boogaloo movement, Snap removed President Trump’s account from its promoted content and YouTube shut down several far-right channels, including that of former Ku Klux Klan leader David Duke. And the hits keep coming: most recently, on July 21, Twitter announced it was taking broad action against content related to the conspiracy theory QAnon. But however welcome Twitter’s response to QAnon may be, these actions do not signify a new era of accountability in content moderation. If anything, it’s a show of how powerful and unaccountable these companies are that they can change their policies in an instant and provide little by way of detail or explanation. Twitter’s announcement about QAnon content was indeed sweeping. More than 7,000 accounts were taken down, and another 150,000 were prevented from being promoted as “trending” on the site or as recommended accounts for people to follow. URLs “associated with” QAnon are now blocked from being shared on the platform. QAnon accounts immediately started trying to come up with ways to evade the ban, kicking off what is sure to be an ongoing game of cat-and-mouse, or moved to other networks.

  • Podcast: Brandi Collins-Dexter on COVID-19 misinformation and black communities

    July 15, 2020

    Brandi Collins-Dexter is the senior campaign director at the advocacy organization Color of Change and a visiting fellow at the Harvard Kennedy School of Government. Here, she speaks with Lawfare’s Quinta Jurecic and Evelyn Douek about her new report with the Shorenstein Center, “Canaries in the Coal Mine: COVID-19 Misinformation and Black Communities,” which follows the emergence and dissemination of coronavirus-related mis- and disinformation among Black social media users in the United States. They also discuss Color of Change’s role in the #StopHateForProfit Facebook ad boycott.

  • What Does “Coordinated Inauthentic Behavior” Actually Mean?

    July 6, 2020

    An article by Evelyn Douek. At the start of a hearing of the House Permanent Select Committee on Intelligence recently, Rep. Adam Schiff praised representatives from Facebook, Twitter, and Google for having “taken significant steps and invested resources to detect coordinated inauthentic behavior.” The comment passed by without note, as if “coordinated inauthentic behavior”—or CIB, as those really in the know call it—is the most natural thing in the world for tech companies to be rooting out and for members of Congress to be talking about. Such casual use of the phrase is remarkable when you remember that it was only invented, by Facebook itself, around two years ago. It’s more remarkable still once you know, as former Facebook chief security officer Alex Stamos told me on The Lawfare Podcast, that the company was going to call it “coordinated inauthentic activity” but thought it probably best to avoid the acronym CIA, showing the arbitrariness of how some terms of art get created. And perhaps what makes it most remarkable of all is that no one really knows what it means. Most commonly used when talking about foreign influence operations, the phrase sounds technical and objective, as if there’s an obvious category of online behavior that crosses a clearly demarcated line between OK and not OK. But a few recent examples show that’s far from the case. This lack of clarity matters because as the election season heats up, there’s going to be plenty of stuff online that will be varying degrees of coordinated and inauthentic, and as things stand, we’re leaving it to tech companies to tell us, without a lot of explanation, when something crosses over that magical line into CIB. That needs to change.

  • Podcast: Whitney Phillips and Ryan Milner on our polluted information environment

    July 6, 2020

    Whitney Phillips and Ryan Milner speak with Lawfare’s Evelyn Douek and Quinta Jurecic about their new book, You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape. Phillips is an assistant professor in communications and rhetorical studies at Syracuse University, and Milner is an associate professor of communication at the College of Charleston. Here, Phillips and Milner discuss their bird’s-eye, “ecological” approach to analyzing the online information environment, the role of “internet culture,” and challenges for journalists in understanding and reporting on that culture.