
People

Jonathan Zittrain

  • Is it time to swipe left on social media?

    October 12, 2021

    Leaked revelations about Instagram’s impact on teens have united Republicans and Democrats in considering legal reforms, say Harvard Law School scholars.

  • Facebook Blames ‘Faulty Configuration Change’ for Nearly Six-Hour Outage

    October 5, 2021

    Facebook Inc blamed a "faulty configuration change" for a nearly six-hour outage on Monday that prevented the company's 3.5 billion users from accessing its social media and messaging services such as WhatsApp, Instagram and Messenger. The company in a late Monday blog post did not specify who executed the configuration change and whether it was planned. Several Facebook employees who declined to be named had told Reuters earlier that they believed that the outage was caused by an internal mistake in how internet traffic is routed to its systems. ... “Facebook basically locked its keys in its car,” tweeted Jonathan Zittrain, director of Harvard's Berkman Klein Center for Internet & Society.

  • Big Tech’s not-so-secret plan to monopolize your home

    September 29, 2021

    ... I’ve been writing a lot recently about the price you face as a consumer and a citizen for being trapped in a Big Tech economy. Here’s one it’s not too late to stop: Letting tech giants make your smart home more dumb. Their monopolistic mind-set makes your home more complicated, leaves you less choice and less privacy, and already resulted in less-capable smart speakers. ... Asking the most powerful companies in history just to have “elbows that are less sharp” isn’t going to work, said Harvard Law Professor Jonathan Zittrain during the Senate’s June hearing. “They’re trying to compete and they owe their shareholders that duty. Let’s set up the rules so that they know how to play to the chalk, but not go beyond it.”

  • Oh, what a tangled web we weave

    July 7, 2021

    Deception spreads faster than truth on social media. Who — if anyone — should stop it?

  • Towards more interoperable ‘smart’ home devices

    June 16, 2021

    Professor Jonathan Zittrain ’95 appeared as a witness before the Senate Subcommittee on Competition Policy, Antitrust, and Consumer Rights on June 15 to discuss the current state of home technologies and antitrust.

  • What the ephemerality of the Web means for your hyperlinks

    May 21, 2021

    An article by John Bowers, Clare Stanton, and Jonathan Zittrain: Hyperlinks are a powerful tool for journalists and their readers. Diving deep into the context of an article is just a click away. But hyperlinks are a double-edged sword; for all of the internet’s boundlessness, what’s found on the Web can also be modified, moved, or entirely vanished. The fragility of the Web poses an issue for any area of work or interest that is reliant on written records. Loss of reference material, negative SEO impacts, and malicious hijacking of valuable outlinks are among the adverse effects of a broken URL. More fundamentally, it leaves articles from decades past as shells of their former selves, cut off from their original sourcing and context. And the problem goes beyond journalism. In a 2014 study, for example, researchers (including some on this team) found that nearly half of all hyperlinks in Supreme Court opinions led to content that had either changed since its original publication or disappeared from the internet. Hosts control URLs. When they delete a URL’s content, intentionally or not, readers find an unreachable website. This often irreversible decay of Web content is commonly known as linkrot. It is similar to the related problem of content drift, or the typically unannounced changes—retractions, additions, replacement—to the content at a particular URL.

  • What are NFTs (and Why Should We Care)?

    April 21, 2021

    The non-fungible token (NFT) craze, which took off in 2020, appears to continue unabated. NFTs are digital “certificates of authenticity” that attach to creations like songs, photos and sports clips, and they can command hefty prices. An NFT of digital artist Beeple’s work brought in $69 million at auction last month, and other NFTs are being sold for similarly eyebrow-raising sums. And demand is showing no sign of declining despite what law professor Jonathan Zittrain in a recent Atlantic piece calls “their abstraction, their seemingly arbitrary valuation, and...the paltriness of the privileges they convey to their owners.” We talk to Zittrain about the future of NFTs.

  • Should the internet be treated like a public utility?

    April 20, 2021

    At the annual Klinsky Lecture, Visiting Professor John G. Palfrey ’01, president of the MacArthur Foundation, says we need a regulatory regime for technology.

  • What Critics Don’t Understand About NFTs

    April 7, 2021

    An op-ed by Jonathan Zittrain and Will Marks: Long before cryptocurrency speculators got involved, art prices were capricious—as the British artist Banksy no doubt understands. Recently, the work “Game Changer,” which he delivered unsolicited to an English hospital last year, earned it $23.2 million at auction—about $20 million more than experts had predicted...Last month a company called Injective Protocol took the spirit of “Morons” to a new extreme: After purchasing one of 500 prints of that work for just under $100,000, the company scanned the print and then destroyed it. A copy of the resulting digital file was then placed on IPFS—a distributed data-storage network whose initials stand for interplanetary file system—for anyone to see. A “non-fungible token,” or NFT, that points to the work was exchanged for almost 230 units of a cryptocurrency called Ether, about $400,000. All things considered, the purchaser of that token might have been in on the joke rather than the butt of it: Some NFTs are selling for tens of millions of dollars. These high prices suggest that regulators may not be moving quickly enough to protect unsuspecting investors. Impulsively buying GameStop shares on Robinhood is risky enough—the equivalent of placing a long-shot Kentucky Derby bet because the horse had a cool name. Worse still is losing your money because you didn’t understand what a horse race was and thought your wager was actually buying a horse.

  • The Internet Doesn’t Have to Be Awful

    March 9, 2021

    To read the diary of Gustave de Beaumont, the traveling companion of Alexis de Tocqueville, is to understand just how primitive the American wilderness once seemed to visiting Frenchmen...If Tocqueville were to visit cyberspace, it would be as if he had arrived in pre-1776 America and found a people who were essentially powerless. We know alternatives are possible, because we used to have them. Before private commercial platforms definitively took over, online public-interest projects briefly flourished. Some of the fruits of that moment live on. In 2002, the Harvard Law professor Lawrence Lessig helped create the Creative Commons license, allowing programmers to make their inventions available to anyone online; Wikipedia—which for all the mockery once directed its way has emerged as a widely used and mostly unbiased source of information—still operates under one...All of that began to change with the mass-market arrival of smartphones and a shift in the tactics of the major platforms. What the Harvard Law professor Jonathan Zittrain calls the “generative” model of the internet—an open system in which anyone could introduce unexpected innovations—gave way to a model that was controlled, top-down, and homogeneous. The experience of using the internet shifted from active to passive; after Facebook introduced its News Feed, for example, users no longer simply searched the site but were provided a constant stream of information, tailored to what the algorithm thought they wanted to read.

  • Using a collective ‘virtuous cycle’ to break the pandemic

    March 3, 2021

    An op-ed by Isaac S. Kohane and Jonathan Zittrain: Medical schools teach students a four-part “virtuous cycle” in which one step positively reinforces the next: Assess the patient. Implement a therapeutic plan. Assess the patient’s response. Revise the therapeutic plan as needed. In an emergency department, this cycle can be completed in minutes. In the cancer clinic, it can take months. Mastering the virtuous cycle is understood to be a central measure of medical competence. Yet when the patient is not one person but an entire society, this cycle is fractured and ad hoc in ways that would make any patient demand a new doctor. We’ve all been witness to — and victims of — this failure in the pandemic. The superb accomplishments of therapeutic medicine cannot address the population-based issues that Covid-19 has raised. But we can use the virtuous cycle as a way to switch gears to employ approaches drawn from disciplines like public health. For the first step, assessment, doctors were unable to define the most basic clinical course of severe Covid-19, despite billions of dollars invested to achieve interoperable electronic health records over the past 30 years. It took clinicians and researchers months to identify the interplay of inflammation, coagulopathy, and cardiac dysfunction, and then only through a jury-rigged combination of conference calls and small studies shared through disparate nuggets of preprints.

  • The Cybersecurity 202 Network

    February 24, 2021

    The Network is a group of high-level digital security experts from across government, the private sector and security research community invited by The Washington Post to vote in surveys on the most pressing issues in the field. Our regular surveys will highlight insights from some of the most influential people in cybersecurity...Camille Francois is the research and analysis director at Graphika, where she leads a data science and analysis team focused on analyzing social media manipulations...Francois is a cybersecurity fellow at the New America Foundation and an affiliate at the Berkman Klein Center for Internet and Society at Harvard, where she pursues her work on mechanisms to establish peace and security in the face of cyber conflict...Matthew Olsen is the president and chief revenue officer at IronNet Cybersecurity. He previously served as director of the National Counterterrorism Center, general counsel for the National Security Agency and in a number of leadership positions at the Justice Department. Olsen teaches at Harvard Law School...Jonathan Zittrain: Harvard law and computer science professor and co-founder of the Berkman Klein Center for Internet and Society, Zittrain’s research interests include battles for control of digital property and content, cryptography, electronic privacy, the roles of intermediaries within Internet architecture, human computing and the deployment of technology in education.

  • Due Process

    February 17, 2021

    As recently as 10 years ago, Jeannie Suk Gersen was still telling people that the area of law she specialized in—sexual assault and domestic violence—didn’t hold much interest for the general public. A quiet corner of the profession, she thought. Remembering that now, she laughs. “But, you know,” she adds, “every area of law does end up moving into focus. Because, in the end, law is really about every aspect of our lives.” Which is partly why Gersen, J.D. ’02, has always taken it so seriously. “Words don’t just describe things,” she explains. In the law, “words actually do things.” ... “Jeannie is intellectually fearless,” says Bemis professor of international law Jonathan Zittrain. That’s a common sentiment among her colleagues... “There are a lot of people who are afraid to say things in our business,” says Learned Hand professor of law Jack Goldsmith, “and she’s not afraid to say what she thinks.” ... “Her whole response to Title IX has been very, very striking—and I think completely correct,” says Beneficial professor of law Charles Fried, who was Gersen’s teacher before he was her colleague ... Says her former teacher, Loeb University Professor emeritus Laurence Tribe, “I was always impressed by how both meticulous and yet unconventional her insights were. She would often come at issues in a kind of perpendicular way. Rather than finding a point between A and B, she would say that maybe that axis is the wrong axis.” ... “She has one of those amazing brains,” says Williams professor of law I. Glenn Cohen, who worked on the Harvard Law Review with Gersen. “She was a year ahead of me in law school, and we all regarded her more like a faculty member, even back then. She just seemed to know everything.”

  • Trump Isn’t the Only One on Trial. The Conservative Media Is, Too.

    February 9, 2021

    With the Senate’s impeachment trial starting oral arguments on Tuesday, Donald Trump now faces the possibility of real consequences for his role in inciting the Capitol siege of Jan. 6. But the apparatus that fed him much of his power — the conservative news media — is facing a test of its own. This might ultimately have a much bigger impact on the future of American politics than anything that happens to Mr. Trump as an individual. In recent weeks, two voting-technology companies have each filed 10-figure lawsuits against Mr. Trump’s lawyers and his allies in the media, claiming they spread falsehoods that did tangible harm. This comes amid an already-raging debate over whether to reform Section 230 of the Communications Decency Act, which prevents online companies from being held liable for the views expressed on their platforms...Jonathan Zittrain, a Harvard Law School professor who studies digital media, sees a sea change coming. In the early decades of the internet, he said, most legal discussions were guided by a question of “rights,” particularly the right to free speech under the First Amendment. But in recent years, a new interest in what he called “the public health framework” has taken hold. “Misinformation and extremism — particularly extremism that’s tied to violence — can result in harm,” Mr. Zittrain said. “Given that there are compelling things in both the rights framework and the health framework, there’s going to be a balance struck.”

  • How a Democratic plan to reform Section 230 could backfire

    February 8, 2021

    Over the last few years, Section 230 of the 1996 US Communications Decency Act has metamorphosed from a little-known subset of regulations about the internet into a major rallying point for both the right and left. So when Democrats unveiled their attempt to overhaul the law on Friday, the technology world took notice. There have been other suggestions for how to change Section 230, and many threats from President Trump while he was still in office—but the bill, announced on Friday by Senators Mark Warner, Mazie Hirono, and Amy Klobuchar, appears to be the most significant step yet toward genuinely reforming it...The problem of online abuse and misinformation became impossible to ignore over the last year, with harmful online conspiracy theories fueling the pandemic, and political lies threatening the election. That culminated in January, when the violent assault on the US Capitol was fanned by online groups and by Trump himself...The proposals are a “recipe for a bit of a mess” agrees Jonathan Zittrain, a professor of international law at Harvard Law School. He suggests that it may be more important to come up with common standards “to establish what is or isn’t actionable” to make sure that frivolous cases from ill-intentioned complainants do not get turned into vast, expensive lawsuits.

  • Gaining power, losing control

    January 29, 2021

    As we grapple with disinformation driving the recent attack on the U.S. Capitol and hundreds of thousands of deaths from a pandemic whose nature and mitigation is subject to heated dispute, social media companies are weighing how to respond to both the political and public health disinformation, or intentionally false information, that they can spread. These decisions haven’t taken place in a vacuum, says Jonathan Zittrain ’95, Harvard’s George Bemis Professor of International Law and Professor of Computer Science. Rather, he says, they’re part of a years-long trend from viewing digital governance first through a “Rights” framework, then through a “Public Health” framework, and, with the two irreconcilable, most immediately through a “Legitimacy” framework. Zittrain, co-founder of the Berkman Klein Center for Internet & Society, delivered his remarks as part of the first of two editions of the 2020 Tanner Lecture on Human Values at Clare Hall, Cambridge, a prestigious lecture series that advances and reflects upon how scholarly and scientific learning relates to human values. According to Zittrain, former President Donald Trump’s deplatforming from Twitter, Facebook, and YouTube, among others, is one of the most notable recent content moderation policy decisions — one which Facebook just referred for binding assessment by its new external content Oversight Board.

  • Gaining power, losing control

    January 28, 2021

    As the 2020 Tanner Lecturer on Human Values at Clare Hall, Cambridge, Harvard Law Professor Jonathan Zittrain explores the clash of free speech and public health online.

  • Impeachment Defends the Constitution and Bill of Rights

    January 14, 2021

    An op-ed by Jonathan Zittrain: The majority staff of the Judiciary Committee of the House of Representatives has issued a report to accompany the resolution for today’s second impeachment of President Donald Trump for incitement of insurrection. Earlier this week, my former colleague Alan Dershowitz argued in Newsweek that First Amendment protections against a criminal conviction for incitement to riot make impeachment over the president’s role in last week’s events at the Capitol unconstitutional. I want to explain why this claim carries no weight. Dershowitz wrote that Trump’s speech last week, “disturbing as it may have been—is within the core protection of political speech.” He pointed to Brandenburg v. Ohio, where the Supreme Court ruled that the government cannot prohibit speech unless it is specifically “directed to inciting or producing imminent lawless action” and “is likely to incite or produce such action.” (In Brandenburg, the Court found that a short speech by a Ku Klux Klan leader at an Ohio farm saying that there might have to be “some revengeance taken” and referencing a march to take place later in Washington, D.C., was protected by the First Amendment from a state charge of “criminal syndicalism” because any lawless action incited was not imminent.) There are a number of ways that the president’s speech, in which he told his supporters that “if you don’t fight like hell you’re not going to have a country anymore” and to march to Congress at that very moment in the hopes of disrupting the Electoral Vote count, differs from the facts of Brandenburg. But more importantly, the question at the moment isn’t whether the president could be charged with incitement to violence in criminal court. It’s whether the president can be impeached for his actions, both arising from the speech and from his actions (and inactions) as the crowd stormed the Capitol and he was implored to help.

  • The Lawfare Podcast: Jonathan Zittrain on the Great Deplatforming

    January 14, 2021

    Yesterday, January 13, the House of Representatives impeached President Trump a second time for encouraging the violent riot in the Capitol Building on January 6. And yet, the impeachment is probably less of a crushing blow to the president than something else that’s happened in recent days: the loss of his Twitter account. After a few very eventful weeks, Lawfare's Arbiters of Truth series on disinformation is back. Evelyn Douek and Quinta Jurecic spoke with Jonathan Zittrain, the George Bemis Professor of International Law at Harvard Law School, about the decision by Twitter, Facebook and a whole host of other platforms to ban the president in the wake of the Capitol riot. Jonathan, Evelyn and Quinta take a step back and situate what’s happening within the broader story of internet governance. They talked about how to understand the bans in the context of the internet’s now not-so-brief history, how platforms make these decisions and, of course, Section 230 of the Communications Decency Act. Listeners might also be interested in Zittrain's February 2020 Tanner Lecture, "Between Suffocation and Abdication: Three Eras of Governing Digital Platforms," which touches on some of the same ideas discussed in the podcast.

  • 2020 in pictures

    January 5, 2021

    A look back at the year at HLS.

  • On the bookshelf

    December 15, 2020

    In the unusual year of 2020, Harvard Law authors continued to do what they always have: Write.

  • Imagine a world in which AI is in your home, at work, everywhere

    December 7, 2020

    Imagine a robot trained to think, respond, and behave using you as a model. Now imagine it assuming one of your roles in life, at home or perhaps at work. Would you trust it to do the right thing in a morally fraught situation? That’s a question worth pondering as artificial intelligence increasingly becomes part of our everyday lives, from helping us navigate city streets to selecting a movie or song we might enjoy — services that have gotten more use in this era of social distancing. It’s playing an even larger cultural role with its use in systems for elections, policing, and health care...Jonathan Zittrain, Berkman Klein Center director and Harvard Law School’s George Bemis Professor of International Law, said it’s important that we think hard about whether and where AI’s adoption might be a good thing and, if so, how best to proceed. Zittrain said that AI’s spread is different from prior waves of technology because AI shifts the decision-making away from humans and toward machines and their programmers. And, where technologies like the internet were developed largely in the open by government and academia, the frontiers of AI are being pushed forward largely by private corporations, which shield the business secrets of how their technologies operate. The effort to understand AI’s place in society and its impact on all of our lives — not to mention whether and how it should be regulated — will take input from all areas of society, Zittrain said. “It’s almost a social-cultural question, and we have to recognize that, with the issues involved, it’s everybody’s responsibility to think it through,” Zittrain said.

  • ‘The Connected Parent’ offers guidance, insight into digital parenting

    November 16, 2020

    “The Connected Parent,” a new book by John Palfrey ’01 and Urs Gasser LL.M. ’03, is a practical guide for addressing concerns brought on by the COVID-19 pandemic and navigating an increasingly digital world.

  • Simulating responses to election disinformation

    October 14, 2020

    In an effort to combat multiple potential vectors of attack on the 2020 U.S. election, two Berkman Klein Center affiliates have published a package of “tabletop exercises,” freely available to decisionmakers and the public to simulate realistic scenarios in which disinformation threatens to disrupt the 2020 election.

  • ‘We need to be more imaginative about cybersecurity than we are right now’

    October 7, 2020

    In the “good old days” of cybersecurity risk, we only had to worry about being hacked or downloading malware. But the stakes have ramped up considerably in the past decade, say Berkman Klein directors James Mickens and Jonathan Zittrain.

  • The law is ‘tested and illuminated during this pandemic’

    September 16, 2020

    In the first colloquium of a sweeping new series, “COVID-19 and the Law,” five Harvard Law faculty members grappled with the challenges, limitations, and opportunities of governmental powers during a public health crisis.

  • “Column” launches to modernize public notices

    September 9, 2020

    A group of prominent media veterans are advising a team of millennials who are launching a new company called "Column," which modernizes the placement of public notices. Why it matters: Public notices have been one of the biggest and most reliable revenue streams for local newspapers for centuries. Amid the pandemic, they are becoming more important to local papers that are seeing regular local advertising dry up. Catch up quick: Public notices are legally required updates from the government to citizens about different types of legal or regulatory proceedings, like ordinances, foreclosures, municipal budgets, adoptions, and public meetings. Like obituaries, the exact amount of revenue they deliver to the newspaper industry is debated, but they've long been considered a critical part of local newspaper businesses, due to a longstanding legal requirement for local governments to place them in newspapers...Details: The company has a full-time team of nine, and is supported and advised by many notable media experts, including David Chavern, CEO of the News Media Alliance and Nancy Gibbs, the faculty director of Harvard's Shorenstein Center on Media, Politics and Public Policy, and the former editor in chief of Time Magazine. It receives occasional informal advice from Marty Baron, executive editor of the Washington Post. The company is advised by several Harvard professors and entrepreneurs, including Jonathan Zittrain, co-founder of the Berkman Klein Center for Internet & Society at Harvard Law School, and Henry Ward, founder and CEO of Carta.

  • Are We Already Living in a Tech Dystopia?

    September 8, 2020

    For the most part, fictional characters rarely recognize when they’re trapped in a dystopia...To them, that dystopia is just life. Which suggests that—were we, at this moment, living in a dystopia ourselves—we might not even notice it...Is this, right here, the tech-dystopia we were worried about? For this week’s Giz Asks, we reached out to a number of experts with differing opinions...Jonathan Zittrain: "Yes, in the sense that so many of us feel rightly that instead of empowering us, technology is used against us—most especially when it presents itself as merely here to help. I’ve started thinking of some of our most promising tech, including machine learning, as like asbestos: it’s baked wholesale into other products and services so we don’t even know we’re using it; it becomes pervasive because it seems to work so well, and without any overlay of public interest on its installation; it’s really hard to account for, much less remove, once it’s in place; and it carries with it the possibility of deep injury both now and down the line. I’m not anti-tech. But I worry greatly about the deployment of such power with so little thought given to, and so few boundaries against, its misuse, whether now or later. More care and public-minded oversight goes into someone’s plans for an addition to a house than to what can be or become a multi-billion dollar, multi-billion-user online platform. And while thanks to their power, and the trust placed in them by their clients, we recognize structural engineers, architects, lawyers, and doctors as members of learned professions—with duties to their clients and to the public that transcend a mere business agreement—we have yet to see that applied to people in data science, software engineering, and related fields who might be in a position to recognize and prevent harm as it coalesces. There are ways out of this. But it first requires abandoning the resignation that so many have come to about business as usual."

  • HLS librarians, remote but not distant: ‘We’re here for you’

    August 13, 2020

    Since going remote in March, the Harvard Law School Library has reimagined what it means to provide services to the HLS community.

  • Testing Is on the Brink of Paralysis. That’s Very Bad News.

    July 17, 2020

    An article by Margaret Bourdeaux, Beth Cameron and Jonathan Zittrain: As Covid-19 cases surge to their highest levels in dozens of states, the nation’s testing effort is on the brink of paralysis because of widespread delays in getting back results. And that is very bad news, because even if testing is robust, the pandemic cannot be controlled without rapid results. This is the latest failure in our national response to the worst pandemic in a century. Since the Trump administration has abdicated responsibility, governors must join forces to meet this threat before the cataclysm that Florida is experiencing becomes the reality across the country. Testing should be the governors’ first order of business. Despite President Trump’s boast early this month that testing “is so massive and so good,” the United States’ two largest commercial testing companies, Quest Diagnostics and LabCorp, have found themselves overwhelmed and unable to return results promptly. Delays averaging a week or longer for all but top-priority hospital patients and symptomatic health care workers are disastrous for efforts to slow the spread of the virus. Without rapid results, it is impossible to isolate new infections quickly enough to douse flare-ups before they grow. Slow diagnosis incapacitates contact tracing, which entails not only isolating those who test positive but also alerting the infected person’s contacts quickly so they can quarantine, too, and avoid exposing others to the virus unwittingly. Among those who waited an absurdly long time for her results was the mayor of Atlanta, Keisha Lance Bottoms. “We FINALLY received our test results taken 8 days before,” she tweeted last week. “One person in my house was positive then. By the time we tested again, 1 week later, 3 of us had COVID. If we had known sooner, we would have immediately quarantined.”

  • Twitter’s Least-Bad Option for Dealing With Donald Trump

    June 26, 2020

    An article by Jonathan Zittrain: On Tuesday, President Donald Trump began his day as he usually does—by tweeting. In this case, Trump fired off a threat of using “serious force” against hypothetical protesters setting up an “autonomous zone” in Washington, D.C. Twitter, in response, hid the tweet but did not delete it, requiring readers to click through a notice that says the tweet violated the platform’s policy “against abusive behavior, specifically, the presence of a threat of harm against an identifiable group.” Twitter’s placement of such a “public interest notice” on a tweet from the president of the United States was just the latest salvo in the company’s struggle to contend with Trump’s gleefully out-of-bounds behavior. But any response from Twitter is going to be the least bad option rather than a genuinely good one. This is because Trump himself has demolished the norms that would make a genuinely good response possible in the first place. The truth is that every plausible configuration of social media in 2020 is unpalatable. Although we don’t have consensus about what we want, no one would ask for what we currently have: a world in which two unelected entrepreneurs are in a position to monitor billions of expressions a day, serve as arbiters of truth, and decide what messages are amplified or demoted. This is the power that Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg have, and they may well experience their own discomfort with it. Nor, though, would many of us wish for such powerful people to stand idly by when, at no risk to themselves, they could intervene to prevent misery and violence in the physical world, by, say, helping to counter dangerous misinformation or preventing the incitement of violence.

  • Is Digital Contact Tracing Over Before It Began?

    June 26, 2020

An article by Jonathan Zittrain: Last month I wrote a short essay covering some of the issues around standing up contact tracing across the U.S., as part of a test/trace/quarantine regime that would accompany the ending of a general lockdown to prevent the spread of the coronavirus pandemic...In the intervening month, some things have remained the same. As before, tech companies and startups continue to develop exposure notification apps and frameworks. And there remains no federally coordinated effort to test, trace, and isolate — it’s up to states and respective municipalities to handle anything that will happen. Some localities continue to spin up ambitious contact tracing programs, while others remain greatly constrained. As Margaret Bourdeaux explains, for example: “In Massachusetts, many of the 351 local boards of health are unaccredited, and most have only the most rudimentary digital access to accomplish the most basic public health goals of testing and contact tracing in their communities.” She cites Georgetown’s Alexandra Phelan: “Truly the amount of US COVID19 response activities that rely solely on the fax machine would horrify you.” There remain any number of well-considered plans that depend on a staged, deliberate reopening based on testing, tracing, and supported isolation, such as ones from Harvard’s Safra Center (“We need to massively scale-up testing, contact tracing, isolation, and quarantine — together with providing the resources to make these possible for all individuals”), the Center for American Progress (calling for “instantaneous contact tracing and isolation of individuals who were in close proximity to a positive case”), and the American Enterprise Institute (“We need to harness the power of technology and drive additional resources to our state and local public-health departments, which are on the front lines of case identification and contact tracing”).

  • João Marinotti ’20

    João Marinotti ’20 wants to know how the world works

    May 27, 2020

“I’ve always had a passion for engaging in my curiosity,” says João Marinotti ’20, a linguist turned lawyer whose work focuses on sustainability, business, property, and private law.

  • Entering the Minefield of Digital Contact Tracing

    May 26, 2020

    An article by Jonathan Zittrain: People across America and the world remain under strong advisories or outright orders to shelter in place, and economies largely shut down, as part of an ongoing effort to flatten the curve of the most virulent pandemic since 1918. The economic effects have been predictably staggering, with no clear end in sight. Until a vaccine or other transformative medical intervention is developed, the broad consensus of experts is that the only way out of mass sheltering in place, if hospital occupancy curves are to remain flattened, entails waiting for most of the current cases to resolve, and then cautiously and incrementally reopening. That would mean a sequence of allowing people out; promptly testing anyone showing symptoms — and even some who are not; identifying recent proximate contacts of those who test positive; and then getting in touch with those contacts and, if circumstances dictate, asking or demanding that they individually shelter until the disease either manifests or not. The idea is to promptly prune branches of further disease transmission in order to keep its reproductive factor non-exponential.

  • Tree with red leaves

    Summations: Reflections from the Class of 2020

    May 20, 2020

    Members of the Class of 2020 reflect on their interests and share experiences they will take from their time at Harvard Law.

  • A start-up is using photos to ID you. Big tech can stop it from happening again.

    April 16, 2020

An article by Jonathan Zittrain and John Bowers: Earlier this year, the public learned that a tiny start-up called Clearview AI was offering a big service. Clearview subscribers could give the company a photo of someone they had just taken and get links to other photos of the same person, often revealing information like who they are and where they live. A little tweaking and the service might simply identify people over any live feed aimed at any street, hallway or classroom. Though it has been marketed as a one-stop warrantless law enforcement tool, Clearview’s client list is also reported to include casinos, gyms, supermarkets, sporting leagues and wealthy parents curious about their kids’ dates. The upshot? The fundamental comfort — and liberty — of being able to walk down a street or enter a supermarket or stadium without the authorities, or fellow strangers, immediately knowing who you are is about to evaporate without any public debate about whether that’s okay. It’s as if someone invented glasses that could see through walls, sold them to a select few, and everyone else inexplicably shrugged. Now, the Wall Street Journal reports that Clearview AI is “in discussions with state agencies about using its technology to track patients infected by the coronavirus, according to people familiar with the matter.” It’s a savvy move, aimed at turning a rogue actor into a hero.

  • Cyberlaw Clinic turns 20

    April 9, 2020

    It was 1999 and the dot-com bubble was about to burst. Corporations were scrambling to address new legal challenges online. Napster was testing the music industry. And at Harvard Law School, the Berkman Klein Center was creating a clinical teaching program specializing in cyberlaw.

  • An Ambitious Reading of Facebook’s Content Regulation White Paper

    March 9, 2020

    An article by John Bowers and Jonathan Zittrain: Corporate pronouncements are usually anodyne. And at first glance one might think the same of Facebook’s recent white paper, authored by Monika Bickert, who manages the company’s content policies, offering up some perspectives on the emerging debate around governmental regulation of platforms’ content moderation systems. After all, by the paper’s own terms it’s simply offering up some questions to consider rather than concrete suggestions for resolving debates around platforms’ treatment of such things as anti-vax narratives, coordinated harassment, and political disinformation. But a careful read shows it to be a helpful document, both as a reflection of the contentious present moment around online speech, and because it takes seriously some options for “content governance” that–if pursued fully–would represent a moonshot for platform accountability premised on the partial but substantial, and long-term, devolution of Facebook’s policymaking authority.

  • Mike Bloomberg tweeted a doctored debate video. Is it political spin or disinformation?

    February 21, 2020

    Following his lackluster performance in Wednesday’s Democratic presidential debate, former New York Mayor Mike Bloomberg tweeted out a doctored video that made it look like he had a hugely successful moment on the debate stage, even though he didn’t. ... Take what happened earlier this month: Trump tweeted out a video that had been edited to make it look like Speaker of the House Nancy Pelosi was ripping up the president’s State of the Union speech during touching moments, such as the introduction of a Tuskegee airman. That’s not what transpired: Pelosi did rip up the speech, but only at the end of the full address. Jonathan Zittrain, a legal expert at Harvard, argues that tweet shouldn’t be taken down, even though it’s misleading, because it’s protected by free speech. “It’s political expression that could be said to be rearranging the video sequence in order to make a point that ripping up the speech at the end was, in effect, ripping up every topic that the speech had covered,” he wrote on Medium on February 10. “And to show it in a video conveys a message far more powerful than just saying it — something First Amendment values protect and celebrate, at least if people aren’t mistakenly thinking it is real,” Zittrain wrote.

  • A man of letters: The Antonin Scalia Collection opens at Harvard Law School

    February 11, 2020

    The Harvard Law School Library has announced the public release of the first batch of papers and other items from the Antonin Scalia Collection. His papers were donated by the Scalia family following the influential justice's death in 2016.

  • A World Without Privacy Will Revive the Masquerade

    February 11, 2020

An article by Jonathan Zittrain: Twenty years ago at a Silicon Valley product launch, Sun Microsystems CEO Scott McNealy dismissed concern about digital privacy as a red herring: “You have zero privacy anyway. Get over it.” “Zero privacy” was meant to placate us, suggesting that we have a fixed amount of stuff about ourselves that we’d like to keep private. Once we realized that stuff had already been exposed and, yet, the world still turned, we would see that it was no big deal. But what poses as unsentimental truth telling isn’t cynical enough about the parlous state of our privacy. That’s because the barrel of privacy invasion has no bottom. The rallying cry for privacy should begin with the strangely heartening fact that it can always get worse. Even now there’s something yet to lose, something often worth fiercely defending. For a recent example, consider Clearview AI: a tiny, secretive startup that became the subject of a recent investigation by Kashmir Hill in The New York Times.

  • The Video Trump Shared Of Pelosi Isn’t Real. Here’s Why Twitter And Facebook Should Leave It Up Anyway

    February 11, 2020

An article by Jonathan Zittrain: Last week, Speaker Nancy Pelosi famously ripped up her copy of President Donald Trump's State of the Union address on camera after he finished delivering it. Later, the president retweeted a video based on it. The video the president retweeted (and pinned) had been edited to appear as though the speaker had been ripping up pages throughout the speech, as if reacting contemptuously to each American credited by name, like Tuskegee Airman Charles McGee. An official from the speaker's office has publicly sought to have Facebook and Twitter take down the video, since it's not depicting something real. So should Twitter and Facebook take it down? As a starting point for thinking about this, it helps to know that the video isn't legally actionable. It's political expression that could be said to be rearranging the video sequence in order to make a point that ripping up the speech at the end was, in effect, ripping up every topic that the speech had covered.

  • Pelosi Clashes With Facebook and Twitter Over Video Posted by Trump

    February 10, 2020

    Facebook and Twitter have rejected a request by Speaker Nancy Pelosi to remove a video posted by President Trump that was edited to make it appear as though she were ripping a copy of his State of the Union address as he honored a Tuskegee airman and other guests. The decision highlighted the tension between critics who want social media platforms to crack down on the spread of misinformation and others who argue that political speech should be given wide latitude, even if it’s deceptive or false...The video isn’t legally actionable and shouldn’t be taken down, said Jonathan L. Zittrain, a Harvard Law School professor and a founder of the Berkman Klein Center for Internet and Society. But, he said, Facebook and Twitter should probably label the video. “It’s important for social media sites that have massive reach to make and enforce policies concerning manipulated content, rather than abdicating all responsibility,” Professor Zittrain said. Labeling is helpful, he added, because “even something that to most people clearly appears to be satire can be taken seriously by others.”

  • Shedding light on fraudulent takedown notices

    December 13, 2019

    Every day, companies like Google remove links to online content in response to court orders, influencing the Internet search results we see. But what happens if bad actors deliberately falsify and submit court documents requesting the removal of content? Research using the Berkman Klein Center for Internet & Society’s Lumen database shows the problem is larger than previously understood. ... “From its inception and through its evolution, Lumen has played a foundational role in helping us to understand what’s behind what we see — and don’t see — online,” says Jonathan Zittrain ’95, the Berkman Klein Center’s faculty director, who worked with Wendy Seltzer to get the fledgling project off the ground in 2000.

  • Lumen Homepage

    Shedding light on fraudulent takedown notices

    December 12, 2019

    What happens if bad actors deliberately falsify and submit court documents requesting the removal of content? Research using the Berkman Klein Center for Internet & Society’s Lumen database shows the problem is larger than previously understood.

  • Building a More Honest Internet

    November 26, 2019

    Over the course of a few short years, a technological revolution shook the world. New businesses rose and fell, fortunes were made and lost, the practice of reporting the news was reinvented, and the relationship between leaders and the public was thoroughly transformed, for better and for worse. The years were 1912 to 1927 and the technological revolution was radio...Those models, and the ways they shaped the societies from which they emerged, offer a helpful road map as we consider another technological revolution: the rise of the commercial internet...Facebook and other companies have pioneered sophisticated methods of data collection that allow ads to be precisely targeted to individual people’s consumer habits and preferences...When Facebook users were shown that up to six of their friends had voted, they were 0.39 percent more likely to vote than users who had seen no one vote. While the effect is small, Harvard Law professor Jonathan Zittrain observed that even this slight push could influence an election—Facebook could selectively mobilize some voters and not others. Election results could also be influenced by both Facebook and Google if they suppressed information that was damaging to one candidate or disproportionately promoted positive news about another.

  • How Google Interferes With Its Search Algorithms and Changes Your Results

    November 18, 2019

    Every minute, an estimated 3.8 million queries are typed into Google, prompting its algorithms to spit out results for hotel rates or breast-cancer treatments or the latest news about President Trump. They are arguably the most powerful lines of computer code in the global economy, controlling how much of the world accesses information found on the internet, and the starting point for billions of dollars of commerce. ... The company states in a Google blog, “We do not use human curation to collect or arrange the results on a page.” It says it can’t divulge details about how the algorithms work because the company is involved in a long-running and high-stakes battle with those who want to profit by gaming the system. ... Jonathan Zittrain, a Harvard Law School professor and faculty director of the Berkman Klein Center for Internet & Society, said Google has poorly defined how often or when it intervenes on search results. The company’s argument that it can’t reveal those details because it is fighting spam “seems nuts,” said Mr. Zittrain. “That argument may have made sense 10 or 15 years ago but not anymore,” he said. “That’s called ‘security through obscurity,’ ” a reference to the now-unfashionable engineering idea that systems can be made more secure by restricting information about how they operate.