Jonathan Zittrain

  • “Column” launches to modernize public notices

    September 9, 2020

    A group of prominent media veterans is advising a team of millennials who are launching a new company called "Column," which modernizes the placement of public notices.

    Why it matters: Public notices have been one of the biggest and most reliable revenue streams for local newspapers for centuries. Amid the pandemic, they are becoming more important to local papers that are seeing regular local advertising dry up.

    Catch up quick: Public notices are legally required updates from the government to citizens about different types of legal or regulatory proceedings, like ordinances, foreclosures, municipal budgets, adoptions, and public meetings. Like obituaries, the exact amount of revenue they deliver to the newspaper industry is debated, but they've long been considered a critical part of local newspaper businesses, due to a longstanding legal requirement for local governments to place them in newspapers...

    Details: The company has a full-time team of nine, and is supported and advised by many notable media experts, including David Chavern, CEO of the News Media Alliance, and Nancy Gibbs, the faculty director of Harvard's Shorenstein Center on Media, Politics and Public Policy and former editor in chief of Time Magazine. It receives occasional informal advice from Marty Baron, executive editor of the Washington Post. The company is also advised by several Harvard professors and entrepreneurs, including Jonathan Zittrain, co-founder of the Berkman Klein Center for Internet & Society at Harvard Law School, and Henry Ward, founder and CEO of Carta.

  • Are We Already Living in a Tech Dystopia?

    September 8, 2020

    For the most part, fictional characters rarely recognize when they’re trapped in a dystopia... To them, that dystopia is just life. Which suggests that—were we, at this moment, living in a dystopia ourselves—we might not even notice it... Is this, right here, the tech-dystopia we were worried about? For this week’s Giz Asks, we reached out to a number of experts with differing opinions...

    Jonathan Zittrain: "Yes, in the sense that so many of us feel rightly that instead of empowering us, technology is used against us—most especially when it presents itself as merely here to help. I’ve started thinking of some of our most promising tech, including machine learning, as like asbestos: it’s baked wholesale into other products and services so we don’t even know we’re using it; it becomes pervasive because it seems to work so well, and without any overlay of public interest on its installation; it’s really hard to account for, much less remove, once it’s in place; and it carries with it the possibility of deep injury both now and down the line. I’m not anti-tech. But I worry greatly about the deployment of such power with so little thought given to, and so few boundaries against, its misuse, whether now or later. More care and public-minded oversight goes into someone’s plans for an addition to a house than to what can be or become a multi-billion dollar, multi-billion-user online platform. And while thanks to their power, and the trust placed in them by their clients, we recognize structural engineers, architects, lawyers, and doctors as members of learned professions—with duties to their clients and to the public that transcend a mere business agreement—we have yet to see that applied to people in data science, software engineering, and related fields who might be in a position to recognize and prevent harm as it coalesces. There are ways out of this. But it first requires abandoning the resignation that so many have come to about business as usual."

  • HLS librarians, remote but not distant: ‘We’re here for you’

    August 13, 2020

    Since going remote in March, the Harvard Law School Library has reimagined what it means to provide services to the HLS community.

  • Testing Is on the Brink of Paralysis. That’s Very Bad News.

    July 17, 2020

    An article by Margaret Bourdeaux, Beth Cameron and Jonathan Zittrain: As Covid-19 cases surge to their highest levels in dozens of states, the nation’s testing effort is on the brink of paralysis because of widespread delays in getting back results. And that is very bad news, because even if testing is robust, the pandemic cannot be controlled without rapid results. This is the latest failure in our national response to the worst pandemic in a century. Since the Trump administration has abdicated responsibility, governors must join forces to meet this threat before the cataclysm that Florida is experiencing becomes the reality across the country. Testing should be the governors’ first order of business. Despite President Trump’s boast early this month that testing “is so massive and so good,” the United States’ two largest commercial testing companies, Quest Diagnostics and LabCorp, have found themselves overwhelmed and unable to return results promptly. Delays averaging a week or longer for all but top-priority hospital patients and symptomatic health care workers are disastrous for efforts to slow the spread of the virus. Without rapid results, it is impossible to isolate new infections quickly enough to douse flare-ups before they grow. Slow diagnosis incapacitates contact tracing, which entails not only isolating those who test positive but also alerting the infected person’s contacts quickly so they can quarantine, too, and avoid exposing others to the virus unwittingly. Among those who waited an absurdly long time for her results was the mayor of Atlanta, Keisha Lance Bottoms. “We FINALLY received our test results taken 8 days before,” she tweeted last week. “One person in my house was positive then. By the time we tested again, 1 week later, 3 of us had COVID. If we had known sooner, we would have immediately quarantined.”

  • Twitter’s Least-Bad Option for Dealing With Donald Trump

    June 26, 2020

    An article by Jonathan Zittrain: On Tuesday, President Donald Trump began his day as he usually does—by tweeting. In this case, Trump fired off a threat of using “serious force” against hypothetical protesters setting up an “autonomous zone” in Washington, D.C. Twitter, in response, hid the tweet but did not delete it, requiring readers to click through a notice that says the tweet violated the platform’s policy “against abusive behavior, specifically, the presence of a threat of harm against an identifiable group.” Twitter’s placement of such a “public interest notice” on a tweet from the president of the United States was just the latest salvo in the company’s struggle to contend with Trump’s gleefully out-of-bounds behavior. But any response from Twitter is going to be the least bad option rather than a genuinely good one. This is because Trump himself has demolished the norms that would make a genuinely good response possible in the first place. The truth is that every plausible configuration of social media in 2020 is unpalatable. Although we don’t have consensus about what we want, no one would ask for what we currently have: a world in which two unelected entrepreneurs are in a position to monitor billions of expressions a day, serve as arbiters of truth, and decide what messages are amplified or demoted. This is the power that Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg have, and they may well experience their own discomfort with it. Nor, though, would many of us wish for such powerful people to stand idly by when, at no risk to themselves, they could intervene to prevent misery and violence in the physical world, by, say, helping to counter dangerous misinformation or preventing the incitement of violence.

  • Is Digital Contact Tracing Over Before It Began?

    June 26, 2020

    An article by Jonathan Zittrain: Last month I wrote a short essay covering some of the issues around standing up contact tracing across the U.S., as part of a test/trace/quarantine regime that would accompany the ending of a general lockdown to prevent the spread of the coronavirus pandemic... In the intervening month, some things have remained the same. As before, tech companies and startups continue to develop exposure notification apps and frameworks. And there remains no federally coordinated effort to test, trace, and isolate — it’s up to states and their respective municipalities to handle anything that will happen. Some localities continue to spin up ambitious contact tracing programs, while others remain greatly constrained. As Margaret Bourdeaux explains, for example: “In Massachusetts, many of the 351 local boards of health are unaccredited, and most have only the most rudimentary digital access to accomplish the most basic public health goals of testing and contact tracing in their communities.” She cites Georgetown’s Alexandra Phelan: “Truly the amount of US COVID19 response activities that rely solely on the fax machine would horrify you.” There remain any number of well-considered plans that depend on a staged, deliberate reopening based on testing, tracing, and supported isolation, such as ones from Harvard’s Safra Center (“We need to massively scale-up testing, contact tracing, isolation, and quarantine — together with providing the resources to make these possible for all individuals”), the Center for American Progress (calling for “instantaneous contact tracing and isolation of individuals who were in close proximity to a positive case”), and the American Enterprise Institute (“We need to harness the power of technology and drive additional resources to our state and local public-health departments, which are on the front lines of case identification and contact tracing”).

  • João Marinotti ’20 wants to know how the world works

    May 27, 2020

    “I’ve always had a passion for engaging in my curiosity,” says João Marinotti ’20, a linguist turned lawyer whose work focuses on sustainability, business, property, and private law.

  • Entering the Minefield of Digital Contact Tracing

    May 26, 2020

    An article by Jonathan Zittrain: People across America and the world remain under strong advisories or outright orders to shelter in place, and economies largely shut down, as part of an ongoing effort to flatten the curve of the most virulent pandemic since 1918. The economic effects have been predictably staggering, with no clear end in sight. Until a vaccine or other transformative medical intervention is developed, the broad consensus of experts is that the only way out of mass sheltering in place, if hospital occupancy curves are to remain flattened, entails waiting for most of the current cases to resolve, and then cautiously and incrementally reopening. That would mean a sequence of allowing people out; promptly testing anyone showing symptoms — and even some who are not; identifying recent proximate contacts of those who test positive; and then getting in touch with those contacts and, if circumstances dictate, asking or demanding that they individually shelter until the disease either manifests or not. The idea is to promptly prune branches of further disease transmission in order to keep its reproductive factor non-exponential.

  • Summations: Reflections from the Class of 2020

    May 20, 2020

    Members of the Class of 2020 reflect on their interests and share experiences they will take from their time at Harvard Law.

  • A start-up is using photos to ID you. Big tech can stop it from happening again.

    April 16, 2020

    An article by Jonathan Zittrain and John Bowers: Earlier this year, the public learned that a tiny start-up called Clearview AI was offering a big service. Clearview subscribers could give the company a photo they had just taken of someone and get links to other photos of the same person, often revealing information like who they are and where they live. A little tweaking and the service might simply identify people over any live feed aimed at any street, hallway or classroom. Though it has been marketed as a one-stop warrantless law enforcement tool, Clearview’s client list is also reported to include casinos, gyms, supermarkets, sporting leagues and wealthy parents curious about their kids’ dates. The upshot? The fundamental comfort — and liberty — of being able to walk down a street or enter a supermarket or stadium without the authorities, or fellow strangers, immediately knowing who you are is about to evaporate without any public debate about whether that’s okay. It’s as if someone invented glasses that could see through walls, sold them to a select few, and everyone else inexplicably shrugged. Now, the Wall Street Journal reports that Clearview AI is “in discussions with state agencies about using its technology to track patients infected by the coronavirus, according to people familiar with the matter.” It’s a savvy move, aimed at turning a rogue actor into a hero.

  • Cyberlaw Clinic turns 20

    April 9, 2020

    It was 1999 and the dot-com bubble was about to burst. Corporations were scrambling to address new legal challenges online. Napster was testing the music industry. And at Harvard Law School, the Berkman Klein Center was creating a clinical teaching program specializing in cyberlaw.

  • An Ambitious Reading of Facebook’s Content Regulation White Paper

    March 9, 2020

    An article by John Bowers and Jonathan Zittrain: Corporate pronouncements are usually anodyne. And at first glance one might think the same of Facebook’s recent white paper, authored by Monika Bickert, who manages the company’s content policies, offering up some perspectives on the emerging debate around governmental regulation of platforms’ content moderation systems. After all, by the paper’s own terms it’s simply posing some questions to consider rather than concrete suggestions for resolving debates around platforms’ treatment of such things as anti-vax narratives, coordinated harassment, and political disinformation. But a careful read shows it to be a helpful document, both as a reflection of the contentious present moment around online speech, and because it takes seriously some options for “content governance” that, if pursued fully, would represent a moonshot for platform accountability premised on the partial but substantial, and long-term, devolution of Facebook’s policymaking authority.

  • Mike Bloomberg tweeted a doctored debate video. Is it political spin or disinformation?

    February 21, 2020

    Following his lackluster performance in Wednesday’s Democratic presidential debate, former New York Mayor Mike Bloomberg tweeted out a doctored video that made it look like he had a hugely successful moment on the debate stage, even though he didn’t. ... Take what happened earlier this month: Trump tweeted out a video that had been edited to make it look like Speaker of the House Nancy Pelosi was ripping up the president’s State of the Union speech during touching moments, such as the introduction of a Tuskegee airman. That’s not what transpired: Pelosi did rip up the speech, but only at the end of the full address. Jonathan Zittrain, a legal expert at Harvard, argues that tweet shouldn’t be taken down, even though it’s misleading, because it’s protected by free speech. “It’s political expression that could be said to be rearranging the video sequence in order to make a point that ripping up the speech at the end was, in effect, ripping up every topic that the speech had covered,” he wrote on Medium on February 10. “And to show it in a video conveys a message far more powerful than just saying it — something First Amendment values protect and celebrate, at least if people aren’t mistakenly thinking it is real,” Zittrain wrote.

  • A man of letters: The Antonin Scalia Collection opens at Harvard Law School

    February 11, 2020

    The Harvard Law School Library has announced the public release of the first batch of papers and other items from the Antonin Scalia Collection. His papers were donated by the Scalia family following the influential justice's death in 2016.

  • A World Without Privacy Will Revive the Masquerade

    February 11, 2020

    An article by Jonathan Zittrain: Twenty years ago at a Silicon Valley product launch, Sun Microsystems CEO Scott McNealy dismissed concern about digital privacy as a red herring: “You have zero privacy anyway. Get over it.” “Zero privacy” was meant to placate us, suggesting that we have a fixed amount of stuff about ourselves that we’d like to keep private. Once we realized that stuff had already been exposed and, yet, the world still turned, we would see that it was no big deal. But what poses as unsentimental truth telling isn’t cynical enough about the parlous state of our privacy. That’s because the barrel of privacy invasion has no bottom. The rallying cry for privacy should begin with the strangely heartening fact that it can always get worse. Even now there’s something yet to lose, something often worth fiercely defending. For a recent example, consider Clearview AI: a tiny, secretive startup that became the subject of a recent investigation by Kashmir Hill in The New York Times.

  • The Video Trump Shared Of Pelosi Isn’t Real. Here’s Why Twitter And Facebook Should Leave It Up Anyway

    February 11, 2020

    An article by Jonathan Zittrain: Last week, Speaker Nancy Pelosi famously ripped up her copy of President Donald Trump's State of the Union address on camera after he finished delivering it. Later, the president retweeted a video based on it. The video the president retweeted (and pinned) had been edited to appear like the speaker had been ripping up pages throughout the speech, as if reacting contemptuously to each American credited by name, like Tuskegee Airman Charles McGee. An official from the speaker's office has publicly sought to have Facebook and Twitter take down the video, since it's not depicting something real. So should Twitter and Facebook take it down? As a starting point for thinking about this, it helps to know that the video isn't legally actionable. It's political expression that could be said to be rearranging the video sequence in order to make a point that ripping up the speech at the end was, in effect, ripping up every topic that the speech had covered.

  • Pelosi Clashes With Facebook and Twitter Over Video Posted by Trump

    February 10, 2020

    Facebook and Twitter have rejected a request by Speaker Nancy Pelosi to remove a video posted by President Trump that was edited to make it appear as though she were ripping a copy of his State of the Union address as he honored a Tuskegee airman and other guests. The decision highlighted the tension between critics who want social media platforms to crack down on the spread of misinformation and others who argue that political speech should be given wide latitude, even if it’s deceptive or false...The video isn’t legally actionable and shouldn’t be taken down, said Jonathan L. Zittrain, a Harvard Law School professor and a founder of the Berkman Klein Center for Internet and Society. But, he said, Facebook and Twitter should probably label the video. “It’s important for social media sites that have massive reach to make and enforce policies concerning manipulated content, rather than abdicating all responsibility,” Professor Zittrain said. Labeling is helpful, he added, because “even something that to most people clearly appears to be satire can be taken seriously by others.”

  • Shedding light on fraudulent takedown notices

    December 13, 2019

    Every day, companies like Google remove links to online content in response to court orders, influencing the Internet search results we see. But what happens if bad actors deliberately falsify and submit court documents requesting the removal of content? Research using the Berkman Klein Center for Internet & Society’s Lumen database shows the problem is larger than previously understood. ... “From its inception and through its evolution, Lumen has played a foundational role in helping us to understand what’s behind what we see — and don’t see — online,” says Jonathan Zittrain ’95, the Berkman Klein Center’s faculty director, who worked with Wendy Seltzer to get the fledgling project off the ground in 2000.

  • Building a More Honest Internet

    November 26, 2019

    Over the course of a few short years, a technological revolution shook the world. New businesses rose and fell, fortunes were made and lost, the practice of reporting the news was reinvented, and the relationship between leaders and the public was thoroughly transformed, for better and for worse. The years were 1912 to 1927 and the technological revolution was radio... Those models, and the ways they shaped the societies from which they emerged, offer a helpful road map as we consider another technological revolution: the rise of the commercial internet... Facebook and other companies have pioneered sophisticated methods of data collection that allow ads to be precisely targeted to individual people’s consumer habits and preferences... When Facebook users were shown that up to six of their friends had voted, they were 0.39 percent more likely to vote than users who had seen no one vote. While the effect is small, Harvard Law professor Jonathan Zittrain observed that even this slight push could influence an election—Facebook could selectively mobilize some voters and not others. Election results could also be influenced by both Facebook and Google if they suppressed information that was damaging to one candidate or disproportionately promoted positive news about another.

  • How Google Interferes With Its Search Algorithms and Changes Your Results

    November 18, 2019

    Every minute, an estimated 3.8 million queries are typed into Google, prompting its algorithms to spit out results for hotel rates or breast-cancer treatments or the latest news about President Trump. They are arguably the most powerful lines of computer code in the global economy, controlling how much of the world accesses information found on the internet, and the starting point for billions of dollars of commerce. ... The company states in a Google blog, “We do not use human curation to collect or arrange the results on a page.” It says it can’t divulge details about how the algorithms work because the company is involved in a long-running and high-stakes battle with those who want to profit by gaming the system. ... Jonathan Zittrain, a Harvard Law School professor and faculty director of the Berkman Klein Center for Internet & Society, said Google has poorly defined how often or when it intervenes on search results. The company’s argument that it can’t reveal those details because it is fighting spam “seems nuts,” said Mr. Zittrain. “That argument may have made sense 10 or 15 years ago but not anymore,” he said. “That’s called ‘security through obscurity,’ ” a reference to the now-unfashionable engineering idea that systems can be made more secure by restricting information about how they operate.