People
Jonathan Zittrain
-
Is Digital Contact Tracing Over Before It Began?
June 26, 2020
An article by Jonathan Zittrain: Last month I wrote a short essay covering some of the issues around standing up contact tracing across the U.S., as part of a test/trace/quarantine regime that would accompany the ending of a general lockdown to prevent the spread of the coronavirus pandemic...In the intervening month, some things have remained the same. As before, tech companies and startups continue to develop exposure notification apps and frameworks. And there remains no federally coordinated effort to test, trace, and isolate — it’s up to states and their respective municipalities to handle anything that will happen. Some localities continue to spin up ambitious contact tracing programs, while others remain greatly constrained. As Margaret Bourdeaux explains, for example: “In Massachusetts, many of the 351 local boards of health are unaccredited, and most have only the most rudimentary digital access to accomplish the most basic public health goals of testing and contact tracing in their communities.” She cites Georgetown’s Alexandra Phelan: “Truly the amount of US COVID19 response activities that rely solely on the fax machine would horrify you.” There remain any number of well-considered plans that depend on a staged, deliberate reopening based on testing, tracing, and supported isolation, such as ones from Harvard’s Safra Center (“We need to massively scale-up testing, contact tracing, isolation, and quarantine — together with providing the resources to make these possible for all individuals”), the Center for American Progress (calling for “instantaneous contact tracing and isolation of individuals who were in close proximity to a positive case”), and the American Enterprise Institute (“We need to harness the power of technology and drive additional resources to our state and local public-health departments, which are on the front lines of case identification and contact tracing”).
-
João Marinotti ’20 wants to know how the world works
May 27, 2020
“I’ve always had a passion for engaging in my curiosity,” says João Marinotti ’20, a linguist turned lawyer whose work focuses on sustainability, business, property, and private law.
-
Entering the Minefield of Digital Contact Tracing
May 26, 2020
An article by Jonathan Zittrain: People across America and the world remain under strong advisories or outright orders to shelter in place, and economies largely shut down, as part of an ongoing effort to flatten the curve of the most virulent pandemic since 1918. The economic effects have been predictably staggering, with no clear end in sight. Until a vaccine or other transformative medical intervention is developed, the broad consensus of experts is that the only way out of mass sheltering in place, if hospital occupancy curves are to remain flattened, entails waiting for most of the current cases to resolve, and then cautiously and incrementally reopening. That would mean a sequence of allowing people out; promptly testing anyone showing symptoms — and even some who are not; identifying recent proximate contacts of those who test positive; and then getting in touch with those contacts and, if circumstances dictate, asking or demanding that they individually shelter until the disease either manifests or not. The idea is to promptly prune branches of further disease transmission in order to keep its reproductive factor non-exponential.
-
Summations: Reflections from the Class of 2020
May 20, 2020
Members of the Class of 2020 reflect on their interests and share experiences they will take from their time at Harvard Law.
-
An article by Jonathan Zittrain and John Bowers: Earlier this year, the public learned that a tiny start-up called Clearview AI was offering a big service. Clearview subscribers could give the company a photo they had just taken of someone and get links to other photos of the same person, often revealing information like who they are and where they live. A little tweaking and the service might simply identify people over any live feed aimed at any street, hallway or classroom. Though it has been marketed as a one-stop warrantless law enforcement tool, Clearview’s client list is also reported to include casinos, gyms, supermarkets, sporting leagues and wealthy parents curious about their kids’ dates. The upshot? The fundamental comfort — and liberty — of being able to walk down a street or enter a supermarket or stadium without the authorities, or fellow strangers, immediately knowing who you are is about to evaporate without any public debate about whether that’s okay. It’s as if someone invented glasses that could see through walls, sold them to a select few, and everyone else inexplicably shrugged. Now, the Wall Street Journal reports that Clearview AI is “in discussions with state agencies about using its technology to track patients infected by the coronavirus, according to people familiar with the matter.” It’s a savvy move, aimed at turning a rogue actor into a hero.
-
Cyberlaw Clinic turns 20
April 9, 2020
It was 1999 and the dot-com bubble was about to burst. Corporations were scrambling to address new legal challenges online. Napster was testing the music industry. And at Harvard Law School, the Berkman Klein Center was creating a clinical teaching program specializing in cyberlaw.
-
An article by John Bowers and Jonathan Zittrain: Corporate pronouncements are usually anodyne. And at first glance one might think the same of Facebook’s recent white paper, authored by Monika Bickert, who manages the company’s content policies, offering up some perspectives on the emerging debate around governmental regulation of platforms’ content moderation systems. After all, by the paper’s own terms it’s simply offering up some questions to consider rather than concrete suggestions for resolving debates around platforms’ treatment of such things as anti-vax narratives, coordinated harassment, and political disinformation. But a careful read shows it to be a helpful document, both as a reflection of the contentious present moment around online speech, and because it takes seriously some options for “content governance” that–if pursued fully–would represent a moonshot for platform accountability premised on the partial but substantial, and long-term, devolution of Facebook’s policymaking authority.
-
Mike Bloomberg tweeted a doctored debate video. Is it political spin or disinformation?
February 21, 2020
Following his lackluster performance in Wednesday’s Democratic presidential debate, former New York Mayor Mike Bloomberg tweeted out a doctored video that made it look like he had a hugely successful moment on the debate stage, even though he didn’t. ... Take what happened earlier this month: Trump tweeted out a video that had been edited to make it look like Speaker of the House Nancy Pelosi was ripping up the president’s State of the Union speech during touching moments, such as the introduction of a Tuskegee airman. That’s not what transpired: Pelosi did rip up the speech, but only at the end of the full address. Jonathan Zittrain, a legal expert at Harvard, argues that tweet shouldn’t be taken down, even though it’s misleading, because it’s protected by free speech. “It’s political expression that could be said to be rearranging the video sequence in order to make a point that ripping up the speech at the end was, in effect, ripping up every topic that the speech had covered,” he wrote on Medium on February 10. “And to show it in a video conveys a message far more powerful than just saying it — something First Amendment values protect and celebrate, at least if people aren’t mistakenly thinking it is real,” Zittrain wrote.
-
The Harvard Law School Library has announced the public release of the first batch of papers and other items from the Antonin Scalia Collection. His papers were donated by the Scalia family following the influential justice's death in 2016.
-
A World Without Privacy Will Revive the Masquerade
February 11, 2020
An article by Jonathan Zittrain: Twenty years ago at a Silicon Valley product launch, Sun Microsystems CEO Scott McNealy dismissed concern about digital privacy as a red herring: “You have zero privacy anyway. Get over it.” “Zero privacy” was meant to placate us, suggesting that we have a fixed amount of stuff about ourselves that we’d like to keep private. Once we realized that stuff had already been exposed and, yet, the world still turned, we would see that it was no big deal. But what poses as unsentimental truth telling isn’t cynical enough about the parlous state of our privacy. That’s because the barrel of privacy invasion has no bottom. The rallying cry for privacy should begin with the strangely heartening fact that it can always get worse. Even now there’s something yet to lose, something often worth fiercely defending. For a recent example, consider Clearview AI: a tiny, secretive startup that became the subject of a recent investigation by Kashmir Hill in The New York Times.
-
The Video Trump Shared Of Pelosi Isn’t Real. Here’s Why Twitter And Facebook Should Leave It Up Anyway
February 11, 2020
An article by Jonathan Zittrain: Last week, Speaker Nancy Pelosi famously ripped up her copy of President Donald Trump's State of the Union address on camera after he finished delivering it. Later, the president retweeted a video based on it. The video the president retweeted (and pinned) had been edited to appear like the speaker had been ripping up pages throughout the speech, as if reacting contemptuously to each American credited by name, like Tuskegee Airman Charles McGee. An official from the speaker's office has publicly sought to have Facebook and Twitter take down the video, since it's not depicting something real. So should Twitter and Facebook take it down? As a starting point for thinking about this, it helps to know that the video isn't legally actionable. It's political expression that could be said to be rearranging the video sequence in order to make a point that ripping up the speech at the end was, in effect, ripping up every topic that the speech had covered.
-
Pelosi Clashes With Facebook and Twitter Over Video Posted by Trump
February 10, 2020
Facebook and Twitter have rejected a request by Speaker Nancy Pelosi to remove a video posted by President Trump that was edited to make it appear as though she were ripping a copy of his State of the Union address as he honored a Tuskegee airman and other guests. The decision highlighted the tension between critics who want social media platforms to crack down on the spread of misinformation and others who argue that political speech should be given wide latitude, even if it’s deceptive or false...The video isn’t legally actionable and shouldn’t be taken down, said Jonathan L. Zittrain, a Harvard Law School professor and a founder of the Berkman Klein Center for Internet and Society. But, he said, Facebook and Twitter should probably label the video. “It’s important for social media sites that have massive reach to make and enforce policies concerning manipulated content, rather than abdicating all responsibility,” Professor Zittrain said. Labeling is helpful, he added, because “even something that to most people clearly appears to be satire can be taken seriously by others.”
-
Shedding light on fraudulent takedown notices
December 13, 2019
Every day, companies like Google remove links to online content in response to court orders, influencing the Internet search results we see. But what happens if bad actors deliberately falsify and submit court documents requesting the removal of content? Research using the Berkman Klein Center for Internet & Society’s Lumen database shows the problem is larger than previously understood. ... “From its inception and through its evolution, Lumen has played a foundational role in helping us to understand what’s behind what we see — and don’t see — online,” says Jonathan Zittrain ’95, the Berkman Klein Center’s faculty director, who worked with Wendy Seltzer to get the fledgling project off the ground in 2000.
-
Shedding light on fraudulent takedown notices
December 12, 2019
What happens if bad actors deliberately falsify and submit court documents requesting the removal of content? Research using the Berkman Klein Center for Internet & Society’s Lumen database shows the problem is larger than previously understood.
-
Building a More Honest Internet
November 26, 2019
Over the course of a few short years, a technological revolution shook the world. New businesses rose and fell, fortunes were made and lost, the practice of reporting the news was reinvented, and the relationship between leaders and the public was thoroughly transformed, for better and for worse. The years were 1912 to 1927 and the technological revolution was radio...Those models, and the ways they shaped the societies from which they emerged, offer a helpful road map as we consider another technological revolution: the rise of the commercial internet...Facebook and other companies have pioneered sophisticated methods of data collection that allow ads to be precisely targeted to individual people’s consumer habits and preferences...When Facebook users were shown that up to six of their friends had voted, they were 0.39 percent more likely to vote than users who had seen no one vote. While the effect is small, Harvard Law professor Jonathan Zittrain observed that even this slight push could influence an election—Facebook could selectively mobilize some voters and not others. Election results could also be influenced by both Facebook and Google if they suppressed information that was damaging to one candidate or disproportionately promoted positive news about another.
-
Every minute, an estimated 3.8 million queries are typed into Google, prompting its algorithms to spit out results for hotel rates or breast-cancer treatments or the latest news about President Trump. They are arguably the most powerful lines of computer code in the global economy, controlling how much of the world accesses information found on the internet, and the starting point for billions of dollars of commerce. ... The company states in a Google blog, “We do not use human curation to collect or arrange the results on a page.” It says it can’t divulge details about how the algorithms work because the company is involved in a long-running and high-stakes battle with those who want to profit by gaming the system. ... Jonathan Zittrain, a Harvard Law School professor and faculty director of the Berkman Klein Center for Internet & Society, said Google has poorly defined how often or when it intervenes on search results. The company’s argument that it can’t reveal those details because it is fighting spam “seems nuts,” said Mr. Zittrain. “That argument may have made sense 10 or 15 years ago but not anymore,” he said. “That’s called ‘security through obscurity,’ ” a reference to the now-unfashionable engineering idea that systems can be made more secure by restricting information about how they operate.
-
Methodology: How the Journal Carried Out Its Analysis
November 15, 2019
The Wall Street Journal compiled and compared auto-complete and organic search results on Google, Bing and DuckDuckGo in three phases, from July 23-Aug. 8; Aug. 26-31; and Sept. 12-19. We created a set of computers in the cloud, using Amazon Web Services EC2 (Elastic Compute Cloud), which presented new IP addresses, the unique identifier that many webpages use to associate one browser session with another, for each search. The computers were, however, identifiable as working off a server in Virginia, and location could be a factor in our results. ... The Journal reviewed the methodology with Jonathan Zittrain, the faculty director of Harvard University’s Berkman Klein Center for Internet & Society, and John Bowers, a research associate at the Berkman Klein Center. Google declined to comment on the Journal’s testing.
-
Let Juries Review Facebook Ads
November 14, 2019
An article by Jonathan Zittrain: Facebook has been weathering a series of disapproving news cycles after clarifying that its disinformation policies exempt political ads from review for truthfulness. There are now reports that the company is considering reducing the targeting options available to political advertisers. No matter how Facebook and its counterparts tweak their policies, whatever these companies do will prompt broad anxiety and disapprobation among experts and their own users. That’s because there are two fundamental problems underlying the debate. First, we the public don’t agree on what we want. And second, we don’t trust anyone to give it to us.
-
Facebook, free speech, and political ads
October 31, 2019
A number of Facebook's recent decisions have fueled a criticism that continues to follow the company, including the decision not to fact-check political advertising and the inclusion of Breitbart News in the company’s new “trusted sources” News tab. These controversies were stoked even further by Mark Zuckerberg’s speech at Georgetown University last week, where he tried—mostly unsuccessfully—to portray Facebook as a defender of free speech...Harvard Law professor Jonathan Zittrain...said the political ad fact-checking controversy is about more than just a difficult product feature. “Evaluating ads for truth is not a mere customer service issue that’s solvable by hiring more generic content staffers,” he said. “The real issue is that a single company controls far too much speech of a particular kind, and thus has too much power.”
-
In “Right to Be Forgotten,” which explores the question of whether people’s past indiscretions should live forever online, playwright Sharyn Rothstein has processed the perks and perils of the digital age. With such contemporary material comes relevance — to the current cultural dialogue — and a responsibility to monitor the news cycle. As the play has gone through workshops, rehearsals and preview performances on the way to its world premiere at Arena Stage, Rothstein has kept a close eye on developments in the technology world...Striving for authenticity, the creative team spoke to authorities on both sides of the debate. Early in the writing process, Rothstein reached out to Jonathan Zittrain, a professor of Internet law at Harvard who helped shape the legal cases presented in “Right to Be Forgotten.”
-
The Hidden Costs of Automated Thinking
July 23, 2019
An article by Jonathan Zittrain: Like many medications, the wakefulness drug modafinil, which is marketed under the trade name Provigil, comes with a small, tightly folded paper pamphlet. For the most part, its contents—lists of instructions and precautions, a diagram of the drug’s molecular structure—make for anodyne reading. The subsection called “Mechanism of Action,” however, contains a sentence that might induce sleeplessness by itself: “The mechanism(s) through which modafinil promotes wakefulness is unknown.” ... In the past, intellectual debt has been confined to a few areas amenable to trial-and-error discovery, such as medicine. But that may be changing, as new techniques in artificial intelligence—specifically, machine learning—increase our collective intellectual credit line. Machine-learning systems work by identifying patterns in oceans of data. Using those patterns, they hazard answers to fuzzy, open-ended questions. Provide a neural network with labelled pictures of cats and other, non-feline objects, and it will learn to distinguish cats from everything else; give it access to medical records, and it can attempt to predict a new hospital patient’s likelihood of dying. And yet, most machine-learning systems don’t uncover causal mechanisms.