People

Jonathan Zittrain

  • The Video Trump Shared Of Pelosi Isn’t Real. Here’s Why Twitter And Facebook Should Leave It Up Anyway

    February 11, 2020

    An article by Jonathan Zittrain: Last week, Speaker Nancy Pelosi famously ripped up her copy of President Donald Trump's State of the Union address on camera after he finished delivering it. Later, the president retweeted a video based on it. The video the president retweeted (and pinned) had been edited to appear as though the speaker had been ripping up pages throughout the speech, as if reacting contemptuously to each American credited by name, like Tuskegee Airman Charles McGee. An official from the speaker's office has publicly sought to have Facebook and Twitter take down the video, since it's not depicting something real. So should Twitter and Facebook take it down? As a starting point for thinking about this, it helps to know that the video isn't legally actionable. It's political expression that could be said to be rearranging the video sequence in order to make a point that ripping up the speech at the end was, in effect, ripping up every topic that the speech had covered.

  • Pelosi Clashes With Facebook and Twitter Over Video Posted by Trump

    February 10, 2020

    Facebook and Twitter have rejected a request by Speaker Nancy Pelosi to remove a video posted by President Trump that was edited to make it appear as though she were ripping a copy of his State of the Union address as he honored a Tuskegee airman and other guests. The decision highlighted the tension between critics who want social media platforms to crack down on the spread of misinformation and others who argue that political speech should be given wide latitude, even if it’s deceptive or false...The video isn’t legally actionable and shouldn’t be taken down, said Jonathan L. Zittrain, a Harvard Law School professor and a founder of the Berkman Klein Center for Internet and Society. But, he said, Facebook and Twitter should probably label the video. “It’s important for social media sites that have massive reach to make and enforce policies concerning manipulated content, rather than abdicating all responsibility,” Professor Zittrain said. Labeling is helpful, he added, because “even something that to most people clearly appears to be satire can be taken seriously by others.”

  • Shedding light on fraudulent takedown notices

    December 13, 2019

    Every day, companies like Google remove links to online content in response to court orders, influencing the Internet search results we see. But what happens if bad actors deliberately falsify and submit court documents requesting the removal of content? Research using the Berkman Klein Center for Internet & Society’s Lumen database shows the problem is larger than previously understood. ... “From its inception and through its evolution, Lumen has played a foundational role in helping us to understand what’s behind what we see — and don’t see — online,” says Jonathan Zittrain ’95, the Berkman Klein Center’s faculty director, who worked with Wendy Seltzer to get the fledgling project off the ground in 2000.

  • Shedding light on fraudulent takedown notices

    December 12, 2019

    What happens if bad actors deliberately falsify and submit court documents requesting the removal of content? Research using the Berkman Klein Center for Internet & Society’s Lumen database shows the problem is larger than previously understood.

  • Building a More Honest Internet

    November 26, 2019

    Over the course of a few short years, a technological revolution shook the world. New businesses rose and fell, fortunes were made and lost, the practice of reporting the news was reinvented, and the relationship between leaders and the public was thoroughly transformed, for better and for worse. The years were 1912 to 1927 and the technological revolution was radio...Those models, and the ways they shaped the societies from which they emerged, offer a helpful road map as we consider another technological revolution: the rise of the commercial internet...Facebook and other companies have pioneered sophisticated methods of data collection that allow ads to be precisely targeted to individual people’s consumer habits and preferences...When Facebook users were shown that up to six of their friends had voted, they were 0.39 percent more likely to vote than users who had seen no one vote. While the effect is small, Harvard Law professor Jonathan Zittrain observed that even this slight push could influence an election—Facebook could selectively mobilize some voters and not others. Election results could also be influenced by both Facebook and Google if they suppressed information that was damaging to one candidate or disproportionately promoted positive news about another.
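
    A back-of-the-envelope calculation makes Zittrain's observation concrete. The sketch below is illustrative arithmetic only: the 0.39 percent figure is the one reported above, while the audience size and the comparison margin are hypothetical stand-ins chosen to show scale.

```python
# Illustrative arithmetic only. The 0.39% turnout lift is the figure cited
# above; the targeted audience size and the 537-vote margin (Florida, 2000)
# are stand-ins to show scale, not claims about any particular election.
targeted_users = 10_000_000   # hypothetical users selectively shown the "friends voted" banner
turnout_lift = 0.0039         # 0.39% higher turnout among users shown the message

extra_votes = targeted_users * turnout_lift
print(f"Extra votes from selective mobilization: {extra_votes:,.0f}")   # 39,000

narrow_margin = 537           # one historically narrow statewide margin
print(f"Multiple of that margin: {extra_votes / narrow_margin:.0f}x")   # about 73x
```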

  • How Google Interferes With Its Search Algorithms and Changes Your Results

    November 18, 2019

    Every minute, an estimated 3.8 million queries are typed into Google, prompting its algorithms to spit out results for hotel rates or breast-cancer treatments or the latest news about President Trump. They are arguably the most powerful lines of computer code in the global economy, controlling how much of the world accesses information found on the internet, and the starting point for billions of dollars of commerce. ... The company states in a Google blog, “We do not use human curation to collect or arrange the results on a page.” It says it can’t divulge details about how the algorithms work because the company is involved in a long-running and high-stakes battle with those who want to profit by gaming the system. ... Jonathan Zittrain, a Harvard Law School professor and faculty director of the Berkman Klein Center for Internet & Society, said Google has poorly defined how often or when it intervenes on search results. The company’s argument that it can’t reveal those details because it is fighting spam “seems nuts,” said Mr. Zittrain. “That argument may have made sense 10 or 15 years ago but not anymore,” he said. “That’s called ‘security through obscurity,’ ” a reference to the now-unfashionable engineering idea that systems can be made more secure by restricting information about how they operate.

  • Methodology: How the Journal Carried Out Its Analysis

    November 15, 2019

    The Wall Street Journal compiled and compared auto-complete and organic search results on Google, Bing and DuckDuckGo in three phases, from July 23-Aug. 8; Aug. 26-31; and Sept. 12-19. We created a set of computers in the cloud, using Amazon Web Services EC2 (Elastic Compute Cloud), which presented new IP addresses, the unique identifiers that many webpages use to associate one browser session with another, for each search. The computers were, however, identifiable as working off a server in Virginia, and location could be a factor in our results. ... The Journal reviewed the methodology with Jonathan Zittrain, the faculty director of Harvard University’s Berkman Klein Center for Internet & Society, and John Bowers, a research associate at the Berkman Klein Center. Google declined to comment on the Journal’s testing.
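
    The core of that setup, a throwaway cloud machine for each batch of queries so that every batch arrives from a fresh public IP address, can be sketched in a few lines. This is a minimal illustration using boto3, not the Journal's actual code; the machine image and the query-collection step are placeholders.

```python
# Minimal sketch of the "fresh IP per batch of searches" idea described above.
# Not the Journal's code: the AMI ID is a placeholder and the query-collection
# step is elided.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # a Virginia region, matching the article

def run_search_batch(queries):
    """Launch a throwaway instance (new public IP), run the queries, then terminate."""
    instance = ec2.create_instances(
        ImageId="ami-PLACEHOLDER",   # hypothetical machine image with scraping tools installed
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )[0]
    instance.wait_until_running()
    instance.reload()
    print("Collecting results from", instance.public_ip_address)
    # ... send `queries` to the instance, fetch auto-complete and organic results ...
    instance.terminate()
```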

  • Let Juries Review Facebook Ads

    November 14, 2019

    An article by Jonathan Zittrain: Facebook has been weathering a series of disapproving news cycles after clarifying that its disinformation policies exempt political ads from review for truthfulness. There are now reports that the company is considering reducing the targeting options available to political advertisers. No matter how Facebook and its counterparts tweak their policies, whatever these companies do will prompt broad anxiety and disapprobation among experts and their own users. That’s because there are two fundamental problems underlying the debate. First, we the public don’t agree on what we want. And second, we don’t trust anyone to give it to us.

  • Facebook, free speech, and political ads

    October 31, 2019

    A number of Facebook's recent decisions have fueled a criticism that continues to follow the company, including the decision not to fact-check political advertising and the inclusion of Breitbart News in the company’s new “trusted sources” News tab. These controversies were stoked even further by Mark Zuckerberg’s speech at Georgetown University last week, where he tried—mostly unsuccessfully—to portray Facebook as a defender of free speech...Harvard Law professor Jonathan Zittrain...said the political ad fact-checking controversy is about more than just a difficult product feature. “Evaluating ads for truth is not a mere customer service issue that’s solvable by hiring more generic content staffers,” he said. “The real issue is that a single company controls far too much speech of a particular kind, and thus has too much power.”

  • Arena Stage tackles Internet privacy — and permanence — in ‘Right to Be Forgotten’

    October 18, 2019

    In “Right to Be Forgotten,” which explores the question of whether people’s past indiscretions should live forever online, playwright Sharyn Rothstein has processed the perks and perils of the digital age. With such contemporary material comes relevance — to the current cultural dialogue — and a responsibility to monitor the news cycle. As the play has gone through workshops, rehearsals and preview performances on the way to its world premiere at Arena Stage, Rothstein has kept a close eye on developments in the technology world...Striving for authenticity, the creative team spoke to authorities on both sides of the debate. Early in the writing process, Rothstein reached out to Jonathan Zittrain, a professor of Internet law at Harvard who helped shape the legal cases presented in “Right to Be Forgotten.”

  • The Hidden Costs of Automated Thinking

    July 23, 2019

    An article by Jonathan Zittrain: Like many medications, the wakefulness drug modafinil, which is marketed under the trade name Provigil, comes with a small, tightly folded paper pamphlet. For the most part, its contents—lists of instructions and precautions, a diagram of the drug’s molecular structure—make for anodyne reading. The subsection called “Mechanism of Action,” however, contains a sentence that might induce sleeplessness by itself: “The mechanism(s) through which modafinil promotes wakefulness is unknown.” ... In the past, intellectual debt has been confined to a few areas amenable to trial-and-error discovery, such as medicine. But that may be changing, as new techniques in artificial intelligence—specifically, machine learning—increase our collective intellectual credit line. Machine-learning systems work by identifying patterns in oceans of data. Using those patterns, they hazard answers to fuzzy, open-ended questions. Provide a neural network with labelled pictures of cats and other, non-feline objects, and it will learn to distinguish cats from everything else; give it access to medical records, and it can attempt to predict a new hospital patient’s likelihood of dying. And yet, most machine-learning systems don’t uncover causal mechanisms.
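
    The cat-versus-everything-else example maps onto ordinary supervised learning. The sketch below, in PyTorch, is a deliberately tiny illustration with assumed inputs (labelled image tensors); it fits correlations between pixels and labels, which is exactly why such a model can sort cats from non-cats without producing any causal account of what a cat is.

```python
# A deliberately tiny cat/not-cat classifier: it learns correlations between
# pixels and labels, not a causal mechanism. Inputs are assumed to be
# already-loaded image tensors with 0/1 labels (1 = cat).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 1),            # a single logit: cat vs. everything else
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, labels):
    """images: (N, 3, H, W) floats; labels: (N, 1) zeros and ones."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels.float())
    loss.backward()              # gradient step fits the labels;
    optimizer.step()             # nothing here explains *why* cats look like cats
    return loss.item()
```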

  • Will artificial intelligence replace doctors?

    July 16, 2019

    For all of its upsides, scientists such as the late Stephen Hawking have warned that artificial intelligence could destroy mankind. At Harvard Medical School’s 2019 Precision Medicine conference, Harvard Law School professor Jonathan Zittrain compared AI to asbestos: “It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.” According to a story from Stat, he also noted that AI can be tricked, citing a Google algorithm that correctly identified a tabby cat. When some pixels were changed, the algorithm thought the kitty was — no joke — guacamole.
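
    The tabby-to-guacamole failure is an adversarial example: pixel changes too small for a person to notice, chosen specifically to move the model's prediction. Below is a minimal sketch of one standard technique, the fast gradient sign method; the model, the image, the label, and the step size are all illustrative assumptions rather than details from the conference talk.

```python
# Hedged sketch of the fast gradient sign method (FGSM), one standard way to
# produce the pixel-level tricks described above. `model`, the image, the
# label, and epsilon are all illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the model's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()   # tiny, targeted pixel changes
    return adversarial.clamp(0, 1).detach()             # still looks like a tabby to us

# The perturbed image can flip the model's top prediction (tabby to guacamole
# in the example above) even though a person sees no meaningful difference.
```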

  • Going West: Palo Alto event showcases Harvard tech startup scene

    July 16, 2019

    “Bear with me,” Jonathan Zittrain urged the audience as his talk — up to this point, a romp through the early history of the internet — lurched into Kantian philosophy: “I’m about to get all ‘East Coast’ on you.” Zittrain, faculty director of Harvard’s Berkman Klein Center for Internet and Society, was in Palo Alto, Calif., delivering an energetic presentation on the ethical responsibilities of tech companies toward consumers in the era of artificial intelligence. About the shift of technology environments from unowned to owned and tightly controlled, he asked, “When is it that ‘can’ implies ‘ought’?” His provocative keynote was the culmination of a Harvard Tech Startup Night hosted by Harvard Office of Technology Development (OTD) and the law firm WilmerHale at its Palo Alto offices.

  • Going West

    July 11, 2019

    A provocative keynote by Harvard Law Professor Jonathan Zittrain on ethics in AI was the culmination of a Harvard Tech Startup Night, hosted by Harvard Office of Technology Development and the law firm WilmerHale, at its Palo Alto offices.

  • HLS Caselaw Access Project helps researchers draw new connections between ideas, people and organizations

    July 3, 2019

    In June, the Harvard Library Innovation Lab hosted an inaugural research summit to highlight the diversity of research that the Caselaw Access Project is making possible.

  • Harvard Law professor plays instrumental role in creation of Facebook’s content oversight board

    June 27, 2019

    New report from Facebook summarizes next steps in a plan to establish an independent content oversight board. For Noah Feldman, who first proposed the idea, helping develop a new approach to one of the most vexing challenges confronting social media has been one of the most exciting things in his professional life.

  • What if AI in health care is the next asbestos?

    June 25, 2019

    Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures to diseases that have confounded doctors and make health care more efficient, personalized, and accessible. But what if it turns out to be poison? Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston Tuesday that examined the use of AI to accelerate the delivery of precision medicine to the masses. He used an alarming metaphor to explain his concerns: “I think of machine learning kind of as asbestos,” he said. “It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.”

  • Collaboration zone

    April 26, 2019

    Library event provides unique opportunity for faculty-student interaction.

  • The Law and the Digital World

    April 3, 2019

    Officials from 23 offices of state attorneys general recently met at HLS as part of the Berkman Klein Center’s AGTech Forum series, to discuss tech-driven challenges to privacy and data security that vex state regulators and threaten consumers, and to strategize on how the law can keep up.

  • Inside The R&D Of AI Ethics

    March 27, 2019

    How do you start to wrap your head around some of the most fundamental issues surrounding new technology and how it impacts society? If you’re Jonathan Zittrain, you take this “brainstorming exercise,” as he calls it, and force it into the real world. Zittrain is, among other honorifics, a Harvard Law School professor and the faculty director of the Berkman Klein Center for Internet and Society. He’s also the force behind Assembly, a collaboration between Berkman Klein and the MIT Media Lab, a program that takes a unique approach to solving problems related to AI and ethics.

  • Medical AI systems could be vulnerable to adversarial attacks

    March 26, 2019

    A team of researchers from Harvard Law School, Harvard Medical School and MIT has published a new article in Science, the peer-reviewed academic journal of the American Association for the Advancement of Science, suggesting that medical artificial intelligence systems could be vulnerable to adversarial attacks.