The Berkman Klein Center for Internet & Society at Harvard is renowned for its research on the online world. Similarly, the MIT Media Lab is acclaimed for collaborations in which technologists and other experts invent and reinvent how humans experience—and can be aided by—technology.

In a new course for students at HLS and MIT, the two institutions have come together to discuss how to regulate driverless cars, respond to fake news, and gain access to secret math formulas built to dispense justice. Two professors lead a lightning-fast discourse in their new, team-taught class, The Ethics and Governance of Artificial Intelligence.

Jonathan Zittrain ’95, a Harvard Law and computer science professor, asks questions about rights and regulations, while Joi Ito, the MIT Media Lab’s director (and a visiting professor at HLS), adds philosophical and political comments. Together, they riff on music-industry copyright controversies, referencing Rod Stewart and Elvis Presley. They debate why Japanese culture places more moral restraints on capitalism than American culture does.

The students pepper Zittrain and Ito with questions inspired by the news. One student asks about digital privacy rights and online companies’ freewheeling approaches to customer data.

“The American paradigm is heavily choice-based: The more you can just say you’re giving somebody a choice, whether it’s opt in or opt out, you’re done,” Zittrain says. But he thinks we face too many privacy choices to keep track of. “If the choice is, do you want to get screwed over or not, don’t give me the choice. Just don’t screw me over.”

“In medicine, there’s an interesting way to think about consent versus duty of care,” adds Ito. “We’re missing that right now in the digital world. Maybe there’s a way to learn from [that] and apply it to a place like Facebook.”

Zittrain and Ito aren’t just talking theory. They’re putting it into action. They’re two of the lead researchers for the Ethics and Governance of Artificial Intelligence Fund, an AI initiative established by donors in January 2017, with the Berkman Klein Center at Harvard and the MIT Media Lab as anchor institutions. The $27 million AI initiative aims to reach far beyond the two universities. Its goal: to research and brainstorm new legal and moral rules for artificial intelligence and other technologies built on complex algorithms. Backers include LinkedIn co-founder Reid Hoffman and eBay founder Pierre Omidyar, who’ve grown concerned about AI and other aspects of the digital world they helped create. The fund’s first round of grants, in July 2017, gave $1.7 million to seven organizations on four continents to examine artificial intelligence’s development. In Brazil in November, Berkman Klein co-hosted the Global Symposium on Artificial Intelligence & Inclusion to explore further international research partnerships.

“Companies are building technology that will have very, very significant impacts on our lives,” says HLS Clinical Professor Christopher Bavitz, faculty co-director of the Berkman Klein Center and another leader of the AI initiative’s research. “They are raising issues that can only be addressed if you have lawyers, computer scientists, ethicists, economists and business folks working together.”

The AI initiative aims to establish rules to govern driverless cars before they hit the road en masse; to make social media news feeds more transparent and defend them against manipulation by Russian government-sponsored trolls and other propagandists; and to guard against racial bias seeping into seemingly objective courtroom tools. In short, Harvard and MIT want the public to understand and control these fast-growing technologies, before they control us.

This past November, 11 Harvard and MIT professors and researchers wrote an open letter to the Massachusetts Legislature. They asked lawmakers not to pass a provision of the state Senate’s criminal justice bill that would have required state courts to use risk-score programs when setting defendants’ bail. The scholars, most of whom are participants in the AI initiative, urged the state to study risk-score programs for hidden racial or gender bias, and to consider building its own program through an open and public process.

“It may turn out to be that it’s very, very difficult to do risk scoring in a way that is fair,” says Bavitz, a lead author of the letter, “because it is virtually impossible to weed out the biases.”

Computerized risk assessments try to predict whether a defendant will skip bail, flee or commit more crimes if released before trial. Though some courts have used risk scores for decades, the tools are growing in popularity. In 2016, the Wisconsin Supreme Court ruled that judges may consider computer forecasts of a defendant’s likelihood of committing future crimes when deciding a prison sentence, even though defendants have no opportunity to examine how those forecasts are produced.

Supporters of risk-score algorithms believe they could make courts more efficient and less vulnerable to judges’ personal biases. Critics say the algorithms can end up reflecting and magnifying biases in the justice-system data they’re based on. Some risk-score tools use factors that may reflect the impact of racial discrimination on society, such as a defendant’s education level, social isolation or housing instability.
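
To make the critics’ concern concrete, here is a minimal, purely hypothetical sketch of how such a tool might turn a handful of factors into a 1-to-7 score. The features, weights and scale are invented for illustration and are not drawn from any actual product:

```python
# Hypothetical illustration only: the features, weights and scale below are
# invented and do not reflect any real risk-assessment product's formula.
FEATURE_WEIGHTS = {
    "prior_failure_to_appear": 1.2,
    "age_under_25": 0.6,
    # The kinds of factors critics flag as possible proxies for race or poverty:
    "did_not_finish_high_school": 0.4,
    "unstable_housing": 0.5,
}

def risk_score(defendant: dict) -> int:
    """Sum the weights of the factors present and map the total onto a 1-7 scale."""
    raw = sum(w for feature, w in FEATURE_WEIGHTS.items() if defendant.get(feature))
    return min(7, 1 + round(raw * 2))

print(risk_score({"prior_failure_to_appear": True, "unstable_housing": True}))  # prints 4
```

Even in this toy version, the last two factors raise the score for defendants whose circumstances may themselves reflect discrimination, which is the feedback loop critics worry about.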

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. Users will be able to search it to see whether court cases have challenged a risk-score tool’s use, whether the tool is based on peer-reviewed scientific literature, and whether its formulas are public.
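
For a sense of what one entry in such a database might capture, here is a hypothetical sketch; the field names are assumptions made for illustration, not the Berkman Klein team’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class RiskToolRecord:
    # Field names are illustrative assumptions, not the project's actual schema.
    jurisdiction: str                 # e.g., a state or county court system
    tool_name: str                    # the risk-assessment product in use
    legal_challenges: list = field(default_factory=list)  # court cases contesting the tool
    peer_reviewed: bool = False       # grounded in peer-reviewed scientific literature?
    formulas_public: bool = False     # are the scoring formulas open to inspection?

def tools_with_public_formulas(records):
    """One of the planned search criteria: which tools publish their formulas?"""
    return [r for r in records if r.formulas_public]
```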

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests governments could design their own risk-assessment algorithms and software, using staff or by collaborating with foundations or researchers.

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then-HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

The MIT Media Lab is partnering with Berkman Klein to research criminal justice algorithms. Its program, Humanizing AI in Law, is conducting research into how Kentucky judges use risk-assessment tools. (The program’s acronym, HAL, is partially inspired by HAL 9000, the computer in “2001: A Space Odyssey,” science fiction’s most famous example of artificial intelligence gone rogue.) A recent paper by Ito, Zittrain and HAL researchers, “Interventions over Predictions,” argues that instead of merely using algorithms to predict future crime, governments should use machine learning to analyze the root causes of crime and find ways to “break cycles of criminalization.” Co-author and HAL researcher Chelsea Barabas expanded on the paper this year during a talk at the Conference on Fairness, Accountability, and Transparency in New York City.

Meanwhile, Bavitz says, he’s excited to learn the results of a study in progress elsewhere at Harvard Law. The Access to Justice Lab is conducting a randomized trial of a risk-assessment tool in Madison, Wisconsin. Lab faculty director and HLS Professor Jim Greiner says judges in Dane County, which includes Madison, have agreed to test the Public Safety Assessment, a risk-score tool developed by the Laura and John Arnold Foundation. The judges will use the tool in randomly chosen cases, and work without it in others.
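
The trial’s core idea is random assignment: each case is handled either with or without the tool, chosen by chance, so outcomes in the two groups can be compared fairly. Here is a minimal sketch of that idea, hypothetical rather than the Access to Justice Lab’s actual protocol:

```python
import random

def assign_arm(case_id: str, seed: int = 2018) -> str:
    """Randomly assign a case to be handled with or without the risk-score tool."""
    rng = random.Random(f"{seed}:{case_id}")  # deterministic per case, so assignments are auditable
    return "with_tool" if rng.random() < 0.5 else "without_tool"

print(assign_arm("example-case-0001"))
```

Comparing outcomes across the two groups is what would let researchers estimate whether the tool actually improves on unaided judgment.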

Many of those working on the AI initiative are pessimistic about whether bias can be purged from risk-score tools. Greiner is more optimistic. “I’m at least willing to try these risk-assessment scores as a way to improve upon unguided human best guesses,” he says.

In 2014, Zittrain wrote an article for The New Republic headlined “Facebook Could Decide an Election Without Anyone Ever Finding Out.” He argued that Facebook could alter its news-feed algorithm to depress turnout for candidates the company opposed. It was one of the first warnings that changes on Facebook’s platform could impact an election.

Four years later, the world has caught up to Zittrain. Exploitation of social media’s vulnerabilities during the 2016 U.S. election is still making headlines. And Zittrain, the co-founder, director, and faculty chair of Berkman Klein, is, with its executive director, Urs Gasser LL.M. ’03, leading the AI initiative’s work on media and information quality—e.g., how to define and fight fake news.

“You have no idea if the entity you’re communicating with [on social media] is automated, or is someone carrying water for someone else,” says Zittrain. So propaganda can flourish. In fact, he calls Russian efforts to influence the 2016 election a prime example of astroturfing: a fake grass-roots campaign. “Clever astroturfing campaigns can sway public opinion and shape people’s view of the world,” Zittrain says, adding that scaling up a mass propaganda campaign online requires using algorithms. Defending against subtly automated mass propaganda on a social media platform in turn requires altering what the site’s algorithms present to users—and those defenses can accidentally affect legitimate political debate. “Those are algorithmically driven decisions,” he says. “Depending on the code, [they] can present diametrically different views of the world.”

Zittrain’s ambition for the AI initiative is immense: to democratize social media’s secret algorithms, artificial intelligence and similar technologies. He wants to produce public-spirited discussions that allow everyone from engineers at major companies to “people who are either using or being affected by the technology and don’t even know it” to become “much more aware of the choices that were made in the design, and have a chance to contest it and talk about it.” That means engaging much of the world: 2.1 billion of the globe’s 7.6 billion people are on Facebook.

“A lot of the action and activity is behind the gates of a Twitter or a Facebook,” Zittrain says. “This initiative entails developing and expanding relationships with those companies, while also maintaining the appropriate distance from them, to be able to report independently on what we see and what we think ought to happen.”

Zittrain also hopes the initiative produces technological innovations—some created at MIT and Harvard, some by third parties receiving grants from the Ethics and Governance of Artificial Intelligence Fund.

“We will also be saying, ‘Hey, Twitter, let’s work together, and help figure out how you might change your code to effect this goal or that goal,’” Zittrain says. “‘Hey, Facebook, here’s a way to open up your News Feed so anybody can write a recipe for what gets seen.’ That might require Facebook’s buy-in to do, and we are pursuing that.”

Zittrain also suggests the AI initiative could create new social platforms or news aggregators. At the MIT Media Lab, researchers have already built Gobo, a social media aggregator with transparent filters controlled by the user.
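
The idea behind such a tool can be sketched simply: every filter is a rule the user can see and toggle, and every hidden post is labeled with the rule that hid it. The sketch below is illustrative only, with invented rule names, and does not reproduce Gobo’s actual code:

```python
def filter_feed(posts, rules):
    """Split posts into (shown, hidden); each hidden post records the rules that hid it."""
    shown, hidden = [], []
    for post in posts:
        tripped = [name for name, rule in rules.items() if rule(post)]
        (hidden if tripped else shown).append({"post": post, "hidden_by": tripped})
    return shown, hidden

# Rules a user might switch on or off (invented examples):
rules = {
    "mute politics": lambda p: "election" in p["text"].lower(),
    "hide very new accounts": lambda p: p.get("account_age_days", 0) < 7,
}
shown, hidden = filter_feed(
    [{"text": "Election night live thread", "account_age_days": 3}], rules
)  # this post lands in `hidden`, tagged with both rule names
```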

The initiative will also build on other work by Harvard and MIT researchers. Last year, Media Cloud, a collaboration between Berkman Klein and the MIT Center for Civic Media, released a study of online media coverage of the 2016 election. The study, led by HLS Professor and Berkman Klein co-director Yochai Benkler ’94, mapped the wide split in influence between mainstream center-left websites and highly partisan far-right websites such as Breitbart. This February, a vast MIT Media Lab study of 126,000 stories shared on Twitter across 11 years concluded that lies spread faster than truth on the social media site. The study attracted widespread coverage and commentary.

Meanwhile, students in Zittrain and Ito’s class have been studying and debating social media’s effect on politics as the news unfolds, with participation by outside technologists, including some from companies in the spotlight. For example, in one session, Facebook’s chief security officer, Alex Stamos, spoke to the class via video link.

Jessy Lin, then a junior at MIT who’s researching artificial intelligence, says hearing from Stamos in class gave her a more nuanced view of how Facebook deals with fake news and social responsibility. “They have been thinking about it and putting a lot of manpower and effort into these problems,” Lin says, “but it’s just obviously very complex.”

On March 20, days after a self-driving car struck and killed a pedestrian in Tempe, Arizona, students in the AI ethics and governance class talked about what the crash means for ethics and the law. Though a “safety driver” was in the car to take control in an emergency, Lin says the students agreed that the driver should not be blamed for the death. “You should not have the operator take on the brunt of the failures of the system,” she says.

Who’s to blame when a self-driving car crashes is just one of the vexing questions the AI initiative is tackling. Ito is the initiative’s point person on autonomous vehicles. The joint effort will also tie in to work at Harvard that predates it, such as computer science professor Barbara Grosz’s pioneering class and research on artificial intelligence and her exploration of ethics alongside computer science, as well as research by Harvard Law Lecturer Bonnie Docherty ’01 supporting a pre-emptive ban on fully autonomous weapons.

The MIT Media Lab’s Moral Machine website, built by Media Lab Associate Professor Iyad Rahwan, has polled millions of people worldwide on the wrenching choices that self-driving cars may be programmed to make in the moments before a crash: Is it better to swerve one way, which may kill two passengers, or another way, which may kill five pedestrians?
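
Rendered as code, the purely utilitarian answer to that question is a one-liner: minimize expected deaths. The toy sketch below is illustrative only and is not a description of how any real vehicle is programmed:

```python
def choose_maneuver(expected_deaths: dict) -> str:
    """A purely utilitarian rule: pick the maneuver expected to cost the fewest lives."""
    return min(expected_deaths, key=expected_deaths.get)

# The scenario above: one swerve risks two passengers, the other five pedestrians.
print(choose_maneuver({"swerve into barrier": 2, "swerve into crowd": 5}))
# -> swerve into barrier
```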

“The result generally comes out to: ‘Cars should sacrifice a passenger if it’s going to save more lives,’” Ito says, “and ‘everyone should buy that car, but I wouldn’t.’” His conclusion: “Just a market-driven approach will not satisfy people’s view of what is ethical.” To find solutions, Harvard and MIT are building relationships with car companies—the Media Lab is already working with Toyota—with the aim of connecting them with government regulators and ethicists. Harvard Law’s Cyberlaw Clinic is offering help with legal liability questions around driverless cars to nonprofits and startups.

Zittrain, who’s been exploring driverless-car ethics and governance with Ito in class, says a driverless future brings up a vast array of new legal questions.

“If a city wants to decree an evacuation, can it just push a button, and all the cars flee the city with the people inside?” Zittrain asks. “Can you arrest somebody by realizing they’re in a car at the moment, having it lock the doors, and taking them to the nearest police station for delivery? Can someone declare an emergency, and have their car go 90 mph while all the other cars part like the Red Sea?”

People need to confront those questions sooner, not later, Zittrain thinks. Rules written into code in 2018 might control what happens in 2025. “You can set up the dominoes early, and they will fall later,” he says. “That’s a strange movement of autonomy: not from a person to a company, or a company to a government, but from the present to the past.”