Consider your morning routine. Did you groggily Google the weather before rolling out of bed? Did you take a moment to check your social media feed at breakfast, or use a navigation app on your way to work? Maybe you listened to a podcast during your commute, bought a gift for your sister online, or even clicked on this article because it was displayed as a “recommended reading.”
If any of those things — or a whole bunch more — are true, you have been subject to an algorithm today. Most likely, more than one.
“An algorithm is any process or set of instructions for taking a fixed set of inputs and transforming it into some form of output,” says Mason Kortz ’14, a clinical instructor at Harvard Law School’s Cyberlaw Clinic and lecturer on law, who is teaching a reading group called Algorithms, Rights, and Responsibilities this semester.
The federal sentencing guidelines are an example of an algorithm, says Kortz, because they account for a defendant’s criminal history, the nature of the offense, and aggravating factors in determining an acceptable sentence.
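In code, a “hand-turned” algorithm like this is compact enough to write out in full. The sketch below is only a loose illustration of the idea, not the actual guidelines: the function, inputs, and month values are all invented for this example.

```python
# A toy, rule-based sentencing algorithm. The weights and month values
# are invented for illustration and do not reflect the real federal
# sentencing guidelines.

def recommended_sentence_months(offense_level: int,
                                prior_convictions: int,
                                aggravating_factors: int) -> range:
    """Map a fixed set of inputs to a sentencing range, in months."""
    base = offense_level * 6                  # severity of the offense
    history = min(prior_convictions, 6) * 4   # criminal history, capped
    aggravation = aggravating_factors * 3     # e.g., use of a weapon
    low = base + history + aggravation
    return range(low, low + 12)               # a 12-month window

print(recommended_sentence_months(10, 2, 1))  # range(71, 83)
```

Every input and every step is visible, and a judge could reproduce the result with pencil and paper. That transparency is exactly what the more complex systems discussed below give up.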
But the reading group, he clarifies, examines a different kind of algorithmic process: the kind run by computers or artificial intelligence. “Our course is focused on algorithms that consider hundreds, or even thousands, of data points in making decisions,” says Kortz. “These are very complex processes that wouldn’t really be feasible for a human, such as a judge, to sit down and do on a sheet of paper. And these complex algorithms raise questions of what happens when technology allows you to exceed the limitations that come with ‘hand-turned’ algorithms, like the sentencing guidelines.”
He points to self-driving cars, which are one of the subjects of his course. An autonomous vehicle must consider seemingly countless data points to complete a trip successfully — in addition to stop signs, weather, and road markings, it must account for the unpredictable behavior of other cars and pedestrians. It also must adjust its behavior constantly. With no human behind the wheel, what happens if an accident does occur? And how should we understand and assign blame?
Kortz says his course encourages students to think about existing legal structures first. “For self-driving cars, we look to tort liability, and specifically product liability. We ask the question: Is product liability a good regime for governing harms potentially caused by self-driving cars? Why or why not? What are alternatives?”
The idea, he emphasizes, is not necessarily to come up with the best way to regulate the industry. “It’s to understand how existing principles of law might be applied to the problem in front of you.”
When it comes to algorithms that use our personal data to sell us things, track us, or help determine our future, other questions arise. Kortz highlights the growing use of algorithms in employment, where companies use software to automatically filter resumes, and in mortgage lending, where automated decisions replace individual judgment. Depending on how such systems are built, he notes, they can eliminate biases or bake them in further.
“With these processes, we have to ask, are we actually reducing discrimination in these areas, or just moving it around?” he says. “And are our traditional legal structures, like the Civil Rights Act and Fair Housing Act, sufficient to protect against algorithmic discrimination?”
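To see how an algorithm might reduce discrimination on paper while “just moving it around,” consider a hedged sketch: a screening rule that never reads a protected attribute can still encode one through a correlated proxy feature. The weights, threshold, and candidates below are invented for illustration.

```python
# A toy resume-screening filter. It never reads race, gender, or any
# other protected attribute, yet can still discriminate through a proxy.
# All weights and data here are invented for illustration.

# Suppose historical hiring data made "years at a prestigious firm" the
# dominant signal -- a feature correlated with who had access to such
# firms in the first place.
WEIGHTS = {"years_experience": 1.0, "prestige_firm_years": 3.0}
THRESHOLD = 10.0

def passes_screen(resume: dict) -> bool:
    """Score a resume against fixed weights and apply a cutoff."""
    score = sum(WEIGHTS[key] * resume.get(key, 0) for key in WEIGHTS)
    return score >= THRESHOLD

# Two equally experienced candidates, unequal access to prestige firms:
a = {"years_experience": 8, "prestige_firm_years": 2}  # score 14 -> passes
b = {"years_experience": 8, "prestige_firm_years": 0}  # score 8  -> rejected
print(passes_screen(a), passes_screen(b))  # True False
```

Nothing in the code mentions a protected class, which is why doctrines built around explicit rules can struggle here: the bias lives in the training data and the proxy features, not in any line a regulator could point to.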
Kortz says each class meeting is organized around a specific technology, delving into its social benefits and potential harms, and looking at how existing law may or may not apply. Students are also asked to consider questions of ownership and control of data and algorithmic outputs.
During one recent debate about artificial intelligence-generated art, “Several members of the class brought up the fact that incorporation of other people’s artwork is already something that happens,” he says. “People borrow — it might be an homage; it might be outright stealing — but it’s really common. An example one student used was the sampling by U.S. or European artists of music from the Global South without crediting creators. And if that problem is already happening, algorithms have the potential to amplify the harms that already exist in the system.”
Data privacy is another hot topic, says Kortz. He says some people consider control over one’s personal data to be a property right, while others view it as a natural right — which, it turns out, makes a difference. “Data ownership is something we talk about a lot, because almost every application of algorithmic decision-making requires a lot of data on the input side. And that data, obviously, comes from somewhere. Often it comes from individuals, and often without their permission.”
Kortz and his students are also interested in the role regulation could — or should — play in algorithm-based technology. “I’ve had students with a broad range of opinions on this,” he says. “There are those who are very skeptical of over-regulating the tech industry and slowing the development of potentially beneficial technologies. I’ve also had students who are deeply concerned about harms caused by algorithms, and think much more regulation is necessary.”
There are two specific challenges to consider in regulating these technologies, he says. “One is that because the process of rule-making tends to be slow, there’s the risk that by the time a regulation is in place, the entities you’re trying to regulate have already adapted and found a way around it. The other challenge is that it can be hard to know how narrowly or broadly to draw a regulation.”
While Kortz’s students may not always agree on how to regulate technology, or even whether to do so at all, he says his classes have been full of lively debate. “Both times I have taught this course, I’ve gotten a group of students with a variety of views who are comfortable expressing those views and who are, I think more importantly, capable of listening to people who have different views and engaging with them in a very respectful way.”
It’s a skill Kortz says will be valuable to his students whether they hope to work in technology or not. “The ultimate goal is to help the students understand that, as a lawyer, they’re going to eventually come across fact patterns that are new and novel, so how do you take something new and assess the applicability of existing legal structures to it? Because that’s what lawyers do.”