Mason Kortz and Alejandra Caraballo are clinical instructors with the Cyberlaw Clinic, but their work does not stop there.

This semester, Kortz is teaching a reading group called Algorithms, Rights, and Responsibilities, in which students examine questions surrounding the use of algorithms run by computers and artificial intelligence. Together, Kortz and his students explore the role of regulation in algorithmic technologies, questions of discrimination, and data privacy.

Caraballo, whose work focuses on the intersection of gender and technology in the law, recently published an article in Slate exploring the impact of remote learning surveillance software on student devices in the wake of Florida’s “Don’t Say Gay” bill. She argues that legislation like this, combined with the influx of school surveillance technology, exacerbates serious potential harms to LGBTQ+ youth.

Read excerpts from their recent articles below.


Algorithm nation

Via Harvard Law Today, by Rachel Reed


Consider your morning routine. Did you groggily Google the weather before rolling out of bed? Did you take a moment to check your social media feed at breakfast, or use a navigation app on your way to work? Maybe you listened to a podcast during your commute, bought a gift for your sister online, or even clicked on this article because it was displayed as a “recommended reading.”

If any of those things — or a whole bunch more — are true, you have been subject to an algorithm today. Most likely, more than one.

“An algorithm is any process or set of instructions for taking a fixed set of inputs and transforming it into some form of output,” says Mason Kortz ’14, a clinical instructor at Harvard Law School’s Cyberlaw Clinic and lecturer on law, who is teaching a reading group called Algorithms, Rights, and Responsibilities this semester.

The federal sentencing guidelines are an example of an algorithm, says Kortz, because they account for a defendant’s criminal history, the nature of the offense, and aggravating factors in determining an acceptable sentence.
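To make the “fixed set of inputs transformed into an output” idea concrete, here is a minimal, purely illustrative sketch in Python. The factor names and point values are hypothetical and are not drawn from the actual federal sentencing guidelines; the point is only that a “hand-turned” algorithm of this kind is simple enough for a person to work through on paper.

# Purely illustrative: hypothetical factors and point values, not the real
# federal sentencing guidelines. The algorithm is just a fixed mapping from
# a small set of inputs to an output.

def recommend_sentence_range(criminal_history_points: int,
                             offense_level: int,
                             aggravating_factors: int) -> tuple[int, int]:
    """Return a hypothetical recommended sentence range in months."""
    score = offense_level + criminal_history_points + 2 * aggravating_factors
    low = score * 3   # arbitrary scaling chosen only for illustration
    return (low, low + 12)

# Example: a person could compute this by hand just as easily.
print(recommend_sentence_range(criminal_history_points=2,
                               offense_level=10,
                               aggravating_factors=1))  # -> (42, 54)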

But the reading group, he clarifies, examines different kinds of algorithmic processes: those run by computers or artificial intelligence. “Our course is focused on algorithms that consider hundreds, or even thousands, of data points in making decisions,” says Kortz. “These are very complex processes that wouldn’t really be feasible for a human, such as a judge, to sit down and do on a sheet of paper. And these complex algorithms raise questions of what happens when technology allows you to exceed the limitations that come with ‘hand-turned’ algorithms, like the sentencing guidelines.”

He points to self-driving cars, which are one of the subjects of his course. An autonomous vehicle must consider seemingly countless data points to complete a trip successfully — in addition to stop signs, weather, and road markings, it must account for the unpredictable behavior of other cars and pedestrians. It also must adjust its behavior constantly. With no human behind the wheel, what happens if an accident does occur? And how should we understand and assign blame?

Kortz says his course encourages students to think about existing legal structures first. “For self-driving cars, we look to tort, and specifically product, liability. We ask the question, is product liability a good regime for governing harms potentially caused by self-driving cars? Why or why not? What are alternatives?”

The idea, he emphasizes, is not necessarily to come up with the best way to regulate the industry. “It’s to understand how existing principles of law might be applied to the problem in front of you.”

When it comes to algorithms that use our personal data to sell us things, track us, or help determine our future, other questions arise. Kortz highlights the growing use of algorithms in employment, as when companies use software to automatically filter out resumes, and in mortgage lending, where taking individual judgment out of decision-making can either eliminate biases or bake them in further.

“With these processes, we have to ask, are we actually reducing discrimination in these areas, or just moving it around?” he says. “And are our traditional legal structures, like the Civil Rights Act and Fair Housing Act, sufficient to protect against algorithmic discrimination?”


Remote Learning Accidentally Introduced a New Danger for LGBTQ Students

Via Slate, by Alejandra Caraballo


Imagine this: A 13-year-old student is called into their school counselor’s office. There they find their counselor and their parents waiting for them, concerned looks on their faces. “We know you think you’re trans,” one of them says. The student is horrified. They’ve never shared these private thoughts with anyone, channeling their feelings and questions into their personal diary on their laptop. Had their parents been reading their diary? No. Their laptop was given to them by their school, and it contains software that flags any student writing that uses, among other terms, “queer” or “transgender.” The company forwarded the flagged content to a school counselor. And under a recently passed “Don’t Say Gay” bill in the student’s state, the counselor was required to report the writing to the student’s parents, outing the student. Outing the student before they were ready to share their identity, or even sure of it themselves, puts that student at risk of their family disowning them, or worse.

This story is hypothetical, but it’s realistic. Last year, a student in Minneapolis was outed when school administrators contacted their parents after surveillance software flagged LGBTQ keywords in their writing, and schools’ abilities to screen students’ writings are becoming more and more invasive. The COVID-19 pandemic brought about a seismic shift in the use of surveillance software by schools. Simultaneously, a growing reactionary backlash against school policies affirming and supporting LGBTQ students is resulting in legislation, lawsuits, and pressure campaigns to implement anti-LGBTQ policies. These coinciding events threaten to turn schools into a surveillance apparatus uniquely suited to outing and marginalizing at-risk LGBTQ students.

As schools scrambled to shift to remote learning, they hastily signed contracts with educational technology vendors without understanding the implications for students’ privacy. A recent survey by the Center for Democracy and Technology showed that 81 percent of teachers report their schools are now using surveillance software to monitor students. One program used by schools around the country is Gaggle, which surveils school computers and student accounts. The use of Gaggle has resulted in the constant monitoring of students through their Gmail and Microsoft Office accounts, even when they are at home using personal devices. Gaggle even monitors, in real time, the content being written by students in Google Docs.

Gaggle flags the terms “lesbian,” “gay,” and “transgender” as sexual content that is reported to human reviewers at the company to determine if it should be passed along to school staff. Gaggle’s CEO, Jeff Patterson, defended the policy of flagging LGBTQ content as a means to protect students from bullying.

There are real potential harms to this kind of surveillance. A student writing that they might be queer or trans in a personal diary in a Google Doc on their school Google account—whether on their school or personal computer—could then result in that writing being reported to school administrators and outing the student.

While off-campus surveillance of students’ personal devices presents a grave threat to all students’ privacy, low-income students unable to afford personal devices feel the brunt of these surveillance technologies. Prior to the pandemic, 43 percent of schools had device distribution programs; now, 86 percent do. We know that school-issued devices often feature more invasive surveillance software and internet content filters. But despite this rapid increase, there is no readily available information on how schools monitor their students or with what software. It’s often up to journalists and nonprofit advocacy organizations to uncover this information through public information requests that take months or longer.

Whether installed on school-issued devices or downloaded onto students’ own, these software tools do not disclose their source code or how they prioritize student information for review and reporting, leaving parents, students, and even school administrators in the dark about what the privacy concerns are, let alone how to mitigate them. But what is known is not good. A.I. automation tools, such as those used by Gaggle, are rife with broken promises about accuracy. They often have fundamental racial and gender biases. A consequence of using these tools to monitor students’ online content is that they will disproportionately affect people of color and marginalized groups.

This surveillance technology becomes even more harmful when combined with recent efforts to require the outing of students, such as the now-withdrawn amendment to Florida’s “Don’t Say Gay” bill that would have required school staff to out children within six weeks of discovering the information, regardless of concerns about the child’s health and safety. Even worse, Texas has declared that providing gender-affirming care to trans children is child abuse. Legislation like this is part of a massive reactionary backlash directed at school boards for everything from mask mandates to so-called critical race theory, or CRT.
