Alicia Solow-Niederman

Climenko Fellow and Lecturer on Law

2020-2021

Biography

Alicia Solow-Niederman’s research sits at the intersection of public law and emerging digital technologies, such as artificial intelligence. Recognizing that all technologies are deployed in social and political contexts, Alicia takes an interdisciplinary approach. She is especially interested in how to ensure public accountability and democratic legitimacy as government and private firms alike develop and deploy algorithmic tools that allocate benefits and burdens in particular jurisdictions. Her current project considers the ways in which the regulatory or administrative regime within which an algorithmic tool is adopted can facilitate or thwart not only certain technological understandings of fairness and bias, but also certain forms of stakeholder input and top-down oversight.

Alicia’s legal scholarship has appeared in the Southern California Law Review, the Berkeley Technology Law Journal, and the Stanford Technology Law Review. Her Essay on data breaches was selected as a winner of the Yale Law Journal’s recent graduate essay competition on emerging legal challenges in law and technology.

Alicia is a graduate of Harvard Law School, where she was Forum Editor for the Harvard Law Review. After law school, she served as the inaugural fellow in artificial intelligence, law, and policy for UCLA Law’s Program on Understanding Law, Science, and Evidence (PULSE) and then clerked in the U.S. District Court for the District of Columbia. Previously, she worked as a project manager at the Berkman Klein Center for Internet & Society and earned her B.A. with distinction in communication and political science from Stanford University.

Alicia Solow-Niederman, Administering Artificial Intelligence, 94 S. Cal. L. Rev. (forthcoming 2020).
Categories: Technology & Law, Government & Politics
Sub-Categories: Public Law, Administrative Law & Agencies, Cyberlaw, Networked Society
Type: Article
Abstract
As AI increasingly features in everyday life, it is not surprising to hear calls to step up regulation of the technology. In particular, a turn to administrative law to grapple with the consequences of AI is understandable because the technology’s regulatory challenges appear facially similar to those in other technocratic domains, such as the pharmaceutical industry or environmental law. But AI is unique, even if it is not different in kind. AI’s distinctiveness comes from technical attributes—namely, speed, complexity, and unpredictability—that strain administrative law tactics, in conjunction with the institutional settings and incentives, or strategic context, that affect its development path. And this distinctiveness means both that traditional, sectoral approaches hit their limits, and that turns to a new agency like an “FDA for algorithms” or a “federal robotics commission” are of limited utility in constructing enduring governance solutions. This Article assesses algorithmic governance strategies in light of the attributes and institutional factors that make AI unique. In addition to technical attributes and the contemporary imbalance of public and private resources and expertise, AI governance must contend with a fundamental conceptual challenge: algorithmic applications permit seemingly technical decisions to de facto regulate human behavior, with a greater potential for physical and social impact than ever before. This Article warns that the current trajectory of AI development, which is dominated by large private firms, augurs an era of private governance. To maintain the public voice, it suggests an approach rooted in governance of data—a fundamental AI input—rather than only contending with the consequences of algorithmic outputs. Without rethinking regulatory strategies to ensure that public values inform AI research, development, and deployment, we risk losing the democratic accountability that is at the heart of public law.
Alicia Solow-Niederman, Emerging Digital Technology and the “Law of the Horse” (UCLA L. Rev. Disc.: Law Meets World, Feb. 19, 2019).
Categories: Technology & Law
Sub-Categories: Cyberlaw, Networked Society, Information Privacy & Security, Digital Property
Type: Other
Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242 (2019).
Categories: Technology & Law, Government & Politics
Sub-Categories: Judges & Jurisprudence, Courts, Cyberlaw, Networked Society
Type: Article
Abstract
Artificial intelligence, or AI, promises to assist, modify, and replace human decision-making, including in court. AI already supports many aspects of how judges decide cases, and the prospect of “robot judges” suddenly seems plausible—even imminent. This Article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large. The impact is likely to be greatest in areas, including criminal justice and appellate decision-making, where “equitable justice,” or discretionary moral judgment, is frequently considered paramount. By offering efficiency and at least an appearance of impartiality, AI adjudication will both foster and benefit from a turn toward “codified justice,” an adjudicatory paradigm that favors standardization above discretion. Further, AI adjudication will generate a range of concerns relating to its tendency to make the legal system more incomprehensible, data-based, alienating, and disillusioning. And potential responses, such as crafting a division of labor between human and AI adjudicators, each pose their own challenges. The single most promising response is for the government to play a greater role in structuring the emerging market for AI justice, but auspicious reform proposals would borrow several interrelated approaches. Similar dynamics will likely extend to other aspects of government, such that choices about how to incorporate AI in the judiciary will inform the future path of AI development more broadly.
Alicia Solow-Niederman, YooJung Choi & Guy Van den Broeck, The Institutional Life of Algorithmic Risk Assessment, 34 Berkeley Tech. L.J. 705 (2019).
Categories: Technology & Law, Criminal Law & Procedure, Legal Profession, Discrimination & Civil Rights
Sub-Categories: Criminal Justice & Law Enforcement, Law & Public Policy, Legal Reform, Cyberlaw, Networked Society
Type: Article
Abstract
Symposium: Governing Machines: Defining and Enforcing Public Policy Values in AI Systems.

As states nationwide turn to risk assessment algorithms as tools for criminal justice reform, scholars and civil society actors alike are increasingly warning that this technological turn comes with complications. Research to date tends to focus on fairness, accountability, and transparency within algorithmic tools. Although attention to whether these instruments are fair or biased is normatively essential, this Article contends that this inquiry cannot be the whole conversation. Looking at issues such as fairness or bias in a tool in isolation elides vital bigger-picture considerations about the institutions and political systems within which tools are developed and deployed. Using California’s Money Bail Reform Act of 2017 (SB 10) as an example, this Article analyzes how risk assessment statutes create frameworks within which policymakers and technical actors are constrained and empowered when it comes to the design and implementation of a particular instrument. Specifically, it focuses on the tension between, on one hand, a top-down, global understanding of fairness, accuracy, and lack of bias, and, on the other, a tool that is well-tailored to local considerations. It explores three potential technical and associated policy consequences of SB 10’s framework: proxies, Simpson’s paradox, and thresholding. And it calls for greater attention to the design of risk assessment statutes and their allocation of global and local authority.
Alicia Solow-Niederman, Beyond the Privacy Torts: Reinvigorating A Common Law Approach for Data Breaches, 127 Yale L.J. F. 614 (2018).
Categories: Civil Practice & Procedure, Technology & Law, Consumer Finance
Sub-Categories: Consumer Protection Law, Torts, Remedies, Networked Society, Information Privacy & Security, Cyberlaw
Type: Article
Abstract
Data breaches continue to roil the headlines, yet regulation and legislation are unlikely to provide a timely solution to protect consumers. Meanwhile, individuals are left, at best, in a state of data insecurity and, at worst, in a compromised economic situation. State common law provides a path forward. Rather than rely on statutory claims or the privacy torts to protect consumer data, this Essay suggests that courts should recognize how contemporary transactions implicate fiduciary-like relationships of trust. By designating what this Essay terms data confidants as a limited form of information fiduciary, courts can reinvigorate the tort of breach of confidence as a remedy for aggrieved consumers.
Yochai Benkler, Hal Roberts, Robert Faris, Alicia Solow-Niederman & Bruce Etling, Social Mobilization and the Networked Public Sphere: Mapping the SOPA-PIPA Debate, 32 Pol. Comm. 594 (2015).
Categories: Technology & Law
Sub-Categories: Communications Law, Cyberlaw, Networked Society
Type: Article
Alicia Solow-Niederman, Kevin Tsai, Andrew G. Crocker & Jonathan L. Zittrain, Final Briefing Document: Public Networks for Public Safety: A Workshop on the Present and Future of Mesh Networks (Berkman Ctr. Res. Publ'n. No. 2012-12, Harv. Pub. L. Working Paper No. 12-22, May 26, 2012).
Categories: Technology & Law
Sub-Categories: Communications Law, Networked Society
Type: Other
Abstract
This briefing document was developed as part of a March 30, 2012 workshop entitled “Public Networks for Public Safety: A Workshop on the Present and Future of Mesh Networking,” hosted by the Berkman Center for Internet & Society at Harvard University. The event provided a starting point for conversation about whether mesh networks could be adopted within consumer technologies to enhance public safety communications and empower and connect the public while simultaneously improving public safety. Participants in this initial convening included members of government agencies, academia, the telecommunications industry, and civil society organizations; their helpful inputs were integral to the final version of this document.

Building on the dialogue at this gathering, this briefing document seeks to: sketch a broad overview of mobile ad hoc networks (MANETs) and mesh technologies; identify critical technical issues and questions regarding the communications effectiveness of those technologies; explain how public safety communications relate to mesh and offer a synopsis of current regulations affecting those communications; describe a set of basic use cases that emerged from the conference; map out stakeholders at the technical, regulatory, legal, and social levels, and associated interests, points of connection, and potential challenges; catalog select examples and, where possible, highlight potential next steps and areas for short-term action; and summarize key takeaways from the conference, with an emphasis on shared principles or best practices that might inform participants’ diverse efforts to improve communications affordances for the public and the public safety community.

The paper also synthesizes several strains of workshop discussion that probed big-picture framing concerns that could inform the present and future of mesh. Specifically, it puts forth two related but distinct models for mesh: mesh in a technical sense and mesh as a metaphor or social layer construct, with a particular emphasis on the need for further conceptual development with regard to “social mesh.” The final section emphasizes key takeaways from the event, highlighting core principles and best practices that might both provide a theoretical underpinning for the future conceptual development of mesh networking technologies and social mesh models, respectively, and inform the real-world development of communications systems that involve either definition of mesh.

The Berkman Center thanks all of the workshop attendees both for their participation during the event and for comments offered during the development of this briefing document. Berkman Center Project Coordinator Alicia Solow-Niederman worked closely with Professor Jonathan Zittrain to plan and execute this event as well as to produce this briefing document. Berkman Center Research Assistants Andrew Crocker and Kevin Tsai provided exceptional research and contributions to this briefing document, and June Casey contributed indispensable support with background research.
Alicia Solow-Niederman, The Power of 140 Characters? #IranElection and Social Movements in Web 2.0, 3 Stan. J. Sci., Tech. & Soc’y (2010).
Categories: Technology & Law
Sub-Categories: Networked Society, Cyberlaw, Communications Law
Type: Article
Abstract
This paper analyzes the role of the micro-blogging site Twitter during the contested 2009 Iranian presidential election. It considers just what it means to call the Iranian case a “Twitter Revolution” and relies upon a body of literature discussing netwar, online activism, social media, and social movements. It concludes that a social medium such as Twitter can assist the spread of information and thereby counter the censorship of an authoritarian state, yet may fail to help citizens of an oppressive regime physically mobilize within the country. Ultimately, the Iranian case illustrates that it is through interactions between old and new, between traditional media and micro-blogging, between on-the-ground protests and online activism, that social media like Twitter may contribute on the global stage.
