via unite.ai
by Daniel Nelson
In 2019, there was more focus on AI ethics than ever before. However, much of this discussion seemed hazy, with no codified approach; instead, individual companies created their own frameworks and policies for AI ethics. Consensus on AI ethics issues matters because it helps policymakers create and adjust policies, and it informs the work of researchers and scholars. Beyond that, AI companies must know where the ethical limits are if they hope to avoid unethical AI implementations. To create a clearer picture of the trends in AI ethics, as VentureBeat reports, the Berkman Klein Center at Harvard University performed a meta-analysis of the various existing AI ethics principles and frameworks.
According to the authors of the analysis, the researchers wanted to compare the principles side-by-side to look for overlap and divergence. Jessica Fjeld, assistant director of the Harvard Law School Cyberlaw Clinic, explained that the team's effort to “uncover the hidden momentum in a fractured, global conversation around the future of AI” resulted in the white paper and an associated data visualization.
During the analysis, the team examined 36 AI principles documents originating from around the world and from many different types of organizations. The research found eight themes that appeared again and again across the documents.
Privacy and accountability were two of the most common ethical themes, as were AI safety and security. Transparency and explainability were also commonly cited goals, reflecting the many attempts over the course of 2019 to make algorithms more explainable. Fairness and non-discrimination formed another ethical focal point, reflecting growing concerns about data bias. Ensuring human control of technology, rather than surrendering decision-making power to AI, was heavily mentioned as well. Professional responsibility was the seventh common theme the researchers identified. Finally, the researchers found continual mention of promoting human values in the AI ethics documentation they examined.
In their paper and an accompanying map, the research team gave qualitative and quantitative breakdowns of how these themes manifest in AI ethics documentation. The map displays where each theme is mentioned.
The research team noted that much of the AI ethics discussion revolved around concern for human values and rights. As the paper states:
“64% of our documents contained a reference to human rights, and five documents [14%] took international human rights as a framework for their overall effort.”
References to human rights and values were more common in documents produced by private sector and civil society groups. This suggests that private sector AI companies are concerned not only with profits but also with producing AI ethically. Meanwhile, government agencies seem less concerned with, or aware of, AI ethics overall: fewer than half of the AI-related documents originating from government agencies concerned themselves with AI ethics.
The researchers also noted that more recent documents were more likely to address all eight of the most prominent themes rather than just a few. This implies that ideas about what constitutes ethical AI use are beginning to coalesce among those leading the discussion about AI ethics. Finally, the researchers state that the success of these principles in guiding the development of AI will depend on how well integrated they are into the AI development community at large. As they write in the paper:
“Moreover, principles are a starting place for governance, not an end. On its own, a set of principles is unlikely to be more than gently persuasive. Its impact is likely to depend on how it is embedded in a larger governance ecosystem, including for instance relevant policies (e.g. AI national plans), laws, regulations, but also professional practices and everyday routines.”
OCP note: To read more about Jessica Fjeld and Adam Nagy’s work on the ethics and governance of AI, read their report, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.