  • This course explores legal issues relating to the creation, exploitation, and protection of music and other content. It focuses on traditional regimes and models and the ways new technologies have affected strategies for making and distributing content. The seminar balances doctrinal and policy concerns with the day-to-day legal and business practices and skills relevant to practitioners.

  • This paper offers thoughts on the evolving nature and scope of Internet governance in the context of the development of the right to be forgotten. It summarizes traditional frameworks for: (a) defining and operationalizing principles of Internet governance; and (b) distinguishing the types of issues that raise transnational governance concerns from the types of issues that are commonly considered the domain of local laws and norms. If an issue falls within the ambit of Internet governance, it may lend itself to a certain set of solutions (with input from a broad cross-section of global public and private stakeholders). Issues outside that domain tend to be subjects of local regulatory mechanisms, in accordance with notions of national sovereignty. Categorizing a set of legal, policy, or technical considerations as one or the other thus has consequences for the types of governance approaches that may best be deployed to address them. The paper provides examples of how recent technical and legal developments have put pressure on narrow conceptions of Internet governance as concerned primarily with Internet architecture and infrastructure. It posits that Internet governance models may be relevant to more and more conduct that occurs above the level of the Internet’s metaphorical pipes, including developments at what is traditionally conceived of as the content layer. The paper suggests that various global implementations of the right to be forgotten, and in particular implementations directed at the activities of search engines, offer a useful case study for examining and assessing this transformation.

  • This piece endeavors to provide context for state and local officials considering tasks around the development, procurement, implementation, and use of risk assessment (RA) tools. It begins with brief case studies of four states that adopted (or attempted to adopt) such tools early on and describes their experiences. It then draws lessons from these case studies and suggests some questions that procurement officials should ask of themselves, of their colleagues who call for the acquisition and implementation of tools, and of the developers who create them. The paper concludes by examining existing frameworks for technological and algorithmic fairness. The authors offer a framework of four questions that government procurers should ask at the point of adopting RA tools. That framework draws on the experiences of the states studied and offers a way to think about accuracy (i.e., the RA tool’s ability to accurately predict recidivism), fairness (i.e., the extent to which an RA tool treats all defendants fairly, without exhibiting racial bias or discrimination), interpretability (the extent to which an RA tool can be interpreted by criminal justice officials and stakeholders, including judges, lawyers, and defendants), and operability (the extent to which an RA tool can be administered by officers within police, pretrial services, and corrections).

  • Artificial intelligence (“AI”) is changing the world before our eyes. The promise of AI to improve our lives is enormous. AI-based systems are already outperforming medical specialists in diagnosing certain diseases, while the use of AI in the financial system is expanding access to credit for borrowers who were once passed over. Yet AI also has downsides that dampen its considerable promise. AI-based systems impact the right to privacy, since they depend on the collection and use of vast quantities of data to make predictions that, in numerous cases, have served to perpetuate existing social patterns of bias and discrimination. These disturbing possibilities have given rise to a movement seeking to embed ethical considerations into the development and deployment of AI. This project, by contrast, demonstrates the considerable value of using human rights law to evaluate and address the complex impacts of AI on society. Human rights law provides an agreed set of norms, a shared language, and an institutional infrastructure for helping to ensure that the promises of AI are met and its greatest perils are avoided. Our project seeks to advance the emerging conversation on AI and human rights by evaluating the human rights impacts of six current uses of AI. Our framework recognizes that AI systems are not being deployed against a blank slate, but rather against the backdrop of social conditions that have complex pre-existing human rights impacts of their own. By digging deep into current AI implementations, we see how they impact the full range of human rights guaranteed by international law, privacy chief among them. We also gain insight into the unequal distribution of the positive and negative impacts of AI on human rights throughout society, and begin to explore the power of the human rights framework to address these disparate impacts.

  • Artificial intelligence is already starting to change our lives. Over the coming decades, these new technologies will shape many of our daily interactions and drive dramatic economic growth. As AI becomes a core element of our society and economy, its impact will be felt across many of the traditional spheres of state attorney general (AG) jurisdiction. Members of AG offices will need an understanding of the AI tools and applications they will increasingly encounter in consumer devices, state-procured systems, the court system, criminal forensics, and other areas that touch on traditional AG issues like consumer privacy, criminal justice, and representing state governments. The modest goal of this primer is to help state AGs orient their thinking by providing both a broad overview of the impact of AI on AG portfolios and a selection of resources for further learning on specific topics. As with any new technology, it is impossible to predict exactly where AI will have its most significant impact on matters of AG jurisdiction. Yet AGs can better prepare themselves for this future by maintaining a broad understanding of how AI works, how it can be used, and how it can impact our economy and society. If they succeed, AGs can play a key constructive role in preventing misconduct, shaping guidelines, and ultimately maximizing the positive impact of these exciting new technologies. We intend for this briefing book to serve as a jumping-off point in that preparation, setting a baseline of understanding for the AGTech Forum and providing resources for specific learning beyond our workshop.

  • The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before; applications range from clinical decision support to autonomous driving and predictive policing. That said, common sense reasoning [McCarthy, 1960] remains one of the holy grails of AI, and there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014]. There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems. Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there are also important consistencies: when demanding explanation from humans, what we typically want to know is how and whether certain input factors affected the final decision or outcome. These consistencies allow us to list the technical considerations that must be addressed if we desire AI systems that can provide the kinds of explanations currently required of humans under the law. Contrary to the popular view of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible, though it may sometimes be practically onerous: there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that, for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.
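
    A minimal, hypothetical sketch (not drawn from the paper) of the kind of explanation described in the abstract above: for a simple linear model, reporting how and whether each input factor affected the decision is straightforward. The feature names, toy data, and choice of logistic regression below are illustrative assumptions only.

        # Illustrative sketch only (hypothetical data and model, not from the paper).
        # It reports how each input factor affected a decision by ranking
        # per-feature contributions (coefficient * input value) of a linear model.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical factors
        X = np.array([[55.0, 0.40, 2.0],
                      [72.0, 0.15, 8.0],
                      [38.0, 0.65, 1.0],
                      [90.0, 0.10, 12.0]])
        y = np.array([0, 1, 0, 1])  # hypothetical deny (0) / approve (1) outcomes

        model = LogisticRegression().fit(X, y)

        def explain(x):
            """Rank input factors by how strongly they pushed the decision."""
            contributions = model.coef_[0] * x
            return sorted(zip(feature_names, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True)

        # Explain the decision for one (hypothetical) applicant.
        for name, contribution in explain(X[2]):
            print(f"{name}: {contribution:+.2f}")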

  • Zero rating, which allows users to access select Internet services and content without incurring mobile data charges, is not a new concept. But it has become an object of debate as mobile carriers and major app providers have used it in the developing world to attract customers, with the goal of increasing Internet access and adoption. While some feel these programs violate net neutrality and create the potential for a two-tiered Internet, others argue that zero rating programs bring the developing world online and could be modified to uphold, rather than violate, net neutrality principles. At the same time, little research evaluating zero rating programs exists, and many different program formulations are lumped under the term “zero rating,” some of which are more compatible with net neutrality than others. In March of 2016, the Berkman Klein Center for Internet & Society gathered a diverse group of stakeholders from academia, the media, the government sector, industry, and the open software community to discuss the use of zero rating as a means to improve Internet adoption in the developing world and how and when it could be an effective tool, if at all. This paper captures the resulting dialogue and recommendations. The workshop summary is followed by a collection of briefing papers representing the viewpoints of many of the workshop participants.

    Key Findings:

    • Many different models of industry initiatives currently fall into the loose definition of zero rating. Creating a better defined taxonomy of program parameters, technical mechanisms, and impacts may allow for greater nuance and understanding in the field, as well as more targeted regulatory responses.

    • Universal Internet access and adoption is a common goal but one that requires significant investment in global infrastructure. Some assert that zero rating programs may serve as helpful stopgap measures to increase access, while others argue that these programs contribute to the creation of a tiered Internet ecosystem without providing meaningful benefits to the targeted beneficiaries.

    • Zero rating initiatives may be employed in pursuit of goals other than Internet adoption, such as an emergency services messaging system or security updates. The goals of a particular zero rating program may make it more or less controversial.

    • More empirical research is required to fully assess the impact of specific zero rating initiatives, as well as zero rating generally, on Internet adoption in the developing world. This research will sometimes require access to usage information held by mobile carriers and zero rating service providers, which should be handled with user privacy in mind.

  • This report addresses a number of key considerations that those managing open source software development initiatives should take into account when thinking about structure, organization, and governance. The genesis of this project involved an investigation into anecdotal reports that companies and other institutions developing open source software were facing difficulties obtaining tax-exempt nonprofit status under Section 501(c)(3) of Title 26 of the United States Code. Based on conversations with a number of constituents in the open source software development community, the authors have prepared this report to address specific questions about nonprofit status alongside questions about corporate formation and governance models more generally. Nothing in this report should be viewed as a substitute for specific legal advice on the narrow questions facing particular organizations under particular sets of factual circumstances. But the authors are hopeful that the document provides a general overview of the complex issues that open source initiatives face when balancing a need for structure and continuity with the innovative and experimental spirit at the heart of many open source development projects. The report has two primary parts:

    • First, it addresses some formal organizational considerations that open source software initiatives should weigh, evaluating the benefits of taking on a formal structure and the options for doing so. The report provides information about different types of corporate organization that open source projects may wish to consider. And it delves into Internal Revenue Service policy and practice and US tax law concerning the tax exemptions referenced above.

    • In its second half, the authors pull back to consider questions of organizational structure more broadly, offering ideas about governance models that open source organizations may wish to explore, separate from formal corporate structure, as they seek to achieve their missions.

    Different considerations may inform the choice of formal, legal organizational structures (on the one hand) and governance models (on the other hand). By addressing both, the authors hope that this report will be useful to the broadest possible range of managers of and contributors to open source development initiatives.

  • An article discussing India's Information Technology Act, enacted in 2000, which had implications for new media technologies, including the security of electronic records and digital signature certificates, and the subsequent 2008 amendment to the IT Act, which allows for government blocking of websites, provides for “safe harbors” available to online intermediaries, and creates several computer-related criminal offenses related to online speech and free expression.

  • Privacy law in the United States is a complicated patchwork of state and federal caselaw and statutes. Harvard Law School’s Cyberlaw Clinic, based at the Berkman Center for Internet & Society, prepared this briefing document in advance of the Student Privacy Initiative's April 2013 workshop, "Student Privacy in the Cloud Computing Ecosystem," to provide a high-level overview of two of the major federal legal regimes that govern the privacy of children’s and students’ data in the United States: the Children’s Online Privacy Protection Act (COPPA) and the Family Educational Rights and Privacy Act (FERPA). This guide aims to offer schools, parents, and students alike a sense of some of the laws that may apply as schools begin to use cloud computing tools to help educate students. Both of the relevant statutes – and particularly FERPA – are complex and are the subjects of large bodies of caselaw and extensive third-party commentary, research, and scholarship. This document is not intended to provide a comprehensive summary of these statutes or of privacy law in general, and it is not a substitute for specific legal advice. Rather, this guide highlights key provisions in these statutes and maps the legal and regulatory landscape.

  • A White Paper highlighting several categories of laws relevant to independent journalists and newsgatherers in the Commonwealth of Massachusetts, including state statutes governing open meetings and public records, revisions to Massachusetts Supreme Judicial Court Rule 1:19 (which concerns the recording of court proceedings), and federal caselaw interpreting the state wiretap statute as it applies to the recording of public officials in public places.

  • Christine Lepera & Christopher T. Bavitz, Music Plagiarism Defendants Win Summary Judgment: Recent Decisions Have Disposed of Cases on Both Originality and 'Access' Grounds, 228 N.Y. L.J., Dec. 2, 2002, at S4.
