  • The next round of cyberattacks might come from a discussion about knitting.

  • Jonathan Zittrain, Intellectual Debt: With Great Power Comes Great Ignorance, in The Cambridge Handbook of Responsible Artificial Intelligence (Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard eds., 2022).

    In this chapter, law and technology scholar Jonathan Zittrain warns of the danger of relying on answers for which we have no explanations. There are benefits to utilising solutions discovered through trial and error rather than rigorous proof: though aspirin was discovered in the late 19th century, it was not until the late 20th century that scientists were able to explain how it worked. But doing so accrues ‘intellectual debt’. This intellectual debt is compounding quickly in the realm of AI, especially in the subfield of machine learning. While we know that ML models can produce efficient, effective answers, we don’t always know why they reach the conclusions they do. This makes it difficult to detect when they are malfunctioning, being manipulated, or producing unreliable results. When several such systems interact, the ledger moves further into the red. Society’s movement from basic science towards applied technology that bypasses rigorous investigative research inches us closer to a world in which we are reliant on an oracle AI, one we trust regardless of our ability to audit its trustworthiness. Zittrain concludes that we must create an intellectual debt ‘balance sheet’ by allowing academics to scrutinise these systems.

  • Archivists regularly contend with a wide range of security threats, including data breaches, inadvertent loss, and legal action by those hoping to make sealed records public. These threats are particularly salient when sensitive materials are donated with delayed-release conditions. Trust in archivists’ ability to enforce such conditions gives donors the confidence to enter into the historical record materials that they might otherwise destroy. But as these materials are increasingly born-digital (and therefore hackable, convenient to exfiltrate en masse, and more easily corrupted), and as governments and private parties become ever more aggressive in their efforts to secure early releases, we must innovate in order to stand still. To compensate for these new dynamics, we propose Strong Dark Archives (SDA), a blended legal and technical protocol for securing delayed-release archival materials among a network of libraries. SDA leverages modern cryptography and institutional agreements to coordinate access control across multiple accredited archival organizations, providing broad resilience to data breaches, technical failures, and legal process. Through this distributed approach to security, SDA imposes meaningful friction on efforts to force the early disclosure of archival records.
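
    The abstract leaves the cryptographic machinery unspecified, but a natural building block for splitting access control among several libraries is threshold secret sharing, e.g. Shamir’s scheme: a decryption key is divided so that any k of n custodians can jointly reconstruct it, while fewer than k learn nothing at all. A minimal sketch, assuming Shamir-style sharing stands in for whatever construction SDA actually uses:

```python
# Illustrative k-of-n secret sharing (Shamir) as one way a Strong Dark
# Archive might split a decryption key among custodian libraries.
# Hypothetical simplification: real key management, authentication, and
# the legal layer are all out of scope here.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy secret

def split_key(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_key(secret=123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 libraries suffice
```

    Under a (k, n) split, no single institution, nor a breach or subpoena reaching fewer than k of them, suffices to unseal the material early, which is the “meaningful friction” the abstract describes.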

  • Jonathan Zittrain’s testimony before a hearing of the Subcommittee on Competition Policy, Antitrust, and Consumer Rights on the Internet of Things.

  • The Internet, and the Web built on top of it, were intended to support an “entropic” physical and logical network map (Zittrain, 2013). That is, they have been designed to allow servers to be spread anywhere in the world in an ad hoc and evolving fashion, rather than a centralized one. Arbitrary distance among, and number of, servers cause no particular architectural problems and indeed ensure that problems experienced by one data source remain unlinked to others. A Web page can be assembled from any number of upstream sources, through the use of various URLs, each pointing to a different location. To a user, the page looks unified. Over time, however, there are signs that the hosting and finding of Internet services has become more centralized. We explore and document one possible dimension of this centralization. We analyze the extent to which the Internet’s global domain name resolution (DNS) system has preserved its distributed resilience given the rise of cloud-based hosting and infrastructure. We offer evidence of the dramatic concentration of the DNS hosting market in the hands of a small number of cloud service providers over the period from 2011 to 2018. In addition, we examine changes in domains’ tendency to “diversify” their pool of nameservers – how frequently domains employ DNS management services from multiple providers rather than just one provider. Throughout the paper, we use the catastrophic October 2016 attack on Dyn, a major DNS hosting provider, to illustrate the cybersecurity consequences of our analysis.
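
    A core measurement in the paper is whether a domain’s nameservers span more than one provider. Here is a minimal sketch of that per-domain lookup using the dnspython library; attributing a nameserver to a provider by its registered domain is a toy heuristic, not the paper’s methodology:

```python
# Sketch: does a domain spread its NS records across multiple DNS
# providers? Requires `pip install dnspython`. Grouping nameservers by
# their last two DNS labels is a crude stand-in for real provider
# attribution (it mishandles ccTLDs like .co.uk).
import dns.resolver

def nameserver_providers(domain: str) -> set[str]:
    answers = dns.resolver.resolve(domain, "NS")
    providers = set()
    for record in answers:
        ns_host = str(record.target).rstrip(".")          # e.g. "ns1.p43.dynect.net"
        providers.add(".".join(ns_host.split(".")[-2:]))  # e.g. "dynect.net"
    return providers

for domain in ["example.com", "wikipedia.org"]:
    providers = nameserver_providers(domain)
    status = "diversified" if len(providers) > 1 else "single provider"
    print(f"{domain}: {sorted(providers)} -> {status}")
```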

  • Preserving records of what user content is taken down—and why—could make platforms more accountable and transparent.

  • Hyperlinks are a powerful tool for journalists and their readers. Diving deep into the context of an article is just a click away. But hyperlinks are a double-edged sword; for all of the internet’s boundlessness, what’s found on the web can also be modified, moved, or entirely disappeared. This often-irreversible decay of web content is commonly known as linkrot. It is accompanied by the similar problem of content drift: the often-unannounced changes (retractions, additions, replacements) to the content at a particular URL. Our team of researchers at Harvard Law School has undertaken a project to gain insight into the extent and characteristics of journalistic linkrot and content drift. We examined hyperlinks in New York Times articles from the launch of the Times website in 1996 through mid-2019, drawing on a dataset provided to us by the Times. We focus on the Times not merely because it is an influential publication whose archives are often used to help form a historical record, but because the substantial linkrot and content drift we find across its corpus reflect the inherent difficulties of long-term linking to pieces of a volatile web. Results show a near-linear increase in linkrot over time, with interesting patterns emerging within certain sections of the paper and across top-level domains. Over half of the articles containing at least one URL also contained a dead link. Additionally, among the ostensibly “healthy” links in articles, a hand review revealed further erosion of citations via content drift.
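
    The two failure modes are mechanically distinct: linkrot means a cited URL no longer resolves at all, while content drift means it resolves to something other than what was originally cited. A toy version of both checks, assuming a previously stored fingerprint of the cited page (hypothetical; exact hashes over-report drift on dynamic pages, which is one reason the study relied on hand review):

```python
# Toy linkrot / content-drift check. Requires `pip install requests`.
# A real study compares against archived snapshots; here we assume a
# SHA-256 fingerprint stored when the article first cited the URL.
import hashlib
import requests

def check_link(url: str, original_sha256: str | None = None) -> str:
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException:
        return "linkrot (unreachable)"
    if resp.status_code >= 400:
        return f"linkrot (HTTP {resp.status_code})"
    if original_sha256 is not None:
        if hashlib.sha256(resp.content).hexdigest() != original_sha256:
            return "content drift (page changed since first cited)"
    return "ok"

print(check_link("https://example.com"))
```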

  • People across America and the world remain under strong advisories or outright orders to shelter in place, and economies remain largely shut down, as part of an ongoing effort to flatten the curve of the most virulent pandemic since 1918. The economic effects have been predictably staggering, with no clear end in sight. Until a vaccine or other transformative medical intervention is developed, the broad consensus of experts is that the only way out of mass sheltering in place, if hospital occupancy curves are to remain flattened, entails waiting for most of the current cases to resolve, and then cautiously and incrementally reopening. That would mean a sequence of allowing people out; promptly testing anyone showing symptoms — and even some who are not; identifying recent proximate contacts of those who test positive; and then getting in touch with those contacts and, if circumstances dictate, asking or demanding that they individually shelter until the disease either manifests or does not. The idea is to promptly prune branches of further disease transmission in order to keep its spread from growing exponentially.
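
    The branch-pruning idea can be made concrete with a toy branching process: each case seeds roughly R new cases, and tracing plus isolation removes a fraction of those branches before they transmit, so growth dies out once R × (1 − traced) falls below 1. All numbers here are illustrative, not epidemiological estimates:

```python
# Toy branching process: outbreak size with and without contact tracing.
# Each case spawns ~R offspring; a fraction `traced` of those branches
# is pruned (isolated) before transmitting further.
import random

def outbreak_size(R=2.5, traced=0.0, generations=10, seed_cases=10):
    cases, total = seed_cases, seed_cases
    for _ in range(generations):
        new = 0
        for _ in range(cases):
            offspring = random.randint(0, int(2 * R))  # mean roughly R
            new += sum(1 for _ in range(offspring) if random.random() > traced)
        cases = new
        total += new
    return total

random.seed(1)
print("no tracing :", outbreak_size(traced=0.0))  # grows exponentially
print("70% traced :", outbreak_size(traced=0.7))  # fizzles: 2.5 * 0.3 < 1
```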

  • The governance of online platforms has unfolded across three eras – the era of Rights (which stretched from the early 1990s to about 2010), the era of Public Health (from 2010 through the present), and the era of Process (of which we are now seeing the first stirrings). Rights-era conversations and initiatives amongst regulators and the public at large centered predominantly on protecting nascent spaces for online discourse against external coercion. The values and doctrine developed in the Rights era have been vigorously contested in the Public Health era, during which regulators and advocates have focused (with minimal success) on establishing accountability for concrete harms arising from online content, even where addressing those harms would mean limiting speech. In the era of Process, platforms, regulators, and users must transcend this stalemate between competing values frameworks, not necessarily by uprooting Rights-era cornerstones like CDA 230, but rather by working towards platform governance processes capable of building broad consensus around how policy decisions are made and implemented. Some promising steps in this direction could include delegating certain key policymaking decisions to entities outside of the platforms themselves; making platforms “information” or “content” fiduciaries; and systematically archiving data and metadata about disinformation detected and addressed by platforms.

  • To understand where digital governance is going, we must take stock of where it’s been, because the timbre of mainstream thinking around digital governance today is dramatically different than it was when the study of “Internet governance” coalesced in the late 1990s. Perhaps the most obvious change has been a shift from emphasizing networked technologies’ positive effects and promise – couched around concepts like connectivity, innovation, and, by this author, “generativity” – to pointing out their harms and threats. It’s not that threats weren’t previously recognized, but rather that they were more often seen in external clamps on technological development and on the corresponding new freedoms for users: whether in government intervention to block VoIP services like Skype to protect incumbent telco revenues, or in the shaping of technology to effect undue surveillance, whether for government or corporate purposes. The shift in emphasis from positive to negative corresponds to a change in the overarching frameworks for talking about regulating information technology. We have moved from a discourse around rights – particularly those of end-users, and the ways in which abstention by intermediaries is important to facilitate citizen flourishing – to one of public health, which naturally asks us to weigh the systemic benefits or harms of a technology and to think about what systemic interventions might curtail its apparent excesses. Each framework captures important values around the use of technology that can both empower and limit individual freedom of action, including the freedom to engage in harmful conduct. Our goal today should be to identify where competing values frameworks themselves preclude understanding of others’ positions about regulation, and to see if we can map a path forward that, if not reconciling the frameworks, allows for satisfying, if ever-evolving, resolutions to immediate questions of public and private governance.

  • With public and academic attention increasingly focused on the new role of machine learning in the health information economy, an unusual and no-longer-esoteric category of vulnerabilities in machine-learning systems could prove important. These vulnerabilities allow a small, carefully designed change in how inputs are presented to a system to completely alter its output, causing it to confidently arrive at manifestly wrong conclusions. These advanced techniques to subvert otherwise-reliable machine-learning systems—so-called adversarial attacks—have, to date, been of interest primarily to computer science researchers (1). However, the landscape of often-competing interests within health care, and the billions of dollars at stake in systems' outputs, imply considerable problems. We outline motivations that various players in the health care system may have to use adversarial attacks and begin a discussion of what to do about them. Far from discouraging continued innovation with medical machine learning, we call for active engagement of medical, technical, legal, and ethical experts in pursuit of efficient, broadly available, and effective health care that machine learning will enable.
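
    The canonical attack in the computer-science literature the authors cite is the fast gradient sign method: perturb each input feature by a small ε in whichever direction increases the model’s loss. A NumPy sketch against a toy logistic-regression classifier (our own toy setup, not any medical system):

```python
# Fast gradient sign method (FGSM) against a toy logistic regression:
# x_adv = x + eps * sign(dLoss/dx). Small per-feature changes push the
# model's confidence toward the wrong class.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1              # toy model parameters

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=5)                      # an arbitrary input
y = 1.0 if predict_proba(x) > 0.5 else 0.0  # the model's current call

# For logistic regression, the cross-entropy gradient w.r.t. x is (p - y) * w.
grad_x = (predict_proba(x) - y) * w
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean probability      :", predict_proba(x))
print("adversarial probability:", predict_proba(x_adv))
print("max per-feature change :", np.max(np.abs(x_adv - x)))
```

    The worry the abstract sketches is that parties with money at stake could aim exactly this arithmetic at systems adjudicating, for instance, billing or insurance claims.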

  • John Perry Barlow's insights were inseparable from his lyrical way of conveying them. Barlow's expression mates joy and canniness, and one of his talents in writing about new technologies was to flip our conception of the status quo in order to correct it. In 1994, the conventional sense was that the Internet and its champions were heedlessly upsetting a longstanding set of relationships and legal entitlements, with copyright as a signal example. And while that was superficially true, it wasn't the whole story. This brief essay examines the legacy of Barlow's work from the vantage point of today's markedly different digital world.

  • In a book chartered to demonstrate intellectual property in objects, what concrete thing can represent the Internet, a phenomenon that exists only as a well-elaborated idea? Perhaps the best physical representation of the genius of the Internet—and in particular, “Internet Protocol”—is found in an hourglass.

  • Jonathan L. Zittrain, the George Bemis Professor of International Law and Director of the Law Library at Harvard Law School, delivers the annual David L. Lange Lecture in Intellectual Property Law.

  • The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.

  • This paper analyzes the extent to which the Internet’s global domain name resolution (DNS) system has preserved its distributed resilience given the rise of cloud-based hosting and infrastructure. We explore trends in the concentration of the DNS space since at least 2011. In addition, we examine changes in domains’ tendency to “diversify” their pool of nameservers – how frequently domains employ DNS management services from multiple providers rather than just one provider – a comparatively costless and therefore puzzlingly rare decision that could supply redundancy and resilience in the event of an attack or service outage affecting one provider.
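
    One standard way to quantify such concentration is the Herfindahl-Hirschman Index over providers’ shares of hosted domains; whether the authors use the HHI specifically is an assumption here, and the counts below are invented:

```python
# Herfindahl-Hirschman Index (HHI): sum of squared market shares.
# Roughly 1/n for n equal-sized providers; 1.0 for a monopoly.
# Provider counts are invented for illustration, not the paper's data.
def hhi(domains_per_provider: dict[str, int]) -> float:
    total = sum(domains_per_provider.values())
    return sum((n / total) ** 2 for n in domains_per_provider.values())

snapshot_2011 = {"p1": 100, "p2": 90, "p3": 80, "p4": 70, "p5": 60}
snapshot_2018 = {"p1": 260, "p2": 80, "p3": 40, "p4": 15, "p5": 5}

print(f"2011 HHI: {hhi(snapshot_2011):.3f}")  # ~0.21, fairly dispersed
print(f"2018 HHI: {hhi(snapshot_2018):.3f}")  # ~0.47, far more concentrated
```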

  • Actuarial risk assessments might be unduly perceived as a neutral way to counteract implicit bias and increase the fairness of decisions made at almost every juncture of the criminal justice system, from pretrial release to sentencing, parole and probation. In recent times these assessments have come under increased scrutiny, as critics claim that the statistical techniques underlying them might reproduce existing patterns of discrimination and historical biases that are reflected in the data. Much of this debate is centered around competing notions of fairness and predictive accuracy, resting on the contested use of variables that act as "proxies" for characteristics legally protected against discrimination, such as race and gender. We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it's one of purpose. If machine learning is operationalized merely in the service of predicting individual future crime, then it becomes difficult to break cycles of criminalization that are driven by the iatrogenic effects of the criminal justice system itself. We posit that machine learning should not be used for prediction, but rather to surface covariates that are fed into a causal model for understanding the social, structural and psychological drivers of crime. We propose redirecting machine learning and causal inference away from predicting risk scores and toward risk mitigation.
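
    In the simplest terms, that reorientation might look like the sketch below: a sparse model is fit over many candidate features only to surface covariates, which are then handed off to a separate causal analysis rather than used to score individuals. The feature names, data, and pipeline are hypothetical:

```python
# Sketch of the proposed reorientation: use a sparse model to *surface*
# candidate covariates, then study them causally, instead of deploying
# the model to predict any individual's "risk". Requires scikit-learn;
# features and data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["housing_instability", "prior_system_contact", "employment", "age"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# An L1 penalty drives uninformative coefficients to exactly zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

surfaced = [f for f, c in zip(features, model.coef_[0]) if abs(c) > 1e-6]
print("covariates to hand to a causal model:", surfaced)
# Next step (not shown): estimate causal effects of these covariates on
# outcomes, rather than using `model` to assign anyone a risk score.
```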

  • Jonathan Zittrain, Torts! (2nd ed. 2017).

    What’s a tort? It’s a wrong that a court is prepared to recognize, usually in the form of ordering the transfer of money (“damages”) from the wrongdoer to the wronged. The court is usually alerted to wrong by the filing of a lawsuit: anyone can walk through the courthouse doors and, subject to the limits explored in civil procedure, call someone else (or, if a company, something) to account. We’ll discuss the sources that courts turn to in order to answer such questions. Rarely, in tort cases, are those sources the ones laypeople expect: statutes passed by legislatures. Without statutes to guide them, what are courts left with?

  • The architecture and offerings of the Internet developed without much steering by governments, much less operations by militaries. That made talk of “cyberwar” exaggerated, except in very limited instances. Today that is no longer true: states and their militaries see the value not only of controlling networks for surveillance or to deny access to adversaries, but also of subtle propaganda campaigns launched through a small number of wildly popular worldwide platforms such as Facebook and Twitter. This form of hybrid conflict – launched by states without state insignia, on privately built and publicly used services – offers a genuine challenge to those who steward the network and to the private companies whose platforms are targeted. While interventions by one state may be tempered by defense by another state, there remain novel problems to solve when what users see and learn online is framed as organic and user-generated but is in fact not.

  • A sharp increase in web encryption and a worldwide shift away from standalone websites in favor of social media and online publishing platforms have altered the practice of state-level Internet censorship and in some cases led to broader crackdowns, the Internet Monitor project at the Berkman Klein Center for Internet & Society at Harvard University finds. This study documents the practice of Internet censorship around the world through empirical testing in 45 countries of the availability of 2,046 of the world’s most-trafficked and influential websites, plus additional country-specific websites. The study finds evidence of filtering in 26 countries across four broad content themes: political, social, topics related to conflict and security, and Internet tools (a term that includes censorship circumvention tools as well as social media platforms). The majority of countries that censor content do so across all four themes, although the depth of the filtering varies. The study confirms that 40 percent of these 2,046 websites can only be reached over an encrypted connection (denoted by the "https" prefix in a web address, a voluntary upgrade from "http"). While some sites can be reached by either HTTP or HTTPS, total encrypted traffic to the 2,046 sites has more than doubled, to 31 percent in 2017 from 13 percent in 2015, the study finds. Meanwhile, and partly in response to the protections afforded by encryption, activists in particular and web users in general around the world are increasingly relying on major platforms, including Facebook, Twitter, Medium, and Wikipedia.
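
    The encryption figures rest on a per-site test of whether an encrypted connection is available at all. A minimal version of such a probe follows; the study’s actual measurements ran from vantage points in many countries and distinguished block pages from ordinary failures:

```python
# Minimal probe: is a site reachable over HTTPS, HTTP, or both?
# Requires `pip install requests`. Some servers reject HEAD requests;
# a sturdier client would fall back to GET.
import requests

def reachable(url: str) -> bool:
    try:
        return requests.head(url, timeout=10, allow_redirects=True).status_code < 400
    except requests.RequestException:
        return False

for host in ["example.com", "wikipedia.org"]:
    print(f"{host}: https={reachable('https://' + host)} "
          f"http={reachable('http://' + host)}")
```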

  • The world of 2016 is one where leaking a lot is much easier than leaking a little. And the indiscriminate compromise of people’s selfies, ephemeral data, and personal correspondence — what we used to rightly think of as a simple and brutal invasion of privacy — has become the unremarkable chaff surrounding a few worthy instances of potentially genuine whistleblowing. These now-routine Exxon Valdez spill-sized leaks, for which anyone can be a target, threaten us as individuals and as a citizenry. They’re not at all like the Pentagon Papers or the revelations of Watergate, and they wrongly benefit from the general feeling that such leaks are a way to bring powerful parties to account.

  • Doctors and lawyers are prohibited from using clients’ information for their own interests, so why aren’t Google and Facebook?

  • This report from the Berkman Center's Berklett Cybersecurity Project offers a new perspective on the "going dark" debate, drawing on the discussion, debate, and analyses of an unprecedentedly diverse group of security and policy experts from academia, civil society, and the U.S. intelligence community. The Berklett group took up questions of surveillance and encryption that have arisen as some companies encrypt services by default, making their customers' messages accessible only to the customers themselves. The report outlines how market forces and commercial interests, as well as the increasing prevalence of networked sensors in machines and appliances, point to a future with more opportunities for surveillance, not fewer.
