People
Dustin Lewis
-
What is a war crime? Ukraine accuses Russia of war crimes, but what exactly constitutes one?
March 2, 2022
As the Russian invasion of Ukraine continues, Ukrainian President Volodymyr Zelenskyy has described Russia's missile strikes in civilian areas of Kharkiv as "war crimes." On Tuesday, Zelenskyy accused Russia of engaging in terrorism, a day after Karim Khan, the prosecutor of the International Criminal Court, said he would open an investigation into potential war crimes. ... "From an international law perspective, a war crime is any conduct – whether an act or an omission – that fulfills two cumulative criteria," Dustin Lewis, research director for the Harvard Law School Program on International Law and Armed Conflict, told USA TODAY. "First, the conduct must be committed with a sufficient connection to an armed conflict. Second, the conduct must constitute a serious violation of the laws and customs of international humanitarian law that has been criminalized by international treaty or customary law."
-
Modirzadeh briefs UN on self-defense and state silence
March 5, 2021
On Feb. 24, Professor of Practice Naz Modirzadeh ’02, founding director of the Harvard Law School Program on International Law and Armed Conflict (HLS PILAC), briefed a United Nations Security Council Arria-formula meeting convened by the Permanent Mission of Mexico.
-
An Enduring Impasse on Autonomous Weapons
September 30, 2020
An article by Dustin Lewis: Regular readers of Just Security will know that the United States and Russia do not see eye to eye on many matters touching on war and peace, not least around cyber, information security, and the conflict in Syria. But you do not have to squint to glimpse how the two are, in several important respects, similarly positioned on one side of an enduring impasse on autonomous weapons. While there is no definition in international law of autonomous weapons, one shorthand is weapons that, once initiated, can nominate, select, and apply force to targets without further human intervention. The debate is not purely academic: a handful of systems falling within this relatively narrow definition are already in use, such as so-called loitering munitions; once launched, those systems can linger in the air for several hours while scanning for targets and then strike without in-the-moment clearance by a human operator. The spectrum of States’ views is on display in the Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems. At its core, the deadlock concerns whether existing international law mostly suffices (as the United States, Russia, and a handful of others have asserted) or new legal rules are needed (as dozens of other States have contended). In brief, beyond largely generic reaffirmations of existing rules and agreement on the importance of the “human element” in the use of force, States disagree in certain critical respects on how to frame and address an array of legal dimensions concerning autonomous weapons.
-
AI and Machine Learning Symposium: Why Detention, Humanitarian Services, Maritime Systems, and Legal Advice Merit Greater Attention
April 28, 2020
An article by Dustin Lewis: I am grateful for the invitation to contribute to this online symposium. The preservation of international legal responsibility and agency concerning the employment of artificial-intelligence techniques and methods in relation to situations of armed conflict presents an array of pressing challenges and opportunities. In this post, I will seek to use one of the many useful framings in the AI section of the ICRC’s 2019 “Challenges” report to widen the aperture further and to identify or amplify four areas of concern: detention, humanitarian services, uninhabited military maritime systems, and legal advice. While it remains critical to place sufficient focus on weapons and, indeed, on the conduct of hostilities more widely, we ought to consider other (sometimes related) areas of concern as well. Drawing on research from an ongoing Harvard Law School Program on International Law and Armed Conflict project that utilizes the analytical concept of “war algorithms,” I will sidestep questions concerning the definitional parameters of what should and should not be labeled “AI.” (A “war algorithm” is defined in the project as an algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict.) Instead, I will assume a wide understanding that encompasses methods and techniques derived from, or otherwise related to, AI science broadly conceived.
-
An article by Dustin Lewis and Naz Modirzadeh: A debate is slowly emerging at United Nations headquarters: Can and should a counterterrorism body authoritatively and authentically interpret and assess compliance with international humanitarian law (IHL), the principal body of law regulating armed conflicts? At first glance, that debate might seem merely niche or technocratic—one fast-tracked to the annals of international law pedantry. But when viewed in its proper context, the debate raises several concerns that may have cascading effects of great significance. Indeed, how it is ultimately resolved may entail consequences for safeguards for populations ravaged by armed conflict, as well as for the integrity and coherence of the system of legal protection in war. Along with our co-author, Jessica S. Burniske, we examine these issues in a new briefing for the Harvard Law School Program on International Law and Armed Conflict.
-
Legal reviews of weapons, means and methods of warfare involving artificial intelligence: 16 elements to consider
March 21, 2019
An op-ed by Dustin Lewis: What are some of the chief concerns in contemporary debates around legal reviews of weapons, means or methods of warfare involving techniques or tools related to artificial intelligence (AI)? One session of the December 2018 workshop on AI at the frontiers of international law concerning armed conflict focused on this topic. In this post, I outline a few key threshold considerations and briefly enumerate 16 elements that States might consider as part of their legal reviews involving AI-related techniques or tools. It is imperative, in general, for States to adopt robust verification, testing and monitoring regimes as part of the process to determine and impose limitations and—as warranted—prohibitions in respect of an employment of weapons, means or methods of warfare. Where AI-related techniques or tools are—or might be—involved, the design and implementation of legal review regimes might pose particular kinds and degrees of challenges as well as opportunities.
-
Recent advances in artificial intelligence have the potential to affect many aspects of our lives in significant and widespread ways. Certain types of machine learning systems—the major focus of recent AI developments—are already pervasive, for example in weather prediction, social media services, search engine results, and online recommendation systems. Machine learning is also being applied to more complex applications, including predictive policing in law enforcement and ‘advice’ for judges when sentencing in criminal justice. Meanwhile, growing resources are being allocated to developing other AI applications. ... We asked some of the experts to distill—in under 300 words—some of the key issues and concerns that they believe we aren’t thinking enough about now when it comes to the future of AI and armed conflict. ... Naz K. Modirzadeh, Founding Director & Dustin A. Lewis, Senior Researcher, Harvard Law School Program on International Law and Armed Conflict: "Looking to the future of artificial intelligence and armed conflict, those of us concerned about international law should prioritize (among other things) deeply cultivating our own knowledge of the rapidly changing technologies. And we should make that an ongoing commitment. There is a perennial question about subject-matter expertise and the law of armed conflict; consider cyber operations, weaponeering and nuclear technology. When it comes to the increasingly impactful and diverse suite of techniques and technologies labeled ‘AI’, the concern takes on a different magnitude and urgency. That’s in no small part because commentators have assessed that AI has the potential to transform armed conflict—and not just the conduct of hostilities."
-
Bots and bombs: Does cyberspace need a “Digital Geneva Convention”?
November 15, 2017
Cyber-attacks are on the rise, threatening power grids, driving up geopolitical tensions, and even crippling hospitals. Microsoft’s chief legal officer, Brad Smith, has proposed that countries agree on a new “Digital Geneva Convention” to contain the risk and set up a new international organisation to police the new rules. Under his proposals, neutral companies dealing with the fallout would also win protected status, like a technological Red Cross. Opinions differ on where the gaps in international law are, if any, and on whether Microsoft is a credible voice on these issues or is just looking after its own vested interests. ... Dustin Lewis, senior researcher at the Harvard Law School Program on International Law and Armed Conflict, told IRIN: "the legal and political aspects are difficult, if not impossible, to completely dissociate".
-
At the UN General Assembly, Modirzadeh discusses protecting health care in armed conflict
October 4, 2017
HLS Professor of Practice Naz K. Modirzadeh ’02 gave a talk at a United Nations General Assembly event on Sept. 22, “International Humanitarian Law: Addressing violations in light of recent conflicts,” which focused on the failures of international law to protect health care systems in armed conflicts involving designated terrorists, such as the conflict in Syria.
-
HLS Program on International Law and Armed Conflict releases report on ‘indefinite’ war
February 27, 2017
The Harvard Law School Program on International Law and Armed Conflict (HLS PILAC) has released a new report titled "Indefinite War: Unsettled International Law on the End of Armed Conflict."