In an era where artificial intelligence (AI) is reshaping global security dynamics, more than 90 diplomats from dozens of U.N. member and observer states convened at the Permanent Mission of Egypt to the United Nations in New York on Feb. 28 for a workshop on AI and International Humanitarian Law (IHL). Organized in collaboration with Harvard Law School’s Program on International Law and Armed Conflict (PILAC) and the African Union Office of the Legal Counsel, the event provided a unique opportunity for representatives from around the world to engage in an open and wide-ranging discussion on the technological, legal, and ethical implications of AI in armed conflict.

A call for informed discussion

The workshop opened with welcoming remarks from H.E. Ambassador Osama Abdelkhalek, Permanent Representative of Egypt to the United Nations; H.E. Elinor Hammarskjöld, the new Under-Secretary-General for Legal Affairs and United Nations Legal Counsel; Dr. Hajer Gueldich, legal counsel of the African Union; Dr. Mohamed Helal LL.M. ’10, S.J.D. ’16, member of the African Union’s Commission of International Law; and Harvard Law School Professor of Practice Naz K. Modirzadeh ’02, founding director of PILAC. Their collective insights set the tone for the day’s discussions, emphasizing the urgent need for diplomatic and legal engagement on AI’s evolving role in war.

“Before we delve into our discussions, let me underscore an important aspect of IHL that can often be lost in abstract discussions about law and that can become even more attenuated in discussions about AI. At its core, IHL is about human beings,” Modirzadeh said. “As we discuss the use of AI in armed conflict, we must always return to central questions: What are the roles and responsibilities of humans in ensuring that AI remains a tool to help uphold the law and its normative commitments, not just a framework for technological advancement? And how does this affect human beings caught up in armed conflict?”

She emphasized that the workshop was meant to foster open inquiry rather than impose rigid conclusions. “At PILAC, we operate on the assumption that all states make and interpret international law and that all states benefit from access to reliable information, independent research, and opportunities for dialogue on areas of global public concern.”

AI, explained

The workshop provided a structured introduction to AI’s technological aspects and its increasing role on some modern battlefields.

A session led by Julia Stoyanovich, the institute associate professor of computer science and engineering and associate professor of data science at New York University, provided a deep dive into what underpins AI. Stoyanovich — who serves as director of the Center for Responsible AI at NYU — summarized the key ingredients of AI, drawing on everyday analogies, such as cooking recipes and fraud-detection classifiers, to demystify AI’s core attributes and functions. Her interactive presentation also addressed the distinctions between rule-based and learning algorithms, the role of data in decision-making, and the ethical implications of AI deployment in high-stakes environments, such as the military.
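To make the rule-based versus learning-algorithm distinction concrete, here is a minimal, purely illustrative sketch in the spirit of the fraud-detection analogy. It is not material from the workshop; the thresholds and data are hypothetical.

```python
# Illustrative only: contrasting a rule-based classifier with a "learning"
# one in a toy fraud-detection setting. All numbers are hypothetical.

def rule_based_flag(amount):
    """Rule-based: a human expert hard-codes the decision logic."""
    return amount > 1000  # fixed, hand-written threshold

def learn_threshold(transactions):
    """'Learning': derive the threshold from labeled examples instead.

    transactions: list of (amount, is_fraud) pairs.
    Picks the midpoint between the largest legitimate amount and the
    smallest fraudulent one (assumes the two classes are separable).
    """
    legit = [a for a, fraud in transactions if not fraud]
    frauds = [a for a, fraud in transactions if fraud]
    return (max(legit) + min(frauds)) / 2

# Toy labeled data: (transaction amount, was it fraud?)
data = [(50, False), (200, False), (900, False), (3000, True), (5000, True)]
threshold = learn_threshold(data)  # 1950.0 for this data

def learned_flag(amount):
    """Data-driven: the decision boundary comes from the examples."""
    return amount > threshold

print(rule_based_flag(1200))  # True: exceeds the hand-written rule
print(learned_flag(1200))     # False: below the data-derived threshold
```

The point of the contrast is that the rule-based system behaves exactly as its author specified, while the learned system's behavior depends on the data it was given, which is why data quality and provenance loom so large in debates over high-stakes AI deployment.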

Dustin A. Lewis, research director of PILAC, illustrated an array of AI applications being developed for use in war. Those developments include AI-based decision-support systems (DSS) to process battlefield information, AI systems for target recognition, AI-reliant defenses against adversarial cyber operations, and AI-based predictive analytics to assist humanitarian actors in better allocating resources.

Following the introduction to AI, Modirzadeh and Lewis turned to the international legal dimensions of armed conflict, covering the foundations of IHL, rules regulating attacks and detention operations involving AI, and how to uphold legal responsibility when warring parties rely on AI.

They drew extensively on PILAC’s January 2025 legal concept paper, Exercising Cognitive Agency: A Legal Framework Concerning Natural and Artificial Intelligence in Armed Conflict, which Lewis co-authored with Hannah Sweeney ’24. That analysis grounds the examination of AI in war in the roles and responsibilities of humans in performing IHL obligations. In recent years, numerous Harvard Law School research assistants — including Camila Castellanos Forero LL.M. ’25, Erica Chen ’25, Emma Davies ’25, Eoin Jackson LL.M. ’23, Elizabeth Peartree ’25, Emma Plankey ’24, Tamar Ruseishvili LL.M. ’25, Elliot Serbin J.D./M.P.P. ’24, Zoe Shamis ’24, Sima Sweidat LL.M. ’25, Dominique Virgil LL.M. ’25, and Cecilia Wu ’24 — have supported PILAC’s research on AI and IHL.

Beyond the late-February workshop for diplomats, Lewis recently briefed government, military, humanitarian, and U.N. actors on PILAC’s research in Geneva, The Hague, Kyiv, Seoul, and Stockholm. At those engagements, he emphasized the global relevance of these discussions and the pressing need for clarity on how governments, armed forces, humanitarian actors, and international courts should approach the range of legal, ethical, and policy challenges and opportunities concerning AI in war.

Diplomatic collaboration and future initiatives

The workshop concluded with an overview of ongoing international legal and policy initiatives on military AI governance. Representatives from Egypt, the Netherlands, and the United Nations Office for Disarmament Affairs discussed multilateral efforts to establish legal and ethical guidelines for the use of AI in armed conflict.

As discussions drew to a close, the gathering underscored the urgency of addressing AI’s role in modern warfare within the confines of international law. Participants expressed a commitment to continued dialogue and to deepening collaboration with PILAC to remain informed on emerging technological and legal developments in this rapidly evolving field.
