Via International Human Rights Clinic

This piece originally appeared in The Conversation on June 16, 2016

New technology could lead humans to relinquish control over decisions to use lethal force. As artificial intelligence advances, the possibility that machines could independently select and fire on targets is fast approaching. Fully autonomous weapons, also known as “killer robots,” are quickly moving from the realm of science fiction toward reality.

The unmanned Sea Hunter gets underway. At present it sails without weapons, but it exemplifies the move toward greater autonomy. U.S. Navy/John F. Williams

These weapons, which could operate on land, in the air or at sea, threaten to revolutionize armed conflict and law enforcement in alarming ways. Proponents say these killer robots are necessary because modern combat moves so quickly, and because having robots do the fighting would keep soldiers and police officers out of harm’s way. But the threats to humanity would outweigh any military or law enforcement benefits.

Removing humans from the targeting decision would create a dangerous world. Machines would make life-and-death determinations outside of human control. The risk of disproportionate harm or erroneous targeting of civilians would increase. No person could be held responsible.

Given the moral, legal and accountability risks of fully autonomous weapons, preempting their development, production and use cannot wait. The best way to handle this threat is an international, legally binding ban on weapons that lack meaningful human control.
