Via The Guardian

The use of fully autonomous weapons in a theatre of war would breach international law, campaigners and experts say, as longstanding calls for a ban on “killer robots” intensify.

These AI-powered guns, planes, ships and tanks could fight future wars without being subject to any human control, as high-tech nations step up investment in the weapons and inch towards full autonomy.

Twenty-six countries explicitly support a prohibition on fully autonomous weapons, with Austria, Belgium and China recently joining thousands of scientists and artificial intelligence experts and more than 20 Nobel peace prize laureates in declaring their support.

In a new report published jointly by Human Rights Watch and Harvard Law School’s International Human Rights Clinic, the organisations argue that fully autonomous weapons would violate the Martens Clause – a well-established provision of international humanitarian law.

It requires emerging technologies to be judged by the “principles of humanity” and the “dictates of public conscience” when they are not already covered by other treaty provisions.

“Permitting the development and use of killer robots would undermine established moral and legal standards,” said Bonnie Docherty, senior arms researcher at Human Rights Watch, which coordinates the Campaign to Stop Killer Robots. “Countries should work together to preemptively ban these weapons systems before they proliferate around the world.”

Read more here.
