Explainer: The automation of war


More than 100 high-profile business leaders have signed a letter urging the United Nations to head off the threats they say are posed by autonomous weapons. The letter warns that the development of such weapons could lead to warfare at a greater-than-ever scale and at a speed “faster than humans can comprehend.”

Autonomous weapons systems, also known as “killer robots,” are designed to identify and engage enemy targets without human involvement. They are intended to be more “efficient” than conventional weapons, which require humans to identify targets and assess risks before pulling a trigger.

Breakdown

The letter, signed by 116 leaders in artificial intelligence technology, was addressed to the UN’s Conference of the Convention on Certain Conventional Weapons. Though its signatories include Tesla’s Elon Musk and Google DeepMind’s Mustafa Suleyman, the letter does not explicitly call for the UN to ban these weapons.

However, the group said it felt “especially responsible in raising this alarm,” as technology from its members’ companies is likely to contribute to the development of autonomous weapons.

At a conference on the Convention on Certain Conventional Weapons last year, experts agreed to establish a new Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS). The group was due to meet on 21 August to begin discussing potential international regulation of autonomous weapons, but the meeting was delayed as some member states had not paid their UN contributions.

Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney, coordinated the letter and published it at the International Joint Conference on Artificial Intelligence, held in Melbourne in August.

The letter encourages the GGE to “prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”

The business leaders warned that autonomous weapons could lead to armed conflict fought “at a greater scale than ever and at timescales faster than humans can comprehend.”

The letter further warned that such weapons could be misused by despots and terrorists and hacked by opponents, posing great risk to civilians.

“We do not have long to act,” they wrote. “Once this Pandora’s box is opened, it will be hard to close.”

The tech

The development of artificial intelligence (AI) in weaponry builds on the use of unmanned aircraft (drones), which keep the combatants doing the targeting far from danger.

In practical terms, a drone could, without any human intervention, identify a target and calculate the risk of collateral damage before launching a strike. Such technology could advance to become the primary means of controlling weapons in war.
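
In code terms, such a fully autonomous decision loop might look something like the sketch below. This is a purely illustrative Python mock-up of the identify-assess-strike pipeline described above, not the design or code of any real system; every function, field, and threshold in it is hypothetical.

    import random

    # Purely illustrative mock-up of an autonomous engagement loop.
    # All names and thresholds are hypothetical; no real weapons
    # system is represented here.

    COLLATERAL_RISK_THRESHOLD = 0.1  # hypothetical acceptable-risk cutoff

    def classify_target(detection):
        """Stand-in for a sensor-based target classifier."""
        return detection if detection["hostile"] else None

    def estimate_collateral_risk(target):
        """Stand-in for a collateral-damage model; here, a random number."""
        return random.random()

    def engagement_cycle(detections):
        """One identify -> assess -> act pass with no human in the loop."""
        for detection in detections:
            target = classify_target(detection)
            if target is None:
                continue
            # The judgment campaigners argue cannot be left to a machine:
            risk = estimate_collateral_risk(target)
            action = "strike" if risk < COLLATERAL_RISK_THRESHOLD else "hold"
            print(f"{target['id']}: risk={risk:.2f} -> {action}")

    engagement_cycle([{"id": "contact-1", "hostile": True},
                      {"id": "contact-2", "hostile": False}])

The objection campaigners raise, described below, is aimed at exactly the risk-estimation step: whether any such calculation can substitute for human judgment.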

According to Human Rights Watch, South Korea has deployed automated gun towers in the demilitarized zone (DMZ) with North Korea. The system uses a laser rangefinder and infrared sensors to seek targets in the DMZ, and can be operated manually or autonomously.

The risk

Campaigners against the development of autonomous weapons argue that artificial intelligence is unlikely to replicate human judgment or reliably assess risks, and that its use could therefore breach humanitarian law.

Humanitarian law (the international standards that were created to govern combat) rests on the “cardinal principles” of proportionality and distinction.

These principles rest on the assumption that a war can only be legally justified as defensive: any combative action must be proportionate to the war’s defensive aim, and any attack must distinguish between civilians and combatants.

A further addition to the canon of humanitarian law, the Martens Clause, established in 1899, requires governments to be mindful of the “public conscience.”

In a 2016 report, Human Rights Watch said, “Although progress is likely in the development of sensory and processing capabilities, distinguishing an active combatant from a civilian or an injured or surrendering soldier requires more than such capabilities.”

In a 2013 report for the UN’s Human Rights Council, Christof Heyns, the special rapporteur on extrajudicial, summary or arbitrary executions, wrote that “Proportionality is widely understood to involve distinctively human judgement.”

“The prevailing legal interpretations of the rule explicitly rely on notions such as ‘common sense,’ ‘good faith’ and the ‘reasonable military commander standard,’” Heyns wrote. “It remains to be seen to what extent these concepts can be translated into computer programmes, now or in the future.”

The Campaign to Stop Killer Robots warned earlier this year that “low-cost sensors and advances in artificial intelligence are making it increasingly practical to design weapons systems that would target and attack without any meaningful human control.”

According to the campaign group, 19 countries have endorsed calls for a preemptive ban on fully autonomous weapons systems.

In 2015, Walsh organized an earlier letter, signed by more than 1,000 technology experts and scientists, which also warned against starting a military arms race powered by artificial intelligence.

Read more

How will humanity go extinct?

United Nations Institute for Disarmament Research (UNIDIR) “Framing discussions on the weaponization of increasingly autonomous technology”

Jack Barton is a WikiTribune journalist. He has an LLM in Human Rights and a background reporting on law and international development. Follow @jackbarton91
