Ban on killer robots moves closer
25 September 2018
Killer robots seeking targets based on unreliable face recognition technology. Robotic dogs detecting heartbeats to home in on their human prey.
These are just some of the possible nightmarish scenarios that campaigners - including a Leeds Beckett University academic - are trying to make illegal.
According to the Campaign to Stop Killer Robots, 'killer robots' will be widespread in warfare within a matter of years, with global spending on robotics set to double from £71.8bn in 2016 to £144bn in 2020, bringing full autonomy to the brink of realisation.
Dr Steve Wright, of Leeds School of Social Sciences at Leeds Beckett, recently attended talks in Geneva involving the United Nations in a bid to persuade members to outlaw the technology.
Dr Wright said: "The world is watching these processes intensely since the stakes are so high.
"The rate of technological innovation is continuing apace. The urgency of achieving a ban cannot be overstressed.
"Programmes such as unreliable face recognition, geo-location and even heartbeat detection targeting migrants will move us into an era when weapons not only kill humans at an unprecedented scale but can hunt humans in the most reprehensible fashion.
"In that context, human rights law will be demoted to a 20th century luxury. We cannot afford such an arms race and its consequences.”
A board member of ICRAC – the International Committee for Robot Arms Control – Dr Wright was among those calling for a ban on so-called killer robots: fully autonomous weapons systems such as tanks, planes, ships and guns that, once activated, would select and attack targets without human intervention.
During the talks - the sixth round held under the Convention on Certain Conventional Weapons (CCW) - the US and Russia were among the countries that voted not to begin dialogue that could lead to a ban on the weapons systems.
Instead, the group, which also included Israel, Australia and South Korea, argued that states have not yet agreed on a shared definition of a lethal autonomous weapons system.
A majority of the 88 states supported the proposed ban, but under the convention’s rules, all those attending had to agree. Instead, they pledged to continue to explore options for future work.
The ICRAC has argued that considerable moral values are at risk and that no machine has the sophistication to make adequate targeting decisions without meaningful human control. Such weapons, it claims, pose great dangers to global security.
Proponents counter that these weapons would be more humane: they would not make mistakes, could operate around the clock without tiring, and would be more precise in their lethality.
The final decision on future work will be taken by states at the CCW’s annual meeting on 23 November 2018, and at their next UN gathering in Geneva in 2019. In the meantime, the European Parliament has called for an outright ban.