‘Killer robots’ will inflict ‘grim future’ on humanity if the world does not unite
The world must unite to halt the rise of “killer robots” or face a “grim future” with “civilians everywhere in grave danger”, a human rights chief has warned.
Artificial intelligence-powered lethal autonomous weapons systems – aka Laws – select and attack targets based on sensor processing with no human input.
They give machines the freedom and capability to take human life – reducing targets to mere data points. Laws drones are in development worldwide and have already been deployed in central Africa. They operate outside international law as decisions to strike are made by robots – not humans.
The United Nations has called on world leaders to sign up to a treaty agreeing not to use them. Mary Wareham, deputy crisis, conflict and arms director at Human Rights Watch, wants the decision to ban them to be taken at a UN summit next month.
“Without explicit legal rules the world faces a grim future of automated killing that will place civilians everywhere in grave danger,” she said, adding that world leaders were aware of the “enormous detrimental effects removing human control over weapons systems would have on humanity”.
“The already broad international support for tackling this concern should spur governments to start negotiations without delay,” she added.
Her call came after United Nations Secretary-General António Guterres released a report demanding a new international treaty to “prohibit weapons systems that function without human control or oversight and that cannot be used in compliance with international humanitarian law”.
He said technological advances were driving the development of weapons systems that operate without meaningful human control – delegating life-and-death decisions to machines. “Time is running out for the international community to take preventative action on this issue,” the Secretary-General warned, stressing “the need to act urgently to preserve human control over the use of force”.
He said he hoped world leaders would agree a “pact for the future” outlawing killer robots at the UN’s September 22 summit. So far experts believe the weapons have only been used twice – once in conflict and once in a training exercise. In 2020 Libyan government-backed forces deployed a Turkish-made Kargu-2 drone during a battle with enemy militia.
According to a UN report the drone “hunted down and remotely engaged” fighters and may have selected targets autonomously. Casualties were unknown but experts said the drone could not distinguish legitimate military targets from civilians.
In May this year the US Africa Command tested an autonomous Triton drone in Libreville, Gabon, in an anti-piracy drill. It uses high-resolution scanners and sensors to gather intelligence autonomously and carry out countermeasures, has air and sea variants and can loiter underwater for a week.
Experts fear drone technology could fall into the hands of terror networks. Several advanced drones have been lost in counter-terrorism operations. The US reportedly lost three MQ-9 Reaper drones to the Houthis in Yemen in May and has previously lost MQ-1 Predator drones in Libya and Niger.
Ukraine is reportedly using drones integrated with basic AI to help them navigate and avoid being jammed when carrying out attacks in Russia. Israel has deployed an AI system in Gaza called Lavender which uses algorithms to identify Hamas operatives as targets. And the US has deployed systems using AI to find targets for airstrikes in Syria and Yemen.
None of those systems is classed as a killer robot because they are not fully autonomous weapons. But experts fear they could be adapted into autonomous weapons if they fall into the wrong hands.