It may have seemed like an obscure UN convening, but this week’s meeting in Geneva was followed intently by experts in artificial intelligence (AI), military strategy, disarmament and humanitarian law.
The reason? Killer robots – drones, guns and bombs that decide on their own, with artificial brains, whether to attack and kill – and what should be done, if anything, to regulate or ban them.
Once the domain of science fiction films such as “Terminator” and “RoboCop,” killer robots, known more technically as lethal autonomous weapons systems, have been invented and tested at an accelerated pace with little oversight.
Some prototypes have even been used in actual conflicts.
The evolution of these machines is considered a potentially seismic event in warfare, akin to the invention of gunpowder and the nuclear bomb.
This year, for the first time, a majority of the 125 countries that belong to an agreement called the Convention on Certain Conventional Weapons said they wanted curbs on killer robots.
But they were opposed by members that are developing these weapons, most notably the United States and Russia.
The conference was widely considered by disarmament experts to be the best opportunity so far to devise ways to regulate, if not prohibit, the use of killer robots.
However, it concluded on Friday with only a vague statement about considering possible measures acceptable to all.
The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell “drastically short.”
Critics say it is morally repugnant to assign lethal decision-making to machines.
How does a machine distinguish an adult from a child, a fighter with a bazooka from a civilian with a broom? “Autonomous weapons systems raise ethical concerns about substituting life-and-death decisions with sensors and software,” said Peter Maurer, president of the International Committee of the Red Cross.
NYT.