The war in Ukraine shows why the world needs a ban on AI weapons

What is wrong with a robot that kills us? Humans, robots, and state violence: why we shouldn’t arm the police

“We will not support others that weaponize their general-purpose robots or the software that enables them. When possible, we will carefully review our customers’ intended applications to avoid potential weaponization. We also pledge to explore the development of technological features that could mitigate or reduce these risks. We are not taking issue with the technology that nations use to defend themselves and uphold their laws,” reads an open letter published in October 2022 by Boston Dynamics and five other robotics companies.

“There are a whole lot of reasons why it’s a bad idea to arm robots,” says Peter Asaro, an associate professor at The New School in New York who researches the automation of policing. He believes San Francisco’s decision to let police robots use deadly force is part of a broader movement to militarize the police. “You can conceive of a potential use case where it’s useful in the extreme, such as hostage situations, but there’s all kinds of mission creep,” he says. “That’s detrimental to the public, and particularly communities of color and poor communities.”

A week is a long time in politics: just a week after granting police the right to use robots to kill, San Francisco’s supervisors reversed course.

The reversal came after a huge public backlash and lobbying that followed the initial approval. Removing humans from life-and-death decisions was, for many, a step too far. At least one supervisor who initially approved the policy later said they regretted their vote, and a protest took place outside San Francisco City Hall on December 5.

“Despite my own deep concerns with the policy, I voted for it after additional guardrails were added,” Gordon Mar, a supervisor in San Francisco’s Fourth District, tweeted. “I regret it. It sets a bad precedent for other cities that do not have a strong commitment to police accountability. I don’t believe making state violence more remote, distanced, and less human is a step forward.”

The question being posed by supervisors in San Francisco is fundamentally about the value of a life, says Jonathan Aitken, senior university teacher in robotics at the University of Sheffield in the UK. The use of lethal force in police and military operations, he says, requires deep consideration: those deciding whether to take a potentially lethal action need contextual information to make that decision in a considered manner. “Small details and elements are crucial, and the spatial separation removes these,” Aitken says. “Not because the operator may not consider them, but because they may not be contained within the data presented to the operator. This can lead to mistakes.” And when it comes to lethal force, mistakes can mean the difference between life and death.

Asaro also dismisses the suggestion that guns on the robots could be swapped for bombs, saying that the use of bombs in a civilian context could never be justified. (Some police forces in the United States do currently use bomb-wielding robots to intervene; in 2016, Dallas police used a bomb-carrying bot to kill a suspect, in what experts called an “unprecedented” moment.)

Soon, fully autonomous lethal weapon systems could become commonplace in conflict. Some are already on the market. At the time of writing, none is confirmed to have been used in the war in Ukraine, but evolving events are cause for concern.

What exactly are ‘lethal autonomous weapons systems’? The United Nations defines them as weapons that locate, select, and engage human targets without human supervision. The word ‘engage’ in this definition is a euphemism for ‘kill’. I am not talking about weapons like the US Predator drone, which are remotely operated by humans, because these are not autonomous. Nor am I talking about anti-missile defence systems, or about the fully autonomous drones that both Russians and Ukrainians are using for reconnaissance, which are not lethal. And I am not talking about the science-fiction robots portrayed in the ‘Terminator’ films (controlled by the spooky emergent consciousness of the Skynet software system and driven by hatred of humanity) that the media often conjure up when discussing autonomous weapons. The issue here is not rogue machines taking over the world, but weapons deployed by humans that will drastically reduce our physical security.

Current AI systems exhibit all the required capabilities: planning missions, navigating, 3D mapping, recognizing targets, flying through cities and buildings, and coordinating attacks. Many kinds of platform are available. These include quadcopters ranging from centimetres to metres in size; fixed-wing aircraft (from hobby-sized package-delivery planes and full-sized, missile-carrying drones to ‘autonomy-ready’ supersonic fighters, such as the BAE Systems Taranis); self-driving trucks and tanks; autonomous speedboats, destroyers and submarines; and even skeletal humanoid robots.

Loitering munitions such as Iran’s Shahed, used by Russia, already show a form of autonomy. Israel’s Harpy can fly over a region for hours, looking for targets that match a visual or radar signature and destroying them with its 22-kilogram bomb. (Russia’s Lancet missile, widely used in Ukraine, has similar characteristics.) Unlike the Turkish-made Kargu quadcopter and the Harpy, China’s Blowfish A3 is a helicopter drone that can carry a machine gun and unguided gravity bombs. Because such systems have both autonomous and remotely operated modes, it is impossible to know whether any given attack was carried out by a human operator.

Another point often advanced is that the ability of lethal autonomous weapons to distinguish civilians from combatants might reduce collateral damage compared with other modes of warfare. The United States, along with Russia, has cited this supposed benefit while blocking multilateral negotiations at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland, talks that have occurred sporadically since 2014.

Further progress in Geneva is unlikely any time soon. The United States and Russia refuse to allow negotiations on a legally binding agreement. The United States worries that a treaty would be unverifiable, allowing other parties to circumvent a ban and creating a risk of strategic surprise. Russia, for its part, feels it is being discriminated against at the talks because of its invasion of Ukraine.

Rather than blocking negotiations, it would be better for the United States and others to focus on devising practical measures to build confidence in adherence. These could include requirements for industrial suppliers to check customer eligibility, as well as design constraints that deter conversion to full autonomy. The Organisation for the Prohibition of Chemical Weapons devised a similar set of technical measures to implement the Chemical Weapons Convention; these have neither overburdened the chemical industry nor curtailed chemistry research. Similarly, the New START treaty between the United States and Russia allows each side 18 on-site inspections of nuclear-weapons facilities per year. And the Comprehensive Nuclear-Test-Ban Treaty might never have come into being had scientists from all sides not collaborated to develop its International Monitoring System.

On 23–24 February, Costa Rica is due to host a meeting of Latin American and Caribbean nations on the ‘social and humanitarian impact of autonomous weapons’, which includes threats from non-state actors who might use them indiscriminately. These nations created the world’s first nuclear-weapon-free zone, raising hopes that they might also initiate a treaty declaring an autonomous-weapon-free zone.
