Lethal autonomous weapons

Posted: April 9, 2024 by chrisharper in Uncategorized

By Christopher Harper

As the U.S. Congress plans an investigation of artificial intelligence, one of the most challenging areas of concern is what’s known as LAWS.

LAWS stands for lethal autonomous weapons systems, which critics call killer robots.

I started gathering information about this type of A.I. when two of my favorite military authors, Mark Greaney and Gregg Hurwitz, raised significant concerns about LAWS.

Greaney ponders an attempt by one tech company to control the worldwide supply of such weapons, while Hurwitz warns about the absence of ethics when computers take over.

By combining A.I. with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems—combat drones that can employ lethal force independently of any human officers meant to command them. Such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. For example, the U.S. Air Force is developing an unmanned aerial vehicle to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry.

Michael Klare of The Nation wrote recently: “For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield, and so should be banned in order to protect non-combatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.”

The imminent appearance of autonomous weapons has generated concern and controversy globally, with some countries already seeking a total ban on them. Others, including the United States, plan to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review this fall.

Given China’s superior numbers, the so-called “swarm concept” of A.I. weapons is particularly appealing to U.S. strategists. The autonomous weapons would act like a swarm of bees, ants, or wolves.

This concept of warfare undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks last August. “Replicator is meant to help us overcome [China’s] biggest advantage. More ships. More missiles. More people,” she told arms industry officials. By deploying thousands of autonomous weapons, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army. “To stay ahead, we’re going to create a new state of the art.… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

Any participating robotic member of such swarms would be given a mission objective, such as destroying enemy radar, but not precise instructions on how to achieve it. This would allow the machines to select their battle tactics in consultation with one another, as the sketch below illustrates.
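To make the swarm idea concrete, here is a minimal, purely illustrative sketch of decentralized coordination in Python. It is not drawn from any real weapons program; every name and number in it is a hypothetical assumption. Each simulated agent knows only the shared objective and the positions of nearby peers, and it chooses its own heading without any central commander issuing step-by-step orders.

```python
import math
import random

# Hypothetical illustration of decentralized swarm coordination:
# agents share an objective but compute their own moves locally.
OBJECTIVE = (100.0, 100.0)   # shared mission objective (illustrative)
NEIGHBOR_RADIUS = 15.0       # how far an agent can "see" its peers
SEPARATION = 3.0             # minimum comfortable spacing

class Agent:
    def __init__(self):
        self.x = random.uniform(0, 20)
        self.y = random.uniform(0, 20)

    def step(self, swarm):
        # 1. Pull toward the shared objective.
        dx, dy = OBJECTIVE[0] - self.x, OBJECTIVE[1] - self.y
        dist = math.hypot(dx, dy) or 1.0
        vx, vy = dx / dist, dy / dist

        # 2. "Consult" nearby peers: avoid collisions, drift together.
        for other in swarm:
            if other is self:
                continue
            ox, oy = other.x - self.x, other.y - self.y
            d = math.hypot(ox, oy)
            if 0 < d < SEPARATION:          # too close: push apart
                vx -= ox / d
                vy -= oy / d
            elif d < NEIGHBOR_RADIUS:       # in range: stay loosely grouped
                vx += 0.05 * ox / d
                vy += 0.05 * oy / d

        # 3. Move one unit along the chosen heading.
        norm = math.hypot(vx, vy) or 1.0
        self.x += vx / norm
        self.y += vy / norm

swarm = [Agent() for _ in range(20)]
for _ in range(200):            # each tick, every agent decides for itself
    for agent in swarm:
        agent.step(swarm)
```

The design point is that no single agent is indispensable: remove any one and the rest keep converging on the objective, which is exactly what would make such swarms “harder to plan for, harder to hit.”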

Authors Greaney and Hurwitz have one overriding concern about the technology: its introduction would make nations more prone to war.

Alternatively, the technology might reduce battlefield injuries and deaths.

One argument favoring A.I. weapons development harkens back to the Cold War: mutually assured destruction. If the major powers all have LAWS, each is less likely to use the weapons because of the retaliation it would face, as the toy calculation below suggests.
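The deterrence logic reduces to back-of-the-envelope arithmetic. The numbers below are invented purely for illustration; the only point that matters is the sign of the result once retaliation is assured.

```python
# Toy model of LAWS deterrence (illustrative numbers, not estimates).
GAIN_FROM_UNANSWERED_STRIKE = 2    # hypothetical benefit of a free first strike
COST_OF_RETALIATION = 12           # hypothetical damage from a counterstrike

def payoff(strike: bool, enemy_has_laws: bool) -> int:
    """Net payoff to a power deciding whether to strike first."""
    if not strike:
        return 0                   # status quo: nothing gained, nothing lost
    result = GAIN_FROM_UNANSWERED_STRIKE
    if enemy_has_laws:             # autonomous retaliation is assured
        result -= COST_OF_RETALIATION
    return result

# Both sides armed with LAWS: striking nets -10, so restraint wins.
print(payoff(strike=True, enemy_has_laws=True))    # -10
# Only one side armed: striking nets +2, and deterrence fails.
print(payoff(strike=True, enemy_has_laws=False))   # 2
print(payoff(strike=False, enemy_has_laws=True))   # 0
```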
