Critics call self-operating weapons “killer robots” or “slaughterbots” because they are powered by artificial intelligence (AI) and can technically operate independently, selecting and striking targets without human help. Arms control advocates fear the worst, worrying that existing guardrails offer insufficient checks given the existential risks.
Such systems have rarely been seen in action, and their effect on combat remains largely unknown, though their potential to reshape warfare has been compared to the introduction of tanks in World War I.
But no international treaties govern the use of these weapons, and human rights groups question whether Washington’s ethical guidelines for AI-powered systems will offer any real protection against an array of humanitarian concerns.
“It’s really a Pandora’s box that we’re starting to see open, and it will be very hard to go back,” said Anna Hehir, who leads autonomous weapons research for the advocacy organization Future of Life Institute (FLI).
“I would argue for the Pentagon to view the use of AI in military use as on par with the start of the nuclear era,” she added. “So this is a novel technology that we don’t understand. And if we view this in an arms race way, which is what the Pentagon is doing, then we can head to global catastrophe.”