Pentagon working on integrating AI-controlled robots in US armed forces
The Pentagon's Defense Advanced Research Projects Agency (DARPA) has been testing fully autonomous AI systems that control armed drones authorized to engage enemy threats. This is a major departure from standard protocol, as the US military mandates human control and oversight of all drone missions involving deadly force. A drill involving several dozen AI-controlled drones and tank-like robots was conducted in Seattle last August.
For the first time, autonomous robots attacked without human intervention
Interestingly, the Pentagon's drill simulated an urban anti-terrorist operation, a scenario otherwise considered too delicate for anything but highly trained human special forces operatives. The AI-controlled drone swarm took basic instructions from a human operator, such as finding and eliminating enemy combatants, but the drones themselves decided the course of action and could attack without requiring human intervention.
The exercise was meant to test AI's effectiveness, not firepower
Instead of carrying real weapons, the drones were outfitted with radio designators simulating weapons fire, while telemetry and signal data were used to score hits and misses. This reflects hard-won lessons from earlier autonomous debacles, such as the radar-guided M247 Sergeant York, which infamously locked onto a latrine fan instead of its intended target drones. The idea was to test the AI's efficacy, not its firepower.
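DARPA has not published how the scoring worked, but the idea of telemetry-based hit detection can be pictured with a minimal sketch: each radio-designated "shot" records an aim point, and a hit is scored whenever a target's reported position falls inside the simulated weapon's effect radius. All names and thresholds below are illustrative assumptions, not details of DARPA's system.

```python
import math
from dataclasses import dataclass

# Illustrative sketch only: DARPA's actual scoring mechanics are not public.
# A "shot" is a radio-designated engagement; telemetry supplies positions.

@dataclass
class Telemetry:
    unit_id: str
    x: float  # metres, local grid
    y: float

@dataclass
class SimulatedShot:
    shooter_id: str
    aim_x: float
    aim_y: float
    effect_radius_m: float  # assumed effect radius of the simulated weapon

def score_shot(shot: SimulatedShot, targets: list[Telemetry]) -> list[str]:
    """Return IDs of targets whose telemetry places them inside the
    simulated weapon's effect radius at the moment of the shot."""
    hits = []
    for t in targets:
        if math.hypot(t.x - shot.aim_x, t.y - shot.aim_y) <= shot.effect_radius_m:
            hits.append(t.unit_id)
    return hits

# Example: one shot, two target tracks reported over telemetry.
shot = SimulatedShot("drone-07", aim_x=120.0, aim_y=45.0, effect_radius_m=5.0)
targets = [Telemetry("opfor-1", 122.5, 44.0), Telemetry("opfor-2", 300.0, 90.0)]
print(score_shot(shot, targets))  # ['opfor-1'] -> hit; opfor-2 is a miss
```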
Robots executed military tactics and maneuvers without human intervention
The drill deliberately involved a number of robots too large for humans to oversee each one directly. The drones were fed simple objectives (such as seek and destroy enemies) while relying on AI algorithms to decide the course of action. As a result, the robots divided military tactics such as search, target identification, flanking, perimeter defense, and attack among themselves of their own accord, without human intervention.
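The coordination logic behind the swarm hasn't been disclosed, but one common decentralized pattern it may resemble is auction-based task allocation: each robot bids on tasks according to its own cost estimate, and the cheapest bidder claims each task, so roles get divided without any central human controller. The sketch below is a single-round greedy auction under those assumptions; the robot and task names are hypothetical.

```python
# Hedged sketch: a single-round greedy auction for dividing tactical tasks
# among robots. This is a generic swarm-robotics pattern, not DARPA's code.

def allocate_tasks(robots: dict[str, tuple[float, float]],
                   tasks: dict[str, tuple[float, float]]) -> dict[str, str]:
    """Assign each task to the unassigned robot with the lowest cost
    (here, squared distance to the task location). Returns {task: robot}."""
    assignment: dict[str, str] = {}
    free = set(robots)
    # Collect every (cost, task, robot) bid, cheapest first.
    bids = sorted(
        ((rx - tx) ** 2 + (ry - ty) ** 2, task, robot)
        for task, (tx, ty) in tasks.items()
        for robot, (rx, ry) in robots.items()
    )
    for cost, task, robot in bids:
        if task not in assignment and robot in free:
            assignment[task] = robot
            free.discard(robot)
    return assignment

# Example: four robots split four tactical roles based on proximity.
robots = {"uav-1": (0, 0), "uav-2": (50, 10), "ugv-1": (5, 40), "ugv-2": (60, 60)}
tasks = {"search": (55, 8), "flank": (2, 38), "perimeter": (58, 62), "attack": (1, 2)}
print(allocate_tasks(robots, tasks))
# {'attack': 'uav-1', 'perimeter': 'ugv-2', 'flank': 'ugv-1', 'search': 'uav-2'}
```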
One-in-five kills scored by human-controlled drones were civilians
While the National Security Commission on Artificial Intelligence (NSCAI) recently recommended that the US resist calls for an international ban on autonomous weapon development, there may be some merit to the Pentagon's approach. To put this into perspective, during Barack Obama's presidency, human-controlled drone strikes guided by human-sourced intelligence killed an estimated 1,124 civilians, a dismal collateral damage ratio of 22%.
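Taking the article's own figures at face value, the 22% ratio squares with the "one-in-five" headline and implies a rough total casualty count:

```python
# Rough consistency check using only the figures stated above.
civilian_deaths = 1124
collateral_ratio = 0.22           # "22%" as stated above

implied_total = civilian_deaths / collateral_ratio
print(round(implied_total))       # ~5109 total deaths implied by the two figures
print(civilian_deaths / implied_total)  # 0.22, i.e. roughly one kill in five
```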
Pentagon intends AI to handle chaotic scenarios humans struggle with
The logic behind this exercise was to explore the use of AI for decision-making in combat scenarios too complicated and chaotic for humans to handle. The Pentagon aims to gauge the viability of restricting humans to high-level decisions while letting AI handle the complex, real-time calls on how to achieve those objectives most efficiently.
Humans are the weakest link in modern electronic warfare
In fact, human decision-making has long been the weakest link in modern electronic warfare. For example, the AH-64 Apache attack helicopter, which first flew in 1975, can identify and classify 256 moving and stationary targets in a single sweep of its fire-control radar, and then prioritize the 16 that pose the greatest threat. The actual engagement, however, is slowed down by the two-man aircrew.
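The prioritization step described above amounts to a top-k selection over scored radar tracks. The sketch below ranks hypothetical detections by a made-up threat score and keeps the top 16; the scoring formula and field names are illustrative assumptions, not the Apache radar's actual logic.

```python
import heapq
import random
from dataclasses import dataclass

# Illustrative top-k threat prioritization; NOT the real fire-control algorithm.

@dataclass
class Track:
    track_id: int
    range_km: float
    closing_speed_mps: float   # positive = approaching
    is_armed: bool             # assumed classification output

def threat_score(t: Track) -> float:
    """Hypothetical score: nearer, faster-closing, armed targets rank higher."""
    score = max(0.0, t.closing_speed_mps) + 100.0 / max(t.range_km, 0.1)
    return score * (2.0 if t.is_armed else 1.0)

# Simulate one radar sweep classifying 256 moving and stationary tracks.
random.seed(0)
sweep = [
    Track(i, random.uniform(0.5, 8.0), random.uniform(-20, 60), random.random() < 0.4)
    for i in range(256)
]

# Prioritize the 16 highest-threat tracks, as the paragraph above describes.
top_16 = heapq.nlargest(16, sweep, key=threat_score)
for t in top_16:
    print(f"track {t.track_id:3d}  score={threat_score(t):6.1f}")
```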
With rivals already embracing combat drones, US has little choice
The Pentagon's experiments to automate this aspect of modern combat could pay rich dividends by leveraging existing advances in electronic warfare, making war more efficient and less reliant on slower human decision-making. Whether or not the US chooses to take this route, rivals such as Russia are already forging a path toward fielding autonomous robot armies.