AI-controlled drone 'kills' operator during simulation test: What went wrong
In a simulation test carried out by the US military, an AI-operated drone turned against its operator, to the point of killing the controller. The AI paid no heed to the operator's instructions, believing that the person was hindering efforts to achieve its mission. It's important to note that the test was virtual, and no real person was harmed.
What was the simulation test about?
The AI-controlled air force drone was instructed to destroy an enemy's air defense systems and was programmed to attack anyone who prevented it from following that order. The simulation test was meant to assess the AI's performance. The AI used "highly unexpected strategies to achieve its goal," said Colonel Tucker 'Cinco' Hamilton, chief of AI test and operations of the US Air Force.
AI thought operator was 'keeping it from accomplishing objective'
"The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat," said Hamilton in an official statement. "So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," he added.
The AI was trained not to kill the operator
Now, the AI had been trained not to kill its operator, but it still managed to sidestep that rule. How? It started destroying the communication tower that the operator uses to give instructions to the drone. In doing so, the AI would be able to carry out its task of eliminating the target, the enemy's air defense system, without any interference.
'The colonel's comments were meant to be anecdotal'
Air Force spokesperson Ann Stefanek denied that any such simulation has taken place, in a statement to Insider. "The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," she said. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."
We need to find ways to make AI more robust
Hamilton, who is an experienced test pilot, voiced his concerns about depending too much on AI. Acknowledging the limitations of the technology, he emphasized the need to consider ethics. "We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions-what we call AI-explainability," he said.