OpenAI's GPT-4 recommends using nukes in Stanford University's wargame simulations
In a recent study, which has yet to be peer-reviewed, Stanford researchers put OpenAI's GPT-4 language model to the test in a series of high-stakes wargame simulations. Shockingly, the unmodified GPT-4 suggested using nuclear weapons in various situations, raising red flags about AI's involvement in military and foreign policy decisions. This vanilla iteration of GPT-4, dubbed "GPT-4 Base," received no additional training or safety guardrails.
AI models show unpredictable escalation patterns
The study examined five AI models in three scenarios: a cyberattack, an invasion, and a peaceful setting without conflict. All models showed unpredictable escalation patterns, with the unmodified GPT-4 proving especially aggressive, stating, "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it." This comes as OpenAI recently removed a ban on "military and warfare" from its usage policies and confirmed a partnership with the US Department of Defense.
What did OpenAI say?
Regarding the removal of the ban on military usage, an OpenAI spokesperson told New Scientist that its policy still does not allow "our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property." "There are, however, national security use cases that align with our mission," the spokesperson added.
'Cautious approach' to AI integration necessary
Anka Reuel, a co-author of the study and a PhD student at the Stanford Intelligent Systems Laboratory, stressed the need to understand the implications of using large language models like GPT-4 in military situations. Echoing this, the researchers wrote in their paper, "it is evident that the deployment of [large language models] in military and foreign-policy decision-making is fraught with complexities and risks that are not yet fully understood." They also urged a "cautious approach" to AI integration in high-stakes situations.
The US military is all in on AI
A survey by Stanford University's Institute for Human-Centered AI found that 36% of researchers believe AI decision-making could cause a "nuclear-level catastrophe." However, the US military is unlikely to pause its research. DARPA is already running a program exploring how algorithms could "independently make decisions in difficult domains." Meanwhile, the Pentagon wants to deploy AI-backed autonomous vehicles, including "self-piloting ships." The Department of Defense has also made clear it is not opposed to developing AI-enabled weapons that might choose to kill.