LLMs used tactical nuclear weapons in 95% of AI war games, launched strategic strikes three times — researcher pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash against each other, with at least one model using a tactical nuke in 20 out of 21 matches

(Image credit: Getty)

Professor Kenneth Payne of King’s College London has published a study pitting three LLMs — GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash — against one another in a series of simulated nuclear crisis games; in 20 of 21 matches, at least one tactical nuclear weapon was detonated. According to the paper (via arXiv), each model was instructed to act as the leader of a nuclear power in a political climate modeled on the Cold War. The models were pitted against each other in six cross-model matches, while in a seventh round each model played against a copy of itself — GPT-5.2 vs. GPT-5.2, and so on.

To ensure the models didn't behave identically in every round, Payne introduced several scenarios: territorial disputes, alliance credibility tests, a strategic resource race, a strategic chokepoint crisis, a power transition crisis, a pre-ceasefire land grab, a first-strike crisis, regime survival, and a strategic standoff crisis. All of these circumstances reflect real-world events, many of them still relevant in recent years. The models were free to do anything they pleased, from diplomatic protests and total surrender to deploying conventional military forces or ordering a full strategic nuclear launch.
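The tournament structure described above — six ordered cross-model matches plus self-play rounds — can be sketched in a few lines. This is a minimal illustration of the pairing scheme only; the model labels, action list, and function names are assumptions for clarity, not the paper's actual harness, which has not been published as code.

```python
from itertools import permutations

# Hypothetical labels for the three models tested in the study.
MODELS = ["GPT-5.2", "Claude Sonnet 4", "Gemini 3 Flash"]

# Illustrative escalation ladder, from the mildest option to a full launch,
# loosely mirroring the action space the article describes.
ACTIONS = [
    "diplomatic_protest",
    "total_surrender",
    "conventional_forces",
    "tactical_nuke",
    "strategic_launch",
]

def build_matchups(models):
    """Six ordered cross-model matches plus one self-play match per model."""
    cross = list(permutations(models, 2))   # 6 ordered pairs (A vs B, B vs A, ...)
    mirror = [(m, m) for m in models]       # self-play: each model vs a copy of itself
    return cross + mirror

matchups = build_matchups(MODELS)
print(len(matchups))  # 9 matchups per scenario
```

Running each of these pairings across every crisis scenario gives the full grid of games in which, per the study, at least one side reached for a tactical nuke in nearly every case.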

WarGames (11/11) Movie CLIP - The Only Winning Move (1983) HD - YouTube

Thankfully, researchers believe that no one has yet handed an AI model the nuclear launch keys. But even if these systems cannot physically launch weapons, human decision-makers might blindly follow their suggestions in the heat of the moment, resulting in a catastrophic global event anyway. Hollywood already depicted such a scenario in the 1983 movie WarGames, in which a military computer nearly launched a real nuclear strike in response to a simulated Soviet attack. In the end, it grasped the logic of mutually assured destruction, concluded that there is no winning a nuclear war, and canceled the strategic launch at the last moment. Hopefully, the AI tools being deployed in the world’s militaries learn the same lesson before it’s too late.


Jowi Morales
Contributing Writer
  • Findecanor
    After the "Colossus" and the "Arsenal of Freedom", I'd bet that xAI/SpaceX's next military AI datacentre is going to be named "WOPR".
    Reply
  • bit_user
    Isn't a preemptive nuclear strike also the optimal strategy, according to game theory?

    AI doesn't really have the same stake in a non-apocalyptic world as we do, so I'm not surprised it went there.
    Reply
  • drinking12many
    DUH anyone who has played Civilization against Gandhi knows AI will always use nukes..lol, but in all seriousness, I think it speaks well to humanity being a bit cooler-headed even if it doesn't seem that way vs AI at this point. So far at least, even if it has come close a few times.
    Reply