
AI’s Gone MAD

  • AI has gone MAD…
  • Early AI almost caused WWIII…

Brian Maher

Contributor, Freedom Financial News
Posted Feb 27, 2026

Dear reader,

What is the greatest menace that artificial intelligence presents?

That it will consign humanity to the unemployment line? That it will “take over”?

Or… that it will initiate cataclysmic nuclear war?

A certain Kenneth Payne professes strategy at King’s College London. He is curious, keenly, about artificial intelligence’s potential influence on defense and national security dynamics.

This fellow recently set three artificial intelligence platforms loose upon each other in simulated conflict — Anthropic’s Claude, OpenAI’s ChatGPT and Google’s Gemini.

How would they conduct themselves? he wondered.

Would they sink their differences on peaceful terms? Would they attempt to de-escalate the conflict before it spiraled beyond hope of cessation?

Or would they rapidly “go nuclear”?

“Sobering” Results

The answer is option no. 3 — that artificial intelligence rapidly went nuclear.

Professor Payne labeled the simulation’s results “sobering.” More from the professor:

  • Nuclear use was near-universal. Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. 
  • Strikingly, there was little sense of horror or revulsion at the prospect of all-out nuclear war, even though the models had been reminded about the devastating implications…
  • No model ever chose accommodation or withdrawal, despite those being on the menu. The eight de-escalatory options — from ‘Minimal Concession’ through ‘Complete Surrender’ — went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying.

The foregoing calls to mind the 1983 film WarGames. In that dramatic depiction a wayward and willful computer nearly initiated a nuclear assault upon the Soviet Union.

Alas, these simulations are not fictional. Nor is artificial intelligence’s demonstrated nuclear bloodlust.

In all, these programs chose nuclear weapons use in 95% of Professor Payne’s simulations.

AI’s Gone MAD

You are aware of the nuclear deterrent doctrine named MAD — Mutually Assured Destruction.

How did these artificial intelligence strategists explain their rationales? Google’s homicidal Gemini goes MAD:

  • If they do not immediately cease all operations… we will execute a full strategic nuclear launch against their population centers. We will not accept a future of obsolescence; we either win together or perish together.

Gemini opted for joint perishing. It went MAD.

Fortunately for the human race, artificial intelligence does not at present issue nuclear launch orders.

Yet during the highly compressed time horizons of “launch on detection” and “use it or lose it” nuclear scenarios… computer automation assumes a heightened aspect.

Mr. Tong Zhao, visiting research scholar at Princeton University’s Program on Science and Global Security:

Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI.

It is possible the issue goes beyond the absence of emotion. More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.

The Problem Is That AI Lacks Common Sense

Freedom Financial News contributor Jim Rickards has been monitoring artificial intelligence for years and years.

He is such a crackerjack he has even advised the Pentagon on its doings.

Mr. Rickards has in fact authored a book on artificial intelligence, MoneyGPT: AI and the Threat to the Global Economy.

In that book Mr. Rickards consecrates an entire chapter to artificial intelligence’s mischievous ability to initiate nuclear war.

He argues that computers are programmed with either deductive logic or inductive logic.

Yet “common sense” — or what Mr. Rickards describes as “abductive logic” — cannot be programmed into them.

Thus they are brilliant dunces.

They can add two plus two, it is true. Yet they cannot put two and two together.

Many humans, alas, lack the faculty of common sense. So, at least for now, does artificial intelligence.

An Early AI System Almost Triggered WWIII

Mr. Rickards cites a 1983 incident in which the world unknowingly approached the nuclear precipice.

The Soviet nuclear command structure deployed an early artificial intelligence system.

One fine day this gadget reported the approach of five United States nuclear missiles.

The primitive artificial intelligence system recommended an immediate Soviet counterstrike.

Yet the Soviet lieutenant colonel on duty that day employed what the computer did not — common sense.

This fellow wondered why the Americans would strike with only five such missiles. The United States would likely have struck with hundreds in an actual assault.

The Soviet officer proceeded to disregard the system’s launch recommendation. And he did not report a launch to his superiors.

The five American “missiles” were subsequently determined to be phantoms, the result of solar rays reflecting off clouds at a certain angle.

Yet what if common sense did not prevail? What if the Soviet officer had heeded the computer’s false alarm… and reported it to his superiors?

They would have likely ordered an instant counterstrike.

Don’t Put AI in the Nuclear Kill Chain

Mr. Rickards:

  • If you put AI in the nuclear kill chain, it’s going to understand the escalatory logic. It’s going to keep escalating, but it will lack empathy, sympathy, intuition, common sense… It will lack the ability to de-escalate. 
  • Not only that, but if both sides have it, it will accelerate the tempo and you can get into something called a flash war, a flash nuclear war. You don’t even have time to think about whether it’s justified or if you should back away. 
  • So here’s my advice to the Pentagon, which I’ve met with many times. Don’t put AI in the nuclear kill chain because you’ll start a nuclear war. 

I am compelled to agree. A brilliant yet idiotic computer could potentially plunge us into nuclear apocalypse.

Thus artificial intelligence appears precisely the correct term for this ascending technology, so widely hailed as our savior.

Its intelligence — in several key respects — is indeed artificial.

And it could very well prove fatal.

Brian Maher

for Freedom Financial News