- AI’s role in starting the Israel/Iran war…
- Bad models, bad data, and bad science…
- Robert Kiyosaki’s latest book shows you the compounding power of weekly income…
Dear Reader,
The official claim was that Iran was constructing a nuclear weapon, a claim that resulted in a hellfire of bombs and death in both Iran and Israel.
The same sketchy claims, obscured in shape-shifting language that blurred crucial distinctions between intentions and realities, were generated by an AI model.
That model, constructed by the company Palantir for the International Atomic Energy Agency, was responsible for goading the U.S. into joining the war with a spectacular display of military firepower in the form of B-2 bombers and other missiles.
This strange mini-war was over nearly as fast as it began, when Donald Trump reversed himself suddenly, stopped calling for regime change, and later took to the media and his own social media site to blast both Iran and Israel in expletive-laden language.
He was clearly furious, claiming that neither government knows what it is doing.
The Trajectory of War
There appears to be a deeper story here regarding bad data and bad modeling that nearly set the world on fire. Let’s look at the trajectory of this mini-war.
The fiasco began on June 12, 2025, when the IAEA flagged enough anomalies in its regular report on Iran to declare, in an official account, that Iran was “noncompliant.” This opinion contradicted what everyone else in the intelligence community said, including Trump’s Director of National Intelligence, Tulsi Gabbard.
She had testified several months earlier that Iran was taking no steps toward building nuclear weapons, though she could not rule out that it might at some point.
Two months before the IAEA report, on April 12, 2025, Trump had sent Special Envoy Steve Witkoff on a diplomatic effort to Iran, including high-level meetings with Iranian Foreign Minister Abbas Araghchi.
The IAEA report, however, changed the dynamic very suddenly. Citing it, Israeli Prime Minister Benjamin Netanyahu commenced a bombing and assassination campaign on the claim that Iran was in fact building a nuke.
Iran reported 220 deaths, many scientists among them. The next day, retaliatory missiles fell on Tel Aviv, fully 100 of them, with 10 causing property damage, panic, and injuries to over 40 Israelis.
The two-nation war carried on for days as innocents in both countries died and social media documented skies ablaze with rockets raining down on targets.
The Role of AI
On June 17, IAEA Director General Rafael Grossi took to CNN to clarify that there was no evidence that Iran was close to having a bomb. “We did not have any evidence of a systematic effort [by Iran] to move to a nuclear weapon,” he confirmed.
What the heck happened then? What was the point of all this death and destruction?
As DD Geo-politics reported, “since 2015, the IAEA has relied on Palantir’s Mosaic platform, a $50-million AI system that sifts through millions of data points — satellite imagery, social media, personnel logs — to predict nuclear threats.”
In this particular case, reports Alastair Crooke:
- Its algorithm looks to identify and infer ‘hostile intent’ from indirect indicators — metadata, behavioral patterns, signal traffic — not from confirmed evidence. In other words, it postulates what suspects may be thinking, or planning.
- On 12 June, Iran leaked documents which it claimed showed IAEA chief Rafael Grossi sharing Mosaic outputs with Israel. By 2018, Mosaic had processed more than 400 million discrete data objects and had helped impute suspicion to over 60 Iranian sites, so as to justify unannounced IAEA inspections of those sites under the JCPOA.
- These outputs, though dependent largely on the algorithmic equations, were incorporated into formal IAEA safeguard reports and were widely accepted by UN member states and non-proliferation regimes as credible, evidence-based assessments.
- Mosaic, however, is not a passive system. It is trained to infer hostile intent from its algorithms, but when repurposed for nuclear oversight, its equations risk translating simple correlation into malicious intent.
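To make that failure mode concrete, consider a minimal sketch of correlation-based intent scoring. This is purely illustrative and assumes nothing about Mosaic’s actual design: the indicator names, weights, and threshold below are all invented for the example.

```python
# Toy model of inferring "hostile intent" from indirect indicators.
# Hypothetical throughout: these indicators, weights, and the
# threshold are invented and do not reflect Mosaic's real design.

WEIGHTS = {
    "centrifuge_procurement": 0.5,   # dual-use: also needed for civilian fuel
    "scientist_travel": 0.3,         # conference trips can look like clandestine meetings
    "encrypted_traffic_spike": 0.2,  # routine IT upgrades can look like secrecy
}

THRESHOLD = 0.6  # arbitrary cutoff for flagging "hostile intent"

def intent_score(indicators: dict[str, float]) -> float:
    """Weighted sum of indirect indicators, each scaled to [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

# A legal civilian enrichment program can light up the same indicators:
civilian_program = {
    "centrifuge_procurement": 0.9,
    "scientist_travel": 0.6,
    "encrypted_traffic_spike": 0.7,
}

score = intent_score(civilian_program)
print(f"score={score:.2f}, flagged={score >= THRESHOLD}")
# Prints: score=0.77, flagged=True -- a false positive born of
# correlation, with no evidence of intent anywhere in the inputs.
```

The point of the toy: nothing in the inputs is evidence of weapons work, yet the weighted correlations cross the threshold anyway, which is exactly the correlation-to-intent leap Crooke describes.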
Trump’s Thinking
How did the false positive concerning Iran’s supposed nukes reach Trump?
Politico reports that “U.S. Central Command chief Gen. Erik Kurilla [with a long history of nation-building activities stretching from Panama, to Haiti, to Iraq] has played an outsized role in the escalating clashes between Tehran and Israel, with officials noting nearly all his requests have been approved, from more aircraft carriers to fighter planes in the region.”
It was apparently this very AI-generated IAEA report, later repudiated, that convinced Trump to go ahead with military engagement, even to the point of dismissing the opinions of his own Director of National Intelligence. Trump himself said he didn’t care what Gabbard thinks.
The U.S. strikes followed a few days later: bunker-buster bomb attacks on three Iranian nuclear sites (Fordow, Isfahan, and Natanz), marking the first-ever U.S. attack on another country’s nuclear program.
The problem: it was all based on modeling and sketchy data.
Dissent Within MAGA
The political problem for MAGA was unbearably obvious. Trump had long said that Iran cannot have nukes but distinguished himself from hawks like Nikki Haley precisely on grounds that she wanted to bomb Iran whereas Trump would make a deal and enforce it.
It was Palantir’s software report that flipped him from opposing to supporting strikes and intervention.
As might be expected, most MAGA influencers — Steve Bannon, Alex Jones, Tucker Carlson, Matt Gaetz, Matt Walsh, and many others — took the unusual step of blasting the Trump administration for its hair trigger and warning of the onset of WWIII.
None of them, so far as I can tell, could possibly have imagined that fake science generated by a Trump-friendly data company was the source of the misleading report.
What happened to change Trump’s opinion? Here we get into speculation. It seems likely that Tulsi’s own team and Trump’s own intelligence agencies began to take apart events and trace the source of the problem to bad modeling, bad data, and bad science.
Cooler Heads Prevail — This Time
This began to shift Trump’s opinion, but it was Iran’s own response, a missile strike on the U.S. base in Qatar, that put it over the top. It seems that Iran gave the U.S. advance warning so that there would be no loss of life.
This act of humanitarian rationality impressed Trump and caused him to rethink the fundamental idea that Iran had ambitions to possess weapons of mass destruction.
Bad modeling, bad data, and bad science conspired against freedom and peace, the very ideals that Trump had come to office to protect. Thus did he flip and go the other way fast: no more bombings, no more experts, no more attacks on life.
Or we can see this whole murderous fiasco as a real-life version of the movie Dr. Strangelove, in which error, bureaucracy, and fanaticism combine to create outcomes that no one in particular intended but that no one can stop once they start.
Fortunately, in this case, cooler heads prevailed. Don’t trust the models, don’t trust the experts, don’t trust the fake data, and don’t trust AI!
We can only hope that the lesson sticks.
Jeffrey Tucker
for Freedom Financial News