The Role of AI in the US-Israel-Iran War

In the 2026 US-Israel-Iran war, AI transformed operations via systems like Maven and Lavender, enabling unprecedented strike speed. Both sides leveraged AI for strategic advantage—the US-Israel alliance for targeting superiority, Iran for digital infrastructure defense and information operations.

The conflict between the United States, Israel, and Iran, which erupted into full-scale war on 28 February 2026, is being called “the first AI war” by analysts and military observers. Artificial intelligence is no longer a futuristic concept here; it is actively reshaping how battles are planned, executed, and narrated in real time.

Background: How the Conflict Escalated

What distinguishes this conflict from previous Middle East confrontations is not just the scale, but the velocity of operations. Traditionally, military “kill chains” (the cycle from intelligence gathering to strike execution) could take days or even weeks. In this conflict, however, AI-enabled systems compressed that timeline into minutes.

Advanced AI platforms processed real-time satellite imagery, drone surveillance feeds, and signals intelligence (SIGINT). These systems then identified, prioritized, and recommended targets almost instantly. Hence, these platforms enabled continuous strike waves with minimal delay. The result was a form of “machine-speed warfare,” where operational tempo outpaced conventional human decision-making cycles.

In essence, the escalation was not only geopolitical—it was technological. The integration of AI into battlefield operations transformed how quickly and efficiently both sides could detect, decide, and strike. It fundamentally redefined modern conflict dynamics.

1. AI on the US Side: Turbocharging the Offensive “Kill Chain”

The United States and Israel have deployed the most sophisticated artificial intelligence systems ever used in active warfare. These systems form an integrated "digital kill chain" that:

  • Processes vast amounts of intelligence,
  • Generates targeting recommendations at machine speed, and
  • Executes strikes with unprecedented efficiency.

The military refers to this process as F2T2EA: Find, Fix, Track, Target, Engage, Assess—and AI now accelerates every stage.
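
The F2T2EA cycle can be thought of as a strict sequence: each stage gates the next, and AI acceleration means moving a candidate through all six stages far faster, not skipping any. As a purely illustrative sketch (the class and target names are invented; this models only the ordering described above, not any real system):

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    FIND = 1
    FIX = 2
    TRACK = 3
    TARGET = 4
    ENGAGE = 5
    ASSESS = 6

@dataclass
class KillChainRecord:
    """Tracks one candidate target through the F2T2EA sequence."""
    label: str
    completed: list = field(default_factory=list)

    def advance(self, stage: Stage) -> None:
        # Stages must be completed strictly in order.
        expected = Stage(len(self.completed) + 1)
        if stage is not expected:
            raise ValueError(f"expected {expected.name}, got {stage.name}")
        self.completed.append(stage)

record = KillChainRecord("hypothetical-node-7")
for stage in Stage:
    record.advance(stage)
print([s.name for s in record.completed])
# → ['FIND', 'FIX', 'TRACK', 'TARGET', 'ENGAGE', 'ASSESS']
```

Attempting to jump straight from FIND to ENGAGE raises an error, which is the point: speed comes from compressing each stage, not from bypassing the sequence.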

1.1. US Central Command's AI Toolkit

The US military has combined defense technology platforms with commercial large language models to create an AI-powered targeting apparatus. At the heart of this capability is the Maven Smart System, developed by data analytics company Palantir Technologies and integrated with Anthropic's Claude AI model.

The Maven Smart System and Claude Integration

The Maven Smart System serves as the central data fusion and analysis platform for US Central Command (CENTCOM) operations. According to reports, this system processes massive volumes of classified intelligence from approximately 179 different data sources. It includes satellite imagery, drone surveillance feeds, signals intelligence (intercepted communications), electronic sensors, and open-source information.

What makes this system revolutionary is its integration with Claude, a large language model developed by San Francisco-based Anthropic. Claude applies advanced semantic analysis and logical reasoning to the vast stream of data processed by the Maven platform, allowing the system to:

  • Extract high-value intelligence from fragmented and noisy data.
  • Generate precise geographic coordinates for potential targets.
  • Rank targets based on their strategic importance.
  • Recommend specific weapons systems based on target characteristics and available stockpiles.
  • Simulate potential combat outcomes before strikes are executed.

A US military commander can pose a plain-language question to the system—such as identifying the most vulnerable enemy logistics center—and the AI cross-references all available data to generate a clear operational response with prioritized targets.
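
Conceptually, the ranking step described above is multi-source scoring: each candidate gets scores from different intelligence feeds, and a weighting function orders them. The following is a toy sketch only; every name, field, and weight is invented for illustration and has no connection to the actual Maven or Claude internals:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    label: str
    # Invented scores in [0, 1]; a real fusion system would derive
    # these from imagery, SIGINT, and open-source feeds.
    strategic_value: float
    corroboration: float    # how well independent sources agree
    collateral_risk: float  # higher means riskier to strike

def priority(c: Candidate) -> float:
    # Toy weighting: reward value and corroboration, penalize risk.
    return 0.5 * c.strategic_value + 0.3 * c.corroboration - 0.2 * c.collateral_risk

candidates = [
    Candidate("depot-A", 0.9, 0.8, 0.1),
    Candidate("bridge-B", 0.6, 0.9, 0.4),
    Candidate("relay-C", 0.7, 0.5, 0.2),
]
ranked = sorted(candidates, key=priority, reverse=True)
print([c.label for c in ranked])
# → ['depot-A', 'bridge-B', 'relay-C']
```

The hard part in practice is not this arithmetic but producing trustworthy input scores from noisy data, which is exactly where errors like stale coordinates (discussed later in this article) enter the chain.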

Performance and Scale

The operational impact has been dramatic. During the first 24 hours of the conflict, the Maven Smart System reportedly helped US commanders select and prioritize over 1,000 Iranian targets, a scale that would traditionally require thousands of human intelligence analysts working for weeks or months.

Brigadier General Liam Hulin, CENTCOM's deputy director of operations, has publicly confirmed the use of AI tools in the conflict. These tools enable the military to process intelligence and develop targeting options at "machine speed" rather than human speed. The system not only identifies targets before strikes but also analyzes post-strike results to assess operational effectiveness.

2. Israel's AI Systems – Overview

Israel has developed and deployed its own suite of AI-powered targeting systems. Many were refined during operations in Gaza before being applied to the Iran conflict.

2.1. The Lavender System

"Lavender" is an AI-powered database system that automatically marks potential targets based on massive surveillance data. Its key features include:

  • Mass-scale target generation: Can produce lists of tens of thousands of potential targets.
  • Automated screening: Scores individuals (1–100) for links to hostile groups (e.g., Hamas/PIJ military wings).
  • Industrial-scale targeting: Described as enabling a "mass assassination factory" focused on quantity over precision.

2.2. The "Where's Daddy?" System – Verified Details

"Where's Daddy?" is a tracking tool that complements Lavender by monitoring targets in real-time. It tracks movements through multiple sensor inputs, identifies optimal engagement windows, and automatically alerts the command chain when strike opportunities emerge.

2.3. The Habsora (The Gospel) System

Habsora automatically selects airstrike targets at exponentially faster rates than human analysts. It contributes to an emphasis on "quantity over quality" of targets.

2.4. Project Nimbus: The Cloud Infrastructure

All Israeli AI systems rely on Project Nimbus, a $1.2 billion contract signed in 2021 between the Israeli government and Amazon Web Services (AWS) and Google Cloud. The contract provides the cloud infrastructure required for massive data processing.

The Targeting Process: Human Validation or Rubber-Stamping?

Israeli officials state that AI targeting involves human oversight, with teams validating strike recommendations. However, critics argue that the speed and scale of AI-generated targeting can reduce human review to a mere "validation procedure." When algorithms generate tens of thousands of targets, the pressure on human operators to keep pace can lead to systematic authorization without genuine deliberation.

3. Iran's Use of AI

Iran has leveraged AI as an asymmetric tool for both cyber-physical attacks and information warfare.

3.1. Targeting Digital Infrastructure: The Data Center Strikes

The most significant kinetic application of AI by Iran has been in the realm of targeting digital infrastructure. Iran executed a series of precision strikes that fundamentally altered the understanding of modern warfare.

Iranian Shahed drones struck two Amazon Web Services (AWS) data centers in the United Arab Emirates and a third AWS facility in Bahrain. These attacks represent the first time in military history that kinetic capabilities have been used against public cloud infrastructure.

The Use of AI for Spreading Misinformation

Iran’s Information Warfare

Iran's most sophisticated application of AI in the current conflict may be in the information domain. According to Bridget Bean, former acting director of CISA:

"They can’t win on the battlefield, so they’re going to try and win through AI and through a global narrative."

Bean explained the evolution of Iran's approach:

"Their old playbook was very discernible, but they've gotten very good on some of their AI manipulation. During the 12-day war, they did this, it was the first time for a global conflict where we saw AI-generated disinformation outpace traditional propaganda."

The key characteristics of Iran's AI information operations include:

  • Subtle manipulation: Taking real images and videos and adding "just a touch of AI" so that content "passes the gut test" for viewers scrolling quickly on their phones.
  • Volume and speed: AI enables the production of propaganda at scales that outpace traditional fact-checking and counter-narrative efforts.
  • Targeting Western audiences: The content is designed to weaken the will and resolve of American and allied populations by pushing narratives that are not true.

US-Israel Also Weaponized AI

The US and Israeli sides have also deployed AI-generated content in the information domain.

The Minab School Strike: When AI Targeting Fails

A strike on a girls' elementary school in Minab killed 165 people. The Pentagon's preliminary investigation concluded the US was likely responsible. The tragedy illustrates critical failures in the AI targeting process.

The Error: The coordinates used were out of date.
The Core Problem: In the rush to operate at "machine speed," human oversight risks becoming a rubber-stamping process. Bryant explained:

"The human should be in the loop at every single point. AI could have easily caught what many humans should have caught all the way along the targeting process if used properly."

The Minab strike raises the central ethical question of AI warfare: when an AI-guided strike goes wrong, who is responsible?

Conclusion: AI as the New Battlefield Reality

The 2026 US-Israel-Iran war marks the first major conflict where artificial intelligence has become the central nervous system of warfare.

On the US-Israel side, AI systems like Maven, Claude, Gospel, and Lavender compressed the kill chain from weeks to minutes. These tools enabled unprecedented strike speed and scale, but at the expense of meaningful human oversight, raising serious risks of civilian casualties and ethical lapses. Iran turned AI into an asymmetric weapon, targeting cloud infrastructure for disruption and flooding global platforms with generative disinformation. Yet the information domain proved bidirectional, with Israeli leaders and US-linked tools also amplifying AI-generated content.

This war reveals AI’s dual nature: it accelerates precision but compresses judgment, blurs truth, and escalates risks. Without urgent norms and safeguards, AI may make future conflicts not only faster and more lethal, but far harder to control or end.

What do you think? Will AI shorten wars—or simply make them deadlier and more unpredictable?