Technology
Artificial intelligence is quietly becoming one of the most powerful forces in global geopolitics. But what happens when AI companies collide with military strategy?
In this episode of Human in the Loop, Richard and Ria dive into the growing tension between the Pentagon and leading AI companies like Anthropic and OpenAI. The debate started with two controversial red lines: no mass surveillance and no autonomous lethal weapons. But when those principles collide with national security… who decides what AI is allowed to do?
The discussion becomes even more provocative when we examine the recent U.S.–Iran strikes that reportedly killed Iran’s Supreme Leader. Was AI involved in the operation? Public reports say no — but intelligence analysts warn the reality may be more complicated.
Inside this episode:
• The hidden power struggle between AI companies and the Pentagon
• Why Anthropic refused certain military uses of its models
• How OpenAI ended up working with defense systems instead
• What “human in the loop” really means in modern warfare
• Whether AI could already be embedded inside classified intelligence systems
• And the unsettling possibility that the AI revolution in warfare may already be happening behind closed doors
There is no confirmed evidence that generative AI planned or executed the Iran strike. But as Richard and Ria explore, the absence of evidence does not necessarily mean the absence of influence.
If AI is already assisting intelligence analysis, target modeling, or strategic simulations… we may only learn about it years later — if we learn about it at all.
This is Human in the Loop, where technology, power, and accountability collide.

