OpenAI to revise Pentagon AI deal after backlash

OpenAI says it will amend its recent agreement with the United States Department of Defense after criticism over how its AI systems could be used in classified military operations.

Chief executive Sam Altman acknowledged the company rushed the original announcement, calling it “opportunistic and sloppy,” and pledged clearer safeguards. The updated language will explicitly prohibit the use of OpenAI systems for domestic surveillance of US citizens and nationals. Intelligence agencies, including the National Security Agency, would require additional contractual modifications before accessing the technology.

Fallout from Anthropic dispute

The controversy followed tensions between the Pentagon and Anthropic, whose AI model Claude was reportedly restricted by the Trump administration after the company refused to remove a corporate “red-line” against developing fully autonomous weapons.

Despite that stance, reports that Claude had been used during the US–Israel conflict with Iran drew further scrutiny of how frontier AI models are being integrated into operational military workflows.

OpenAI had initially claimed its agreement included “more guardrails than any previous agreement for classified AI deployments.” The backlash from users was nonetheless swift: reports suggest day-over-day uninstalls of the ChatGPT mobile app surged, while Claude climbed to the top of Apple’s App Store rankings.

How AI is already embedded in warfare

Artificial intelligence is now deeply integrated into military logistics, intelligence processing, and battlefield analysis. Companies like Palantir Technologies supply data-fusion platforms to US, Ukrainian and NATO forces.

Palantir’s AI-enabled defence platform Maven aggregates satellite imagery, battlefield telemetry, and intelligence reports. Commercial large language models — including Claude — can then assist analysts in synthesising that data to accelerate operational decisions.

NATO officials stress that these systems operate with “human in the loop” oversight. Lieutenant Colonel Amanda Gustave, chief data officer for NATO’s Task Force Maven, previously emphasised that AI systems do not independently make lethal decisions.

However, experts warn about reliability risks. Large language models are prone to “hallucinations” — generating confident but incorrect outputs — a vulnerability that becomes critical in high-stakes military contexts.

Professor Mariarosaria Taddeo of Oxford University argued that with Anthropic sidelined, one of the more safety-focused AI actors may be absent from Pentagon deliberations — potentially shifting the balance between capability and constraint.

Bigger questions for AI governance

The episode underscores widening tensions between:

  • National security imperatives

  • Corporate AI governance policies

  • Public trust in consumer AI tools

  • The ethical boundaries of autonomous weapons

As AI systems move from civilian chatbots into classified defence environments, the contractual architecture — including usage restrictions, audit rights, and human-oversight guarantees — is becoming as strategically important as the underlying models themselves.