AI Policy Is Now Defense Policy. The Pentagon's Contracts Prove It.

Pentagon AI contracts, Anthropic's ethical standoff, and the accelerating US-China race for autonomous military capability

By Negotiate the Future

4/13/26

The Pentagon's Chief Digital and Artificial Intelligence Office awarded contracts worth up to $200 million each to four frontier AI companies last year: Anthropic, Google, OpenAI, and Elon Musk's xAI. The deals were designed to scale the use of agentic AI workflows across military and national security operations. Within months, one of those contracts collapsed, and the resulting dispute exposed the fault lines between Silicon Valley's safety commitments and the Defense Department's operational imperatives.

Anthropic's contract was terminated after the company and the Pentagon failed to agree on usage terms. Anthropic sought commitments that its models would not be used for mass surveillance of American citizens or to power autonomous weapon systems. The Defense Department sought unrestricted access. Hours after the termination was announced, OpenAI disclosed that it had struck its own deal to provide AI technology for classified networks.

The episode illuminated a deeper structural tension. The Replicator Initiative, the Pentagon's program to field thousands of low-cost autonomous drones, proceeds on the assumption that speed of deployment matters more than deliberation over ethical constraints. The $13.4 billion the Pentagon requested for autonomous systems in fiscal year 2026 signals institutional commitment to a pace of integration that leaves limited room for the kind of negotiation Anthropic attempted.

China's parallel trajectory compounds the urgency. Beijing's civil-military fusion doctrine enables faster movement from laboratory prototype to deployed capability than the American system typically permits. Chinese autonomous weapons programs emphasize mass production, swarm intelligence integration, and state-directed data harvesting. The gap in tactical autonomy deployment, once wide, is narrowing.

A recent investigation highlighted what researchers call the "benchmark fallacy" in military AI: agentic systems that perform well in controlled test environments frequently fail under real-world combat conditions. The concern is not hypothetical. Critical command-and-control decisions are increasingly being delegated to algorithms that have never been validated beyond simulation. Military reliance on untested autonomous systems creates a systemic risk that neither country's procurement process is designed to evaluate.

The international governance framework remains incomplete. The UN Secretary-General has called for a legally binding treaty prohibiting lethal autonomous weapons from operating without human oversight, with a target completion date of 2026. A majority of UN member states support such a treaty. Neither the United States nor China has endorsed binding restrictions on autonomous weapons development.

The convergence of AI capability and military application is now proceeding on two tracks simultaneously. On one, the technology advances at a pace set by commercial competition and trillion-dollar capital expenditure cycles. On the other, arms control diplomacy moves at the speed of multilateral consensus. The distance between those two tracks grows with every quarterly earnings call and every new defense appropriation.

AI policy is now defense policy. The companies building frontier models are simultaneously negotiating commercial API terms and classified military contracts. The researchers training those models are producing capabilities that serve both cybersecurity defense and autonomous targeting. The line between civilian AI infrastructure and military AI infrastructure has not so much blurred as dissolved.
