
Eight hours before US Tomahawk missiles hit targets in Tehran, Defense Secretary Pete Hegseth was finalizing a very different kind of operation. He had just blacklisted Anthropic—the maker of Claude—as a "supply chain risk to national security," a designation normally reserved for foreign adversaries. Hours later, OpenAI announced it had signed a deal to deploy its AI models across the Pentagon's classified networks.
The reason for all this? Two red lines Anthropic refused to strike from its contract: autonomous weapons and mass surveillance.
What followed was the most consequential week in AI ethics since the technology went mainstream—a story involving presidential executive orders, a $200 million contract, 1.5 million boycotting users, and the deeply uncomfortable question of whether AI was already being used to pick targets in a war zone.
The Timeline: Six Days That Shook the AI Industry
February 24: Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic CEO Dario Amodei—agree to let the military use Claude for "all lawful purposes" by 5:01 PM on Friday, February 27, or face consequences.
February 26: Anthropic rejected the Pentagon's "final offer." The company said the proposed compromise language "was paired with legalese that would allow those safeguards to be disregarded at will."
February 27: Three things happened in rapid succession:
- President Trump ordered all federal agencies to "immediately cease" using Anthropic's technology
- Hegseth designated Anthropic a "supply chain risk"—a label typically applied to entities like Huawei or Kaspersky
- OpenAI announced its own deal with the Pentagon, complete with a blog post titled "Our Agreement with the Department of War"
February 28: The US and Israel launched strikes against Iran. Reports emerged that the military had used Claude—the very AI it had just banned—for intelligence assessment and target selection during the operation.
March 1-2: Claude surged to #1 on the US App Store. Over 1.5 million people joined the "QuitGPT" boycott movement. Anthropic's paid subscribers doubled.
What Anthropic Refused—and Why
Anthropic drew two non-negotiable red lines:
Autonomous weapons: "We do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons," the company stated. Current AI systems hallucinate, misidentify objects, and lack the contextual judgment that life-or-death targeting decisions demand. Building reliable AI agent systems for business workflows is hard enough; the cost of getting military targeting wrong is measured in lives.
Mass surveillance of Americans: Anthropic wanted explicit contractual language preventing the bulk collection of citizens' location data, browsing history, and financial information. While much of this data is legally purchasable on the commercial market, Anthropic argued that AI would "supercharge" the collection and analysis, effectively creating a surveillance apparatus regardless of whether each individual data point was technically legal.
The Pentagon insisted it had no interest in either use case but wanted language permitting "all lawful purposes."
Amodei's position was clear: "We are patriotic Americans committed to defending the US, but we will not move on those red lines."
What OpenAI Agreed To—and the Loopholes
OpenAI's blog post outlined three red lines:
- No mass domestic surveillance
- No autonomous weapons systems
- No high-stakes automated decisions
The company also emphasized technical safeguards: cloud-only deployment (no edge devices), cleared OpenAI personnel in the loop, and full control of its safety stack.
On the surface, these look identical to what Anthropic demanded. But critics identified a crucial gap.
OpenAI's surveillance prohibition covers only the "unconstrained" collection of Americans' private information. Commercially available public data—location tracking, social media activity, web browsing history—falls outside the ban. As MIT Technology Review put it: "This is exactly what Anthropic feared."
The Defense Intelligence Agency has previously purchased such commercial data. AI-powered analysis of legally obtained public data can achieve surveillance outcomes functionally identical to collecting private data—a distinction without a meaningful difference.
OpenAI engineer Leo Gao publicly criticized his own company on X, calling the safeguards "window dressing."
The Iran Strikes: AI Already in the Kill Chain?
The timing made everything worse. According to Axios, the US military used Anthropic's Claude to support the Iran strikes on February 28—hours after Trump ordered agencies to stop using it. The tool was reportedly used for target selection, theoretical battlefield simulations, and general intelligence assessments.
This raises uncomfortable questions about how deeply AI is already embedded in military operations. Project Maven, the Pentagon's AI initiative launched in 2017, now operates under the National Geospatial-Intelligence Agency. Its director has stated that by June 2026, Maven will transmit "100 percent machine-generated" intelligence to combatant commanders.
The global picture is no less concerning. DARPA's OFFSET program envisions swarms of 250+ autonomous drones for urban warfare. China's PLA-linked researchers are building autonomous drone swarms with Taiwan scenarios in mind. Germany is developing its AI targeting system Uranos KI for NATO's eastern flank. A December 2024 UN General Assembly resolution calling for international action on Lethal Autonomous Weapon Systems (LAWS) passed with 166 votes in favor; only three countries voted against (Belarus, North Korea, Russia), while Israel, China, and India were among the 15 nations that abstained. The US voted in favor of the resolution but has consistently opposed binding treaty negotiations that would restrict its own autonomous weapons development.
The Public Verdict
The backlash against OpenAI was swift and measurable:
- 1.5 million+ people joined the QuitGPT movement, canceling ChatGPT subscriptions or sharing boycott messages
- Claude hit #1 on the US App Store, pushing ChatGPT to second place and Gemini to fourth
- Anthropic's growth exploded: free signups jumped 60%+ since January; paid subscribers doubled in 2026
- 300+ Google employees and 60+ OpenAI staff signed an open letter supporting Anthropic's ethical stance
Sam Altman acknowledged on X that the deal was "definitely rushed" and "the optics don't look good."
Sidewalk chalk graffiti outside OpenAI's San Francisco offices attacked the decision. Outside Anthropic's offices, the messages were supportive.
Why This Matters for Every Business Using AI
This crisis goes far beyond two companies and one government contract. It reveals structural risks that any organization building with AI needs to understand.
Vendor risk is now geopolitical. A technology partner that seems stable today can be blacklisted tomorrow, not for technical failures but for political ones. The Anthropic designation proves that "supply chain risk" labels can be weaponized against domestic companies.
Single-vendor dependency is dangerous. Federal agencies scrambled to switch from Anthropic to OpenAI overnight. Organizations that build modular, multi-model architectures can absorb these shocks. Those locked into a single provider cannot.
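What does that look like in practice? Here is a minimal failover sketch in Python, assuming the official `openai` and `anthropic` SDKs with API keys set in the environment; the model IDs and the `complete` helper are illustrative, not a recommendation of any particular provider or ordering.

```python
"""Minimal multi-model failover sketch.

Assumes the official `openai` and `anthropic` Python SDKs are installed
and that OPENAI_API_KEY / ANTHROPIC_API_KEY are set. Model names are
illustrative placeholders; use whatever your contracts actually cover.
"""

from typing import Callable

import anthropic
import openai


def ask_claude(prompt: str) -> str:
    # Anthropic adapter: normalizes the Messages API to a plain string.
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text


def ask_openai(prompt: str) -> str:
    # OpenAI adapter: same normalization for the Chat Completions API.
    client = openai.OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model ID
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# Providers in order of preference; reorder or drop one when a vendor
# is banned, blacklisted, or simply down.
PROVIDERS: list[Callable[[str], str]] = [ask_claude, ask_openai]


def complete(prompt: str) -> str:
    """Try each provider in turn so that a single vendor outage or ban
    does not take the whole workflow down with it."""
    last_error: Exception | None = None
    for provider in PROVIDERS:
        try:
            return provider(prompt)
        except Exception as err:  # in production, catch SDK-specific errors
            last_error = err
    raise RuntimeError("All configured AI providers failed") from last_error
```

Because both adapters share one signature, reordering or removing a provider is a one-line change to `PROVIDERS`, which is precisely the flexibility agencies lacked on February 27.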
"Lawful" and "ethical" are not synonyms. OpenAI's contract permits everything legal. But legality is a floor, not a ceiling. The gap between what AI can legally do with publicly available data and what it should do is where the real risk lives—especially for cybersecurity-conscious organizations handling sensitive data.
Consumer sentiment is shifting. 1.5 million people boycotting an AI product over ethics would have been unthinkable two years ago. Brands that ignore the ethical dimension of their AI strategy risk real commercial consequences.
Key Takeaways
- AI safety became a political weapon — Anthropic's ethical stance was reframed as a national security threat, setting a dangerous precedent for any company that negotiates with government
- The autonomous weapons line is blurring — Project Maven's "100% machine-generated intelligence" and AI-assisted targeting in Iran show the human-in-the-loop concept is eroding fast
- "All lawful purposes" is not a safeguard — The gap between OpenAI's and Anthropic's contract language reveals how much room exists within "legal" for ethically questionable uses
- Public opinion rewards ethical stands — Claude's App Store surge and Anthropic's subscriber growth prove that principles can be commercially viable
- Vendor diversification is not optional — When a provider can be banned overnight by executive order, multi-model architecture becomes a business continuity requirement
- International regulation is fragmented — 166 nations support action on autonomous weapons, but major military powers resist binding limits, leaving the private sector to draw its own lines


