The Anthropic-Pentagon AI dispute has shaken the world. Can AI have a conscience?
The question may sound strange. AI is a tool — where does conscience come in? But the company that makes AI can have one.
Anthropic said “No” to the U.S. Department of Defense. Don’t use our AI for mass surveillance. Don’t use it for fully autonomous weapons. Just agree to these two things.
The price was the first-ever “Supply Chain Risk” designation of a domestic company in U.S. history. A weapon originally reserved for adversarial nations like Russia and China was turned on an American company.
And just hours later, OpenAI filled that vacancy, claiming it would uphold “the same red lines.” This paradox poses a simple question: in the AI era, what is the price tag on conscience?
The Butterfly Effect of One Company’s Refusal
The story traces back to June 2024. That’s when Claude was deployed on the U.S. Department of Defense’s classified systems. It was the only AI model the military placed on its classified network.
A classified network AI means, simply put, an AI that handles the military’s most sensitive information. It runs on a network completely isolated from the general internet.
Anthropic didn’t refuse military cooperation outright. It allowed Claude to be used for intelligence analysis, decision support, and administrative tasks. It drew the line at exactly two things.
Anthropic’s Two Red Lines
First: Do not use Claude for mass surveillance activities targeting domestic citizens.
Second: Do not use Claude in fully autonomous weapons systems that select and engage targets without human intervention.
According to Anthropic’s official statement, these two exceptions did not affect a single government mission: none of the military’s actual operational requests ran afoul of these red lines (GeekNews).
Touching the Pentagon’s Nerve
The problem wasn’t the principles — it was the insistence on them.
Negotiations between the Defense Department and Anthropic went back and forth for months. On February 24, 2026, Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei held a direct meeting (GeekNews).
The Pentagon’s demand was clear: “Make it available for all lawful purposes without restriction.” Anthropic’s answer was equally clear: “We’ll do everything except those two things.”
In this deadlock, the Pentagon drew its sword. It threatened to invoke the Defense Production Act (DPA) and set a deadline of Friday at 5:01 PM ET for Anthropic to lift its model restrictions (The Hacker News).
When Anthropic let the deadline pass, Defense Secretary Hegseth directed the designation of Anthropic as a “Supply Chain Risk” under 10 USC 3252. President Trump ordered all federal agencies to cease using Anthropic technology and granted a 6-month transition period.
The nuclear option of a Supply Chain Risk designation — a nation’s response to one company’s ethical choice | Photo: AI-generated image
What the Anthropic-Pentagon Supply Chain Risk Designation Actually Means
“Supply Chain Risk” sounds somewhat vague. In plain terms, it means “this company’s products pose a national security risk, so exclude them from Defense Department contracts.” Think of it like a health inspector declaring “this food supplier fails hygiene standards — banned from deliveries.”
Originally, this measure was used against foreign companies like China’s Huawei and Russia’s Kaspersky. Anthropic is the first U.S. company to receive it. Legal experts called the move unprecedented.
The Legal Scope of the Designation
According to Just Security analysis, the designation has legal force only over Department of Defense (DoD) contracts. It has no direct legal binding power over the civilian market or other government agencies.
But the practical chilling effect is the problem. With 8 of the top 10 U.S. companies using Claude, the label of “company the Pentagon designated as a risk” will inevitably impact civilian contracts too.
According to Mayer Brown analysis, companies with government contracts now have an obligation to verify whether their subcontractors have been designated as supply chain risks — creating a structure where business with Anthropic must be reconsidered.
OpenAI Fills the Vacancy
The timing says everything.
Just hours after Trump’s Anthropic ban was announced, OpenAI CEO Sam Altman announced a classified network AI deployment contract with the Defense Department (GeekNews).
OpenAI claimed its contract included 3 prohibitions covering the same red lines Anthropic had drawn.
ANTHROPIC vs OPENAI: Pentagon AI Contract Approaches Compared
Anthropic
- Explicit prohibitions in contract (red lines)
- Direct classified network deployment
- Refused mass surveillance + autonomous weapons
- Result: Supply Chain Risk designation
OpenAI
- Technical safeguards (cloud-only)
- FDE personnel deployment
- Claims 3 prohibitions included
- Result: Contract secured
Contractual Red Lines vs Technical Safeguards
Here’s where the key difference emerges. Anthropic’s position was “write explicit prohibition clauses into the contract,” while OpenAI’s approach was “guarantee safety through technical architecture.”
A simple analogy: Anthropic’s approach is “let’s write it into law that you can’t use a knife,” while OpenAI’s approach is “we’ll only supply scissors instead of knives.” The result may seem the same, but scissors can stab someone too — that’s the fundamental difference.
Fortune analyzed this as “precisely the outcome Anthropic feared.”
MIT Technology Review also noted that because OpenAI’s compromise relies on technical safeguards, those safeguards could be neutralized if policy changes or technology evolves.
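To make the distinction concrete, here is a deliberately minimal Python sketch of a “technical safeguard”: a server-side filter that refuses requests in prohibited use categories. Everything in it is a hypothetical assumption (the category names, the toy classifier); neither company has published its implementation. The point it illustrates is that a safeguard living in deployment configuration can be removed by the operator, whereas a contractual red line cannot.

```python
# Hypothetical sketch of a "technical safeguard": a server-side request
# filter. Nothing here reflects OpenAI's or Anthropic's real systems;
# the category names and toy classifier are illustrative assumptions.

PROHIBITED_USES = {"mass_surveillance", "autonomous_targeting"}  # config, not contract

def classify_use(prompt: str) -> str:
    """Toy stand-in for a real use-case classifier."""
    lowered = prompt.lower()
    if "track every citizen" in lowered:
        return "mass_surveillance"
    if "select and engage targets" in lowered:
        return "autonomous_targeting"
    return "general"

def handle(prompt: str) -> str:
    """Refuse any request whose category is currently prohibited."""
    if classify_use(prompt) in PROHIBITED_USES:
        return "refused: prohibited use category"
    return "processed"

print(handle("select and engage targets without a human"))  # refused
# The safeguard lives in configuration: a single deployment-side change,
# e.g. PROHIBITED_USES.discard("autonomous_targeting"), removes it
# without touching any contract language.
```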
The Tech Industry’s Solidarity of Conscience
There’s a reason this incident didn’t end as a simple government-business dispute.
Hundreds of Google and OpenAI employees signed an open letter supporting Anthropic through notdivided.org. Named “We Are Not Divided,” the campaign is the largest ethics-driven resistance inside the tech industry since the 2018 opposition to Google’s Project Maven (The Hacker News). It also connects to the chip export control issues analyzed in The New Map of AI Security.
The Legacy of Project Maven
Project Maven, for context: in 2018, Google participated in a military project providing AI for drone video analysis. Over 4,000 employees signed a petition against it and dozens resigned, ultimately leading Google not to renew the contract. The current Anthropic solidarity is that same energy erupting again.
Simultaneously, within Google, a movement emerged demanding DeepMind establish red lines for military project participation — specifically, a demand not to use AI for weapons development or combat support (GeekNews).
According to TechCrunch, tech workers also sent letters to Congress demanding the withdrawal of Anthropic’s Supply Chain Risk designation.
ANTHROPIC-PENTAGON DISPUTE TIMELINE
2024.06
Claude Deployed on Classified Network
The only AI model placed on the military’s classified systems
2026.02.24
Hegseth-Amodei Meeting
Direct Defense Secretary-CEO negotiation, deadlock
2026.02.28
Supply Chain Risk Designation
First-ever Supply Chain Risk designation of a domestic U.S. company
2026.02.28 (hours later)
OpenAI Pentagon Contract Signed
OpenAI fills the vacancy left by Anthropic
Consumer choice becomes a new form of voting — the shifting AI app market landscape | Photo: Unsplash
The Paradox: Conscience Becomes a Brand
Here’s the ironic twist.
Immediately after the dispute, Claude hit #1 on the U.S. App Store free apps chart. In Korea, Claude’s total payment volume surged from 1.6 billion won to 19.7 billion won — a more than 12x increase. Per-transaction payment also rose 2.5x from 42,600 won to 106,000 won. Meanwhile, ChatGPT’s global daily share plunged from 69.1% to 45.3%, while Gemini’s share rose from 14.7% to 25.1% (Seoul Shinmun).
MARKET RESPONSE AFTER ANTHROPIC DISPUTE
19.7B won
Korea Claude Payments
$380B
Valuation
45.3%
ChatGPT Share (declining)
Anthropic’s valuation stands at $380 billion with annual revenue of $14 billion. Even after losing the Pentagon contract (maximum $200 million, 1.4% of annual revenue), it rode an upward growth trajectory. Conscience converted into brand value (CNBC). This dynamic shift among AI companies connects to the AI leadership landscape analyzed in The Day After AGI.
Why This Matters for Korean Professionals
“What does a U.S. Pentagon story have to do with me?” You might think that. It has a lot to do with you.
As of 2026, 85% of Korean enterprises are using generative AI, with 55.7% having already completed adoption (22.4% company-wide + 33.2% department-level). 79.3% are expanding AI budgets (CIO Korea).
The problem is that whether you use ChatGPT, Claude, or Gemini, those AI providers are directly impacted by U.S. government policy changes. Supplier risk becomes our risk.
The Cascading Effect of the Anthropic-Pentagon Dispute on Korean Companies
For example, suppose your company built a workflow system on Claude. With Anthropic designated a Supply Chain Risk, partner companies that hold U.S. government contracts begin reconsidering their relationship with Anthropic. That cascading effect can reach your company.
Conversely, companies using OpenAI might think “our supplier is on good terms with the government, so we’re fine.” But OpenAI being more flexible with government demands also means its ethical standards could shift with political winds.
The individual perspective is changing too. Korean professionals have begun considering AI ethics as a criterion for model selection (Seoul Shinmun). Interest is expanding from “how does this AI use my data?” to “what principles does this AI company hold?”
Three Things AI Managers Should Do Now
First, develop an AI supplier diversification strategy. Reduce single-model dependence and secure at least two AI suppliers. This isn’t a technical decision; it’s geopolitical risk management. (A minimal code sketch of the pattern follows this list.)
Second, monitor supplier ethics policies. Periodically review how AI suppliers respond to government policy changes and what their data usage principles are. Companies with defense or public sector contracts must account for the cascading effects of Supply Chain Risk designations.
Third, simultaneously build AI literacy and ethical awareness. Korea’s AI Basic Act (December 2025) weighted promotion over regulation, but the military AI ethics debate has already risen to the global agenda. Korea is participating in this discourse through REAIM (the international conference on AI military use).
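As a rough illustration of the first recommendation, here is a minimal Python sketch of a provider-abstraction layer with fallback. It is a sketch under stated assumptions, not a production recipe: call_primary and call_secondary are hypothetical placeholders, and a real integration would wrap each vendor’s official SDK and add retries, timeouts, and logging.

```python
# Minimal sketch of supplier diversification: an abstraction layer that
# falls back to a second provider. The call_* functions are hypothetical
# placeholders; real integrations would wrap each vendor's official SDK.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> response text

def call_primary(prompt: str) -> str:
    # Placeholder for the primary vendor's SDK call.
    raise RuntimeError("primary unavailable")  # simulate an outage or ban

def call_secondary(prompt: str) -> str:
    # Placeholder for the secondary vendor's SDK call.
    return f"[secondary] {prompt[:40]}"

PROVIDERS: List[Provider] = [
    Provider("primary", call_primary),
    Provider("secondary", call_secondary),
]

def complete_with_fallback(prompt: str) -> str:
    """Try each configured provider in order; raise only if all fail."""
    failures = []
    for p in PROVIDERS:
        try:
            return p.complete(prompt)
        except Exception as exc:
            failures.append(f"{p.name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(failures))

print(complete_with_fallback("Summarize this clause."))  # served by the fallback
```

The design point is that the application depends on the Provider interface rather than on any one vendor, so swapping or adding a supplier becomes a configuration change instead of a rewrite.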
INSIGHT
The Anthropic-Pentagon dispute isn’t asking “does AI have a conscience?” — it’s asking “when the people building AI maintain their conscience, who bears the cost?” And part of that cost will ultimately be passed to all of us who use AI.
ACTION
AI tool selection is no longer just about features and price. Assessing the political and ethical risks your company’s AI suppliers carry, and reducing single-supplier dependence, is the most practical AI risk management strategy for 2026.
Sources
- Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute — The Hacker News, 2026-02-28
- Agreement with the Department of Defense — GeekNews, 2026-03-02
- Anthropic Official Statement on Secretary Hegseth’s Supply Chain Risk Designation — GeekNews, 2026-02-28
- Defense Secretary Directs Anthropic Supply Chain Risk Designation — GeekNews, 2026-02-28
- Google Employees Demand Red Lines for Military AI — GeekNews, 2026-02-28
- OpenAI Agrees to Deploy Models on DoD Classified Networks — GeekNews, 2026-02-28
- Trump Bans Anthropic, Signs Pentagon Deal with OpenAI — GeekNews, 2026-02-28
- The Pentagon’s fight with Anthropic was the first real test for how we will control powerful AI — Fortune, 2026-03-03
- What Hegseth’s Supply Chain Risk Designation Does and Doesn’t Mean — Just Security, 2026-03
- Pentagon Designates Anthropic as Supply Chain Risk — Mayer Brown, 2026-03
- OpenAI’s ‘compromise’ with the Pentagon is what Anthropic feared — MIT Technology Review, 2026-03-02
- Claude’s Rising Price Tag, ChatGPT’s Fall — Seoul Shinmun, 2026-03-04
- 85% of Korean Enterprises Adopt Generative AI in 2026 — CIO Korea, 2026
- Tech workers urge DOD to withdraw Anthropic label as a supply chain risk — TechCrunch, 2026-03-02
- Trump admin blacklists Anthropic — CNBC, 2026-02-27
FAQ
Q1. Does Anthropic’s Supply Chain Risk designation affect regular consumers?
Legally, it applies only to U.S. Department of Defense (DoD) contracts, with no direct legal restrictions on consumer use of Claude. Anthropic’s official statement confirmed no impact on general customers (GeekNews). However, the practical chilling effect may impact enterprise customers over the long term.
Q2. How does OpenAI’s Pentagon contract differ from Anthropic’s?
The biggest difference is approach. Anthropic demanded explicit prohibition clauses (red lines) in the contract, while OpenAI responded with technical safeguards — cloud-only deployment and FDE (Forward Deployed Engineer) personnel. OpenAI claims its 3 prohibitions are stronger than Anthropic’s, but critics note that technical safeguards can be neutralized by policy changes (MIT Technology Review).
Q3. How should Korean companies manage AI supplier risk?
Three essentials: (1) AI supplier diversification: reduce single-model dependence and secure at least 2 AI suppliers, (2) Monitor supplier ethics policies: periodically review their response to government policy changes and data usage principles, (3) Include supply chain risk response provisions in contracts: companies with defense/public sector contracts must account for the cascading effects of subcontractor Supply Chain Risk designations.
Q4. Did Anthropic suffer business damage from this incident?
Paradoxically, short-term effects were actually positive. Claude hit #1 on the U.S. App Store, Korean total payments surged over 12x, and the company’s valuation stands at $380 billion. The Pentagon contract (maximum $200 million) represents just 1.4% of annual revenue of $14 billion, limiting the financial blow. However, the impact of the Supply Chain Risk label on Anthropic’s IPO preparations remains uncertain.
