16 Million Stolen Queries: How Nation-States Weaponize AI in 2026

Anthropic’s security team noticed something strange. Across 24,000 accounts, over 16 million queries were flowing into Claude — not to use it, but to clone it. In a world where nation-state AI weaponization has reached industrial scale, this wasn’t a breach. It was a heist (The Hacker News).

The accounts weren’t asking Claude questions. They were systematically extracting its reasoning patterns, scoring methods, and tool-use capabilities. Think of it like someone photocopying every page of a proprietary textbook — not to read it, but to print their own version without the safety warnings.

This is Part 3 of the 2026 Cyber Threat Map series. Part 1 covers supply chain attacks. Part 2 covers MCP security vulnerabilities.

In Part 1 of this series, we saw supply chains become the primary entry point. In Part 2, we watched AI agents emerge as a new attack surface. Now meet the actors walking through both doors — and they’re bringing nation-state budgets.

The New Arms Race: Nation-State AI Weaponization as a Force Multiplier

Nation-state cyber operations used to require large teams of specialists. Building malware, crafting phishing campaigns, running intelligence operations — each demanded years of expertise.

AI changed the equation. Model distillation — essentially copying a frontier AI’s capabilities into a smaller, uncensored version — lets adversaries skip years of R&D. AI-generated deepfakes make social engineering scalable. And AI-assisted coding accelerates malware development across the board.

What we’re seeing in 2026 isn’t just nation-states using AI tools. It’s nation-states industrializing the theft and weaponization of AI at a scale that was impossible two years ago.


Nation-State AI Weaponization by the Numbers

  • 16M+ queries stolen from Claude
  • 24,000 fraudulent accounts
  • $4M paid for 8 zero-days

China: The 16-Million-Query Heist and Nation-State AI Weaponization

The Anthropic disclosure was staggering in its scope. Three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — orchestrated what amounts to the largest known model distillation attack in history (The Hacker News).

Here’s how the numbers break down. DeepSeek sent 150,000+ queries focused on reasoning, scoring, and censorship bypass. Moonshot fired off 3.4 million+ queries targeting agentic reasoning, tool use, coding, and computer vision. MiniMax led the pack with 13 million+ queries on agentic coding and tool use.

The infrastructure behind this was industrial. A “hydra cluster” proxy network managed 20,000+ simultaneous fraudulent accounts — think of it as a bot farm on steroids, constantly rotating identities to evade detection.

16 million queries — industrial-scale AI model distillation | Photo: Pexels

Why does nation-state AI weaponization through distillation matter? When you strip safety guardrails from a frontier model, you get something that can generate malware code, craft sophisticated phishing campaigns, or produce military intelligence analysis — without the “I can’t help with that” response.

Google confirmed the problem extends beyond Claude. They blocked over 100,000 extraction attempts on Gemini (Google GTIG). This isn’t isolated. It’s a coordinated, multi-front effort to vacuum up Western AI capabilities.
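Defense against this kind of extraction starts with API abuse detection. A minimal sketch of the idea, in Python — the thresholds, log format, and the "prompt template" signal are illustrative assumptions, not any vendor's actual detection logic — flags accounts whose traffic is both high-volume and unusually repetitive, a characteristic distillation signature:

```python
from collections import defaultdict

# Illustrative thresholds -- real systems tune these per product tier.
MAX_QUERIES_PER_DAY = 5_000
MIN_TOPIC_DIVERSITY = 0.2  # unique prompt templates / total queries

def flag_distillation_suspects(query_log):
    """query_log: iterable of (account_id, prompt_template) tuples for
    one day of traffic. Returns account IDs whose volume is high and
    whose prompts are repetitive -- normal users ask varied questions,
    distillation pipelines hammer the same templates."""
    volume = defaultdict(int)
    templates = defaultdict(set)
    for account, template in query_log:
        volume[account] += 1
        templates[account].add(template)
    suspects = []
    for account, count in volume.items():
        diversity = len(templates[account]) / count
        if count > MAX_QUERIES_PER_DAY and diversity < MIN_TOPIC_DIVERSITY:
            suspects.append(account)
    return suspects
```

A single-account check like this is only the first layer; the "hydra cluster" pattern described above also requires correlating accounts that share infrastructure or query fingerprints.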

Company       Query Volume   Focus Areas
DeepSeek      150,000+       Reasoning, scoring, censorship bypass
Moonshot AI   3,400,000+     Agentic reasoning, tool use, coding, CV
MiniMax       13,000,000+    Agentic coding, tool use
Total         16,000,000+    Across 24,000 fraud accounts

The security implications go beyond intellectual property theft. Models distilled without safety layers could power autonomous cyber weapons, surveillance systems, or influence operations at scale. As one researcher put it — they aren’t building a chatbot. They’re building an arsenal.

North Korea: The Swiss Army Knife of Nation-State AI Weaponization

If China’s approach is industrial theft, North Korea’s is diversified portfolio management. Pyongyang has split its cyber operations into increasingly specialized units, each with distinct missions and revenue targets.

The most alarming development: Labyrinth Chollima — North Korea’s premier hacking group — has formally divided into three specialized units (The Hacker News).


North Korea’s 3 Specialized Cyber Units

🕵 Labyrinth Chollima
  • Mission: Cyber espionage
  • Tool: FudModule rootkit
  • Target: Long-term intelligence
  • Focus: State secrets

💰 Golden Chollima
  • Mission: Crypto theft
  • Volume: High-frequency raids
  • Target: DeFi & exchanges
  • Focus: Revenue generation

💥 Pressure Chollima
  • Mission: High-value hacks
  • Op: TraderTraitor heists
  • Target: Major exchanges
  • Focus: Headline operations
Labyrinth Chollima handles cyber espionage, deploying tools like the FudModule rootkit for long-term intelligence access. Golden Chollima runs day-to-day cryptocurrency theft — smaller targets, higher volume. Pressure Chollima takes on high-value operations like TraderTraitor, the kind of heists that make headlines.

Think of it as a corporate restructuring. Instead of one team doing everything, they’ve created specialized business units — except the “business” is state-sponsored crime.

AI Deepfakes Meet Social Engineering

UNC1069 (also known as CryptoCore/MASAN), active since 2018, has added AI deepfakes to its toolkit (Google Mandiant). The attack chain is disturbingly polished.

Step one: infiltrate a target’s Telegram contacts. Step two: schedule a Calendly meeting. Step three: host a fake Zoom call using AI-generated deepfake video — or replay recordings of the actual victim. Step four: trigger a ClickFix infection that deploys the payload.

The malware arsenal behind this is extensive — seven distinct families. WAVESHAPER (C++), HYPERCALL (Go), HIDDENCALL (Go), DEEPBREATH (Swift), SILENCELIFT (C++), CHROMEPUSH (C++ browser extension), and more. DEEPBREATH alone can manipulate macOS TCC databases to steal iCloud Keychain, Chrome, Brave, Edge, Telegram, and Notes data.

They’re even using Gemini to generate lure materials and assist with coding. The barrier to entry for sophisticated, state-backed social engineering has effectively collapsed.

AI deepfakes enable scalable social engineering attacks | Photo: Pexels

Lazarus Goes Franchise

Perhaps the most pragmatic shift: the Lazarus Group — North Korea’s most notorious hacking unit — has become a Medusa ransomware-as-a-service affiliate (Symantec / The Hacker News).

Instead of building custom ransomware, Lazarus is now using off-the-shelf RaaS platforms like Medusa and Qilin. Their targets include US healthcare organizations — mental health nonprofits, autism education facilities — with an average ransom demand of $260,000.

The toolset is a mix of custom and commodity: Mimikatz, Comebacker, BLINDINGCAN, ChromeStealer. It’s a pragmatic move — why invest in bespoke development when proven platforms deliver reliable returns?

Norway’s PST (security police) confirmed that DPRK IT workers have infiltrated Norwegian companies through remote job applications — using stolen LinkedIn identities with verified badges and company email addresses (The Hacker News). This isn’t limited to Silicon Valley anymore. The infiltration has gone global.

Russia: Stealth and Insider Threats in Nation-State AI Weaponization

APT28’s Low-Tech, High-Impact Campaign

Russia’s APT28 (Fancy Bear) ran Operation MacroMaze from September 2025 through January 2026, targeting institutions across Western and Central Europe (LAB52 / The Hacker News).

What makes MacroMaze notable isn’t sophistication — it’s the opposite. APT28 used basic building blocks — batch files, VBScript, HTML — arranged with surgical precision to maximize stealth.

The kill chain: spear-phishing email, then INCLUDEPICTURE beacon (confirming the target opened the document), then VBScript/CMD deployment, then Edge browser running in headless mode, then data exfiltration through webhook.site as a C2 server.

Later variants added SendKeys keyboard simulation to bypass security prompts automatically. Simple tools, expert execution — like a locksmith who can open any door with a paperclip.
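Kill chains built from commodity parts still leave behaviorally odd traces. A minimal threat-hunting sketch — the one-command-line-per-entry log format is a simplifying assumption, and the indicators are just the two drawn from the MacroMaze chain described above — pairs a headless-browser launch with a known-abused exfiltration service:

```python
import re

# Indicators from the kill chain described above: a browser launched
# headless, combined with webhook.site used as a C2/exfil endpoint.
HEADLESS_BROWSER = re.compile(r"(msedge|chrome)\.exe\S*\s.*--headless",
                              re.IGNORECASE)
SUSPECT_C2 = ("webhook.site",)

def scan_process_log(command_lines):
    """Return every logged command line that both launches a headless
    browser and references a suspect C2 service. A cheap first-pass
    hunt to surface candidates for analyst review, not a verdict."""
    hits = []
    for line in command_lines:
        if HEADLESS_BROWSER.search(line) and any(c2 in line for c2 in SUSPECT_C2):
            hits.append(line)
    return hits
```

Either signal alone is noisy (headless browsers have legitimate automation uses; webhook.site has legitimate testing uses); it is the combination that is rare in benign traffic.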

The $4 Million Insider

The most disturbing Russia-linked story of 2026 isn’t a hack. It’s a betrayal. Peter Williams, a 39-year-old Australian executive at L3Harris’s Trenchant cybersecurity unit — one of the world’s largest defense contractors — sold eight zero-day vulnerabilities to Russia’s Operation Zero for up to $4 million in cryptocurrency (The Hacker News / DOJ).

Over three years (2022-2025), Williams systematically extracted proprietary vulnerabilities. L3Harris estimates total losses at $35 million. Williams was sentenced to seven years.

Operation Zero — the Russian exploit broker — operates openly on Telegram, offering a $20 million bounty for mobile zero-day chains. They’ve declared they sell exclusively to non-NATO countries. The US State Department and Treasury have sanctioned Operation Zero and its founder Sergey Zelenyuk.

This case illustrates a fundamental problem that no firewall can solve: insider threats in the defense supply chain. When one motivated employee can cause $35 million in damage, the human element remains the weakest link.
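Behavioral analytics is one of the few controls that catches this pattern before three years pass. A minimal sketch of the idea — the z-score threshold and the daily-access-count metric are illustrative assumptions, not a description of any deployed product — flags employees whose activity deviates sharply from their own baseline:

```python
import statistics

# Illustrative threshold: flag activity more than 3 standard
# deviations above the employee's own historical baseline.
Z_THRESHOLD = 3.0

def flag_anomalous_access(history, today):
    """history: dict of employee -> list of daily file-access counts.
    today: dict of employee -> today's count.
    Returns employees whose activity today is anomalously high
    relative to their own past behavior."""
    flagged = []
    for employee, counts in history.items():
        if len(counts) < 2:
            continue  # not enough baseline to judge
        mean = statistics.mean(counts)
        stdev = statistics.stdev(counts)
        if stdev == 0:
            stdev = 1.0  # avoid divide-by-zero on perfectly flat baselines
        z = (today.get(employee, 0) - mean) / stdev
        if z > Z_THRESHOLD:
            flagged.append(employee)
    return flagged
```

The key design choice is comparing each person against their own history rather than a global average: a researcher who legitimately touches hundreds of files a day should not drown out an executive who suddenly starts pulling vulnerability archives.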


Nation-State AI Weaponization: Threat Exposure by Vector

  • AI Model Distillation (China): 16M+ queries
  • AI Deepfake Social Engineering (DPRK): 7 malware families
  • Insider Zero-Day Sales (Russia): $35M damage
  • RaaS Franchise Model (Lazarus): $260K avg ransom
  • APT28 Stealth Ops (Russia): 5-month campaign

The Korean Dimension: Nation-State AI Weaponization Hits Close to Home

For Korean companies and professionals, these threats aren’t abstract geopolitical issues — they’re operational risks.

DPRK IT worker infiltration is a direct threat to Korean employers. If North Korean operatives are successfully embedding in Norwegian companies, Korean tech firms — especially those with remote positions — face elevated risk. The crypto industry, where Korean trading volume remains globally significant, is a primary target for Golden Chollima and UNC1069.

Defense supply chain exposure matters acutely. South Korea’s growing defense exports — and close collaboration with US defense primes — mean the L3Harris-type insider scenario is relevant. Any Korean defense contractor employee with access to shared systems is a potential target for recruitment by Operation Zero or similar brokers.

AI model security is the emerging frontier. Korean AI companies developing proprietary models face the same distillation risks as Anthropic and Google. Without robust API abuse detection, months of R&D can be siphoned in weeks.

Threat                        Korean Exposure                               Priority
DPRK IT Worker Infiltration   High (proximity, language, crypto industry)   Critical
Lazarus RaaS Targeting        Medium (healthcare/enterprise targets)        High
AI Model Distillation         Growing (as Korean AI companies scale)        Medium
Defense Insider Threat        High (US-Korea defense partnership)           High

The insider threat — when the attack comes from within | Photo: Pexels

Series Wrap-Up: The 2026 Threat Landscape

Across three parts, a clear picture has emerged. In Part 1, we saw supply chains become the primary entry point — Notepad++ plugins, Go crypto modules, six-month dormancy periods. In Part 2, we watched AI agents open an entirely new attack surface — 341 malicious MCP skills, Claude Code RCE exploits, trust-by-default architectures.

Now, in Part 3, we’ve met who’s walking through those doors. China distilling frontier AI at industrial scale. North Korea running specialized cyber crime units with AI deepfakes. Russia combining APT stealth with insider recruitment. Each country has found its niche in the threat ecosystem.

The connecting thread across all three parts: the attack surface is expanding faster than defenses can adapt. Supply chains, AI agents, and now AI itself have become both weapon and target.

The organizations that will navigate this successfully share a common trait: they treat security not as a technology problem, but as a continuous process of threat modeling against adversaries who are nation-state funded, AI-equipped, and playing the long game.


INSIGHT

The 2026 cyber threat map has three major coordinates: compromised supply chains (Part 1), exploitable AI agents (Part 2), and nation-state actors weaponizing AI at industrial scale (Part 3). They’re no longer separate threats — they’re a connected system.


ACTION

Whether you’re in engineering, security, HR, or management — understand that your organization’s attack surface now includes its AI models, its software supply chain, and its hiring pipeline. Push for AI API abuse monitoring, supply chain integrity verification (SBOMs), and enhanced identity verification for remote hires. The adversaries are specialized. Your defense can’t afford to be siloed.
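SBOM verification, in particular, can start small. A minimal sketch — it assumes a CycloneDX-style JSON SBOM with a `components` array, and the blocklisted package names are placeholders, not real malicious packages — checks each declared component against a known-bad list:

```python
import json

# Placeholder blocklist of known-malicious package names -- in
# practice this would come from a curated threat-intelligence feed.
BLOCKLIST = {"evil-left-pad", "bad-crypto-module"}

def check_sbom(sbom_json):
    """Scan a CycloneDX-style SBOM (JSON string) and return the
    (name, version) of every component on the blocklist."""
    sbom = json.loads(sbom_json)
    hits = []
    for component in sbom.get("components", []):
        name = component.get("name", "")
        if name in BLOCKLIST:
            hits.append((name, component.get("version", "unknown")))
    return hits
```

A name-match pass like this is only a floor; given the dormancy tricks covered in Part 1, a real pipeline would also pin hashes and alert on newly added or version-changed dependencies.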

References

  1. Chinese AI Companies Caught Distilling Claude’s Capabilities at Scale — The Hacker News, 2026-02-24
  2. North Korea’s Labyrinth Chollima Splits into Three Specialized Units — The Hacker News / SEAL, 2026-02-10
  3. UNC1069 Deploys AI Deepfakes in Cryptocurrency Theft Campaign — Google Mandiant / The Hacker News, 2026-02-11
  4. Lazarus Group Joins Medusa Ransomware-as-a-Service — Symantec / The Hacker News, 2026-02-24
  5. APT28 Operation MacroMaze Targets European Institutions — LAB52 / The Hacker News, 2026-02-23
  6. L3Harris Employee Sold Zero-Days to Russia’s Operation Zero — The Hacker News, 2026-02-25
  7. Google Blocks 100K+ AI Model Extraction Attempts on Gemini — Google Threat Intelligence Group, 2026
  8. Norway PST Confirms DPRK IT Worker Infiltration — PST / The Hacker News, 2026-02-10

Frequently Asked Questions

What is nation-state AI weaponization?

It refers to government-backed threat actors using artificial intelligence as both a target and a tool in cyber operations. This includes stealing AI model capabilities through distillation, using AI-generated deepfakes for social engineering, and leveraging AI to accelerate malware development and influence operations.

How does AI model distillation pose a security threat?

Model distillation extracts a frontier AI’s capabilities — reasoning, coding, tool use — into a smaller model without safety guardrails. This means adversaries can create uncensored AI tools capable of generating malware, crafting phishing campaigns, or conducting intelligence analysis at scale without ethical constraints.

Why did North Korea’s Labyrinth Chollima split into three units?

The split into Labyrinth (espionage), Golden (crypto theft), and Pressure (high-value hacks) reflects operational specialization. Each unit can focus on its core mission, improving efficiency — much like a company creating dedicated business divisions instead of running a single generalist team.

How can organizations defend against AI-powered deepfake attacks?

Key defenses include establishing out-of-band verification for high-stakes meetings (confirming identity through a separate channel), implementing multi-factor authentication for all collaboration tools, training employees to recognize ClickFix-style social engineering, and maintaining updated endpoint detection for emerging malware families like DEEPBREATH.

What makes insider threats in the defense sector so dangerous?

As the L3Harris case shows, a single insider with legitimate access can extract zero-day vulnerabilities worth millions. Traditional perimeter security doesn’t help because the threat is already inside. Organizations need behavioral analytics, strict access controls, and regular security clearance reviews to mitigate this risk.

This article is for informational purposes only and does not constitute security advice. All data cited comes from publicly available sources as of March 2026. Individual organizations should consult qualified cybersecurity professionals for threat assessments specific to their environment.
