
Three Rivals, One Threat: Inside the Frontier Model Forum's Activation Against Chinese Distillation

OpenAI, Anthropic, and Google begin sharing threat intelligence to counter adversarial model extraction by Chinese laboratories

By Negotiate the Future

4/13/26

Three companies that spend most of their waking hours trying to outperform each other have agreed, for the first time, to pool threat intelligence against a common adversary. OpenAI, Anthropic, and Google announced on April 6 that the Frontier Model Forum, the industry nonprofit they co-founded with Microsoft in 2023, has been activated as an operational threat-detection center targeting adversarial distillation by Chinese AI laboratories.

The move marks a structural escalation in the standoff between American frontier labs and a cluster of Chinese firms accused of systematically extracting model capabilities without authorization or payment.

Anthropic's February 2026 report provided the evidentiary backbone. The company documented roughly 16 million unauthorized exchanges conducted through approximately 24,000 fraudulently created accounts, tracing the activity to three Chinese laboratories: MiniMax, Moonshot AI, and DeepSeek. The scale and specificity of the data transformed what had been an open secret in the industry into a formally documented threat.

The three firms pursued markedly different extraction strategies. MiniMax accounted for the bulk of the volume, logging approximately 13 million exchanges in what appeared to be a broad product-cloning operation. Moonshot AI, the company behind the Kimi assistant, ran roughly 3.4 million queries. DeepSeek, by contrast, conducted only about 150,000 exchanges, but its query patterns were almost entirely focused on how Claude handles refusals and policy-sensitive prompts. That signature suggests alignment research, not commercial replication.

Adversarial distillation works by creating thousands of fake accounts to systematically query frontier models, then using the captured outputs to train competing systems. The technique differs from legitimate distillation, where a smaller model is trained on a larger one's outputs under license, in that it operates without disclosure, authorization, or compensation. U.S. officials estimate the practice costs American AI labs billions annually.
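The extraction loop described above can be sketched in a few lines. This is a toy illustration only: the `teacher_model` stub and the prompt/completion record format are assumptions for demonstration, not any lab's actual pipeline or API.

```python
# Toy sketch of the distillation loop: query a "teacher" model, capture its
# outputs, and accumulate (prompt, completion) pairs as supervised training
# data for a "student" model. The teacher here is a local stub standing in
# for a hosted frontier-model API (an illustrative assumption).

def teacher_model(prompt: str) -> str:
    # Stand-in for a frontier model's API response.
    return f"answer::{prompt}"

def collect_distillation_data(prompts, teacher):
    """Capture teacher outputs as supervised training pairs for a student."""
    return [{"prompt": p, "completion": teacher(p)} for p in prompts]

prompts = ["explain topic A", "summarize topic B", "respond to edge case C"]
dataset = collect_distillation_data(prompts, teacher_model)
# Each captured pair would then serve as a fine-tuning example for the
# student model; doing this at scale against a commercial API without
# authorization is what distinguishes the adversarial variant.
```

The difference between legitimate and adversarial distillation is not in this loop itself but in whether the teacher's operator has licensed the outputs for training.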

The countermeasures now being coordinated through the Forum include behavioral detection systems designed to flag suspicious query patterns, more aggressive account verification requirements, new rate limits and API access controls, and tighter terms-of-service restrictions on training usage. The intelligence-sharing arrangement means that attack patterns identified by one company can be cross-referenced and blocked across all participating platforms.
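One of the simpler countermeasures named above, flagging accounts whose query volume is anomalous, can be sketched with a sliding-window monitor. The window length, threshold, and class name here are illustrative assumptions; production systems combine many more signals (query similarity, account age, payment metadata) than raw volume.

```python
from collections import defaultdict, deque

class QueryRateMonitor:
    """Flag accounts whose query count in a sliding time window exceeds a
    threshold. A minimal sketch of volume-based behavioral detection; the
    defaults below are arbitrary illustrative values."""

    def __init__(self, window_seconds: float = 60.0, max_queries: int = 100):
        self.window = window_seconds
        self.max_queries = max_queries
        self.history = defaultdict(deque)  # account id -> query timestamps

    def record(self, account: str, timestamp: float) -> bool:
        """Record one query; return True if the account now looks suspicious."""
        q = self.history[account]
        q.append(timestamp)
        # Evict timestamps that have fallen out of the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.max_queries
```

Cross-referencing flags like these across companies, rather than keeping them siloed, is the operational change the Forum's intelligence-sharing arrangement introduces.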

The national security dimension extends beyond commercial losses. Distilled models typically lack the safety filters and alignment training built into their source systems. A model cloned through adversarial distillation inherits capability without inheriting constraint, a combination that has drawn concern from policymakers on both sides of the aisle.

The Forum's pivot from research and policy to active threat operations reflects a broader reckoning within the American AI sector. For years, the major labs treated model security as an internal engineering problem. The February disclosures, and the coordinated response announced this month, suggest the industry now views unauthorized extraction as a strategic threat requiring collective defense.

Whether the countermeasures prove durable against well-resourced adversaries remains an open question. Detection systems can be evaded. Account verification can be circumvented. Rate limits can be distributed across larger numbers of endpoints. The Forum's effectiveness will depend less on any single technical fix than on whether the participating companies sustain the operational discipline required to share intelligence continuously, not just in response to headlines.
