United States artificial intelligence company Anthropic has accused three leading Chinese AI startups of improperly extracting capabilities from its proprietary Claude model to accelerate the development of their own systems. The allegations, which target DeepSeek, MiniMax, and Moonshot AI, have intensified an already heated debate over intellectual property, competitive practices, and national security in the rapidly evolving AI sector.
According to Anthropic, the companies created more than 24,000 fraudulent user accounts and generated over 16 million interactions with Claude in order to train rival models through a technique known as distillation. The company argues that this large-scale extraction effort not only violated its terms of service but also undermines safeguards embedded in its technology.
Allegations of Large-Scale Distillation
Distillation is a widely recognized method in artificial intelligence development, in which a smaller "student" model is trained to reproduce the outputs of a larger "teacher" system, yielding lighter, more cost-efficient models. However, most major proprietary developers, including Anthropic, explicitly prohibit third parties from using their models' outputs to train competing systems.
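The core idea can be sketched in a few lines. In the classic formulation (Hinton et al.), the student minimizes the KL divergence between its output distribution and the teacher's temperature-softened distribution; distillation against a commercial API more commonly trains on generated text rather than raw probabilities, and nothing here reflects any company's actual pipeline. The logits below are invented for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences among non-top answers ("dark knowledge").
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence KL(teacher || student) over softened distributions;
    # the student's weights are updated to drive this toward zero.
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits for a single 4-way query.
teacher = np.array([4.0, 1.5, 0.5, -2.0])
aligned_student = np.array([3.8, 1.4, 0.6, -1.9])  # mimics the teacher
random_student = np.array([0.0, 0.0, 0.0, 0.0])    # uninformed baseline

print(distillation_loss(aligned_student, teacher) <
      distillation_loss(random_student, teacher))  # True: closer match, lower loss
```

Repeated at the scale alleged here, millions of prompt-response pairs, this is how a rival model could absorb a frontier system's behavior without reproducing its training run.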
Claude, Anthropic’s flagship AI model, is not officially available in mainland China. The company claims that the accused firms circumvented restrictions by creating extensive networks of accounts designed to mask automated interactions. Anthropic contends that such practices go beyond legitimate research and amount to systematic misuse of intellectual property.
The controversy follows similar claims by OpenAI, which recently alleged that DeepSeek and other Chinese firms may have distilled capabilities from its ChatGPT models. Those concerns were reportedly outlined in a memo addressed to the US House Select Committee on China, underscoring the geopolitical dimension of the dispute.
None of the three companies has publicly rebutted the accusations: DeepSeek has issued no response, and MiniMax and Moonshot AI have likewise declined to comment. The silence has left industry observers closely watching how regulators and policymakers might respond.
National Security and Export Controls
Anthropic maintains that the issue extends beyond commercial competition. The company warns that models derived through unauthorized distillation may not include critical safety guardrails designed to prevent misuse. Without such protections, advanced AI systems could be deployed in cybercrime, disinformation campaigns, or other malicious operations.
The firm argues that these concerns reinforce the rationale behind US export controls on advanced semiconductor technology. Restrictions on high-performance chips are intended to limit the development of cutting-edge AI systems by strategic competitors. Anthropic asserts that the alleged reliance on capabilities extracted from American frontier models suggests that, without access to advanced hardware, innovation alone has not been enough to close the gap.
DeepSeek’s rise has already unsettled assumptions within the technology sector. When the company unveiled a powerful model approaching the performance of leading Western systems, it challenged prevailing beliefs that only massive computing resources could produce frontier-level AI. Industry benchmarks, including rankings published by Artificial Analysis, have placed several Chinese-developed models among the top global performers.
Anthropic contends that exposing the scale of alleged distillation attempts shows that export controls remain effective. The company insists that sustained leadership in AI requires not only innovation but also secure supply chains and enforceable safeguards around proprietary technology.