Anthropic, the maker of the Claude chatbot, has accused three of China's leading artificial intelligence (AI) companies of creating more than 24,000 fraudulent accounts to tap into its system and train their own models.
The three companies—DeepSeek, Moonshot AI, and MiniMax—allegedly used those accounts to send more than 16 million prompts to Claude, siphoning off output to refine their own products, Anthropic said in a Feb. 23 blog post.
The company also expressed concerns about the recent U.S. approval of exports to China of the H200, Nvidia's second-most advanced chip.
“These campaigns are growing in intensity and sophistication,” the San Francisco-based company said.
The tactic, known as “distillation,” involves training a smaller, less powerful “student” model on the outputs, behavior, and knowledge of a much larger, more advanced “teacher” model. This allows the student system to imitate the teacher’s capabilities without the time and money required to develop them independently.
Anthropic said the scale of the three companies' alleged distillation activities varied: DeepSeek generated about 150,000 interactions with Claude, while Moonshot and MiniMax logged more than 3.4 million and 13 million, respectively.
Since many China-based models such as DeepSeek’s R1 do not charge a monthly subscription fee, widespread distillation could make it harder for American providers, such as OpenAI and Anthropic, to monetize products they have spent billions of dollars to build and maintain. That imbalance, the company said, risks eroding the United States’ competitive advantage in AI that export controls are designed to preserve.
Anthropic, which emphasizes its focus on AI safety, further warned that it and other U.S. companies build safeguards to prevent bad actors from using AI to, for example, develop biological weapons or carry out cyberattacks. Illicitly distilled models, by contrast, may lack such guardrails.
“Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” the company warned.
If distilled models are later open-sourced, it added, the risk multiplies as those capabilities “spread freely beyond any single government’s control.”
DeepSeek, Moonshot, and MiniMax did not respond to requests for comment by publication time.
By Bill Pan