The leak was caused by human error, not a security breach, the company said.
Anthropic said on March 31 that it accidentally leaked internal source code for its popular artificial intelligence (AI) coding tool, Claude Code.
The leak stemmed from version 2.1.88 of the @anthropic-ai/claude-code package on the npm registry. It included a 59.8-megabyte source map file that exposed roughly 512,000 lines of unobfuscated TypeScript code across about 1,900 to 2,300 files, an Anthropic spokesperson said in a statement. The code quickly spread to GitHub repositories that have since been cloned tens of thousands of times.
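A single source map can expose this much code because the format (in its widely used revision 3) may embed the complete original source files in an optional sourcesContent field alongside the compiled output. The sketch below, which is illustrative and not based on the actual leaked file, shows how the embedded sources can be read back out of a map:

```python
import json

def embedded_sources(map_text: str) -> dict[str, str]:
    """Return {source path: original source text} from a source map.

    Revision-3 source maps may carry the full original sources in the
    optional ``sourcesContent`` array, parallel to ``sources``. A map
    shipped inside a published package therefore exposes the
    unobfuscated code itself, not just file names.
    """
    m = json.loads(map_text)
    sources = m.get("sources", [])
    contents = m.get("sourcesContent") or []
    return {s: c for s, c in zip(sources, contents) if c is not None}

# Hypothetical example map, mimicking TypeScript compiler output
# produced with source maps enabled (file names are invented):
example = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/cli.ts"],
    "sourcesContent": ["export const main = () => console.log('hi');"],
    "mappings": "AAAA",
})
recovered = embedded_sources(example)
```

In this toy case, recovered maps "src/cli.ts" to its full original TypeScript text, which is why excluding .map files from a published package matters.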
Anthropic called the incident a packaging mistake rather than a hack.
“Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” the statement read. “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
Anthropic did not immediately respond to a request for further comment.
The exposed material covers the command-line interface and developer tools, not the underlying large language model weights or training data.
Anthropic has wrestled with intellectual property and security issues for more than a year.
In February, the company accused three major Chinese AI developers—DeepSeek, Moonshot AI, and MiniMax—of creating more than 24,000 fraudulent accounts to extract 16 million prompts from Claude models and train their own systems, a tactic known as “distillation.” The technique trains a smaller, less powerful “student” model on the outputs, behavior, and knowledge of a much larger, more advanced “teacher” model, allowing the student to imitate the teacher’s capabilities without the time and money required to develop them independently.
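The mechanics of distillation can be reduced to a toy numerical sketch: query a “teacher,” record its answers, and fit a cheap “student” to those answers alone. Everything below is illustrative—the models, data, and names are invented and bear no relation to any actual system mentioned in this article:

```python
# Toy distillation sketch: the student is fitted only to the teacher's
# outputs, never to its internals or original training data.

def teacher(x: float) -> float:
    # Stand-in for a large model's response to "prompt" x.
    return 3.0 * x + 1.0

# Step 1: query the teacher across many prompts and record the answers.
prompts = [i / 10 for i in range(100)]
answers = [teacher(x) for x in prompts]

# Step 2: fit a small student model (here, y = a*x + b) to the
# recorded (prompt, answer) pairs by ordinary least squares.
n = len(prompts)
sx, sy = sum(prompts), sum(answers)
sxx = sum(x * x for x in prompts)
sxy = sum(x * y for x, y in zip(prompts, answers))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def student(x: float) -> float:
    # Imitates the teacher using only the fitted parameters.
    return a * x + b
```

After fitting, the student reproduces the teacher’s behavior on new inputs despite never seeing its weights—which is why, at scale, access to a model’s outputs alone can be commercially valuable.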
Since many China-based providers, such as DeepSeek with its R1 model, do not charge a monthly subscription fee, widespread distillation could make it harder for U.S. providers, such as OpenAI and Anthropic, to monetize products they have spent billions of dollars to build and maintain. That imbalance, the company said, risks eroding the United States’ competitive advantage in AI that export controls are designed to preserve.
In November 2025, Anthropic warned that hackers had weaponized its Claude Code tool to infiltrate at least 30 organizations, scanning for high-value databases and cybersecurity weaknesses.