This morning I was scrolling through Twitter when I saw security researcher Chaofan Shou (@Fried_rice) post something wild. Anthropic accidentally dropped the entire unobfuscated source code for their Claude Code CLI tool right into the public's lap. I am talking over 512,000 lines of pure TypeScript.
No hackers were involved. Nobody breached a server. It was just a regular npm deployment gone wrong.
The source map trap
When you ship a JavaScript or TypeScript tool, you usually bundle and minify the code. Source maps are these handy files that map your ugly, minified code back to your original source. They make debugging in production a lot easier.
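As a concrete illustration (the file names here are made up), the tell-tale marker is a sourceMappingURL comment at the end of a minified bundle. A quick grep over your build output before publishing will surface any map references you didn't mean to ship:

```shell
# A minified bundle typically ends with a sourceMappingURL comment
# pointing at its map file. Create a tiny example to illustrate:
cat > bundle.min.js <<'EOF'
var a=function(n){return n+1};
//# sourceMappingURL=bundle.min.js.map
EOF

# Before publishing, grep your build output for map references:
grep -o 'sourceMappingURL=[^ ]*' bundle.min.js
```

The grep prints sourceMappingURL=bundle.min.js.map, which is exactly the kind of pointer that, in Anthropic's case, led straight to the full source archive.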
Anthropic published version 2.1.88 of the Claude Code npm package. In that specific release, they accidentally included a source map file which pointed directly to a zip archive of the full source. This archive was sitting in an Anthropic R2 storage bucket, completely open for download without any authentication required.
It is one of those mistakes that makes every developer sweat a little. You type your publish command, wait for the progress bar, and go grab a coffee, totally unaware you just open-sourced your company's crown jewels.
What was actually inside the archive?
The sheer scale of the leak is what gets me. We aren't looking at a small wrapper script here. The archive contained over 1,900 files.
Security researchers immediately started digging through the repository. Here is a rough list of what they found:
- The complete memory architecture for how Claude Code handles context.
- Orchestration logic for agent workflows.
- Around 44 hidden features and flags that aren't active in the public build.
Seeing the raw orchestration logic is fascinating. It shows exactly how Anthropic thinks about structuring complex agent tasks in the terminal. I genuinely don't know how to feel about seeing proprietary code this way, but strictly from a learning perspective, it is an absolute goldmine. Developers who build AI agents are always looking for better patterns to handle memory and context, and Anthropic just accidentally provided a masterclass.
Deployment pipelines are scary
We spend so much time worrying about zero-day exploits and sophisticated phishing attacks. But at the end of the day, a misconfigured .npmignore file or a stray build script can do just as much damage.
This highlights the real danger of packaging mistakes. It is so easy to accidentally include test files, environment variables, or in this case, a complete source map pointing to an unprotected bucket. Many teams assume .gitignore covers their npm publishes, but npm has its own rules: it only falls back to .gitignore when no .npmignore file exists, and a files field in package.json acts as an allowlist that overrides both. If you aren't explicitly inspecting the output of your build step before it hits the registry, you are flying blind.
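The safest pattern is an explicit files allowlist in package.json, so npm only publishes what you name. Here is a minimal sketch with a made-up package:

```shell
# An explicit "files" allowlist means npm ships only what you name
# (plus a few always-included files like package.json and README).
# The package name and layout here are illustrative:
cat > package.json <<'EOF'
{
  "name": "my-cli",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": ["dist"]
}
EOF

# Preview exactly what would be packed, without publishing anything:
command -v npm >/dev/null && npm pack --dry-run || echo "npm not on PATH; skipping dry run"
```

With the allowlist in place, stray .env files, test fixtures, and source maps outside dist never make it into the tarball, no matter what your .gitignore says.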
Anthropic will definitely fix their build pipeline after this. But it makes you wonder how many other CLI tools out there are quietly leaking more information than they should.
A wake-up call for the ecosystem
This incident is a great reminder that the most sensitive parts of an AI tool aren't always locked behind a robust API endpoint. Sometimes the vulnerabilities lie in the boring, everyday infrastructure we take for granted. Continuous integration and continuous deployment pipelines are complex, and it only takes one missing flag or one over-permissive bucket policy to expose months of engineering work.
The AI community is moving fast. Startups and enterprise teams alike are pushing code to production at breakneck speeds to keep up with the competition. But speed often comes at the cost of operational security.
Conclusion
Mistakes happen to the best engineering teams in the world. Anthropic's leak is a tough lesson in deployment security, but it is also a rare peek under the hood of a major AI tool.
If you maintain an npm package, take five minutes today to double-check your publish configuration. Run npm pack locally and look at exactly what is going into your tarball. It might save you from being the next trending topic on Twitter.
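That five-minute check can even be automated. Here is a rough sketch of scanning a packed tarball for files that should never ship; the build output is simulated here, but in real life you would run npm pack first and scan the resulting .tgz:

```shell
# Simulate a packed npm tarball that accidentally includes a source map
# (in real life: run `npm pack` and scan the .tgz it produces):
mkdir -p package/dist
echo 'console.log("hi")' > package/dist/index.js
echo '{"version":3}' > package/dist/index.js.map   # the leaked artifact
tar -czf my-cli-1.0.0.tgz package

# Fail loudly if the tarball contains map or env files:
if tar -tzf my-cli-1.0.0.tgz | grep -Eq '\.(map|env)$'; then
  echo "WARNING: sensitive files in tarball"
fi
```

Wiring a check like this into CI before the publish step turns a potential headline into a failed build.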