Claude Code's Source Code Leaked. Here's What It Tells You About AI Tools.
512,000 lines of leaked TypeScript revealed how one of the most popular AI coding tools actually works — including what it remembers, what it uploads, and what it controls on your machine.

On March 31, security researcher Chaofan Shou posted something unusual on X. He'd found the full source code for Claude Code — Anthropic's AI coding assistant — sitting in the open. 512,000 lines of TypeScript. Not a snippet. The whole thing.
Within hours, developers were combing through it. What they found wasn't a scandal, exactly. But it was revealing. And if you're a business owner using AI tools — or thinking about it — the findings should shape the questions you ask before you hand an AI tool the keys to your codebase.
What Was Actually in the Code
The leaked source revealed several unreleased features that Anthropic hadn't publicly announced.
KAIROS — a headless mode that lets Claude Code run without a human at the keyboard. Think of it as an autonomous assistant that can work through tasks on its own, unattended. Useful for automation. Also means the tool can operate when nobody's watching.
autoDream — a subagent (a smaller AI process running inside the main one) that consolidates session memories. Every time you work with Claude Code, it's building a picture of your projects, your patterns, your codebase. autoDream organizes those memories behind the scenes.
Team Memory Sync — bidirectional syncing of those memories to a server, with a built-in secret scanner. The secret scanner is a good sign — it means Anthropic is trying to catch API keys and passwords before they get uploaded. But the fact that memories sync to a server at all is something most users didn't know about.
CHICAGO — desktop control capabilities. Mouse movement, keyboard input, clipboard access, screenshots. This goes beyond reading and writing code files. It's full desktop interaction.
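A secret scanner of the kind mentioned above typically works by pattern-matching known credential formats before data leaves the machine. The leaked code's actual rules aren't public, so this is a generic sketch in TypeScript (the language Claude Code is written in), with illustrative patterns rather than Anthropic's real ones:

```typescript
// Generic sketch of a pre-upload secret scanner — NOT Anthropic's actual
// implementation. The patterns below are illustrative examples of common
// credential formats; a real scanner would use many more, plus entropy checks.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/,                     // API keys in the "sk-..." style
  /AKIA[0-9A-Z]{16}/,                        // AWS access key IDs
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/,  // PEM private keys
];

// Returns true if the text looks like it contains a credential,
// so the caller can redact it or block the upload entirely.
function containsSecret(text: string): boolean {
  return SECRET_PATTERNS.some((pattern) => pattern.test(text));
}

console.log(containsSecret("const key = 'AKIAIOSFODNN7EXAMPLE';")); // true
console.log(containsSecret("console.log('hello world');"));        // false
```

The design question for any such scanner is what happens on a match — silently redact, warn the user, or refuse to sync. The leaked source doesn't tell us which choice Anthropic made, and that choice matters as much as the pattern list.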
The code also confirmed that files Claude Code examines are uploaded and retained for 30 days by default — or up to 5 years if you've given consent through Anthropic's terms. And the telemetry runs through Statsig, an analytics platform now owned by OpenAI. That last detail raised some eyebrows.
Why This Matters for Business Owners
None of this is necessarily malicious. Anthropic pointed to their SOC 2 compliance — an industry-standard security audit — and offered an environment variable (CLAUDE_CODE_DISABLE_AUTO_MEMORY) to turn off automatic memory. That's a reasonable response.
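If you want to flip that switch, it's an ordinary environment variable. The variable name comes from Anthropic's own guidance; the accepted values aren't documented in the leak itself, so treating "1" as "on" is an assumption based on common convention:

```shell
# Disable Claude Code's automatic session memory for this shell session.
# The variable name is the one Anthropic provided; "1" as the value is an
# assumption based on the usual convention for boolean env vars.
export CLAUDE_CODE_DISABLE_AUTO_MEMORY=1

# Confirm it's set before launching the tool
echo "$CLAUDE_CODE_DISABLE_AUTO_MEMORY"
```

To make it permanent, put the export line in your shell profile (e.g. ~/.bashrc or ~/.zshrc) rather than typing it each session.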
But here's what the leak actually exposed: the gap between what users assumed was happening and what was actually happening.
Most people using Claude Code thought of it as a smart text editor. Type a question, get code back. The reality is closer to a persistent assistant that remembers your work, can control your desktop, syncs data to external servers, and runs background processes you never explicitly started.
That's not unique to Claude Code. It's how most advanced AI tools work now. The difference is that we usually don't get to see the source code. This leak was a rare look under the hood — and what was under there looked a lot like what we described in our piece on securing AI agents. The threat categories are the same: tool access that's broader than expected, persistent memory that can be manipulated, and data leaving your environment without clear consent.
The Questions You Should Be Asking
If you're evaluating any AI tool for your business — not just Claude Code — this leak gives you a concrete checklist.
What does it remember? Does the tool maintain persistent memory across sessions? Can you see what it's stored? Can you delete it? If your AI vendor can't answer these questions clearly, that's a problem. Memory is where context becomes the competitive advantage — but it's also where sensitive data accumulates.
Where does your data go? When the tool reads your files, do they stay on your machine or get uploaded? For how long? Who has access? The 30-day retention in Claude Code's source wasn't hidden — it's in Anthropic's terms of service. But most users never read those terms. Now they have a reason to.
What can it control? File access is one thing. Desktop control — mouse, keyboard, clipboard — is another category entirely. Know the difference, and know which level of access you've actually granted.
Who gets the telemetry? Analytics are normal. But analytics flowing through a platform owned by a direct competitor is worth understanding. Ask what data is collected, where it goes, and who processes it.
Can you turn things off? Anthropic's offer of an opt-out environment variable is better than nothing. But an opt-out isn't the same as an opt-in. The best AI governance frameworks start with minimal access and add capabilities as needed — not the other way around.
The Bigger Picture
Within days of the leak, developers started porting Claude Code's agent logic to Python, Rust, and other languages. The proprietary patterns became open knowledge. That's the other lesson here — the value in AI tools is shifting from the model itself to the orchestration around it. How the agent manages memory, delegates to subagents, decides when to act autonomously. Those are engineering decisions, and now everyone can see how Anthropic made theirs.
For business owners, this means the moat around any single AI vendor is thinner than you think. The tool you're using today might be rebuilt in open source tomorrow. What matters more than the tool is how well you understand what it does when you're not looking.
That understanding starts with evaluating your AI vendors the same way you'd evaluate any contractor who has access to your office after hours. Not with suspicion — with clear expectations, documented agreements, and the ability to verify.
The source code leak didn't reveal that Claude Code was doing anything nefarious. It revealed that most of us never thought to ask what it was doing at all.