
Stan Girard builds AI infrastructure for a living. His company, Quivr, went through Y Combinator's Winter 2024 batch and has 28,000 stars on GitHub. He is not a hobbyist. He is not a script kiddie looking for attention. He is someone who builds the kind of tools that other companies depend on.
So when Girard decided to look inside Claude Code — the AI coding tool used by hundreds of thousands of developers — what he found was not a prank or a stunt. It was a discovery that matters for anyone who uses AI tools in their business.
He found a hidden door.
What He Actually Found
Claude Code is a command-line tool. You type instructions in a terminal, and it writes code, edits files, runs tests, and manages projects. It is one of the most capable AI development tools available today. We covered it in detail when someone built a browser interface on top of it.
Girard's team reverse-engineered the Claude Code binary — which means they took the compiled program apart to see how it works internally. Inside, they found an undocumented flag called --sdk-url.
When you use this flag, Claude Code stops talking to the terminal. Instead, it connects to a WebSocket server — a persistent network connection — at whatever address you specify. It sends its full list of capabilities, then waits for instructions over that connection.
In three days, Girard built "The Vibe Companion" — a full browser interface for Claude Code using this hidden flag. It got 772 stars on GitHub. The announcement tweet got 2,291 likes.
But the stars and likes are not the interesting part. The interesting part is what this tells us about the tool itself.
A Hidden Door in Plain Sight
Let's make this concrete. Imagine you hire a contractor to install a new lock on your front door. The lock works great. It is secure. You trust it.
Then someone discovers that the lock also has a hidden wireless receiver. If you send the right signal to it, the lock opens from across the street. The lock manufacturer built the receiver on purpose — they sell a separate product that uses it. But they never told you the receiver was there. They never documented it. And anyone who figures out the signal can use it.
That is roughly what happened with Claude Code.
The --sdk-url flag is not a bug. It is not a vulnerability someone exploited. It is a feature that Anthropic built into Claude Code deliberately. They have an official product — the Claude Agent SDK — that uses the exact same communication protocol. The hidden flag and the official SDK speak the same language: NDJSON (newline-delimited JSON), with 13 message types covering everything from sending prompts to approving file changes to managing sessions.
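NDJSON framing itself is easy to picture: one JSON object per line, separated by newlines. Here is a minimal sketch of encoding and decoding such a stream in Python. The message types shown are invented for illustration and are not the actual Claude Code schema:

```python
import json

# Hypothetical messages, stand-ins for the real protocol's 13 types.
messages = [
    {"type": "user_prompt", "text": "add a unit test for parse()"},
    {"type": "permission_request", "tool": "file_edit", "path": "tests/test_parse.py"},
    {"type": "permission_response", "approved": True},
]

# Encode: serialize each message to one line, join with "\n".
stream = "\n".join(json.dumps(m) for m in messages) + "\n"

# Decode: split on newlines, parse each non-empty line independently.
decoded = [json.loads(line) for line in stream.splitlines() if line.strip()]
assert decoded == messages
```

The appeal of the format is that each line is a complete, independently parseable message, which makes it a natural fit for a long-lived streaming connection like a WebSocket.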
Anthropic built Claude Code with external connectivity in mind. They just did not tell anyone.
Why a Business Owner Should Care
If you are not a developer, you might be thinking: so what? A developer found a hidden feature in a developer tool. That sounds like a developer problem.
It is not. Here is why.
Claude Code is an AI tool that can read your files, write code, run commands on your computer, and make changes to your projects. When it operates through the terminal, you are physically present. You see what it is asking to do. You approve or deny each action. There is a human in the loop at every step.
The WebSocket mode changes that equation. When Claude Code connects to a network service instead of a terminal, several things become possible — and not all of them are good.
First, authentication is thin. The WebSocket connection itself has no password, no login, no verification that the person on the other end is supposed to be there. For local use on your own laptop, that is probably fine. For anything exposed to a network, even a local office network, it is a problem. The one credential involved, a bearer token, is sent in the WebSocket headers, which means anyone monitoring unencrypted traffic can intercept it.
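To see why a header-borne token matters, consider what a WebSocket handshake looks like on the wire. This sketch builds an illustrative HTTP upgrade request; the path, port, and token are made up. Over plain ws:// with no TLS, every byte of it crosses the network as readable text:

```python
# Illustrative WebSocket upgrade request (not a real session or token).
handshake = (
    "GET /session HTTP/1.1\r\n"
    "Host: 127.0.0.1:8765\r\n"
    "Upgrade: websocket\r\n"
    "Connection: Upgrade\r\n"
    "Authorization: Bearer sk-example-not-a-real-token\r\n"  # visible on the wire
    "\r\n"
)

# Anyone capturing packets along the path can read the token directly.
assert "Bearer sk-example-not-a-real-token" in handshake
```

TLS (wss:// rather than ws://) would encrypt the headers in transit, but it does nothing about a token that is reused, logged, or visible to other software on the same machine.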
Second, there is a mode called bypassPermissions. It does exactly what the name suggests — auto-approves every tool call Claude Code makes. File edits, command execution, file deletion, system modifications. All approved automatically with no human review. In a terminal, you are at least present when Claude Code asks for permission. In a browser running in a background tab with bypass mode enabled, you have given an AI tool complete authority to do whatever it decides to do on your machine.
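The risk is easiest to see as a few lines of pseudo-logic. This is a hypothetical sketch, not Anthropic's implementation; the function names and message shapes are assumptions made for illustration:

```python
# Hypothetical permission gate, illustrating what a bypass mode removes.
def should_execute(tool_call: dict, bypass_permissions: bool, ask_user) -> bool:
    """Decide whether a requested tool call is allowed to run."""
    if bypass_permissions:
        # Every action is approved automatically. No human ever sees it.
        return True
    # Normal mode: a human reviews the request and decides.
    return ask_user(f"Allow {tool_call['tool']}: {tool_call.get('command', '?')}?")

# With bypass enabled, even a destructive call runs unreviewed,
# regardless of what the human would have said:
risky = {"tool": "bash", "command": "rm -rf ./build"}
assert should_execute(risky, bypass_permissions=True, ask_user=lambda _: False)
```

The one `if` branch is the entire difference between "human in the loop" and "AI with free rein": everything downstream of that branch behaves identically either way.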
Third, this pattern has precedent — and not the good kind. CVE-2025-52882 was a WebSocket authentication bypass found in Claude IDE extensions, rated 8.8 out of 10 on the severity scale. The vulnerability allowed attackers to connect to the WebSocket interface and send commands as if they were the authorized user. The hidden --sdk-url flag creates a similar surface area. Different code, same category of risk.
The Bigger Picture: What You Cannot See
Here is the part that goes beyond Claude Code specifically.
Every AI tool your business uses has internals you cannot see. ChatGPT, Claude, Gemini, Copilot — each one is a complex piece of software with communication protocols, data handling routines, permission systems, and failure modes that are invisible to the end user.
Most of the time, that is fine. You do not need to understand how your car's engine management system works to drive safely. But you do need to know that the engine management system exists, that it can malfunction, and that you should not ignore the check engine light.
AI tools are reaching a level of integration where they are no longer just answering questions in a chat window. They are reading your documents, accessing your calendars, connecting to your email, interfacing with your business systems. We wrote about how to secure AI agents before they become a liability, and the core principle has not changed: the more access you give a tool, the more important it is to understand how that tool works.
The Claude Code discovery is a check engine light. Not because Claude Code is unsafe — Anthropic has a strong security team and a track record of responsible AI development. But because it demonstrates that even well-built tools from reputable companies can have undocumented capabilities that change the security profile in ways the user does not expect.
What Anthropic Did Right
Credit where it is due. Anthropic has handled several aspects of this well.
The official Claude Agent SDK uses the same NDJSON protocol that Girard discovered. This means the communication protocol is not a secret hack — it is an officially supported interface. Anthropic built it on purpose, documented it in the SDK, and provides a legitimate way to use it. The protocol itself is well-designed: 13 message types covering the full range of operations, clean separation between control and data, structured error handling.
Anthropic has also shown they take unauthorized access seriously. In April 2025, they sent a DMCA takedown to a previous project that reverse-engineered Claude Code's internals. In January 2026, they "tightened safeguards against spoofing the Claude Code harness." They are aware that people are looking under the hood and they are actively responding.
The Agent SDK gives developers a legitimate, supported path to build the same kinds of browser interfaces and multi-session tools that the hidden flag enables. If you want to build on top of Claude Code's capabilities, there is a right way to do it — with Anthropic's blessing, documentation, and support guarantees.
What Anthropic Got Wrong
The hidden flag is the problem.
If Anthropic built Claude Code with WebSocket connectivity in mind — and the evidence says they did — then that capability should be documented, secured, and subject to the same scrutiny as every other feature. An undocumented network interface in a tool that has file system access and command execution privileges is a security concern regardless of the vendor's intentions.
The bypassPermissions mode makes it worse. A mode that auto-approves all tool calls should not exist without explicit, prominent documentation of what it does and what the risks are. The name alone tells you what it does, but not everyone who finds it will understand the implications. And in security, the gap between "technically available" and "safely used" is where incidents happen.
The lack of built-in authentication on the WebSocket connection is a design choice that prioritizes convenience over security. For a personal development tool, that tradeoff might be acceptable. For a tool that is increasingly being used in business contexts — team environments, client projects, production workflows — it is not.
These are not fatal flaws. They are design decisions that reveal a gap between how Claude Code was originally conceived (a personal developer tool running on your laptop) and how it is actually being used (a business-critical tool embedded in team workflows touching real projects and real data).
The Lesson for Your Business
This is not a story about Claude Code being dangerous. It is a story about what happens when smart people look inside tools that millions of others use without question.
Every AI tool you adopt in your business is a trust decision. You are trusting the vendor to build securely, to document honestly, and to respond quickly when problems arise. Most of the time, that trust is warranted. But trust without verification is not a security strategy. It is hope.
Here are the questions you should be asking about any AI tool your business uses.
What access does this tool have? Can it read your files? Execute commands? Access the internet? Connect to other services? Send emails? The more access a tool has, the more damage it can do if something goes wrong. We wrote a detailed breakdown of the specific threat categories and how to guard against each one.
What happens when it connects to a network? Most AI tools call home to a server for processing. What data are they sending? Is the connection encrypted? Is the authentication robust? If you cannot answer these questions, you do not fully understand what the tool is doing.
Does the vendor document everything the tool can do? Undocumented features are undocumented risks. If a vendor's tool has capabilities they have not told you about, that is a transparency problem — even if the capability was built for legitimate reasons.
What is the vendor's track record on security disclosures? When vulnerabilities are found, does the vendor respond quickly and transparently? Do they have a security team? Do they publish security advisories? Anthropic scores well on this — they have a responsible disclosure process and a history of responding to reports. Not every AI vendor does.
Who else is looking under the hood? Security researchers, open-source contributors, and curious developers like Girard serve an important function: they find things the vendor missed or chose not to disclose. A healthy security ecosystem includes external scrutiny. If a tool discourages or penalizes external security research, that is a red flag.
The Paradox of AI Security
Here is the uncomfortable truth at the center of this story.
The same capabilities that make AI tools useful — reading your documents, connecting to your systems, executing actions on your behalf — are the same capabilities that make them risky when something goes wrong. You cannot have a tool that automates your workflow without giving it access to your workflow. You cannot have an AI agent that sends emails on your behalf without giving it access to your email.
The question is not whether to use these tools. The question is whether to use them with open eyes.
Blind trust is not a security strategy. But neither is refusing to adopt tools that could genuinely improve your business. The answer — boring as it sounds — is due diligence. Understand what your tools can do. Understand what they are doing. Ask questions. Read the documentation. And when someone like Stan Girard finds a hidden door in a tool you use, pay attention to what that door reveals.
Not because the tool is broken. But because knowing where the doors are is the first step to making sure only the right people walk through them.
What This Means Going Forward
The AI tools your business uses today are simpler than the ones you will use next year. The integrations will get deeper. The access will get broader. The capabilities will get more powerful. Every step in that direction is also a step toward more complex security surfaces, more undocumented behaviors, and more trust decisions.
The businesses that navigate this well will be the ones that treat AI tools the way they should treat any tool with access to sensitive systems: with respect, with oversight, and with the understanding that "it is from a reputable company" is the beginning of security due diligence, not the end of it.
Stan Girard looked inside Claude Code and found a hidden communication protocol. What he really found was a reminder: the tools we trust most are the ones we need to understand best.
That is not a reason to stop using AI tools. It is a reason to start asking better questions about them.
Blue Octopus Technology helps businesses adopt AI tools without blind spots. If you want to understand the security profile of the AI tools your team is using, let's talk.

