Karpathy Called OpenClaw a 'Vibe Coded Monster' — He's Right

By Blue Octopus Technology

We have been writing about OpenClaw's security problems since January. We covered the risks of open-source AI agents when it felt like nobody wanted to hear it. We laid out the specific vulnerabilities in vibe-coded applications while the rest of the internet was posting screenshots of what their AI agents could do.

It felt a bit like being the one person at the party saying "maybe we should check the stove."

Then Andrej Karpathy weighed in. And suddenly the stove is on fire and everyone is paying attention.

Who Is Andrej Karpathy (And Why His Opinion Matters)

If you are not plugged into the AI world, the name might not ring a bell. So here is why this matters.

Andrej Karpathy was the Senior Director of AI at Tesla, where he led the team building the self-driving system. Before that, he was one of the founding members of OpenAI — the company behind ChatGPT. He has a PhD from Stanford. He taught Stanford's most popular deep learning course. When people in AI disagree about something, Karpathy is one of the handful of voices that makes both sides stop and listen.

He is not a hot-take artist. He is not a hype man. He is a serious researcher who builds things.

So when Karpathy looked at OpenClaw and called it a "400K lines of vibe coded monster," that was not a throwaway comment. That was one of the most credible people alive in artificial intelligence saying: this codebase has a fundamental problem.

What "400K Lines of Vibe Coded Monster" Actually Means

Let's break that phrase down, because it packs a lot into six words.

400K lines. Four hundred thousand lines of code. For context, the original version of Minecraft — a game that generates infinite 3D worlds — was about 200,000 lines. The Linux kernel, which runs most of the internet's servers, started at about 10,000 lines and grew deliberately over decades. OpenClaw hit 400,000 lines in months.

Size alone is not the problem. Complex software is often large. The problem is how those 400,000 lines got there.
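For readers curious where a headline figure like "400K lines" comes from: it is usually a raw line count across source files, produced by tools like `cloc` or a short script. A minimal sketch (toy throwaway files, not OpenClaw's actual repository) shows how blunt the metric is — comments, blank lines, and generated boilerplate all count the same:

```python
import tempfile
from pathlib import Path

def count_lines(root: Path, pattern: str = "*.py") -> int:
    # Raw line count across matching files. Comments, blanks, and
    # generated boilerplate all count equally, which is one reason a
    # headline number like "400K lines" needs context.
    return sum(len(p.read_text().splitlines()) for p in root.rglob(pattern))

# Toy demo on throwaway files (not a real repository).
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "a.py").write_text("x = 1\ny = 2\n")
    (root / "sub").mkdir()
    (root / "sub" / "b.py").write_text("z = 3\n")
    print(count_lines(root))  # 3
```

The number is easy to produce and easy to misread — which is why size alone is only the start of the story.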

Vibe coded. This is the term Karpathy himself coined in early 2025. Vibe coding means using AI to generate code based on descriptions — "make a login page," "add a payment system," "connect to the email API" — without the person fully understanding what the AI produces. You describe the vibe. The AI writes the code. You run it. If it works, you move on.

For a personal project or a quick prototype, vibe coding can be fine. For a tool that connects to your email, your calendar, your messaging apps, and has the ability to take actions on your behalf? Four hundred thousand lines of code that no single human fully understands is not fine. It is a liability.

Monster. Karpathy chose that word deliberately: a codebase too large, too complex, and too poorly understood to be maintained or secured reliably.

Nanoclaw: The Alternative He Endorsed

Karpathy did not just criticize OpenClaw. He pointed to an alternative: a project called Nanoclaw. Where OpenClaw is 400,000 lines of AI-generated code sprawling across hundreds of files, Nanoclaw aims to be a stripped-down, human-readable implementation of the same core idea.

The philosophy behind Nanoclaw is straightforward. Instead of letting AI generate code for every feature someone asks for, you write clean, understandable code for the features that actually matter. You keep the codebase small enough that a developer can read the whole thing. You prioritize security and maintainability over feature count.

This is not a new idea in software engineering. It is, in fact, one of the oldest and most battle-tested ideas in the field: simpler is better. Smaller codebases have fewer bugs, fewer security holes, and are easier to audit. The industry has known this for fifty years.

What is new is that we apparently need to be reminded of it, because the speed of AI code generation has made it easy to forget.

Why This Validates What We Have Been Saying

We are not going to pretend we do not feel some vindication here. We have been making this exact argument for months.

When we wrote about what OpenClaw is and why everyone is talking about it, we were blunt: the technology is impressive, but the security situation is a mess. A critical remote code execution vulnerability. Hundreds of malicious add-ons on the official skill marketplace. An architecture that gives AI agents broad access to your personal data with insufficient safeguards.

When we wrote about how to secure AI agents, the core message was the same: these tools are powerful, but power without guardrails is dangerous.

But we are a small consultancy in Asheville. Karpathy is one of the most influential people in AI. When he says it, the industry listens differently.

That is not bitterness. That is just how information spreads. And if his comment gets more businesses to think carefully before connecting AI agents to their systems, then the outcome is good regardless of who said it first.

The Deeper Problem: Speed Without Understanding

Here is what concerns us most, and it goes beyond OpenClaw specifically.

Vibe coding is accelerating. The tools are getting better. AI can now generate entire applications from a paragraph of instructions. That is genuinely impressive, and for certain use cases — prototyping, internal tools, personal projects — it is a real productivity gain.

But there is a difference between "the AI wrote working code" and "someone understands this code well enough to maintain it, secure it, and fix it when something goes wrong."

OpenClaw grew to 400,000 lines because vibe coding makes it easy to add features and hard to say no. Need browser automation? Add it. Need WhatsApp integration? Add it. Need a social network for AI agents? Sure, add that too. Each feature works in isolation. The AI generates the code, you test it, it seems fine.

The problem shows up later. When features interact in unexpected ways. When a security researcher finds a vulnerability buried in code nobody fully reviewed. When a malicious add-on exploits an assumption that a human developer would have questioned but an AI did not.

400,000 lines of code that nobody fully understands is not a feature-rich product. It is an attack surface.
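To make "an assumption that a human developer would have questioned" concrete, here is one hypothetical pattern (illustrative only, not taken from OpenClaw's codebase). AI-generated helpers frequently build shell commands by string interpolation. The code tests fine on friendly input and passes review-by-vibes, but any attacker-controlled string can inject extra commands:

```python
import subprocess

def greet_unsafe(name: str) -> str:
    # Interpolating input into a shell string: works for "world", but
    # "world; echo pwned" makes the shell run a second command too.
    return subprocess.run(f"echo hello {name}", shell=True,
                          capture_output=True, text=True).stdout

def greet_safe(name: str) -> str:
    # Passing an argument list avoids the shell entirely: the whole
    # input becomes a single argv entry and cannot inject commands.
    return subprocess.run(["echo", "hello", name],
                          capture_output=True, text=True).stdout

print(greet_unsafe("world; echo pwned"))  # hello world \n pwned
print(greet_safe("world; echo pwned"))    # hello world; echo pwned
```

A human reviewer who has been burned by shell injection questions `shell=True` on sight. A feature that was generated, run once, and merged because it "seemed fine" never gets that question asked.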

What This Means for Your Business

If you are a business owner — and our readers mostly are — here is the practical takeaway.

The AI agent market is still immature. OpenClaw is the most popular open-source AI agent platform, and the most respected name in AI just called it a monster. That does not mean AI agents are bad. It means the current implementations have serious growing pains. Before you give any AI tool access to your business data, understand what you are connecting and what could go wrong. We wrote a specific guide on securing your business against AI-powered attacks that walks through this.

Vibe coding is a tool, not a strategy. AI-generated code can save enormous amounts of time. But if nobody on your team understands the code running your business, you have a single point of failure that no one can fix. Whether you are building custom software or buying off-the-shelf tools, someone needs to understand what is under the hood.

Smaller and simpler usually wins. Karpathy's endorsement of Nanoclaw over OpenClaw is really an endorsement of a timeless engineering principle: do fewer things, but do them well. When you are evaluating AI tools for your business, the one with the longest feature list is not necessarily the best choice. The one that does what you need, securely and reliably, is.

Watch the serious people, not the hype cycle. The AI space is noisy. Influencers demo impressive things every day. But when someone with Karpathy's credentials and track record raises a flag, pay attention. His concerns are technical, specific, and informed by years of experience building systems at the highest level.

The Road Forward

None of this means OpenClaw is dead. The project has massive community momentum, and the problems Karpathy identified are fixable — in theory. The team could refactor the codebase, improve security practices, and bring the project to a level of quality that matches its ambition.

Whether they will is a different question. Refactoring 400,000 lines of code is significantly harder than writing them was. Especially when much of the code was generated by AI and may not follow consistent patterns or conventions.

What we expect to see is a fork in the road. Nanoclaw and similar projects will appeal to developers and businesses that prioritize security and maintainability. OpenClaw will continue to appeal to people who want the maximum feature set and are willing to accept the risks.

For our clients, our recommendation has not changed. Use established, commercially supported AI tools for anything touching business data. Keep an eye on the open-source agent space — it is genuinely the future — but do not be the first one through the door when the door frame is still shaking.

And if the most respected AI researcher in the world calls something a monster, maybe listen.


Blue Octopus Technology helps businesses adopt AI tools safely and effectively. If you are evaluating AI agents or automation platforms and want an honest assessment of the risks and opportunities, let's talk.

Stay Connected

Follow us for practical insights on using technology to grow your business.