How I Run My Entire Business from Claude Code

Research processing, project management, content creation, security audits, scheduled automation — all from a single CLI. This is what it actually looks like when your entire business runs through one tool.

It's 8 AM. I open a terminal and type six characters: /start.

Within 30 seconds I'm looking at a full status report. Which projects need attention. Which research links are piling up. What content is due. Whether any scheduled tasks failed overnight. What the recommended focus for the day is.

No dashboard to load. No tabs to open. No Slack to catch up on. One command, one terminal, one report.

This is how Blue Octopus Technology actually runs — not from a project management app, not from a spreadsheet, not from a constellation of SaaS tools stitched together with Zapier. From Claude Code, Anthropic's command-line interface for Claude, extended with custom skills and pointed at a knowledge base that's been accumulating context for over 120 sessions.

It's the actual production system — not a demo, not a template. The real thing we use every day.

What the System Looks Like

The intelligence-hub is a directory of Markdown files, Python scripts, shell scripts, and a configuration file called CLAUDE.md that tells Claude Code how to operate on them. That's it. No database server. No web framework. No deployment pipeline. Text files and a CLI.

But structure matters more than technology. Here's what's in the box:

  • 209 research links processed through an 8-step pipeline
  • 25 projects tracked across 6 priority tiers
  • 53 strategy documents — deep breakdowns on everything from security patterns to YouTube production
  • 70+ tags indexing the entire knowledge base
  • 300+ implementation ideas logged and categorized
  • 80 blog posts written and published
  • 9,000+ memory observations persisting across sessions

The system runs on three machines — a Mac for daily interactive work, a Windows workstation with 32GB of GPU memory for batch processing, and a NAS for file coordination between them. But you don't need three machines. The core of it works on a single laptop with Claude Code installed.

A Real Day, Start to Finish

Morning: The Status Report

/start reads a time-sensitive attention tracker first, then pulls project data, content pipeline status, research queue, and recent activity. It tells me things like:

  • "adsb-decode has 3 stale next actions — last touched 4 days ago"
  • "Content pipeline: 2 blog posts in Draft, 1 due this week"
  • "7 unprocessed research links in intake"
  • "Scheduled tasks: all healthy, last career sweep ran Monday 6 AM"

Some mornings the report says everything is clean. Most mornings it points me at the thing I was avoiding.
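Under the hood, a skill like /start is mostly file reading and summarizing. Here's a minimal sketch of the bookkeeping half, with hypothetical entry formats (the real intelligence-hub schema isn't shown in this post); the "recommended focus" part is where Claude's judgment comes in, and isn't reducible to a script:

```python
from datetime import datetime, timedelta

def stale_projects(projects_md: str, today: datetime, max_age_days: int = 3) -> list[str]:
    """Flag projects whose 'Last touched' date is older than the cutoff.

    Assumes hypothetical entries like: "- adsb-decode | Last touched: 2025-01-10"
    """
    cutoff = today - timedelta(days=max_age_days)
    stale = []
    for line in projects_md.splitlines():
        if "Last touched:" in line:
            name, _, date_str = line.lstrip("- ").partition("| Last touched:")
            if datetime.strptime(date_str.strip(), "%Y-%m-%d") < cutoff:
                stale.append(name.strip())
    return stale

def morning_report(projects_md: str, intake_md: str, today: datetime) -> str:
    """Assemble the deterministic parts of the morning status report."""
    pending = [line for line in intake_md.splitlines() if "pending" in line]
    report = [f"Unprocessed research links: {len(pending)}"]
    report += [f"Stale: {p} - no activity in over 3 days"
               for p in stale_projects(projects_md, today)]
    return "\n".join(report)
```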

Mid-Morning: Processing Research

Say I found an interesting article yesterday about a new approach to AI security audits. I saved the URL. Now I run:

/research https://example.com/article

The system runs an 8-step pipeline:

  1. Logs the link with a timestamp and "pending" status
  2. Fetches the content — and follows any referenced links (if a tweet links to an article that links to a repo, it reads all three)
  3. Analyzes through five lenses: Can we implement this? Can we sell it as a service? Is it a tool we should use? Should we just monitor it? Is there a content idea here?
  4. Updates the knowledge base — creates or updates strategy docs, tool evaluations, or people-to-watch entries
  5. Updates the research bookmarks and intelligence brief
  6. Gives me a structured report with a verdict
  7. Marks the link as processed
  8. Checks whether the findings are relevant to any of the 25 tracked projects, and logs implementation ideas if so

I don't have to decide where the information goes. The pipeline decides based on what the content is about and what already exists in the knowledge base. If I've been tracking a topic for three months and this is the fourth data point, it gets connected to the pattern — not filed in isolation.
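The shape of that pipeline can be sketched in a few lines. In this sketch, `fetch` and `analyze` stand in for the Claude-powered steps (fetching with link-following, evaluating through each lens), and the knowledge-base updates are elided; the lens names and the `ResearchEntry` structure are illustrative, not the production schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

# The five analysis lenses, paraphrased from the pipeline description
LENSES = ["implement", "sell_as_service", "adopt_as_tool", "monitor", "content_idea"]

@dataclass
class ResearchEntry:
    url: str
    status: str = "pending"
    logged_at: str = field(default_factory=lambda: datetime.now().isoformat())
    verdict: dict = field(default_factory=dict)

def process_link(url: str, fetch, analyze, projects: list[str]) -> ResearchEntry:
    """Skeleton of the 8-step pipeline; steps 4-5 (knowledge-base and
    bookmark updates) are omitted here."""
    entry = ResearchEntry(url)                       # 1. log with pending status
    content = fetch(url)                             # 2. fetch, following references
    entry.verdict = {lens: analyze(content, lens)    # 3. evaluate five lenses
                     for lens in LENSES}
    entry.status = "processed"                       # 7. mark processed
    # 8. cross-reference against tracked projects (naive keyword match here)
    entry.verdict["related_projects"] = [p for p in projects
                                         if p.lower() in content.lower()]
    return entry                                     # 6. structured report
```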

Afternoon: Content and Project Work

When it's time to write, the system already knows what's in the pipeline. /atomize takes a published blog post and turns it into 10-15 social posts for X and Facebook, tailored to each platform's voice. The content pipeline tracks every piece from idea through draft through published.

For project management, portfolio/projects.md is the single source of truth. Each project has its stack, status, blockers, next actions, synergies with other projects, and a "last touched" date that makes stale work visible. The /prioritize skill moves projects between tiers. No separate tool. No context switching.
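A project entry in that file might look something like this. The field names come from the description above; the specific values and layout are a hypothetical reconstruction, not the actual portfolio/projects.md format:

```markdown
## adsb-decode (Tier 2)
- Stack: Python, RTL-SDR
- Status: active
- Blockers: none
- Next actions: clean up decoder CLI; write test fixtures
- Synergies: shares signal-processing utilities with the GPU batch work
- Last touched: 2025-01-10
```

Because every entry carries a "Last touched" date in a predictable place, a skill like /start can flag stale work with nothing more than a date comparison.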

Background: The Stuff That Runs While I Sleep

Six scheduled tasks run on the Mac via launchd. Eight more run on the Windows GPU machine via cron. Between them they handle:

  • Daily pulse briefings — a morning summary generated before I wake up
  • Career sweeps — scanning target companies for new opportunities twice a week
  • GitHub intel scanning — watching repos relevant to our work
  • Git sync — keeping the two machines coordinated every 30 minutes
  • Security scans — checking staged content for prompt injection, leaked credentials, and other problems

The scheduled tasks use a three-phase security pipeline. Phase one fetches data with no filesystem write access. Phase two scans that data with a deterministic regex scanner — no AI, no network — looking for prompt injection and other threats. Phase three processes the pre-scanned content offline.

That security design isn't theoretical caution. When you're running AI tasks unattended, anything the AI reads from the internet could contain instructions designed to hijack it. The three-phase separation means fetched content never gets processed until a non-AI scanner has cleared it.
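The key property of phase two is that it's deterministic: plain regex matching, no model in the loop, so fetched content can't talk its way past it. A minimal sketch of what such a scanner looks like (these patterns are illustrative, not the production rule set):

```python
import re

# Phase-two scanner: no AI, no network. Runs over fetched content
# before any model ever reads it.
THREAT_PATTERNS = {
    "prompt_injection": re.compile(
        r"(ignore (all )?previous instructions|disregard your system prompt)",
        re.IGNORECASE),
    "leaked_aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "leaked_private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan(text: str) -> list[str]:
    """Return the names of every threat pattern found in the fetched text.
    Phase three only runs when this list comes back empty."""
    return [name for name, pattern in THREAT_PATTERNS.items()
            if pattern.search(text)]
```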

Why a CLI and Not a Dashboard

I built a dashboard too — mission-control reads from the same files and shows a visual overview. It's useful for a quick glance. But the actual work happens in the terminal because that's where Claude Code lives, and Claude Code is the part that does things.

A dashboard shows you information. Claude Code acts on it. When I run /research, it doesn't show me a form to fill out — it goes and reads the article, evaluates it, files it, and tells me what matters. When I run /start, it doesn't render a chart — it tells me what to do today and why.

The distinction matters. Most business tools are designed to present information and wait for you to make a decision. This system is designed to make the routine decisions on its own and surface only the ones that need a human.

What's Actually in the Config

The CLAUDE.md file — the instructions that tell Claude Code how to behave — is around 300 lines. It defines:

  • Skills — slash commands like /start, /research, /hunt, /pulse, /atomize, /analytics
  • File conventions — where things go, what format they use, how they're organized
  • Security rules — what never gets committed, how sensitive data is handled, the three-phase pipeline
  • Content access workarounds — how to fetch tweets, handle PDFs, deal with JavaScript-heavy pages
  • Multi-machine coordination — which machine owns which files, how they sync
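To make that concrete, a skeleton covering three of those sections might look like this. This is an illustrative fragment, not an excerpt from the actual 300-line file, and the referenced file names are assumptions:

```markdown
# CLAUDE.md: operating instructions

## Skills
- /start: read the attention tracker first, then summarize projects,
  content pipeline, research queue, and scheduled-task health
- /research <url>: run the 8-step research pipeline on a saved link

## File conventions
- Research intake: research/intake.md, one link per line with a status
- Projects: portfolio/projects.md is the single source of truth

## Security rules
- Never commit credentials or PII
- All fetched content passes the deterministic scanner before processing
```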

There's also a SOUL.md that defines the agent's analytical personality — skeptical by default, opinions required, connect the dots rather than filing bookmarks in isolation. And a STYLE.md for external content voice.

A justfile provides 11 recipes for common operations — rebuilding the tag index, running tests, scanning for security issues, managing worktrees for parallel sessions.

Self-validating hooks run automatically to check that research entries follow the right format, that no personally identifiable information is being committed, and that content passes the security scanner.
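A hook like that can be a small deterministic check. Here's a sketch of the format-and-PII half, with a made-up intake line format (the real one isn't shown in this post) and a deliberately crude email detector standing in for the PII rules:

```python
import re

# Hypothetical intake format: "- [Title](url) | status | YYYY-MM-DD"
ENTRY_RE = re.compile(
    r"^- \[.+\]\(https?://\S+\) \| (pending|processed) \| \d{4}-\d{2}-\d{2}$")
PII_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")  # crude email detector

def validate_entry(line: str) -> list[str]:
    """Return a list of problems with a research-intake line (empty = clean)."""
    problems = []
    if not ENTRY_RE.match(line):
        problems.append("entry does not match the intake format")
    if PII_RE.search(line):
        problems.append("possible PII (email address) in entry")
    return problems
```

A real hook would run this over staged lines and block the commit on any non-empty result, which is what keeps the file conventions enforceable rather than aspirational.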

The Honest Caveats

This system took 120+ sessions to build. It didn't spring into existence fully formed. The early versions were just a CLAUDE.md file and a few Markdown documents. The scheduled tasks came months later. The multi-machine setup came after that.

It also requires a specific kind of discipline. Every piece of knowledge has to follow the file conventions or the system can't find it later. Every research link has to go through the pipeline or it becomes a stale bookmark. The system is only as good as the consistency of what goes into it.

And Claude Code costs money. This isn't a free setup. You're paying for API usage every time you run a skill or process research. For a one-person consultancy where the time savings are real, the math works. For a business that processes two links a month, it probably doesn't.

The biggest limitation is that it's text-first. It's great for research, writing, project tracking, and automation. It's not great for visual design, spreadsheet analysis, or anything that requires looking at images. Different tools for different jobs.

What This Means for Your Business

You probably don't need this exact system. Most businesses don't process 209 research links or track 25 projects across three machines.

But the principle underneath it applies to every business: the gap between collecting information and acting on it is where most productivity dies. Whether you're saving articles you never read, maintaining a CRM you never check, or running reports nobody uses — the problem is the same. Information without a decision process is just clutter.

The tool matters less than the structure. Claude Code happens to be what we use. The real work was designing the pipeline — deciding that every piece of input gets a verdict, every project gets a "last touched" date, every research link either becomes an action item or gets filed with a reason why not.

If you're curious about building something similar, start with a CLAUDE.md file — that's the blueprint. Define your skills, your file conventions, and your processing pipeline. Everything else follows from it.
