AI & Automation

Inside Our 300-Line CLAUDE.md: The File That Runs Our Business

We published our actual production CLAUDE.md — the file that tells our AI agent how to run a 25-project portfolio, process 200+ research links, and coordinate work across three machines. Here's every section and why it exists.


Most CLAUDE.md examples you'll find online are 20 lines. "Here's my project. Use TypeScript. Run tests before committing." Fine for a weekend project. Useless for running a business.

Our CLAUDE.md is 300 lines. It tells an AI agent how to operate an intelligence system that tracks 25+ active projects, processes research links through a 9-step pipeline, coordinates work across three machines, manages a content calendar, and handles security for unattended scheduled tasks. It's been revised dozens of times over three months of daily use.

This post walks through every section of that file — what it does, why it exists, and what we learned building it. If you're using Claude Code for anything more complex than a single codebase, this is the reference we wish we'd had when we started.

If you're not sure what CLAUDE.md is, start with our comparison of CLAUDE.md, SOUL.md, and SKILL.md — three complementary standards for configuring AI agents.

The Opening: What This Is

# CLAUDE.md

This file provides guidance to Claude Code when working with
code in this repository.

## What This Is

A knowledge intelligence and project command center for Blue
Octopus Technology — a software consultancy for non-technical
businesses. Tracks AI industry trends, tools, strategies,
content ideas, and all active projects. Processes research links
into actionable business intelligence, manages a multi-project
portfolio with synergy tracking, and provides prioritization
recommendations.

This seems obvious. It's not. Without it, every new conversation starts with Claude asking "what is this project?" or making wrong assumptions. With it, the agent immediately understands the scope and purpose.

The description is deliberately specific. It's not "a business tool" — it's an intelligence system that tracks trends, processes research, manages a portfolio. That specificity changes how the agent approaches every task. When it processes a research link, it knows to evaluate it through a business lens. When it updates a file, it knows there's a portfolio context to consider.

Lesson learned: The more specific your project description, the less you repeat yourself in every conversation. Vague descriptions produce vague behavior.

The Skills Table: What the Agent Can Do

## Skills

### Intelligence & Portfolio

| Skill        | Purpose |
|--------------|---------|
| /research    | Process a link through the full 9-step pipeline |
| /start       | Status report — project pulse, recommended focus |
| /add-project | Scan a directory, auto-detect stack, add to registry |
| /prioritize  | Move projects between tiers, vote on recommendations |
| /security    | Security audit — secrets, dependencies, permissions |
| /atomize     | Turn a blog post into 10-15 social posts |
| /analytics   | Pull performance data — Buffer, Search Console |

Each of these is a SKILL.md file in a .claude/skills/ directory. The table in CLAUDE.md serves as a quick reference — the agent knows what commands exist without loading every skill into memory.

This is the progressive disclosure pattern in action. The table costs maybe 200 tokens. The full skill definitions run thousands of tokens each. The agent reads the cheap table first, then loads the expensive skill only when it's triggered.

Lesson learned: Your CLAUDE.md should list available skills, not contain them. Keep the instructions in SKILL.md files. Keep the index in CLAUDE.md.
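
For reference, here is what one of those skill files might look like. The layout is illustrative only: the frontmatter fields shown are the common name/description pair, and the steps are paraphrased from this post, not copied from our actual file:

```markdown
<!-- .claude/skills/research/SKILL.md (illustrative layout) -->
---
name: research
description: Process a link through the full research pipeline
---

## Steps
1. Log the link in research/intake-log.md with status pending
2. Fetch the content and follow linked references
3. Evaluate through the five lenses and update the knowledge base
4. Report, mark processed, and log implementation ideas
```

The agent only reads this file when the /research command fires; until then, the one-line table entry in CLAUDE.md is all it carries.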

The Pipeline: Step-by-Step Workflow

## The /research Pipeline

1. Timestamp & log — Add entry to intake-log.md with status pending
2. Fetch & understand — WebFetch the content; follow linked references
3. Analyze — Evaluate through 5 lenses: Implement, Offer, Tool, Monitor, Content
4. Update knowledge base — Create/update files in knowledge/ as appropriate
5. Update docs — Add to bookmarks, update intelligence brief, update people-to-watch
6. Report — Structured summary with business applicability and action items
7. Mark processed — Change intake log status from pending to processed
8. Cross-project check — Check if findings are relevant to any tracked project
9. Log implementation ideas — Add actionable ideas to the backlog

This is the most important section in the file. It defines the core workflow that the agent executes dozens of times a week.

Notice the specificity. It doesn't say "analyze the content." It says "evaluate through 5 lenses: Implement, Offer, Tool, Monitor, Content." Those five lenses are the difference between a generic summary and a business-relevant analysis. Every link gets the same structured evaluation, whether it's a tweet about a new tool or a 30-minute conference talk.

The pipeline has evolved significantly. The original version had 5 steps. We added the cross-project check after realizing the agent was missing connections between research findings and active projects. We added the implementation idea logging after realizing good ideas were getting buried in research reports instead of being tracked separately.

Lesson learned: Write your workflows as numbered steps, not paragraphs. The agent follows numbered steps more reliably than prose instructions. Be explicit about what happens at each step, including which files get updated.
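
Step 1 of the pipeline is mechanical enough to sketch in code. This is a hypothetical helper, not our actual tooling; the path and entry format follow the conventions described in this post:

```python
from datetime import datetime
from pathlib import Path

LOG = Path("research/intake-log.md")

def log_intake(title: str, author: str, url: str) -> str:
    """Prepend a pending entry to the intake log (newest first)."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    entry = f"[{stamp}] | pending | {title} (@{author}) | {url}"
    existing = LOG.read_text() if LOG.exists() else ""
    LOG.parent.mkdir(parents=True, exist_ok=True)
    # Newest first: new entry goes at the top of the file.
    LOG.write_text(entry + "\n" + existing)
    return entry
```

Because the step is this explicit in CLAUDE.md, the agent produces the same entry shape whether it writes the line itself or calls a script like this one.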

File Conventions: The Formatting Bible

## File Conventions

### research/

**research/intake-log.md** — Newest first. Format:
`[YYYY-MM-DD HH:MM]` | STATUS | Title (@author) | URL
Statuses: pending, processed, actioned

**research/bookmarks.md** — Organized by topic sections.
Each entry has Author, Date, URL, Content summary, Tags, Notes.

**research/intelligence-brief.md** — The "so what?" doc.
Contains: Key Signals (numbered), Action Items (tables by
category), Knowledge Gaps, Ecosystem map, Revenue thesis.
Update the "Last updated" date when modifying.

This section exists because without it, the agent invents its own formatting every time. Intake log entries would sometimes be bullets, sometimes tables, sometimes paragraphs. Bookmarks would have different fields each time. The intelligence brief would get updated without changing the "Last updated" date.

Every file the agent reads or writes gets a format specification. Not suggestions — specifications. The format for an intake log entry. The required fields for a bookmark. The structure of the intelligence brief. The categories in the implementation backlog.

This is the most tedious section to write and the most valuable section to have. Consistent formatting means consistent behavior. It means you can search and parse files reliably. It means a dashboard can read the same files the agent writes.

Lesson learned: Specify exact formats for every file your agent touches. Include field names, ordering rules, and status values. The agent will follow them if they're explicit. It will improvise if they're not.
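
This is also why exact formats pay off downstream: a specified format is a parseable format. A sketch of how a dashboard might read the intake log, assuming the exact entry format shown above:

```python
import re

# Matches the specified intake-log format:
# [YYYY-MM-DD HH:MM] | STATUS | Title (@author) | URL
LINE = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2})\]"
    r" \| (?P<status>pending|processed|actioned)"
    r" \| (?P<title>.+?) \(@(?P<author>[^)]+)\)"
    r" \| (?P<url>\S+)"
)

def parse_entry(line: str):
    """Return the entry's fields as a dict, or None if it doesn't match."""
    m = LINE.match(line.strip())
    return m.groupdict() if m else None
```

If the agent had been improvising formats, no single regex could read the file; one specification line in CLAUDE.md is what makes this possible.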

[Image: A fire tower converted into a multi-monitor command center]

Companion Docs: The Personality Layer

## Companion Docs

- SOUL.md — Agent personality: skeptical, direct, opinionated.
  Defines how to assess tools, frame recommendations, avoid hype.
- STYLE.md — Brand voice for external content: plain language,
  stories first, no buzzwords.
- HOW-IT-WORKS.md — Full system design doc explaining architecture,
  data flow, and design decisions.

This is where the three-layer architecture comes together. CLAUDE.md handles the operational configuration. SOUL.md handles personality. STYLE.md handles brand voice for content creation.

The SOUL.md reference is critical. It tells the agent: when you're evaluating a tool, read SOUL.md first. The result is an agent that won't hype immature tools, won't give artificially positive assessments, and will flag security concerns before recommending anything. That's a personality decision, not an operational one — so it lives in SOUL.md, not CLAUDE.md.

Lesson learned: Reference your companion docs explicitly. Don't assume the agent will find them. Tell it exactly when to read which file.

Security: The Non-Negotiable Section

## Sensitive Data — NEVER Commit to GitHub

This is a public repository. Scrub every file before committing.

Never commit:
- Private/local IP addresses
- Email addresses, usernames, real names
- NAS hostnames, mount paths, or share names
- File paths containing usernames
- API keys, tokens, passwords, .env contents
- Audit reports or scan outputs containing any of the above

Before every commit: Mentally scan staged diffs for PII.

Safe patterns:
- Use [NAS_IP], [USERNAME], [EMAIL] as placeholders
- Keep audit reports in gitignored staging directories
- Reference machine roles ("the NAS") not hostnames

This section exists because we operate a public repository. Every commit is visible to the world. The agent handles files that contain IP addresses, file paths with usernames, and references to internal infrastructure. Without explicit rules, it will commit those details.

We also define a security infrastructure section that documents the 3-phase pipeline for scheduled tasks:

  1. FETCH — Agent gathers data from the internet. No filesystem writes allowed.
  2. SCAN — A deterministic regex scanner checks for prompt injection. No AI involved.
  3. PROCESS — Agent processes pre-scanned content. No network access allowed.

This is documented in CLAUDE.md so the agent understands the security architecture it operates within. When it runs a scheduled task, it knows the phases and their constraints. We wrote about this security model in detail in how we built self-validating AI agents.

Lesson learned: Security rules must be in CLAUDE.md, not just in your head. The agent will follow explicit security constraints, but it won't invent them on its own.
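
The SCAN phase can be as simple as a deterministic pattern check with no model in the loop. This sketch is illustrative; the patterns here are invented examples, and a production list would be longer and tuned to real incidents:

```python
import re

# Hypothetical injection patterns, for illustration only.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def scan(text: str) -> list:
    """Return the patterns that fired; an empty list means the text passes."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]

# Content only moves from FETCH to PROCESS when scan(content) == []
```

The point of the design is that this step is boring: a regex either matches or it doesn't, so fetched content can't talk its way past the gate.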

Multi-Machine Coordination

## Multi-Machine Setup

The intelligence-hub runs across two machines on the same LAN:

| Machine        | Role                    | Key workloads |
|----------------|-------------------------|---------------|
| Mac (primary)  | Human-interactive       | /research, /start, content |
| Windows (GPU)  | GPU compute, batch work | Transcription, local LLMs, image gen |

Ownership rules:
- Mac owns everything except windows/ and knowledge/transcripts/
- Windows writes only to windows/staging/ and knowledge/transcripts/
- Both machines use git pull --rebase before work

This is where a 300-line CLAUDE.md earns its keep. Two machines writing to the same repository is a recipe for merge conflicts and data loss — unless the rules are explicit and the agent knows them.

The ownership model is simple: each machine has specific directories it's allowed to write to. The Mac owns the canonical state of every file except the GPU workstation's staging area and transcripts. The Windows machine can read everything but can only write to its designated directories. Because write paths never overlap, merges are always clean.

We covered this architecture in running AI agents across three machines. The CLAUDE.md section is what makes it work in practice — without it, the agent on the Windows machine would try to update files it doesn't own.

Lesson learned: If your agent operates in a multi-machine or multi-user environment, ownership rules in CLAUDE.md are mandatory. "Both machines can write anywhere" is how you get data loss.
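
The ownership rules are mechanical enough to enforce in a pre-commit hook as well as in CLAUDE.md. A sketch, assuming the directory names from the table above and that the hook knows it is running on the Windows machine:

```python
# Directories the Windows machine may write to, per the ownership rules.
WINDOWS_WRITABLE = ("windows/staging/", "knowledge/transcripts/")

def violations(staged_paths: list) -> list:
    """Staged paths outside the Windows machine's writable directories."""
    return [p for p in staged_paths if not p.startswith(WINDOWS_WRITABLE)]

# In a pre-commit hook: feed in `git diff --cached --name-only` output
# and abort the commit if violations(...) is non-empty.
```

Belt and suspenders: CLAUDE.md tells the agent the rules, and the hook catches the case where it forgets them.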

Key Principles: The Decision Framework

## Key Principles

1. Depth over speed — Better to deeply understand one link than skim five
2. Business lens — Every analysis must answer "what do we DO with this?"
3. Honest assessment — Don't hype things that aren't ready
4. Connect the dots — Link new intel to existing knowledge
5. Timestamp everything — Always be able to answer "what was the last link?"
6. Follow the thread — If a tweet references an article, fetch that too
7. Track people, not just content — Add high-value producers to watch list

These seven principles shape every decision the agent makes. They're not workflow steps — they're judgment guidelines.

"Depth over speed" means the agent won't rush through a research link to get to the next one. "Business lens" means every analysis includes a "what do we do with this?" section. "Honest assessment" means the agent won't recommend a tool that has no security documentation, even if it's popular.

These principles compound over time. After 200+ processed links, the "connect the dots" principle means every new piece of research gets cross-referenced against a substantial knowledge base. A tweet about a new tool gets evaluated against 16 tracked signals, 25+ strategy documents, and hundreds of existing bookmarks. That cross-referencing is what turns raw links into actual intelligence.

Lesson learned: Principles > rules for judgment calls. Rules tell the agent what to do. Principles tell it how to think. Both matter, but principles handle the cases your rules didn't anticipate.

The Learning Rule: How the File Evolves

## Learning-First Rule

When the user gives corrective feedback that would apply to future work:
1. Save the pattern to memory FIRST
2. Then fix the current instance

Do NOT fix the problem and move on. Do NOT batch learnings for later.
The save happens before the edit. Every time. No exceptions.

This is meta-configuration — it tells the agent how to improve its own behavior over time. When we correct the agent, it saves the correction to persistent memory before fixing the immediate issue. This means the same mistake doesn't happen twice.

This rule exists because we noticed the agent would fix problems in the moment but repeat them in the next session. By requiring the learning to be saved first, corrections become permanent. The CLAUDE.md evolves based on real usage, not just initial assumptions.

Lesson learned: Include instructions for how your CLAUDE.md should evolve. An agent that learns from corrections is dramatically more useful than one that repeats the same mistakes.
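
The shape of a saved learning is up to you. A hypothetical entry in a persistent memory file, with an invented date and correction, might be as simple as:

```markdown
## Learnings

- [2025-01-15] Intake log is newest-first. I was appending to the
  bottom. Rule: always prepend new entries to research/intake-log.md.
```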

What We'd Do Differently

After three months of daily use:

Start smaller. Our first CLAUDE.md tried to document everything. The useful parts were buried in noise. Start with project description + file conventions + one workflow. Add sections as you discover what the agent gets wrong.

Format specs first. The file conventions section should have been written before any content was created. Reformatting 200+ entries because the original format wasn't specified is painful.

Test each section. After adding a new section, deliberately test it. Ask the agent to do the thing you just documented. See if it follows the instructions. Adjust the wording until it does.

Version control the evolution. Every CLAUDE.md change is a git commit. When something breaks, you can see exactly what changed and revert if needed. This file is code — treat it like code.

Build Yours

A 300-line CLAUDE.md didn't happen overnight. It started at 30 lines and grew because every section solved a real problem — a repeated mistake, a missed connection, a formatting inconsistency.

Your CLAUDE.md will be different from ours. A law firm needs sections about confidentiality and citation standards. A restaurant needs sections about menu terminology and health code compliance. An e-commerce company needs sections about product data formats and inventory conventions.

The structure is the same regardless of industry:

  1. What this is — one paragraph, be specific
  2. Available skills — table of slash commands
  3. Key workflows — numbered steps for core processes
  4. File conventions — exact formats for every file the agent touches
  5. Companion docs — references to SOUL.md and STYLE.md
  6. Security rules — what to never do
  7. Principles — judgment guidelines for ambiguous situations

Start with sections 1, 4, and 7. Those three alone will change how your agent behaves. Add the rest as you use the system and notice what's missing.
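
A minimal skeleton for those three sections, with placeholder content to replace, might look like:

```markdown
# CLAUDE.md

## What This Is
One specific paragraph: what the system does, for whom, and what
scope it covers. Not "a business tool"; name the actual workflows.

## File Conventions
**notes/log.md**: newest first. Format:
`[YYYY-MM-DD] | STATUS | Title`. Statuses: pending, done

## Key Principles
1. Depth over speed
2. Honest assessment: no hype
3. Timestamp everything
```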

We've packaged starter templates for all three files — CLAUDE.md, SOUL.md, and SKILL.md — into a free Agent Configuration Starter Kit. Each template has commented instructions explaining every section.

For deeper reading on the skills layer, see our guide on how to package your business knowledge for AI. For the personality layer, our SOUL.md examples post covers that in detail.


If you want a CLAUDE.md that actually reflects how your business works — not a 20-line template — let's build it together.

Blue Octopus Technology configures AI agents that understand your business. See what we build.
