
Intelligence Hub.
The operating system that makes the AI Junior on a Box useful. Comes with the Box. Available standalone for buyers who already own their hardware.
Why this exists
The chip is the chip. The OS is what makes the phone feel like a phone.
The hardware is the easy part. A modern GPU and 128 GB of unified memory exist on plenty of workstations. What makes the AI Junior on a Box useful is the orchestration layer above the hardware: the system that processes inbound research, indexes it for recall, configures agents per role, manages cloud-vs-local routing per task, tracks attention across projects, runs scheduled jobs unattended, and learns from feedback corrections without retraining. That orchestration layer is Intelligence Hub.
Think of it like the operating system on a phone. The chip is the chip. The OS is what makes the phone feel like a phone. We sell the Box hardware-first because that's the concrete entry point — but the value is in the OS that runs on it. Customers who already own their AI hardware (existing Jetson, existing workstation, existing on-prem GPU cluster) can license Intelligence Hub standalone.
The capability stack
A year of built infrastructure, wrapped as one product.
- Research pipeline: process every link, transcript, video, and document into a recallable knowledge corpus
- Agent runtime: configured AI agents per role, tuned to the actual workflow of the person they serve
- Knowledge corpus + semantic recall: source-agnostic search across everything you've ever indexed
- Memory system: learns from your feedback corrections, no retraining needed
- Content pipeline: drafts, kanban, scheduling, social atomization across multiple platforms
- Project + attention tracker: what's active, what's stale, what needs attention today
- Scheduled task runner: unattended jobs with security pre-scanning and audit logs (the same engine running our GPU workstation)
- Multi-machine sync: laptop, GPU workstation, NAS, and mobile all share the same memory via shared git
- Cloud + local model routing: frontier models when the task warrants, local models otherwise
- Video transcription pipeline: audio/video → searchable text via Whisper, indexed into the corpus
- Directory + scoring engine: the same architecture that powers BluePages, deployable in a private form for any vertical
- Realtime data ingestion: the radio-decoding methodology that built adsb-decode, applied to whatever live data your business runs on
- Mission control dashboard: single pane of glass across projects, content pipeline, research queue, task status
- Developer scaffolding: Claude Code with starter projects so your team can extend the system
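To make the cloud-vs-local routing idea concrete, here is a minimal sketch. The `Task` fields, backend labels, and function name are illustrative assumptions, not Intelligence Hub's actual API; the point is only that the cloud/local decision is a per-task policy set during configuration.

```python
# Hypothetical sketch of per-task cloud-vs-local routing; names here are
# illustrative assumptions, not the shipped Intelligence Hub implementation.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    needs_frontier: bool  # set once per workflow during configuration

def route(task: Task) -> str:
    """Pick a backend: a cloud frontier model, or a local model on the Box."""
    if task.needs_frontier:
        return "cloud"   # e.g. a Claude or OpenAI API call
    return "local"       # e.g. Qwen / Llama / Mistral served on-box
```

The real routing is configured per workflow; what matters is that the decision lives in the orchestration layer, not hard-coded into each tool.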

How it got built
We built it for ourselves first.
Intelligence Hub is the operational system Blue Octopus runs the firm on. Every research link processed, every blog post written, every project tracked, every agent deployed, every scheduled job — runs through the same stack we now productize and ship to customers.
Most consultancies sell methodology. We sell the infrastructure that runs the methodology. Same code. Same agents. Same knowledge architecture. The version on your box is configured to your business; the version on ours is configured to ours. Otherwise identical.

Paired product
Comes with the Box.
Buying the AI Junior on a Box? Intelligence Hub is included — that's what makes the Box useful out of the gate. The two are designed end-to-end together.
Already have AI hardware (existing GPU workstation, on-prem cluster, edge devices)? Intelligence Hub is licensable standalone — we configure it to your environment.
Who it's for
- Anyone buying the AI Junior on a Box (Intelligence Hub is included)
- Mid-market firms with existing AI hardware who want the orchestration layer without buying new boxes
- Defense / regulated / on-prem buyers who need the operational system to run inside their environment with zero data egress
- Engineering-led firms that want to operate the same way Blue Octopus operates internally
Who it's not for
- Anyone shopping for a SaaS chat interface (that's not what this is)
- Businesses still on paper and email (the data substrate needs to come up first)
- Anyone who doesn't want to own the orchestration layer (the whole point is durable ownership of the system)
Inside
The stack.
- Claude Code (primary agent runtime + developer scaffolding)
- Markdown-first knowledge layer (every file is human-readable + grep-able)
- Local LLM stack (Qwen, Llama, Mistral families — selectable)
- Cloud API routing (Claude, OpenAI, configured per workflow)
- Scheduled task infrastructure (launchd, cron, systemd — multi-platform)
- Multi-machine git sync (NAS-backed shared memory across devices)
- Custom skills + agents per business workflow
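A minimal sketch of what "markdown-first and grep-able" means in practice. The directory layout and function name are assumptions for illustration: recall over a plain-file corpus needs nothing more than walking the files.

```python
# Illustrative only: shows why a markdown-first corpus stays recallable with
# plain tools. Layout and function name are assumptions, not the product API.
from pathlib import Path

def recall(corpus_dir: str, term: str) -> list[tuple[str, str]]:
    """Naive keyword recall across a markdown corpus: (file, matching line)."""
    hits = []
    for md in sorted(Path(corpus_dir).rglob("*.md")):
        for line in md.read_text(encoding="utf-8").splitlines():
            if term.lower() in line.lower():
                hits.append((md.name, line.strip()))
    return hits
```

Functionally this is just `grep -ri term corpus/`. Semantic recall layers on top of it, but the corpus itself stays readable and searchable without any database.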
Want it on your hardware?
We scope it like we scope every engagement. Discovery call first: what your existing infrastructure is, which workflows you want orchestrated, and what data layer needs to come up first. We propose the configuration; you see the number before you commit.