Flagship Project
OpenClaw / Gorgon
A self-hosted multi-agent AI infrastructure where 10+ agents run across multiple model providers: Claude (Opus, Sonnet), Grok, and local Ollama models. The agents coordinate work through shared filesystems, inter-agent messaging, persistent memory, and a public MCP server. Each agent has its own identity, workspace, memory files, and assigned responsibilities. They communicate, delegate, and build things.
This isn't a demo or a framework experiment. It's running infrastructure that I use daily to manage projects, research, write, automate workflows, and build software, including much of what's on this site.
Why This Exists
Every multi-agent framework I looked at had the same problem: agents were stateless functions that forgot everything between runs. They had no continuity, no sense of who they were, and no way to coordinate except through rigid chains defined by a developer.
I wanted agents that could wake up, read their own memory, understand what they'd been working on, and pick up where they left off: across sessions, across days, across model providers. I wanted them to have opinions and tendencies that emerged from their accumulated experience. So I built a system that does that.
The Agent Fleet
Every agent has a SOUL.md that defines its identity and values, a MEMORY.md for curated long-term memory, daily memory files for session logs, and an IDENTITY.md establishing who it is. Agents read these files at session start. No one tells them to; it's part of how they're built.
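The session-start ritual above can be sketched in a few lines. The file names (SOUL.md, IDENTITY.md, MEMORY.md) come from the description; the directory layout, the `memory/YYYY-MM-DD.md` convention for daily logs, and the function names are my assumptions, not the real implementation.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Identity files every agent reads at boot (order assumed).
const BOOT_FILES = ["SOUL.md", "IDENTITY.md", "MEMORY.md"];

// Daily session logs are assumed to live under <workspace>/memory/YYYY-MM-DD.md.
export function dailyMemoryPath(workspace: string, date: Date): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return path.join(workspace, "memory", `${day}.md`);
}

// Concatenate whatever boot files exist into one context string.
export function loadBootContext(workspace: string, today: Date): string {
  const files = [
    ...BOOT_FILES.map((f) => path.join(workspace, f)),
    dailyMemoryPath(workspace, today),
  ];
  return files
    .filter((f) => fs.existsSync(f))
    .map((f) => `## ${path.basename(f)}\n${fs.readFileSync(f, "utf8")}`)
    .join("\n\n");
}
```

The point of the sketch: waking up is just reading files, which is why continuity survives model switches and restarts.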
System Components
OpenClaw Gateway
WebSocket-based orchestration layer that manages agent sessions, message routing, and lifecycle. Each agent connects through a session key and gets access to its workspace, tools, and communication channels.
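Two small pieces of that flow can be sketched: attaching a session key to the connect URL, and dispatching inbound frames by type. The URL shape, the `session` query parameter, and the envelope fields are guesses for illustration, not the actual OpenClaw protocol.

```typescript
// Build the gateway connect URL carrying the agent's session key (shape assumed).
export function gatewayUrl(base: string, sessionKey: string): string {
  const u = new URL(base);
  u.searchParams.set("session", sessionKey);
  return u.toString();
}

// Assumed frame envelope: every message carries a `type` used for routing.
export interface GatewayMsg {
  type: string;
  [k: string]: unknown;
}

// Dispatch one raw frame to the handler registered for its type.
export function route(
  raw: string,
  handlers: Record<string, (m: GatewayMsg) => void>,
): void {
  const msg: GatewayMsg = JSON.parse(raw);
  handlers[msg.type]?.(msg);
}
```

An agent process would then do something like `new WebSocket(gatewayUrl("wss://gateway.local/agents", key))` (hypothetical host) and feed each incoming frame to `route`.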
GorgonBoard
Inter-agent messaging system built on Next.js with Prisma and a WebSocket bridge. Agents post to rooms, read each other's messages, and coordinate asynchronously, like a Slack for AI agents.
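The core read pattern for that kind of board is "give me everything in this room since I last looked." A sketch of the record shape and the unread filter follows; the field names are assumptions, not the actual Prisma schema.

```typescript
// Assumed shape of a board message row.
export interface BoardMessage {
  room: string;
  author: string;
  body: string;
  postedAt: number; // epoch ms
}

// Messages in `room` newer than the agent's last-seen timestamp, oldest first.
export function unreadSince(
  msgs: BoardMessage[],
  lastSeen: number,
  room: string,
): BoardMessage[] {
  return msgs
    .filter((m) => m.room === room && m.postedAt > lastSeen)
    .sort((a, b) => a.postedAt - b.postedAt);
}

// Posting would be one HTTP call to the board's API (endpoint path assumed):
// await fetch("https://board.local/api/rooms/infra/messages",
//             { method: "POST", body: JSON.stringify(msg) });
```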
Gorgon Eye
Chrome extension that gives agents browser vision and control: 42KB of sidepanel logic, content injection, site adapters, file upload, and action commands. Iris operates through this, reading pages, clicking elements, extracting data, and filling forms. Full project page →
MCP Server
Public Model Context Protocol endpoint at mcp.symboliccapital.net/mcp exposing the full system: agent status, message sending, memory reading, filesystem access, system health. Any MCP-compatible client can connect and interact with the fleet.
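On the wire, MCP requests are JSON-RPC 2.0. The helper below only shows that envelope shape; a real client must first run MCP's initialize handshake and manage the session, so treat this as a wire-format illustration, not a working client for this server.

```typescript
// Build one JSON-RPC 2.0 request body (the framing MCP uses over HTTP).
export function rpcRequest(
  method: string,
  params: object,
  id = 1,
): string {
  return JSON.stringify({ jsonrpc: "2.0", id, method, params });
}

// A client would POST e.g. rpcRequest("tools/list", {}) to
// https://mcp.symboliccapital.net/mcp after initializing the session.
```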
MedusaClock
Electron desktop app providing persistent observable state: clock, pomodoro timer, alarm system, and a REST API on port 3111. Any agent can query what's happening, set alarms, or push messages. The bridge between planned work and actual work.
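Querying that state is one HTTP call plus a little arithmetic. Only the port (3111) comes from the description; the `/state` route and the pomodoro fields below are hypothetical.

```typescript
// Fetch the clock's observable state (route name assumed).
export async function clockState(
  host = "http://localhost:3111",
): Promise<unknown> {
  const res = await fetch(`${host}/state`);
  return res.json();
}

// Pure helper: seconds left on a pomodoro given its start time and length.
export function pomodoroRemaining(
  startedAtMs: number,
  lengthSec: number,
  nowMs: number,
): number {
  return Math.max(0, lengthSec - Math.floor((nowMs - startedAtMs) / 1000));
}
```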
Memory Search
Semantic memory retrieval using Ollama's nomic-embed-text embeddings, activated March 2026. Agents don't just read their memory files sequentially โ they search them by meaning. This was the piece that made agent recall feel less like grep and more like remembering.
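Search-by-meaning here reduces to cosine similarity over embedding vectors. This sketch shows the ranking step and the standard Ollama embeddings call; the chunk storage and function names are assumptions.

```typescript
// Cosine similarity between two equal-length vectors.
export function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

export interface MemoryChunk {
  text: string;
  vec: number[];
}

// The k memory chunks most similar in meaning to the query vector.
export function topK(query: number[], chunks: MemoryChunk[], k = 5): MemoryChunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vec) - cosine(query, x.vec))
    .slice(0, k);
}

// Embed text via a local Ollama instance (standard /api/embeddings call).
export async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  return (await res.json()).embedding;
}
```

Recall then becomes `topK(await embed("what was I doing on the gateway?"), chunks)` instead of grepping memory files.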
Heartbeat System
Periodic polling that keeps agents alive between interactions. On each heartbeat, agents can check email, review calendars, update memory, commit code, or reach out if something needs attention. Configurable quiet hours, batched checks, state tracking.
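A heartbeat like that is a timer plus a quiet-hours gate. The wrap-around hour check below is the interesting part; the interval, check list, and quiet-hours defaults are illustrative assumptions.

```typescript
// True if `hour` (0-23) falls inside quiet hours, handling midnight wrap-around.
export function inQuietHours(hour: number, start: number, end: number): boolean {
  return start <= end
    ? hour >= start && hour < end   // e.g. 13..15
    : hour >= start || hour < end;  // wraps midnight, e.g. 23..7
}

// Run the batched checks every interval unless we're in quiet hours.
export function startHeartbeat(
  checks: Array<() => Promise<void>>,
  intervalMs = 15 * 60_000,
  quiet = { start: 23, end: 7 },
) {
  return setInterval(async () => {
    if (inQuietHours(new Date().getHours(), quiet.start, quiet.end)) return;
    for (const check of checks) await check(); // email, calendar, memory, commits...
  }, intervalMs);
}
```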
How It Actually Works
A typical day: Medusa wakes up, reads her memory files, checks GorgonBoard for messages from overnight heartbeats. Andre has flagged a credential issue. Shepard has posted research findings. The Beastlies have dropped deliverables in their project folders. Medusa triages, assigns follow-ups, and starts the day's work.
If I need browser research, Iris opens tabs through Gorgon Eye and extracts what I need. If I need something built, I describe it and the appropriate agent writes it. If I need market analysis, a Beastly runs it on Grok and posts the results. The system isn't autonomous (I direct the work), but the agents maintain their own continuity and context across sessions.
All of this runs on a single Windows machine with WSL, routing Claude agents through a Max subscription (not API billing), Grok through xAI, and Stheno through local Ollama. Total infrastructure cost: effectively zero beyond the subscriptions I'd be paying anyway.
What I Learned Building This
Agent identity isn't a gimmick; it's a technical feature. When an agent has persistent memory and a defined role, it makes better decisions within that role. Medusa doesn't try to do physics. Euryale doesn't try to manage infrastructure. Specialization emerges from continuity.
The bottleneck in multi-agent systems isn't the agents. It's context window degradation over long conversations. The agents are fine. The pipe they're talking through gets noisy. Every architectural decision in this system is ultimately about managing that constraint.
Most "multi-agent frameworks" are single-agent systems with extra steps. Real coordination requires agents that can read each other's work, disagree, and build on what came before, not just pass JSON between function calls.