How Dispatch Works

TL;DR: Dispatch has five layers. Agents write files to .tasks/. The Express server watches those files and builds state. The session tracker reads logs from all four supported providers (Claude Code, Codex, Copilot, Gemini). An MCP server exposes tools that agents call natively. An optional executor layer can spawn agents from the dashboard. The React dashboard shows all of this in real time over SSE.

Dispatch is a monitoring dashboard with an optional executor layer. You can run agents in your own terminals and Dispatch watches what they write, or you can spawn agents directly from the dashboard using the Run button. Either way, Dispatch tracks all four providers (Claude Code, Codex, Copilot, Gemini) through a single interface.

It has five layers: the .tasks/ blackboard convention, the Express server with MCP tools and HTTP hooks, the multi-provider session tracker, the agent executor, and the React dashboard with provider filtering.

Agents coordinate through a shared filesystem directory. This project calls the pattern the “blackboard”. There is no message broker, no API calls between agents, and no external coordination service. Just files. This is not an industry-standard protocol; it is a convention specific to Dispatch.

<project-root>/
  .tasks/
    .archive/            # Completed epics moved here
    <epic-name>/
      plan.md            # YAML frontmatter + prose body
      execution-log.md   # Append-only log

Each epic directory contains a plan.md with YAML frontmatter (machine state) and a prose body (human context). The frontmatter lists phases with id, title, persona, and status fields.
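A minimal plan.md might look like this (the epic field and the specific values are illustrative, not a schema guarantee; the source only specifies the id, title, persona, and status fields):

```markdown
---
epic: payments-refactor
phases:
  - id: 1
    title: Design the API
    persona: cto
    status: DONE
  - id: 2
    title: Implement the API
    persona: staff-engineer
    status: IN_PROGRESS
---

Prose body: goals, constraints, and links for the humans (and agents) reading along.
```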

Phase transitions:

  • An agent claims a phase by changing status: TODO to status: IN_PROGRESS
  • The claim acts as a soft lock: if two agents race, one gets an edit mismatch and backs off
  • An agent completes a phase by setting status: DONE, then appending to execution-log.md
  • Planners (CEO, CTO) can reset stuck phases from BLOCKED back to TODO

See Epics and Tasks for the full status reference and normalization rules.
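The claim step above can be sketched as a compare-and-swap on the plan text. This is a hypothetical helper for illustration; real agents perform the same move through their file-edit tools, and the edit-mismatch error plays the role of the failed swap:

```typescript
// Compare-and-swap claim on a plan.md body. Returns the updated text, or
// null when the phase's "status: TODO" marker is absent -- meaning another
// agent already claimed it and this one should back off.
function claimPhase(plan: string, phaseId: number): string | null {
  // Match only within this phase's block: the tempered scan (?!- id:)
  // stops the search from leaking into the next phase entry.
  const marker = new RegExp(
    `(- id: ${phaseId}\\b(?:(?!- id:)[\\s\\S])*?status: )TODO`,
  );
  if (!marker.test(plan)) return null; // edit mismatch: lost the race
  return plan.replace(marker, "$1IN_PROGRESS");
}
```

Because the second claimant sees IN_PROGRESS instead of TODO, its edit fails cleanly rather than silently double-assigning the phase.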

Dispatch has two spawn paths into the Run Registry. Both produce rows in the same runs table; the only difference is provenance and what Dispatch knows about the process. See ADR-022 (hybrid spawn model) for the full design and peer evidence: five of six adjacent systems spawn rather than observe, making Dispatch the only observer-of-CLIs among them.

Observer mode — the user launches claude --agent <persona> (or the equivalent for Codex / Gemini / Copilot) from their own terminal. The vendor CLI writes a JSONL session file. Dispatch’s chokidar watcher reads line 0 to extract the agent-setting record (persona) and registers the run with spawnedBy: 'observer'. No PID is captured.

Orchestrator mode (the new default for the dashboard’s “+ New Run” affordance) — POST /api/runs accepts {persona, prompt, cwd, vendor?}, validates the persona, and spawns the vendor CLI as a subprocess with detached: true so the process group can be captured. The Run Registry row is stamped with personaId, pid, and spawnedBy: 'orchestrator' at spawn time.

```sh
# Spawn from the dashboard (orchestrator mode).
curl -X POST http://localhost:4242/api/runs \
  -H "Content-Type: application/json" \
  -d '{"persona":"ceo","prompt":"start the architecture review","cwd":"/path/to/project"}'

# Cancel a spawned run.
curl -X POST http://localhost:4242/api/runs/<runId>/cancel

# Resume a finished run via the vendor CLI's --resume flag.
curl -X POST http://localhost:4242/api/runs/<runId>/resume -d '{"cwd":"/path/to/project"}'
```
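The orchestrator-mode spawn boundary can be sketched in Node as follows. The helper names are hypothetical; what the source specifies is detached: true for process-group capture and the stripping of ANTHROPIC_API_KEY per ADR-024:

```typescript
import { spawn } from "node:child_process";

// Per ADR-024, the spawned CLI must rely on its own login state, so the
// inherited API key is stripped at the spawn boundary.
function spawnEnv(base: NodeJS.ProcessEnv): NodeJS.ProcessEnv {
  const env = { ...base };
  delete env.ANTHROPIC_API_KEY;
  return env;
}

// detached: true gives the child its own process group, so a later cancel
// can signal the whole group (kill(-pid, signal)) rather than one process.
function spawnAgent(cmd: string, args: string[], cwd: string) {
  const child = spawn(cmd, args, {
    cwd,
    detached: true,
    env: spawnEnv(process.env),
  });
  child.unref(); // the server should not stay alive on the child's account
  return { pid: child.pid, spawnedBy: "orchestrator" as const };
}
```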

Liveness signals are mode-dependent:

Observer-mode runs — JSONL session mtime against a 5-minute staleness threshold (STALENESS_THRESHOLD_MS at server/session-store.ts:17):

  • Live: session file updated within the last 30 seconds
  • Warning: 30 seconds to 5 minutes since last update
  • Idle: more than 5 minutes since last update, or no session file found
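The three bands reduce to a small classifier over file age. This is a sketch of the logic described above, with the threshold names borrowed from the text (only STALENESS_THRESHOLD_MS is confirmed to exist in server/session-store.ts; the rest are assumptions):

```typescript
// Observer-mode liveness from session-file mtime.
// live < 30s, warning 30s-5min, idle beyond that (or no file at all).
const LIVE_MS = 30_000;
const STALENESS_THRESHOLD_MS = 5 * 60_000;

type Liveness = "live" | "warning" | "idle";

function classify(mtimeMs: number | null, nowMs: number): Liveness {
  if (mtimeMs === null) return "idle"; // no session file found
  const age = nowMs - mtimeMs;
  if (age < LIVE_MS) return "live";
  if (age < STALENESS_THRESHOLD_MS) return "warning";
  return "idle";
}
```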

Caveat: during Claude’s HTTP turn (while the model is thinking, before the next token streams), the session file’s mtime stalls, sometimes for 30–90 seconds on a complex request. Observer-mode runs may flip to warning even though the process is healthy. Use orchestrator mode if think-time accuracy matters.

Orchestrator-mode runs — the full liveness triple, mirroring paperclip’s pattern:

  • Run Registry state: non-terminal = live.
  • PID probe (load-bearing): every 2 seconds, process.kill(pid, 0) against tracked PIDs. ESRCH means the process is gone; the run is marked dead.
  • lastOutputAtMs watermark: updated on every JSONL write or stdout chunk. Surfaces in the dashboard as “Thinking… (last output 47s ago)” instead of “stalled.”

Together the triple distinguishes “process alive, model thinking” (PID alive, mtime stale, watermark advancing slowly) from “process dead” (PID gone). The observer-mode mtime signal cannot do that.
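A sketch of the triple as a decision function, plus the PID probe itself. The state names and thresholds here are illustrative; the process.kill(pid, 0) existence check and the ESRCH semantics are standard Node/POSIX behavior:

```typescript
type RunView =
  | { kind: "dead" }
  | { kind: "thinking"; sinceMs: number }
  | { kind: "streaming" };

// Combine the PID probe and the lastOutputAtMs watermark into what the
// dashboard shows: "Thinking... (last output Ns ago)" rather than "stalled".
function runView(pidAlive: boolean, lastOutputAtMs: number, nowMs: number): RunView {
  if (!pidAlive) return { kind: "dead" };
  const sinceMs = nowMs - lastOutputAtMs;
  return sinceMs > 5_000 ? { kind: "thinking", sinceMs } : { kind: "streaming" };
}

// Signal 0 performs an existence check without sending anything.
// ESRCH (thrown) means the process is gone. EPERM would mean alive but
// not ours; this sketch conservatively treats any failure as gone.
function pidAliveProbe(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}
```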

Auth for both paths is delegated to the vendor CLI’s existing login state. Dispatch never reads ANTHROPIC_API_KEY itself; the spawn boundary explicitly strips it (per ADR-024). The auth-status indicator in the dashboard header surfaces whether the spawned vendor is logged in.

Each epic has an append-only execution-log.md. When an agent completes a phase, it appends:

```markdown
## [2026-04-03T14:32:00Z] Phase 2: Implement the API — @staff-engineer
Summary of what was done, findings, and handoff notes for the next phase.
```

The separator before the persona name must be an em dash (—, U+2014), not a regular hyphen (-) or an en dash (–). The parser splits the phase title from the persona on this character. If you use a hyphen, the entry will not be parsed correctly.
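A sketch of that split, assuming a hypothetical parser shape (the real one lives in the server; only the em-dash convention is documented):

```typescript
// Parse an execution-log heading. Only the em dash (\u2014) separates the
// phase title from the persona, so hyphens inside titles stay safe.
function parseLogHeading(
  line: string,
): { timestamp: string; phase: string; persona: string } | null {
  const m = line.match(/^## \[([^\]]+)\] (.*) \u2014 @(.+)$/);
  if (!m) return null;
  return { timestamp: m[1], phase: m[2], persona: m[3] };
}
```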

Dispatch reads these entries to correlate sessions to phases and to emit phase_completed activity events. See the Activity Feed for how events are streamed to the dashboard.

When Claude Code runs agents in isolated git worktrees, each worktree has its own copy of the working directory. The agent collaboration protocol resolves this by always finding the main worktree root before reading or writing .tasks/ files. All linked worktrees share the same underlying git directory, so agents in separate worktrees all coordinate through the same .tasks/ directory automatically.

The server reads ~/.dispatch/config.json for registered projects and scans ~/.claude/agents/*.md for agent definitions.

The main loop:

  1. On startup, the server scans all .tasks/ directories, parses plan files, and assembles a state snapshot
  2. A file watcher monitors those directories with a polling fallback
  3. On any file change, the server rebuilds state and pushes an update event to all connected browsers

After parsing, the server cross-references each IN_PROGRESS phase against agent liveness. If the assigned agent is idle or warning, the server stamps effectiveStatus: 'BLOCKED' on that phase before broadcasting the state. All UI components display effectiveStatus when present, falling back to the raw status for payloads from older server versions. Stall detection is entirely server-side with no client-side polling or prop-threading involved.
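The stamping step can be sketched as a pure pass over the parsed phases (the Phase shape and map-based liveness lookup are assumptions for illustration):

```typescript
type AgentLiveness = "live" | "warning" | "idle";

interface Phase {
  id: number;
  status: string;
  persona: string;
  effectiveStatus?: string;
}

// An IN_PROGRESS phase whose assigned agent is not live gets
// effectiveStatus: 'BLOCKED' before the state is broadcast. Phases left
// unstamped fall back to their raw status in the UI.
function stampEffectiveStatus(
  phases: Phase[],
  liveness: Map<string, AgentLiveness>,
): Phase[] {
  return phases.map((p) => {
    const l = liveness.get(p.persona) ?? "idle";
    if (p.status === "IN_PROGRESS" && l !== "live") {
      return { ...p, effectiveStatus: "BLOCKED" };
    }
    return p;
  });
}
```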

The server also runs an agent profile aggregation module (agent-profile-store.ts) that builds cross-session analytics as session data arrives. Aggregated profiles are persisted to ~/.dispatch/agent-profiles.json every 60 seconds and loaded on startup. The profiles and relationship data are available via GET /api/agents/profiles, GET /api/agents/profiles/:agentType, and GET /api/agents/relationships.

Dispatch exposes a built-in MCP server at POST /mcp (JSON-RPC 2.0 over HTTP). Any agent with access to the URL can call six tools to query state, update epic phases, emit activity events, and read session transcripts. See MCP Integrations Overview for the full tool list and per-provider setup.

Authentication is optional: when DISPATCH_MCP_TOKEN is set, requests must include Authorization: Bearer <token>. Without the env var, all requests pass through (localhost-only assumption).
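The check reduces to a few lines. A minimal sketch of the documented behavior (the function name is hypothetical):

```typescript
// Optional bearer auth for POST /mcp: enforced only when DISPATCH_MCP_TOKEN
// is set; with no token configured, every request passes (localhost-only
// assumption).
function mcpAuthorized(
  authHeader: string | undefined,
  token: string | undefined,
): boolean {
  if (!token) return true;
  return authHeader === `Bearer ${token}`;
}
```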

Claude Code (and Codex, when configured) can push real-time events via HTTP hooks. The server exposes seven hook routes under /hooks/*:

  • POST /hooks/pre-tool-use and /hooks/post-tool-use: tool call telemetry
  • POST /hooks/subagent-start and /hooks/subagent-stop: OrgGraph updates
  • POST /hooks/task-completed: auto-matches epic phases by title
  • POST /hooks/stop: session status updates
  • POST /hooks/session-start: immediate session registration

Without hooks, Dispatch falls back to file watching (Claude) or polling (other providers).

When you register a project, Dispatch writes provider configuration files into the project root so agents connect to the MCP server automatically:

  • Claude Code: .mcp.json with the dispatch server entry
  • Codex: .codex/config.toml [mcp_servers.dispatch] section (only if .codex/ exists)

Auto-injection is idempotent and can be disabled in Settings > Providers. Copilot and Gemini require manual setup. See the per-provider integration guides.

Dispatch tracks sessions from all four supported providers, each with its own log format and sync mechanism.

Claude Code writes a structured JSONL log for every conversation session at ~/.claude/projects/<project-hash>/sessions/<session-id>.jsonl. Dispatch watches these files with fs.watch for near-instant updates. Each line is a JSON object representing one event: a human turn, an assistant turn, a tool call, or a usage summary.
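Reading such a log can be sketched as below. The event shapes are Claude Code's own and not reproduced here; the sketch only shows the line-by-line parse and the need to tolerate a truncated final line, since fs.watch can fire mid-write:

```typescript
// Parse a JSONL session log: one JSON object per line. Blank lines and a
// partial trailing line (a write may be in flight) are skipped, not fatal.
function parseSessionLog(jsonl: string): Array<Record<string, unknown>> {
  const events: Array<Record<string, unknown>> = [];
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue;
    try {
      events.push(JSON.parse(line));
    } catch {
      // truncated line mid-write: ignore until the next watch event
    }
  }
  return events;
}
```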

Non-Claude providers do not use a standardized JSONL format. Dispatch runs a multi-session poller (server/multi-session-poller.ts) that scans for sessions from these providers every 30 seconds (configurable via DISPATCH_PROVIDER_POLL_MS env var):

  • Codex: scans .codex/ skill logs
  • Copilot: scans workspace.yaml and events.jsonl
  • Gemini: scans Gemini chat JSON files

Each scanner runs in isolation. A failure in one provider’s scan does not block the others. Results are merged into the session store via mergeExternalSessions(), which deduplicates by session ID and never overwrites Claude sessions (owned by the file watcher).
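The merge rule can be sketched as follows (the Session shape is a simplification; only the dedupe-by-ID and never-overwrite-Claude behaviors come from the text):

```typescript
interface Session {
  id: string;
  provider: "claude" | "codex" | "copilot" | "gemini";
}

// Merge poller results into the session store: dedupe by session ID and
// never overwrite Claude sessions, which are owned by the file watcher.
function mergeExternalSessions(
  store: Map<string, Session>,
  incoming: Session[],
): void {
  for (const s of incoming) {
    const existing = store.get(s.id);
    if (existing?.provider === "claude") continue; // watcher owns these
    store.set(s.id, s);
  }
}
```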

When HTTP hooks are configured (currently Claude Code and Codex), updates arrive in real time rather than waiting for the next poll cycle.

The Sessions tab shows every conversation session Dispatch has found across all providers and projects:

  • Session ID, timestamp, and provider badge (Claude, Codex, Copilot, Gemini)
  • Model name (e.g., claude-opus-4-5, claude-sonnet-4-5)
  • Token breakdown: input / output / cache read / cache write
  • Estimated cost in USD
  • Number of tool calls

When a parent agent spawns a subagent, the subagent produces its own session. Dispatch correlates these by project path. Agents can appear as “active” in the Sessions view even when no heartbeat file exists for them.

Layer 4: Agent executor (orchestrator-spawn path)

The executor layer is the orchestrator-spawn path documented above. It exposes two surfaces:

  • Phase-scoped Run button in the EpicDrawer — click on a TODO or BLOCKED phase. Dispatch assembles the phase context into a prompt and launches the assigned agent’s CLI.
  • Free-form spawn via “+ New Run” in the dashboard or POST /api/runs over HTTP — the operator picks the persona, prompt, and cwd directly. See the spawn-modes section above for the curl example.

Both feed the same orchestrator-spawn machinery. The spawn boundary captures PID + pgid for the liveness triple (PID probe + lastOutputAtMs watermark + Run Registry state), strips ANTHROPIC_API_KEY and Claude-Code nesting-guard env vars, and stamps the Run Registry row with spawnedBy: 'orchestrator'.

Cancel and resume verbs:

```sh
curl -X POST http://localhost:4242/api/runs/<runId>/cancel
curl -X POST http://localhost:4242/api/runs/<runId>/resume -d '{"cwd":"/path/to/project"}'
```

Cancel marks the run cancelled in the Run Registry before signalling, then escalates SIGTERM → 5s grace → SIGKILL on the process group. Resume re-spawns with the vendor’s --resume <session-id> flag, refusing if the cwd has changed: in that case Claude Code’s bucket-keyed session ID silently starts over, so Dispatch refuses rather than handing you a brand-new session that looks like a successful resume.

Currently supported for spawning: Claude (claude), Codex (codex), and Gemini (gemini). Copilot has no CLI spawn interface and is permanently MCP-only. See the Agent Execution guide for the full architecture, ADR-022 for the spawn-model decision, and the Security page for the four security gates that protect the spawn path.

Layer 5: The dashboard (React + TypeScript)

The dashboard manages a persistent SSE connection with automatic reconnection.

The default three-column layout:

  • Left: Agent sidebar (Live, Idle, Defined, Archived agents)
  • Center: Epic or Tasks view, Sessions tab
  • Right: Activity feed with sort, filter, and grouping controls

A horizontal pill bar in the header lets you filter the entire dashboard by provider: All, Claude, Copilot, Gemini, Codex. Multiple providers can be active simultaneously. The filter state is stored in the URL as ?providers=claude,codex, making it shareable and bookmarkable.
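Round-tripping the filter through the URL can be sketched as (helper names are hypothetical; the ?providers=claude,codex format is the documented one):

```typescript
const ALL = ["claude", "codex", "copilot", "gemini"] as const;
type Provider = (typeof ALL)[number];

// Missing param means "All"; unknown names are dropped rather than kept.
function parseProviders(search: string): Provider[] {
  const raw = new URLSearchParams(search).get("providers");
  if (!raw) return [...ALL];
  return raw
    .split(",")
    .filter((p): p is Provider => (ALL as readonly string[]).includes(p));
}

// Omit the param entirely when every provider is selected.
function serializeProviders(selected: Provider[]): string {
  return selected.length === ALL.length ? "" : `?providers=${selected.join(",")}`;
}
```

Keeping the state in the URL rather than component state is what makes the filter shareable and bookmarkable.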

When a provider filter is active, epics, sessions, and agents are filtered to show only matching items. The local tool filter in the Sessions list is suppressed to avoid conflicting UI.

The layout adapts to viewport width. Tablet gets two columns with a collapsible sidebar, mobile gets a bottom tab bar.

Epic and task updates (.tasks/ path):

1. Agent writes to .tasks/<epic>/plan.md
2. File watcher detects the change (or the polling fallback catches it)
3. Server reparses all plan files
4. Server pushes a state update to all connected browsers via SSE
5. Dashboard rerenders with the new data

Claude session updates (JSONL path, real-time):

1. Claude Code appends a usage event to ~/.claude/projects/<hash>/sessions/<id>.jsonl
2. File watcher detects the new entry
3. Server reparses the JSONL file and updates the session cost/token summary
4. Server pushes a state update via SSE
5. Sessions tab rerenders with updated cost data

Non-Claude session updates (poller path, 30-second cycle):

1. Codex/Copilot/Gemini writes session data in its native format
2. Multi-session poller runs the appropriate scanner on the next tick
3. New/changed sessions are merged into SessionStore (deduped by ID)
4. Server pushes a state update via SSE
5. Dashboard rerenders with the new data

MCP tool call (agent-initiated):

1. Agent calls dispatch_update_phase_status via POST /mcp
2. Server validates JSON-RPC, executes the tool handler
3. Handler writes to plan.md on disk (triggers the file watcher path above)
4. JSON-RPC response returned to agent

Agent spawn (dashboard-initiated):

1. User clicks Run on a TODO phase in the epic drawer
2. Dashboard sends POST /api/spawn-agent (SSE or JSON)
3. Server assembles sandboxed prompt, spawns CLI process via adapter
4. Agent creates session files, picked up by the file watcher / poller
5. On completion, the spawned agent marks the phase DONE in plan.md

Updates from Claude agent writes appear in the dashboard within seconds. Non-Claude providers have up to 30 seconds of latency unless HTTP hooks are configured.

Projects are registered in ~/.dispatch/config.json. Each project’s .tasks/ directory is watched independently. Epics are tagged with their project name. The project tab bar in the dashboard lets you filter by project.
