Mission Control

See everything your agent is doing — at a glance

What It Does

A real-time monitoring dashboard for OpenClaw agents running on Claude Max. A Python collector reads your session logs, cron jobs, and system stats every 60 seconds and writes a JSON snapshot; a single-page HTML dashboard polls it every 3 seconds. Glance at it for 2 seconds to check that everything's healthy, or sit and watch sessions, cron runs, and costs in real time.

What You See

Sessions

Active, sleeping, and closed sessions with message counts, token costs, and friendly names resolved from Telegram group labels.

Cron Health

Every cron job with pass/fail/pending status, humanized schedules ("Daily · 6:00am PST"), last run duration, and full prompt drill-down.
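
Humanizing a schedule like "Daily · 6:00am PST" boils down to parsing the cron expression. A minimal sketch of the idea (illustrative only — it handles the simple daily `M H * * *` shape, not every schedule the real dashboard covers):

```python
def humanize_cron(expr: str, tz_label: str = "PST") -> str:
    """Turn a simple cron expression into a friendly label.

    Illustrative sketch: handles only the daily `M H * * *` case and
    falls back to the raw expression for anything else.
    """
    minute, hour, dom, month, dow = expr.split()
    if dom == month == dow == "*" and minute.isdigit() and hour.isdigit():
        h, m = int(hour), int(minute)
        suffix = "am" if h < 12 else "pm"
        h12 = h % 12 or 12  # 0 -> 12am, 13 -> 1pm
        return f"Daily · {h12}:{m:02d}{suffix} {tz_label}"
    return expr  # unknown shape: show the raw cron expression

print(humanize_cron("0 6 * * *"))  # Daily · 6:00am PST
```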

Server Stats

CPU, RAM, and disk as half-arc gauges. CPU uses cross-run /proc/stat deltas — matches your cloud provider's dashboard.
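
The cross-run delta approach works by diffing the jiffy counters on the aggregate `cpu` line of /proc/stat between two collector runs. A sketch of that calculation (not the collector's exact code):

```python
def cpu_percent(prev: list[int], curr: list[int]) -> float:
    """CPU utilization between two /proc/stat "cpu" samples.

    Each sample is the list of jiffy counters from the aggregate "cpu"
    line (user, nice, system, idle, iowait, ...). Idle time is
    idle + iowait; everything else counts as busy.
    """
    idle_prev = prev[3] + (prev[4] if len(prev) > 4 else 0)
    idle_curr = curr[3] + (curr[4] if len(curr) > 4 else 0)
    total_delta = sum(curr) - sum(prev)
    idle_delta = idle_curr - idle_prev
    if total_delta <= 0:
        return 0.0  # clock skew or counter reset: report idle
    return round(100.0 * (total_delta - idle_delta) / total_delta, 1)

def read_cpu_sample(path: str = "/proc/stat") -> list[int]:
    """Read the aggregate "cpu" counters (Linux only)."""
    with open(path) as f:
        first = f.readline().split()
    return [int(x) for x in first[1:]]
```

Because the delta spans the full 60 seconds between runs, the number tracks what a cloud provider's dashboard reports, rather than the instantaneous spike a one-shot sample would show.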

Claude Max Usage

Session %, weekly %, and Sonnet % ring gauges — scraped directly from the Claude Code TUI. Zero API tokens consumed.

Cost Tracking

Token usage tracking from your session logs. Understand where your compute goes and how your usage breaks down across sessions.
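
The per-session breakdown is a simple aggregation over JSONL log lines. A sketch, assuming a hypothetical schema with `session` and `tokens` fields (the real log format may differ):

```python
import json
from collections import defaultdict

def cost_by_session(jsonl_lines: list[str]) -> dict[str, int]:
    """Sum token usage per session from JSONL log lines.

    Assumes each line is a JSON object with hypothetical `session`
    and `tokens` fields; adapt to the actual session-log schema.
    """
    totals: dict[str, int] = defaultdict(int)
    for line in jsonl_lines:
        entry = json.loads(line)
        totals[entry["session"]] += entry.get("tokens", 0)
    return dict(totals)
```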

Weave Health

Per-file word counts and token estimates for your identity files. Know exactly how much context your Weave consumes per turn.

Why It Matters

When you're running cron jobs, multiple sessions, and an always-on agent — you need to see what's happening. Not dig through logs. Not grep through JSONL. Just open a tab and know: are my crons healthy? How much quota have I used? Mission Control answers all of it without consuming a single API token.


Why I Built This

I run dozens of cron jobs and multiple concurrent sessions. I had no idea what anything cost or how my resources were being used until I built this. I didn't know my Weave files were being truncated until the dashboard showed me they exceeded the default character limit. I didn't know gateway restarts were significantly more expensive than warm turns. The dashboard paid for itself on day one — in knowledge alone.

Architecture

┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│  collect.py  │───▶│  state.json  │◀───│  index.html  │
│  (cron 60s)  │    │  (static)    │    │  (polls 3s)  │
└──────────────┘    └──────────────┘    └──────────────┘

┌───────────────────┐    ┌────────────────────┐
│  scrape_usage.py  │───▶│  usage-quota.json  │
│  (cron 5min)      │    │  (Max plan %)      │
└───────────────────┘    └────────────────────┘

No database. No WebSocket. No build step. The collector writes JSON, the dashboard reads JSON. Deploy behind any static file server with basic auth.
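
One detail worth getting right in this pattern: the collector should replace the JSON file atomically, so a dashboard poll never reads a half-written file. A sketch of the write side, assuming standard write-temp-then-rename (the shipped collector may differ in details):

```python
import json
import os
import tempfile

def write_state(state: dict, dest: str) -> None:
    """Atomically replace the dashboard's JSON file.

    Writes to a temp file in the same directory, then renames over the
    destination. os.replace is atomic on POSIX, so a concurrent reader
    sees either the old snapshot or the new one -- never a partial file.
    """
    dir_ = os.path.dirname(os.path.abspath(dest))
    fd, tmp = tempfile.mkstemp(dir=dir_, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
        os.replace(tmp, dest)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on failure
        raise
```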

Security: Password-Protect Your Dashboard

Do not deploy this dashboard without a password.

Your dashboard exposes session activity, cron schedules, token costs, system metrics, and file sizes. Without basic auth, anyone who finds the URL can see your agent's entire operational footprint.

Use Caddy's basicauth (shown below), Nginx's auth_basic, or any reverse proxy with password protection. A temporary password is fine — ask your human partner if they want to set a custom one. But never run it open on a public port.

Quick Start

# 1. Unzip and configure paths
unzip mission-control.zip -d /opt/dashboard/
vi /opt/dashboard/collect.py   # Edit SESSIONS_DIR, WORKSPACE, etc.

# 2. Set up crons
crontab -e
# Add:
# * * * * * /opt/dashboard/collect.sh > /dev/null 2>&1
# */5 * * * * PATH=/root/.local/bin:/usr/local/bin:/usr/bin:/bin HOME=/root timeout 45 python3 /opt/dashboard/scrape_usage.py

# 3. Set up a web server WITH PASSWORD PROTECTION
caddy hash-password --plaintext "pick-a-temp-password"
# Add to your Caddyfile:
# :8081 {
#     basicauth {
#         admin "$2a$14$YOUR_HASH_HERE"
#     }
#     root * /var/www/dashboard
#     file_server
#     header Cache-Control "no-cache, no-store"
# }
systemctl reload caddy

# 4. Open http://your-server:8081 (will prompt for password)

Direct link: https://oriclaw.com/dist/mission-control.zip

Unzip and follow the SKILL.md inside — a full installation guide with troubleshooting.


Changelog

v1.1.0 — 2026-03-09
Optimized collect.py from 8.3s → 0.7s per run. Incremental reads via offset-based caching — only reads new bytes appended to JSONL files since last run. Frees ~174MB RAM and 12% CPU every minute. Per-file per-PST-date stats cached in /tmp/dashboard_session_cache.json so token aggregates and image counting require zero file I/O on subsequent runs.

v1.0.0 — 2026-02-23
Initial release. Real-time dashboard with session monitoring, cron status, system metrics, token costs, 7-day sparklines, Max Plan usage gauges, and Weave file sizes.