# agent-usage-desktop — AI Coding Agent Usage Query
Query your AI coding agent usage data directly in conversation. Supports Claude Code, Codex CLI, OpenClaw, and OpenCode.
## When to Use
Activate when the user asks about:
- Cost / spending / billing / how much did I spend
- Token usage / consumption / input / output tokens
- Model comparison / which model costs most
- Session history / recent sessions / session details
- API call counts
- Usage trends over time
- Any question involving "usage", "cost", "tokens", "spend", "sessions" related to AI coding tools
## How It Works

This skill has two backends. Always detect which one to use first.

### Step 1: Detect Backend
Run the detection script to check whether the agent-usage-desktop server is running:

```bash
bash SKILL_DIR/scripts/detect.sh
```

- If the script reports the server is running → use API mode (Step 2a)
- Otherwise → use Local mode (Step 2b)

`SKILL_DIR` is the directory containing this SKILL.md file.
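The detection step can be sketched in Python as a quick health check against the local server. The base URL, port, and `/health` path below are illustrative assumptions, not the actual values `detect.sh` uses:

```python
import urllib.request
import urllib.error

def detect_backend(base_url: str = "http://127.0.0.1:8080") -> str:
    """Return 'api' if the agent-usage-desktop server responds, else 'local'.

    The port and health-check path are placeholder assumptions; the real
    detect.sh may probe the server differently.
    """
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=1) as resp:
            if resp.status == 200:
                return "api"
    except (urllib.error.URLError, OSError):
        pass  # connection refused or timed out: fall back to local parsing
    return "local"
```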
### Step 2a: API Mode (preferred)

Use `query-api.sh` to call the agent-usage-desktop REST API. This is faster and has accurate pricing data.

```bash
bash SKILL_DIR/scripts/query-api.sh <command> [options]
```
Commands:

| Command | Description | Key Options |
|---|---|---|
| `stats` | Summary: total cost, tokens, sessions, prompts, API calls | `--from`, `--to`, `--source` |
| `cost-by-model` | Cost breakdown per model | `--from`, `--to`, `--source` |
|  | Cost trend over time | `--from`, `--to`, `--source`, `--granularity` |
| `tokens-over-time` | Token usage trend | `--from`, `--to`, `--source`, `--granularity` |
| `sessions` | List all sessions with cost/tokens | `--from`, `--to`, `--source` |
|  | Per-model breakdown for one session | session ID |
Options:

- `--from YYYY-MM-DD` — Start date (default: today)
- `--to YYYY-MM-DD` — End date (default: today)
- `--source claude|codex|openclaw|opencode` — Filter by source (default: all)
- `--granularity 1m|30m|1h|6h|12h|1d|1w|1M` — Time bucket (default: 1d)
- Session ID — required by the per-session detail query
### Step 2b: Local Mode (fallback)

Use `usage.py` to parse JSONL session files directly. No server needed, but pricing is approximate (built-in price table for common models).

```bash
python3 SKILL_DIR/scripts/usage.py <command> [options]
```
Commands:

| Command | Description |
|---|---|
| `stats` | Summary totals |
| `cost-by-model` | Cost per model |
| `sessions` | Session list |
| `top-models` | Top N models by cost |

Takes the same `--from`, `--to`, `--source` options as API mode, plus a count option for `top-models`.
### Step 3: Interpret and Respond
After getting JSON output from either backend:
- Parse the JSON response
- Format numbers readably: costs as dollar amounts (e.g. `$12.34`), token counts in compact units (e.g. `1.2M` or `850K`)
- Answer the user's specific question — don't dump raw data
- Use markdown tables for multi-row data (sessions, model breakdown)
- Add brief insights when relevant (e.g., "claude-opus-4-6 accounts for 85% of your spending")
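The formatting rules above can be sketched as small helpers; the exact thresholds and rounding are illustrative choices, not mandated by the skill:

```python
def format_cost(usd: float) -> str:
    """Render a cost as a dollar string with two decimals and thousands separators."""
    return f"${usd:,.2f}"

def format_tokens(n: int) -> str:
    """Render a token count in compact K/M units for readability."""
    if n >= 1_000_000:
        return f"{n / 1_000_000:.1f}M"
    if n >= 1_000:
        return f"{n / 1_000:.1f}K"
    return str(n)
```

These keep answers scannable, e.g. "$1,234.50 across 1.2M tokens" rather than raw floats and long integers.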
## Time Range Mapping

Map natural language to date parameters:

| User says | --from | --to |
|---|---|---|
| today | today's date | today's date |
| yesterday | yesterday | yesterday |
| this week | Monday of this week | today |
| this month | 1st of this month | today |
| this year | Jan 1 of this year | today |
| last 7 days | 7 days ago | today |
| last 30 days | 30 days ago | today |
| last N days | N days ago | today |
Calculate actual YYYY-MM-DD dates before passing to scripts.
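The date arithmetic behind the table can be sketched as follows (the function name, phrase set, and return shape are illustrative):

```python
from datetime import date, timedelta

def resolve_range(phrase: str, today: date) -> tuple:
    """Map a natural-language phrase to (from, to) dates in YYYY-MM-DD."""
    iso = date.isoformat
    if phrase == "today":
        return iso(today), iso(today)
    if phrase == "yesterday":
        y = today - timedelta(days=1)
        return iso(y), iso(y)
    if phrase == "this week":  # week starts on Monday
        return iso(today - timedelta(days=today.weekday())), iso(today)
    if phrase == "this month":
        return iso(today.replace(day=1)), iso(today)
    if phrase == "this year":
        return iso(today.replace(month=1, day=1)), iso(today)
    if phrase.startswith("last ") and phrase.endswith(" days"):
        n = int(phrase.split()[1])  # handles "last 7 days", "last N days"
        return iso(today - timedelta(days=n)), iso(today)
    raise ValueError(f"unrecognized phrase: {phrase}")
```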
## Source Mapping

| User says | --source value |
|---|---|
| claude / claude code | claude |
| codex / openai codex | codex |
| openclaw | openclaw |
| opencode | opencode |
| all / everything / total | (omit --source) |
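The alias table above can be expressed as a small case-insensitive lookup; the helper name and argument-list return shape are assumptions for illustration:

```python
# Aliases taken from the Source Mapping table; unknown phrases mean "all sources".
SOURCE_ALIASES = {
    "claude": "claude", "claude code": "claude",
    "codex": "codex", "openai codex": "codex",
    "openclaw": "openclaw",
    "opencode": "opencode",
}

def source_flag(phrase: str) -> list:
    """Return ["--source", value] for a known alias, or [] to omit the filter."""
    value = SOURCE_ALIASES.get(phrase.strip().lower())
    return ["--source", value] if value else []
```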
## Examples

User: "How much did I spend this month?"

```bash
bash scripts/query-api.sh stats --from 2026-04-01 --to 2026-04-07
```

User: "Which model costs the most?"

```bash
bash scripts/query-api.sh cost-by-model --from 2026-01-01 --to 2026-04-07
```

User: "Show me today's Claude Code sessions"

```bash
bash scripts/query-api.sh sessions --from 2026-04-07 --to 2026-04-07 --source claude
```

User: "Token usage trend this week by hour"

```bash
bash scripts/query-api.sh tokens-over-time --from 2026-04-01 --to 2026-04-07 --granularity 1h
```
## Notes
- Local mode pricing is approximate — only common models have built-in prices
- For accurate pricing, deploy the agent-usage-desktop server: https://github.com/hongshuo-wang/agent-usage-desktop
- Local mode scans each tool's default data directory (including `~/.local/share/opencode/opencode.db` for OpenCode) by default
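A minimal sketch of how local mode's approximate pricing might work, assuming a built-in per-million-token price table. The model names and prices below are placeholders, not the actual table in usage.py:

```python
from typing import Optional

# Placeholder per-million-token prices in USD; the real built-in table
# may differ in models, values, and structure.
PRICES = {
    "claude-opus-4-6": {"input": 15.0, "output": 75.0},
    "claude-sonnet-4-5": {"input": 3.0, "output": 15.0},
}

def approx_cost(model: str, input_tokens: int, output_tokens: int) -> Optional[float]:
    """Approximate USD cost for one model's usage; None if the model is unknown."""
    price = PRICES.get(model)
    if price is None:
        return None  # uncommon model: no built-in price, cost cannot be estimated
    return (input_tokens * price["input"] + output_tokens * price["output"]) / 1_000_000
```

This is why local mode totals can undercount: sessions on models missing from the table contribute no cost.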