ABSOLUTE MUST for debugging and inspecting LLM/AI agent traces using PostHog's MCP tools. Use when the user pastes a trace URL (e.g. `/llm-observability/traces/<id>`), asks to debug a trace, figure out what went wrong, check whether an agent used a tool correctly, verify context/files were surfaced, inspect subagent behavior, investigate LLM decisions, or analyze token usage and costs.
```
npx skill4agent add posthog/skills exploring-llm-traces
```

| Tool | Purpose |
|---|---|
| `query-llm-traces-list` | Search and list traces (compact — no large content) |
| `query-llm-trace` | Get a single trace by ID with full event tree |
| | Ad-hoc SQL for complex trace analysis |
```
$ai_trace (top-level container)
└── $ai_span (logical groupings, e.g. "RAG retrieval", "tool execution")
    ├── $ai_generation (individual LLM API call)
    └── $ai_embedding (embedding creation)
```

Events point at their parent via `$ai_parent_id` (the parent's `$ai_span_id`), and every event in a trace shares the same `$ai_trace_id`.

Fetch a single trace with its full event tree via `posthog:query-llm-trace`:

```json
{
  "traceId": "<trace_id>",
  "dateRange": {"date_from": "-7d"}
}
```

The response contains the trace's `$ai_span` and `$ai_generation` events. Use `$ai_span_name` and `$ai_parent_id` to navigate the hierarchy, the `_posthogUrl` to link back to the PostHog UI, and `$ai_input` / `$ai_output_choices` to read the actual LLM messages.

For large traces, persist the tool response to a file and inspect it with the bundled scripts:

```bash
# 1. Overview: metadata, tool calls, final output, errors
python3 scripts/print_summary.py /path/to/persisted-file.json

# 2. Timeline: chronological event list with truncated I/O
python3 scripts/print_timeline.py /path/to/persisted-file.json

# 3. Drill into a specific span's full input/output
SPAN="tool_name" python3 scripts/extract_span.py /path/to/persisted-file.json

# 4. Full conversation with thinking blocks and tool calls
python3 scripts/extract_conversation.py /path/to/persisted-file.json

# 5. Search for a keyword across all properties
SEARCH="keyword" python3 scripts/search_traces.py /path/to/persisted-file.json
```

Set `MAX_LEN=N` to change how aggressively the scripts truncate long values.

Know which properties live where: `$ai_span` events carry `$ai_span_name`, `$ai_input_state`, `$ai_output_state`, and `$ai_is_error`; `$ai_generation` events carry `$ai_input`. When a step misbehaves, inspect the relevant span's `$ai_output_state`, follow `$ai_parent_id` upward, and check `$ai_output_state` / `$ai_is_error` on the surrounding `$ai_generation` calls. To locate text anywhere in the trace, use `search_traces.py`: `SEARCH="the text" python3 scripts/search_traces.py FILE` (this matches inside `$ai_input` too).

Every event carries a `_posthogUrl` of the form `https://app.posthog.com/llm-observability/traces/<trace_id>?timestamp=<url_encoded_timestamp>&event=<optional_event_id>`. When building a `_posthogUrl` from `query-llm-traces-list` results, use the trace's `createdAt` as the `timestamp`, URL-encoded (e.g. `timestamp=2026-04-01T19%3A39%3A20Z`).

Before filtering with `posthog:query-llm-traces-list`, discover what you can filter on with `posthog:read-data-schema`:

- `posthog:read-data-schema` with `kind: "events"` lists the `$ai_*` events.
- `posthog:read-data-schema` with `kind: "event_properties"` and `event_name: "$ai_generation"` lists that event's properties.
- `posthog:read-data-schema` with `kind: "event_property_values"`, `event_name: "$ai_generation"`, and `property_name: "$ai_model"` lists the observed model values.

`query-llm-traces-list` accepts custom event properties (e.g. `project_id`, `conversation_id`, `user_tier`) alongside the built-in `$ai_*` properties, plus person properties such as `email`. For example, filter by model with `posthog:query-llm-traces-list`:
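The parent/child linkage described above (`$ai_parent_id` pointing at a span's `$ai_span_id`) can be reconstructed from a persisted trace file. A minimal sketch, assuming each event exposes an `event` name and a `properties` dict; the helper names are mine, not part of the skill:

```python
from collections import defaultdict

def build_tree(events):
    """Index events by $ai_parent_id so the trace can be walked top-down.

    Events with no $ai_parent_id are grouped under None (direct children
    of the trace itself).
    """
    children = defaultdict(list)
    for ev in events:
        children[ev["properties"].get("$ai_parent_id")].append(ev)
    return children

def print_tree(children, parent=None, depth=0):
    """Indented dump of the span/generation hierarchy."""
    for ev in children.get(parent, []):
        label = ev["properties"].get("$ai_span_name") or ev["event"]
        print("  " * depth + label)
        print_tree(children, ev["properties"].get("$ai_span_id"), depth + 1)
```

Grouping by parent first keeps the walk O(n) even for traces with thousands of events.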
```json
{
  "dateRange": {"date_from": "-1h"},
  "filterTestAccounts": true,
  "limit": 20,
  "properties": [
    {"type": "event", "key": "$ai_model", "value": "gpt-4o", "operator": "exact"}
  ]
}
```

To find recent errored traces from a given provider, call `posthog:query-llm-traces-list` with:
```json
{
  "dateRange": {"date_from": "-1h"},
  "filterTestAccounts": true,
  "properties": [
    {"type": "event", "key": "$ai_provider", "value": "anthropic", "operator": "exact"},
    {"type": "event", "key": "$ai_is_error", "value": ["true"], "operator": "exact"}
  ]
}
```

To filter by person, first discover person properties with `read-data-schema` (`kind: "entity_properties"`, `entity: "person"`), then call `posthog:query-llm-traces-list`:
```json
{
  "dateRange": {"date_from": "-1h"},
  "filterTestAccounts": true,
  "properties": [
    {"type": "person", "key": "email", "value": "@company.com", "operator": "icontains"}
  ]
}
```

Custom trace-level properties also work: discover them with `posthog:read-data-schema` (`kind: "event_properties"`, `event_name: "$ai_trace"`), then filter with `posthog:query-llm-traces-list`:
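The list-tool arguments shown in these examples can be assembled with a small helper instead of hand-writing JSON each time. A sketch following the payload shape above; the function names are hypothetical, not part of the skill:

```python
def event_filter(key, value, operator="exact"):
    # One entry of the "properties" array, as in the examples above.
    return {"type": "event", "key": key, "value": value, "operator": operator}

def traces_list_args(date_from="-1h", filters=(), filter_test_accounts=True, limit=None):
    # Arguments for a posthog:query-llm-traces-list call.
    args = {
        "dateRange": {"date_from": date_from},
        "filterTestAccounts": filter_test_accounts,
        "properties": list(filters),
    }
    if limit is not None:
        args["limit"] = limit
    return args
```

For person properties, swap `"type": "event"` for `"type": "person"` as in the email example.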
```json
{
  "dateRange": {"date_from": "-7d"},
  "properties": [
    {"type": "event", "key": "project_id", "value": "proj_abc123", "operator": "exact"}
  ]
}
```

When `query-llm-traces-list` filters aren't expressive enough, fall back to ad-hoc SQL:

```sql
SELECT
  properties.$ai_trace_id AS trace_id,
  properties.$ai_model AS model,
  timestamp
FROM events
WHERE
  event = '$ai_generation'
  AND timestamp >= now() - INTERVAL 1 HOUR
  AND properties.$ai_input ILIKE '%search term%'
ORDER BY timestamp DESC
LIMIT 20
```

Both trace tools return a `TraceQuery` payload wrapped in MCP text content: `[{ "type": "text", "text": "{\"results\": [...], \"_posthogUrl\": \"...\"}" }]`. Parse the `text` field as JSON; `results` is an array for the list tool and an object for a single trace:

```
results (array for list, object for single trace)
├── id, traceName, createdAt, totalLatency, totalCost
├── inputState, outputState (trace-level state)
└── events[]
    ├── event ($ai_span | $ai_generation | $ai_embedding | $ai_metric | $ai_feedback)
    ├── id, createdAt
    └── properties
        ├── $ai_span_name, $ai_latency, $ai_is_error
        ├── $ai_input_state, $ai_output_state (span tool I/O)
        ├── $ai_input, $ai_output_choices (generation messages)
        ├── $ai_model, $ai_provider
        └── $ai_input_tokens, $ai_output_tokens, $ai_total_cost_usd
```

| Script | Purpose | Usage |
|---|---|---|
| `print_summary.py` | Trace metadata, tool calls, errors, and final LLM output | `python3 scripts/print_summary.py FILE` |
| `print_timeline.py` | Chronological event timeline with I/O summaries | `python3 scripts/print_timeline.py FILE` |
| `extract_span.py` | Full input/output of a specific span by name | `SPAN="name" python3 scripts/extract_span.py FILE` |
| `extract_conversation.py` | LLM messages with thinking blocks and tool calls | `python3 scripts/extract_conversation.py FILE` |
| `search_traces.py` | Find a keyword across all event properties | `SEARCH="keyword" python3 scripts/search_traces.py FILE` |
| | Show JSON keys and types without values | |
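Both tools return their JSON serialized inside MCP text content, so the `text` field has to be parsed before any of the checks above apply. A minimal Python sketch, assuming the `[{"type": "text", "text": "..."}]` wrapper shown earlier:

```python
import json

def unwrap_trace_response(content_items):
    """Parse a tool's MCP text content into (results, _posthogUrl)."""
    payload = json.loads(content_items[0]["text"])
    return payload["results"], payload.get("_posthogUrl")
```

Persisting `results` to a file at this point gives you exactly the input the bundled scripts expect.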
Tips:

- `dateRange` accepts relative values such as `-30m`, `-1h`, `-7d`, `-30d`.
- Every result includes a `_posthogUrl`; use it to hand the user a direct link.
- Don't mix up the two I/O shapes: span tool I/O lives in `$ai_input_state` / `$ai_output_state`, while generation messages live in `$ai_input` / `$ai_output_choices`.
- Set `filterTestAccounts: true` to exclude internal and test traffic.
- The top-level `$ai_trace` record exposes `events`, trace-level `inputState` / `outputState`, and `traceName`.
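The trace URL format and the URL-encoded `createdAt` timestamp described above can be reproduced when you need to construct a link yourself. A sketch; the `app.posthog.com` host assumes the US cloud, so substitute your instance's host if it differs:

```python
from urllib.parse import quote

def trace_url(trace_id, created_at, event_id=None):
    """Build a trace deep link; created_at is the ISO timestamp from the list tool."""
    url = (
        "https://app.posthog.com/llm-observability/traces/"
        f"{trace_id}?timestamp={quote(created_at, safe='')}"
    )
    if event_id is not None:
        url += f"&event={event_id}"
    return url
```

`quote(..., safe='')` percent-encodes the colons (`:` becomes `%3A`), matching the `timestamp=2026-04-01T19%3A39%3A20Z` form shown above.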