Debug errors, test failures, and unexpected behavior with log analysis and correlation. Use when encountering issues or error messages, analyzing logs, or investigating production errors.
```bash
npx skill4agent add htlin222/dotfiles debug/debug [issue] [--logs] [--correlate] [--trace] [--type bug|build|perf|deploy]
```

| Flag | Purpose |
|---|---|
| `--logs` | Enable log pattern analysis (error spikes, frequency, types) |
| `--correlate` | Run SQL correlation queries on structured logs |
| `--trace` | Deep stack trace analysis with context |
| `--type` | Issue category: bug, build, perf(ormance), deploy(ment) |

## Quick Checks

```bash
# Check recent changes that might have caused the issue
git log --oneline -10
git diff HEAD~3

# Find error patterns in logs
grep -r "error\|Error\|ERROR" logs/ 2>/dev/null | tail -20

# Check test output
npm test 2>&1 | tail -50  # or pytest, cargo test, etc.
```
## `--logs`: Log Pattern Analysis

```bash
# Recent errors with context
grep -B 5 -A 10 "ERROR" /var/log/app.log

# Count by error type
grep -oE "Error: [^:]*" app.log | sort | uniq -c | sort -rn

# Errors in time range
awk '/2024-01-15 14:/ && /ERROR/' app.log

# Find repeated errors
grep "ERROR" app.log | cut -d']' -f2 | sort | uniq -c | sort -rn | head -20

# Find error spikes
grep "ERROR" app.log | cut -d' ' -f1-2 | uniq -c | sort -rn
```

| Pattern | Indicates | Action |
|---|---|---|
| NullPointer | Missing null check | Add validation |
| Timeout | Slow dependency | Add timeout, retry |
| Connection refused | Service down | Check health, retry |
| OOM | Memory leak | Profile, increase limits |
| Rate limit | Too many requests | Add backoff, queue |
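Several of the patterns in the table above (Timeout, Connection refused, Rate limit) share the same first remediation: retry with exponential backoff. A minimal sketch of that pattern in Python; the function and parameter names are illustrative, not part of this skill:

```python
import random
import time

def with_retry(fn, max_attempts=4, base_delay=0.1):
    """Call fn, retrying transient failures with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except (TimeoutError, ConnectionError):
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            # 0.1s, 0.2s, 0.4s, ... plus up to 100% random jitter
            time.sleep(base_delay * 2 ** (attempt - 1) * (1 + random.random()))
```

The jitter spreads retries out so that many clients recovering from the same outage do not stampede the dependency at once.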
## `--correlate`: Correlation Queries

```sql
-- Errors by endpoint
SELECT endpoint, count(*) AS errors
FROM logs
WHERE level = 'ERROR' AND time > NOW() - INTERVAL '1 hour'
GROUP BY endpoint ORDER BY errors DESC;

-- Error rate over time
SELECT
  date_trunc('minute', time) AS minute,
  count(*) FILTER (WHERE level = 'ERROR') AS errors,
  count(*) AS total
FROM logs
WHERE time > NOW() - INTERVAL '1 hour'
GROUP BY minute ORDER BY minute;

-- Correlate request IDs across services
SELECT service, message, time
FROM logs
WHERE request_id = 'req-12345'
ORDER BY time;
```

## `--trace`: Stack Trace Analysis

```python
import re

def parse_stack_trace(log_content: str) -> list[dict]:
    """Extract exceptions and their stack frames from raw log text."""
    # Matches "SomeError: message" followed by one or more indented "at ..." frames
    pattern = r'(?P<exception>\w+Error|\w+Exception): (?P<message>.*?)\n(?P<trace>(?:\s+at .+\n)+)'
    traces = []
    for match in re.finditer(pattern, log_content):
        traces.append({
            'type': match.group('exception'),
            'message': match.group('message'),
            'trace': match.group('trace').strip().split('\n'),
        })
    return traces
```

## Debug Report
**Issue:** [Brief description]
**Root Cause:** [What's actually wrong]
### Evidence
- [Finding 1]
- [Finding 2]
### Fix
[Code or configuration change]
### Verification
[How to confirm the fix works]
### Prevention
[How to prevent this in the future]

## Examples

```bash
/debug --logs "API returning 500 errors"
/debug --correlate "intermittent failures"
```
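As a sanity check, the regex used by the `--trace` parser can be exercised against a synthetic trace; the log text below is invented for illustration:

```python
import re

# Same pattern the --trace parser uses: an exception line followed by indented frames
pattern = r'(?P<exception>\w+Error|\w+Exception): (?P<message>.*?)\n(?P<trace>(?:\s+at .+\n)+)'

sample = (
    "2024-01-15 14:02:11 ERROR request failed\n"
    "TypeError: Cannot read properties of undefined\n"
    "    at handler (app.js:42:13)\n"
    "    at dispatch (router.js:10:5)\n"
)

match = re.search(pattern, sample)
print(match.group('exception'))  # TypeError
print(len(match.group('trace').strip().split('\n')))  # 2 stack frames
```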