Detects common LLM coding agent artifacts by spawning 4 parallel subagents
Install:

```shell
npx skill4agent add existential-birds/beagle
```

The command is `/beagle-core:review-llm-artifacts`; it accepts `--all` and `--parallel`, and any extra arguments are passed through via `$ARGUMENTS`.

File selection:

```shell
# Default: changed files from main
git diff --name-only $(git merge-base HEAD main)..HEAD | grep -E '\.(py|ts|tsx|js|jsx|go|rs|java|rb|swift|kt)$'

# If --all flag: scan entire codebase
find . -type f \( -name "*.py" -o -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.go" -o -name "*.rs" -o -name "*.java" -o -name "*.rb" -o -name "*.swift" -o -name "*.kt" \) ! -path "*/node_modules/*" ! -path "*/.git/*" ! -path "*/vendor/*" ! -path "*/__pycache__/*"

# Get unique extensions
echo "$FILES" | sed 's/.*\.//' | sort -u
```

Supported extensions: `.py`, `.ts`, `.tsx`, `.js`, `.jsx`, `.go`, `.rs`, `.java`, `.rb`, `.swift`, `.kt`.

With `--parallel`, each subagent is spawned as a Task that invokes `Skill(skill: "beagle-core:llm-artifacts-detection")`.

Create the `.beagle` directory and write the report to `.beagle/llm-artifacts-review.json`:

```shell
mkdir -p .beagle
```

Report schema:

```json
{
  "version": "1.0.0",
  "created_at": "2024-01-15T10:30:00Z",
  "git_head": "abc1234",
  "scope": "changed" | "all",
  "files_scanned": 42,
  "languages": ["Python", "TypeScript", "Go"],
  "findings": [
    {
      "id": 1,
      "category": "tests" | "dead_code" | "abstraction" | "style",
      "type": "dry_violation" | "unused_import" | "over_abstraction" | "verbose_comment" | ...,
      "file": "src/utils/helper.py",
      "line": 42,
      "description": "Repeated setup code in 5 test functions",
      "suggestion": "Extract to a pytest fixture",
      "risk": "Low" | "Medium" | "High",
      "fix_safety": "Safe" | "Needs review",
      "fix_action": "refactor" | "delete" | "simplify" | "extract"
    }
  ],
  "summary": {
    "total": 15,
    "by_category": {
      "tests": 4,
      "dead_code": 5,
      "abstraction": 3,
      "style": 3
    },
    "by_risk": {
      "High": 2,
      "Medium": 8,
      "Low": 5
    },
    "by_fix_safety": {
      "Safe": 10,
      "Needs review": 5
    }
  }
}
```

Render the Markdown report in this format:

## LLM Artifacts Review
**Scope:** Changed files from main | Entire codebase
**Files scanned:** 42
**Languages:** Python, TypeScript, Go
### Findings by Category
#### Tests (4 issues)
1. [src/tests/test_api.py:15] **DRY violation** (Medium, Safe)
- Repeated setup code in 5 test functions
- Suggestion: Extract to a pytest fixture
2. [src/tests/test_utils.py:42] **Wrong mock boundary** (High, Needs review)
- Mocking internal implementation details
- Suggestion: Mock at the adapter boundary instead
#### Dead Code (5 issues)
3. [src/utils/legacy.py:1] **Unused module** (Low, Safe)
- Module imported nowhere in codebase
- Suggestion: Delete file
...
#### Abstraction (3 issues)
...
#### Style (3 issues)
...
### Summary Table
| Category | Safe Fixes | Needs Review | Total |
|----------|------------|--------------|-------|
| Tests | 3 | 1 | 4 |
| Dead Code | 4 | 1 | 5 |
| Abstraction | 2 | 1 | 3 |
| Style | 1 | 2 | 3 |
| **Total** | **10** | **5** | **15** |
### Next Steps
- Run `/beagle-core:review-llm-artifacts --fix` to auto-fix Safe issues (coming soon)
- Review the JSON report at `.beagle/llm-artifacts-review.json`

After writing the report, verify it; the stored `git_head` allows staleness detection:

```shell
# Verify JSON is valid
python3 -c "import json; json.load(open('.beagle/llm-artifacts-review.json'))" 2>/dev/null && echo "✓ Valid JSON" || echo "✗ Invalid JSON"

# Check for staleness (if previous report exists)
STORED_HEAD=$(jq -r '.git_head' .beagle/llm-artifacts-review.json 2>/dev/null)
CURRENT_HEAD=$(git rev-parse --short HEAD)
if [ "$STORED_HEAD" != "$CURRENT_HEAD" ]; then
  echo "⚠️ Report was generated on $STORED_HEAD, current HEAD is $CURRENT_HEAD"
fi
```

Each finding entry in the report follows this template:

```
[FILE:LINE] **ISSUE_TYPE** (Risk, Fix Safety)
- Description
- Suggestion: Specific fix recommendation
```
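Until `--fix` lands, Safe findings can be pulled out of the report for manual triage. A minimal `jq` sketch, assuming the schema above; the two-finding sample report written here is purely illustrative:

```shell
# Build a tiny illustrative report matching the schema (hypothetical data)
mkdir -p .beagle
cat > .beagle/llm-artifacts-review.json <<'EOF'
{
  "git_head": "abc1234",
  "findings": [
    {"id": 1, "file": "src/utils/legacy.py", "line": 1,
     "fix_safety": "Safe", "fix_action": "delete",
     "description": "Module imported nowhere in codebase"},
    {"id": 2, "file": "src/tests/test_utils.py", "line": 42,
     "fix_safety": "Needs review", "fix_action": "refactor",
     "description": "Mocking internal implementation details"}
  ]
}
EOF

# List only the findings that are safe to fix automatically
jq -r '.findings[] | select(.fix_safety == "Safe")
       | "\(.file):\(.line)  \(.fix_action)  \(.description)"' \
  .beagle/llm-artifacts-review.json
# → src/utils/legacy.py:1  delete  Module imported nowhere in codebase
```

The same `select` filter can be inverted (`.fix_safety != "Safe"`) to produce the review queue.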