Found 19 Skills
Asks for user feedback after each task or cron-job completion and runs a recursive learning flow. If the output is good, asks what was good, up to 10 approvals; if it needs improvement, asks why/how/what via multiple choice plus optional examples, uses web search and iterative thinking to resolve the feedback, and caps iterations by severity (slight: 5, medium: 10, severe: 20). Keeps feedback non-intrusive. Use when completing discrete tasks or cron jobs for the user.
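The severity-capped refinement loop in the entry above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the names `ITERATION_CAPS`, `refine`, and `resolve_step` are hypothetical, and the resolver shown is a toy.

```python
# Hypothetical sketch of a severity-capped feedback-resolution loop.
# ITERATION_CAPS mirrors the caps stated in the description:
# slight 5, medium 10, severe 20.
ITERATION_CAPS = {"slight": 5, "medium": 10, "severe": 20}

def refine(feedback, severity, resolve_step):
    """Iterate on one piece of feedback; stop when resolved or cap is hit.

    resolve_step(feedback) -> (updated_feedback, resolved: bool) is an
    illustrative stand-in for the web-search / iterative-thinking step.
    Returns the number of iterations used.
    """
    cap = ITERATION_CAPS[severity]
    for attempt in range(1, cap + 1):
        feedback, resolved = resolve_step(feedback)
        if resolved:
            return attempt
    return cap  # cap reached without resolution

# Example: a toy resolver that succeeds on its third attempt.
calls = {"n": 0}
def toy_resolver(fb):
    calls["n"] += 1
    return fb, calls["n"] >= 3

print(refine("output too verbose", "slight", toy_resolver))  # → 3
```

The point of the cap table is simply that harder feedback buys more iterations before the loop gives up.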
A methodology for iteratively improving agent-facing text instructions (skills / slash commands / task prompts / CLAUDE.md sections / code-generation prompts) by having a bias-free executor actually run them and evaluating two-sidedly (executor self-report + instruction-side metrics). Keep iterating until improvements plateau. Use it right after creating or substantially revising a prompt or skill, or when you want to attribute an agent's unexpected behavior to ambiguity on the instruction side.
Reflect on the previous response and output, using a self-refinement framework for iterative improvement with complexity triage and verification.
Orchestrates copy exploration: brief, generate 5 distinct approaches, adversarial review, iterate to a 90+ composite score, present a catalog, user selects, execute.
Iteratively auto-optimizes a prompt until no issues remain. Uses prompt-reviewer in a loop, asks the user about ambiguities, and applies fixes via the prompt-engineering skill. Runs until convergence.
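The review/fix convergence loop in the entry above can be sketched as follows. `review()` and `apply_fix()` are illustrative stand-ins for the prompt-reviewer and prompt-engineering steps, and `max_rounds` is an assumed safety bound not stated in the description.

```python
# Hypothetical sketch of a review-until-converged prompt optimizer.
def optimize(prompt, review, apply_fix, max_rounds=25):
    """Review the prompt, apply one fix per reported issue, stop when clean."""
    for _ in range(max_rounds):
        issues = review(prompt)
        if not issues:
            return prompt  # converged: reviewer found nothing to fix
        for issue in issues:
            prompt = apply_fix(prompt, issue)
    return prompt  # safety bound hit before convergence

# Example: a toy reviewer that flags "ASAP" until it is rewritten.
toy_review = lambda p: ["vague urgency"] if "ASAP" in p else []
toy_fix = lambda p, issue: p.replace("ASAP", "within 24 hours")

print(optimize("Reply ASAP.", toy_review, toy_fix))  # → Reply within 24 hours.
```

Convergence here just means the reviewer returns an empty issue list; a real run would also pause to ask the user about ambiguities the reviewer cannot settle.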
Batch-improve skill collections with evaluation loops, lint checks, behavioral tests, and peer review. Triggers: 'skill refiner', 'improve skills', 'quality sweep', 'batch improve', 'skill loop'. Not for a single skill.