Found 2 Skills
Patterns and techniques for evaluating and improving AI agent outputs. Use this skill when:
- Implementing self-critique and reflection loops
- Building evaluator-optimizer pipelines for quality-critical generation (sketched below)
- Creating test-driven code refinement workflows
- Designing rubric-based or LLM-as-judge evaluation systems
- Adding iterative improvement to agent outputs (code, reports, analysis)
- Measuring and improving agent response quality
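To make the evaluator-optimizer pattern concrete, here is a minimal sketch in plain Python with no specific LLM SDK. The names `refine`, `generate`, `evaluate`, and `Critique` are hypothetical stand-ins introduced for illustration; in a real pipeline each callable would wrap a model call, with the rubric embedded verbatim in the evaluator's prompt.

```python
# Evaluator-optimizer loop: one function drafts, another critiques
# against a rubric, and the draft is revised until the score clears a
# threshold or the iteration budget runs out. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Critique:
    score: float   # 0.0-1.0, judged against the rubric
    feedback: str  # actionable notes for the next revision


def refine(
    task: str,
    generate: Callable[[str, str], str],       # (task, feedback) -> draft
    evaluate: Callable[[str, str], Critique],  # (task, draft) -> critique
    threshold: float = 0.8,
    max_iters: int = 3,
) -> str:
    draft = generate(task, "")
    for _ in range(max_iters):
        critique = evaluate(task, draft)
        if critique.score >= threshold:
            break
        # Feed the judge's notes back into the generator and retry.
        draft = generate(task, critique.feedback)
    return draft


if __name__ == "__main__":
    # Toy stand-ins so the loop runs without any model: the "judge"
    # demands uppercase output, the "optimizer" complies on feedback.
    def toy_generate(task: str, feedback: str) -> str:
        return task.upper() if "uppercase" in feedback else task

    def toy_evaluate(task: str, draft: str) -> Critique:
        ok = draft.isupper()
        return Critique(1.0 if ok else 0.0, "" if ok else "rewrite in uppercase")

    print(refine("hello world", toy_generate, toy_evaluate))  # HELLO WORLD
```

In practice, `evaluate` would prompt a judge model for a structured score plus feedback against the rubric, and `generate` would include the prior draft and that feedback in its prompt; for test-driven code refinement, `evaluate` can instead run a test suite and return failures as feedback.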
Generates a detailed project explanation and retrospective (FOR_USER.md) to help the user learn from the project. Use this skill when the user asks to explain the project, asks "what did we just build?", or requests a learning resource after a coding session.