Creates production-grade, reusable skills that extend Claude's capabilities. This skill should be used when users want to create a new skill, improve an existing skill, or build domain-specific intelligence. Gathers context from the codebase, the conversation, and authoritative sources before creating adaptable skills.
Create production-grade skills that extend Claude's capabilities.
## How This Skill Works

```
User: "Create a skill for X"
    ↓
Claude Code uses this meta-skill as guidance
    ↓
Follow Domain Discovery → Ask user clarifying questions → Create skill
    ↓
Generated skill with embedded domain expertise
```

This skill provides guidance and structure for creating skills. Claude Code:

1. Uses this skill's framework to discover domain knowledge
2. Asks the user for clarifications about THEIR specific requirements
3. Decides how to structure the generated skill based on domain needs
## What This Skill Does

- Guides creation of new skills from scratch
- Helps improve existing skills to production quality

**Key Principle:** Users want domain expertise IN the skill. They may not BE domain experts.
## Phase 1: Automatic Discovery (No User Input)

Proactively research the domain before asking anything:

| Discover | How | Example: "Kafka integration" |
|----------|-----|------------------------------|
| Core concepts | Official docs, Context7 | Producers, consumers, topics, partitions |
| Standards/compliance | Search "[domain] standards" | Kafka security, exactly-once semantics |
| Best practices | Search "[domain] best practices 2025" | Partitioning strategies, consumer groups |
| Anti-patterns | Search "[domain] common mistakes" | Too many partitions, no monitoring |
| Security | Search "[domain] security" | SASL, SSL, ACLs, encryption |
| Ecosystem | Search "[domain] ecosystem tools" | Confluent, Schema Registry, Connect |

**Source priority:** Official docs → Library docs (Context7) → GitHub → Community → WebSearch
## Phase 2: Knowledge Sufficiency Check

Before asking the user anything, verify internally:

- [ ] Core concepts understood?
- [ ] Best practices identified?
- [ ] Anti-patterns known?
- [ ] Security considerations covered?
- [ ] Official sources found?

If ANY gap → research more (don't ask the user for domain knowledge).
Only if the knowledge CANNOT be discovered (proprietary/internal) → ask the user.
## Phase 3: User Requirements (NOT Domain Knowledge)

Only ask about the user's SPECIFIC context:

| Ask | Don't Ask |
|-----|-----------|
| "What's YOUR use case?" | "What is Kafka?" |
| "What's YOUR tech stack?" | "What options exist?" |
| "Any existing resources?" | "How does it work?" |
| "Specific constraints?" | "What are best practices?" |

The skill contains domain expertise. The user provides requirements.
## Required Clarifications

Ask about SKILL METADATA and USER REQUIREMENTS (not domain knowledge):

### Skill Metadata

1. **Skill Type** - "What type of skill?"

| Type | Purpose | Example |
|------|---------|---------|
| Builder | Create artifacts | Widgets, code, documents |
| Guide | Provide instructions | How-to, tutorials |
| Automation | Execute workflows | File processing, deployments |
| Analyzer | Extract insights | Code review, data analysis |
| Validator | Enforce quality | Compliance checks, scoring |
2. **Domain** - "What domain or technology?"

### User Requirements (After Domain Discovery)

3. **Use Case** - "What's YOUR specific use case?"
   - Not "what can it do" but "what do YOU need"
4. **Tech Stack** - "What's YOUR environment?"
   - Languages, frameworks, existing infrastructure
5. **Existing Resources** - "Any scripts, templates, configs to include?"
6. **Constraints** - "Any specific requirements or limitations?"
   - Performance, security, compliance specific to the user's context

**Note:**

- Questions 1-2: ask immediately
- Domain discovery: research automatically once the domain is known
- Questions 3-6: ask after discovery, informed by domain knowledge

**Question pacing:** Avoid asking too many questions in a single message. Start with the most important and follow up as needed.
## Core Principles

### Reusable Intelligence, Not Requirement-Specific

Skills must handle VARIATIONS, not single requirements:

- ❌ Bad: "Create bar chart with sales data using Recharts"
- ✅ Good: "Create visualizations - adaptable to data shape, chart type, library"
- ❌ Bad: "Deploy to AWS EKS with Helm"
- ✅ Good: "Deploy applications - adaptable to platform, orchestration, environment"

Identify what VARIES vs what's CONSTANT in the domain. See `references/reusability-patterns.md`.
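The varies-vs-constant split can be sketched in code. In this hypothetical example (function and field names are illustrative, not from any real charting library), what varies becomes parameters while the output shape stays constant:

```python
# Requirement-specific: can only ever produce one chart.
def sales_bar_chart(data):
    return {"type": "bar", "x": "month", "y": "sales", "data": data}

# Reusable: what VARIES (chart kind, field names) becomes parameters;
# what's CONSTANT (the chart-spec shape) stays fixed.
def chart(data, *, kind="bar", x="x", y="y"):
    return {"type": kind, "x": x, "y": y, "data": data}

rows = [{"month": "Jan", "sales": 10}]
# The reusable version covers the specific one as a special case.
assert chart(rows, kind="bar", x="month", y="sales") == sales_bar_chart(rows)
```

The same data can now drive a line chart by changing arguments, not code.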
### Concise is Key

Context window is a public good (~1,500+ tokens per skill activation). Challenge each piece:

- "Does Claude really need this explanation?"
- "Does this paragraph justify its token cost?"

Prefer concise examples over verbose explanations.
### Appropriate Freedom

Match specificity to task fragility:

| Freedom Level | When to Use | Example |
|---------------|-------------|---------|
| High | Multiple approaches valid | "Choose your preferred style" |
| Medium | Preferred pattern exists | Pseudocode with parameters |
| Low | Operations are fragile | Exact scripts, few parameters |
### Progressive Disclosure

Three-level loading system:

1. **Metadata** (~100 tokens) - always in context (description ≤1024 chars)
2. **SKILL.md body** (<500 lines) - loaded when the skill triggers
3. **References** (unlimited) - loaded as needed by Claude
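A typical directory layout reflecting these three levels might look like this (file names are illustrative):

```
my-skill/
├── SKILL.md          # frontmatter metadata + body (<500 lines)
├── references/       # loaded on demand
│   ├── best-practices.md
│   └── api-patterns.md
├── scripts/          # deterministic, executable procedures
│   └── validate.py
└── assets/           # templates and boilerplate
    └── template.html
```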
## Anatomy of a Skill

Generated skills are zero-shot domain experts with embedded knowledge.

| Field | Requirement |
|-------|-------------|
| `name` | Lowercase, numbers, hyphens; ≤64 chars; match directory |
| `description` | [What] + [When]; ≤1024 chars; third-person style |
| Description style | "This skill should be used when..." (not "Use when...") |
| Form | Imperative ("Do X" not "You should X") |
| Scope | What it does AND does not do |
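A minimal check of these constraints might look like the following sketch (function name is illustrative; the regex and limits come from the table above):

```python
import re

def check_metadata(name: str, description: str) -> list[str]:
    """Return a list of violations of the frontmatter rules above."""
    problems = []
    if not re.fullmatch(r"[a-z0-9-]+", name):
        problems.append("name: only lowercase letters, numbers, hyphens")
    if len(name) > 64:
        problems.append("name: must be ≤64 chars")
    if len(description) > 1024:
        problems.append("description: must be ≤1024 chars")
    if "this skill should be used when" not in description.lower():
        problems.append('description: prefer "This skill should be used when..."')
    return problems

# A compliant name and description pass with no violations.
assert check_metadata(
    "kafka-integration",
    "Integrates Kafka producers and consumers. "
    "This skill should be used when setting up event streaming.",
) == []
```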
## What Goes in references/

Embed domain knowledge gathered during discovery:

| Gathered Knowledge | Purpose in Skill |
|--------------------|------------------|
| Library/API documentation | Enable correct implementation |
| Best practices | Guide quality decisions |
| Code examples | Provide reference patterns |
| Anti-patterns | Prevent common mistakes |
| Domain-specific details | Support edge cases |

Structure references/ based on what the domain needs.

**Large files:** If references exceed 10k words, include grep search patterns in SKILL.md for efficient discovery.
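For example, a SKILL.md section could point Claude at specific parts of large reference files like this (file names and patterns are illustrative):

```markdown
## Searching references/

- Partitioning guidance: `grep -n "partition" references/kafka-patterns.md`
- Security setup: `grep -n -i "sasl\|acl" references/kafka-security.md`
```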
## When to Generate scripts/

Generate scripts when the domain requires deterministic, executable procedures:

| Domain Need | Example Scripts |
|-------------|-----------------|
| Setup/installation | Install dependencies, initialize project |
| Processing | Transform data, process files |
| Validation | Check compliance, verify output |
| Deployment | Deploy services, configure infrastructure |

**Decision:** If a procedure is complex, error-prone, or must be exactly repeatable → create a script. Otherwise → document it in SKILL.md or references/.
## When to Generate assets/

Generate assets when the domain requires exact templates or boilerplate:

| Domain Need | Example Assets |
|-------------|----------------|
| Starting templates | HTML boilerplate, component scaffolds |
| Configuration files | Config templates, schema definitions |
| Code boilerplate | Base classes, starter code |

## What NOT to Include

- README.md (SKILL.md IS the readme)
- CHANGELOG.md
- LICENSE (inherited from the repo)
- Duplicate information
## What the Generated Skill Does at Runtime

```
User invokes skill → Gather context from:
  1. Codebase (if existing project)
  2. Conversation (user's requirements)
  3. Own references/ (embedded domain expertise)
  4. User-specific guidelines
→ Ensure all information gathered → Implement ZERO-SHOT
```
## Include in Generated Skills

Every generated skill should include:

```markdown
## Before Implementation

Gather context to ensure successful implementation:

| Source | Gather |
|--------|--------|
| **Codebase** | Existing structure, patterns, conventions to integrate with |
| **Conversation** | User's specific requirements, constraints, preferences |
| **Skill References** | Domain patterns from `references/` (library docs, best practices, examples) |
| **User Guidelines** | Project-specific conventions, team standards |

Ensure all required context is gathered before implementing.
Only ask the user for THEIR specific requirements (domain expertise is in this skill).
```
## Type-Aware Creation

After determining the skill type, follow type-specific patterns: