automatic-stateful-prompt-improver
Automatic Stateful Prompt Improver
MANDATORY AUTOMATIC BEHAVIOR
When this skill is active, I MUST follow these rules:
Auto-Optimization Triggers
I AUTOMATICALLY call mcp__prompt-learning__optimize_prompt BEFORE responding when:
- Complex task (multi-step, requires reasoning)
- Technical output (code, analysis, structured data)
- Reusable content (system prompts, templates, instructions)
- Explicit request ("improve", "better", "optimize")
- Ambiguous requirements (underspecified, multiple interpretations)
- Precision-critical (code, legal, medical, financial)
Auto-Optimization Process
1. INTERCEPT the user's request
2. CALL: mcp__prompt-learning__optimize_prompt
- prompt: [user's original request]
- domain: [inferred domain]
- max_iterations: [3-20 based on complexity]
3. RECEIVE: optimized prompt + improvement details
4. INFORM user briefly: "I've refined your request for [reason]"
5. PROCEED with the OPTIMIZED version
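The five-step flow above can be sketched in Python. The `call_tool` callback, the `infer_domain` and `pick_iterations` helpers, and the response field names are illustrative assumptions, not part of the real MCP interface:

```python
def infer_domain(request: str) -> str:
    # Hypothetical helper: a real implementation might classify by keywords.
    return "code" if "code" in request.lower() else "general"

def pick_iterations(request: str) -> int:
    # Hypothetical heuristic: longer requests suggest more complexity.
    n = len(request.split())
    return 3 if n < 10 else 10 if n < 40 else 20

def handle_request(user_request, call_tool):
    """Sketch of the intercept -> optimize -> inform -> proceed flow."""
    # Steps 1-2: intercept the request and call the optimizer tool.
    result = call_tool(
        "mcp__prompt-learning__optimize_prompt",
        prompt=user_request,
        domain=infer_domain(user_request),
        max_iterations=pick_iterations(user_request),
    )
    # Step 3: receive the optimized prompt plus improvement details
    # (field names below are assumed, not documented).
    optimized = result["optimized_prompt"]
    reason = result.get("improvement_summary", "clarity")
    # Steps 4-5: brief notice, then proceed with the optimized version.
    return f"I've refined your request for {reason}.", optimized
```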
Do NOT Optimize
- Simple questions ("what is X?")
- Direct commands ("run npm install")
- Conversational responses ("hello", "thanks")
- File operations without reasoning
- Already-optimized prompts
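One way to express the trigger and skip lists as a single gate function; the keyword lists here are illustrative, not exhaustive:

```python
# Hypothetical keyword heuristic for the optimize / skip decision.
SKIP_PREFIXES = ("what is", "hello", "thanks", "run ")
TRIGGER_WORDS = ("improve", "better", "optimize")

def should_optimize(request: str) -> bool:
    text = request.lower().strip()
    if any(text.startswith(p) for p in SKIP_PREFIXES):
        return False  # simple question, direct command, or small talk
    if any(w in text for w in TRIGGER_WORDS):
        return True   # explicit request to improve
    # Crude complexity proxy: multi-step tasks tend to be longer.
    return len(text.split()) > 15
```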
Learning Loop (Post-Response)
After completing ANY significant task:
1. ASSESS: Did the response achieve the goal?
2. CALL: mcp__prompt-learning__record_feedback
- prompt_id: [from optimization response]
- success: [true/false]
- quality_score: [0.0-1.0]
3. This enables future retrievals to learn from outcomes
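The feedback call can be sketched as follows; the `call_tool` callback and the return shape are assumptions, and only the tool name and parameter names come from the steps above:

```python
def record_outcome(call_tool, prompt_id: str, goal_met: bool, score: float):
    """Hypothetical post-response feedback call closing the learning loop."""
    assert 0.0 <= score <= 1.0, "quality_score is expected in [0.0, 1.0]"
    return call_tool(
        "mcp__prompt-learning__record_feedback",
        prompt_id=prompt_id,   # from the optimization response
        success=goal_met,
        quality_score=score,
    )
```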
Quick Reference
Iteration Decision
| Factor | Low (3-5) | Medium (5-10) | High (10-20) |
|---|---|---|---|
| Complexity | Simple | Multi-step | Agent/pipeline |
| Ambiguity | Clear | Some | Underspecified |
| Domain | Known | Moderate | Novel |
| Stakes | Low | Moderate | Critical |
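One possible reading of the decision table, with each factor scored 0 (low) to 2 (high); the band cutoffs are an assumption, not taken from the table:

```python
def max_iterations(complexity: int, ambiguity: int,
                   domain_novelty: int, stakes: int) -> int:
    """Map four factor scores (0=low, 1=medium, 2=high) to an iteration budget."""
    score = complexity + ambiguity + domain_novelty + stakes  # 0-8
    if score <= 2:
        return 5    # low band: 3-5 iterations
    if score <= 5:
        return 10   # medium band: 5-10 iterations
    return 20       # high band: 10-20 iterations
```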
Convergence (When to Stop)
- Improvement < 1% for 3 iterations
- User satisfied
- Token budget exhausted
- 20 iterations reached
- Validation score > 0.95
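These stop conditions can be combined into a single check; interpreting "improvement < 1%" as an absolute score delta is an assumption:

```python
def should_stop(history, satisfied=False, tokens_left=1, threshold=0.95):
    """Hypothetical convergence check over a list of validation scores."""
    # User satisfied, token budget exhausted, or 20 iterations reached.
    if satisfied or tokens_left <= 0 or len(history) >= 20:
        return True
    # Validation score above threshold.
    if history and history[-1] > threshold:
        return True
    # Improvement < 1% (taken as absolute delta) for 3 consecutive iterations.
    if len(history) >= 4:
        deltas = [history[i] - history[i - 1]
                  for i in range(len(history) - 3, len(history))]
        if all(d < 0.01 for d in deltas):
            return True
    return False
```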
Performance Expectations
| Scenario | Improvement | Iterations |
|---|---|---|
| Simple task | 10-20% | 3-5 |
| Complex reasoning | 20-40% | 10-15 |
| Agent/pipeline | 30-50% | 15-20 |
| With history | +10-15% bonus | Varies |
Anti-Patterns
Over-Optimization
| What it looks like | Why it's wrong |
|---|---|
| Prompt becomes overly complex with many constraints | Causes brittleness, model confusion, token waste |

Instead: Apply Occam's Razor - the simplest sufficient prompt wins.
Template Obsession
| What it looks like | Why it's wrong |
|---|---|
| Focusing on templates rather than task understanding | Templates don't generalize; understanding does |

Instead: Focus on WHAT the task requires, not HOW to format it.
Iteration Without Measurement
| What it looks like | Why it's wrong |
|---|---|
| Multiple rewrites without tracking improvements | Can't know if changes help without metrics |

Instead: Always define success criteria before optimizing.
Ignoring Model Capabilities
| What it looks like | Why it's wrong |
|---|---|
| Assuming the model can't do things it can | Over-scaffolding wastes tokens |

Instead: Test capabilities before heavy prompting.
Reference Files
Load for detailed implementations:
| File | Contents |
|---|---|
| APE, OPRO, CoT, instruction rewriting, constraint engineering |
| Warm start, embedding retrieval, MCP setup, drift detection |
| Decision matrices, complexity scoring, convergence algorithms |
Goal: Simplest prompt that achieves the outcome reliably. Optimize for clarity, specificity, and measurable improvement.