ai-fixing-errors


# Fix Your Broken AI

Systematic approach to diagnosing and fixing AI features that aren't working. Run through these checks in order.

## Quick Diagnostic Checklist

### 1. Is the AI provider configured?

```python
import dspy

# Check current config
print(dspy.settings.lm)  # Should show your LM, not None

# If None, configure it:
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)
```

**Common issues:**
- Forgot to call `dspy.configure(lm=lm)`
- API key not set in environment
- Wrong model name format (should be `provider/model-name`)

### 2. Does the AI respond at all?

```python
# Test the AI provider directly
lm = dspy.LM("openai/gpt-4o-mini")
response = lm("Hello, respond with just 'OK'")
print(response)
```

### 3. Is the task definition correct?

```python
# Check your signature defines the right fields
class MySignature(dspy.Signature):
    """Clear task description here."""

    input_field: str = dspy.InputField(desc="what this contains")
    output_field: str = dspy.OutputField(desc="what to produce")

# Verify by inspecting
print(MySignature.fields)
```

**Common issues:**
- Missing `dspy.InputField()` / `dspy.OutputField()` annotations
- Wrong type hints (use `str`, `list[str]`, `Literal[...]`, Pydantic models)
- Vague or missing docstring (the docstring IS the task instruction)

### 4. Are you passing the right inputs?

```python
# Check that input field names match
result = my_program(question="test")  # field name must match signature

# Wrong:
result = my_program(q="test")  # 'q' doesn't match 'question'
result = my_program("test")    # positional args don't work
```

### 5. Is the output being parsed?

```python
result = my_program(question="test")
print(result)               # see all fields
print(result.answer)        # access specific field
print(type(result.answer))  # check type
```

**Common issues with typed outputs:**
- `Literal` type doesn't match any of the provided options
- Pydantic model validation fails
- List output returns string instead of list

## Inspect What the AI Actually Sees

The most powerful debugging tool — shows exactly what prompts were sent and what came back:

```python
# Show the last 3 AI calls
dspy.inspect_history(n=3)
```

This shows:
- The full prompt sent to the AI
- The AI's raw response
- How DSPy parsed the response

**What to look for:**
- Is the prompt clear? Does it describe the task well?
- Is the AI's response in the expected format?
- Are few-shot examples (if any) helpful or misleading?

## Common Errors and Fixes

### AttributeError: 'NoneType' has no attribute ...

**Cause:** AI provider not configured.
**Fix:** Call `dspy.configure(lm=lm)` before using any module.

### ValueError: Could not parse output

**Cause:** AI output doesn't match expected format.
**Fix:**
- Check `dspy.inspect_history()` to see what the AI returned
- Simplify your output types
- Add clearer field descriptions
- Use `dspy.ChainOfThought` instead of `dspy.Predict` (reasoning helps formatting)
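To see concretely why simpler output types parse more reliably, here is a minimal plain-Python sketch — an analogy, not DSPy's actual parser — of matching a raw model string against a `Literal`-style set of options:

```python
# Sketch of Literal-style validation: the model's raw text must land on
# one of the allowed options after light normalization.
ALLOWED = {"positive", "negative", "neutral"}

def parse_label(raw: str) -> str:
    cleaned = raw.strip().lower().rstrip(".")  # tolerate case and a trailing period
    if cleaned not in ALLOWED:
        raise ValueError(f"Could not parse output: {raw!r}")
    return cleaned

print(parse_label("Positive."))      # normalizes to a valid option

try:
    parse_label("mostly positive")   # extra words -> parse failure
except ValueError as e:
    print(e)
```

The more options and structure an output type carries, the more ways the raw text can miss it — which is why simplifying types and clarifying field descriptions are the first fixes above.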

### TypeError: forward() got an unexpected keyword argument

**Cause:** Input field name mismatch.
**Fix:** Make sure you're passing keyword arguments that match your signature's `InputField` names.

### Search/retriever returns empty results

**Cause:** Retriever not configured or wrong endpoint.
**Fix:**

```python
# Check retriever config
print(dspy.settings.rm)

# Test retriever directly
rm = dspy.ColBERTv2(url="http://...")
results = rm("test query", k=3)
print(results)
```

### Optimizer makes things worse

**Cause:** Bad metric, too little data, or overfitting.
**Fix:**
- Manually verify your metric on 10-20 examples
- Add more training data
- Reduce `max_bootstrapped_demos`
- Use a validation set to check for overfitting
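The first fix above can be sketched as a quick spot-check script. The `exact_match` metric and the `Record` container below are hypothetical stand-ins for whatever metric function and example objects your project actually uses:

```python
# Hypothetical metric: case-insensitive exact match on an `answer` field.
def exact_match(example, prediction, trace=None):
    return example.answer.strip().lower() == prediction.answer.strip().lower()

# Minimal stand-in for gold/predicted records (assumption, not a DSPy type).
class Record:
    def __init__(self, answer):
        self.answer = answer

# Hand-written (gold, prediction, expected score) triples: if the metric
# disagrees with your judgment here, fix the metric before optimizing.
checks = [
    (Record("Paris"), Record("paris"), True),           # case should not matter
    (Record("Paris"), Record("Paris, France"), False),  # extra text should fail
]
for gold, pred, expected in checks:
    assert exact_match(gold, pred) == expected

print("metric spot-check passed")
```

If the optimizer is maximizing a metric that scores wrong outputs as right (or vice versa), more compute only makes the program worse faster — this ten-line check catches that early.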

### dspy.Assert / dspy.Suggest failures

**Cause:** AI output doesn't meet constraints.
**Fix:**
- Check if constraints are reasonable (not too strict)
- Make constraint messages more descriptive
- Ensure the AI can reasonably satisfy the constraints
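A cheap way to apply the first fix is to run the constraint predicate by itself on outputs you already know are good. The `under_word_limit` constraint below is a hypothetical example:

```python
# Hypothetical constraint predicate: answer must fit within a word budget.
def under_word_limit(text, limit=100):
    return len(text.split()) <= limit

# Outputs a human has already judged acceptable. If the constraint rejects
# any of these, it is too strict -- loosen it before blaming the model.
good_outputs = [
    "Paris is the capital of France.",
    "The answer is 42.",
]
for text in good_outputs:
    assert under_word_limit(text), f"constraint rejects a valid answer: {text!r}"

print("constraint sanity-check passed")
```

Only once the predicate passes known-good outputs is a persistent `dspy.Assert`/`dspy.Suggest` failure evidence of a model problem rather than a constraint problem.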

## Advanced Debugging

### Enable verbose tracing

```python
dspy.configure(lm=lm, trace=[])

# Now run your program — trace will be populated
result = my_program(question="test")
```

### Inspect module structure

```python
# Print the module tree
print(my_program)

# See all named predictors
for name, predictor in my_program.named_predictors():
    print(f"{name}: {predictor}")
```

### Test individual components

Break your pipeline into pieces and test each one:

```python
class MyPipeline(dspy.Module):
    def __init__(self):
        self.step1 = dspy.ChainOfThought("question -> search_query")
        self.step2 = dspy.Retrieve(k=3)
        self.step3 = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question):
        query = self.step1(question=question)
        print(f"Step 1 output: {query.search_query}")  # Debug

        context = self.step2(query.search_query)
        print(f"Step 2 retrieved: {len(context.passages)} passages")  # Debug

        answer = self.step3(context=context.passages, question=question)
        print(f"Step 3 output: {answer.answer}")  # Debug

        return answer
```

### Compare prompts before/after optimization

```python
# Before optimization
baseline = MyProgram()
baseline(question="test")
print("=== BASELINE PROMPT ===")
dspy.inspect_history(n=1)

# After optimization
optimized = MyProgram()
optimized.load("optimized.json")
optimized(question="test")
print("=== OPTIMIZED PROMPT ===")
dspy.inspect_history(n=1)
```

## Additional resources

- For a complete error index, see reference.md
- To measure and improve accuracy, use /ai-improving-accuracy
- Use /ai-tracing-requests to trace a specific request end-to-end (every LM call, retrieval, latency)
- For DSPy API details, see docs/dspy-reference.md