CrewAI Getting Started & Architecture

How to choose the right abstraction, scaffold a project, and wire everything together.
MANDATORY WORKFLOW — Read This First
NEVER manually create crewAI project files. Always scaffold with the CLI:

```bash
crewai create flow <project_name>
```

This is not optional. Even if you only need one crew, even if you know the file structure by heart — run the CLI first, then modify the generated files. Do NOT write `main.py`, `crew.py`, `agents.yaml`, `tasks.yaml`, or `pyproject.toml` by hand from scratch.

Why: The CLI sets up correct imports, directory structure, `pyproject.toml` config, and boilerplate that is easy to get subtly wrong when done manually. The reference material below teaches you how the pieces work so you can modify scaffolded code, not so you can replace the scaffolding step.

Workflow:
- Run `crewai create flow <name>` (use underscores, not hyphens)
- Edit the generated YAML and Python files to match your use case
- Run `crewai install`, then `crewai run`
1. Choosing the Right Abstraction
crewAI has four levels of abstraction. Pick the simplest one that fits your need:

| Level | When to Use | Overhead | Example |
|---|---|---|---|
| `LLM.call()` | Single prompt, no tools, structured extraction | Lowest | Parse an email into fields |
| `Agent.kickoff()` | One agent with tools and reasoning, no multi-agent coordination | Low | Research a topic with web search |
| `Crew` | Multiple agents collaborating on related tasks | Medium | Research + write + review pipeline |
| `Flow` | Production app with state, routing, conditionals, error handling | Full | Multi-step workflow with branching logic |
Decision Flowchart
```
Do you need tools or multi-step reasoning?
├── No → LLM.call()
└── Yes
    └── Do you need multiple agents collaborating?
        ├── No → Agent.kickoff()
        └── Yes
            └── Do you need state management, routing, or multiple crews?
                ├── No → Crew (but still scaffold as a Flow for future-proofing)
                └── Yes → Flow + Crew(s)
```

Rule of thumb: For any production application, always start with a Flow. You can embed `LLM.call()`, `Agent.kickoff()`, or `Crew.kickoff()` inside Flow steps. This gives you state management, error handling, and room to grow.
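The flowchart above can be encoded as a small helper function. This is purely illustrative — the function name and signature are not part of crewAI:

```python
def choose_abstraction(needs_tools: bool, multi_agent: bool, needs_state: bool) -> str:
    """Walk the decision tree above and return the crewAI abstraction to reach for."""
    if not needs_tools:
        return "LLM.call()"          # single prompt, no tools or reasoning
    if not multi_agent:
        return "Agent.kickoff()"     # one agent with tools
    if not needs_state:
        return "Crew (scaffolded as a Flow)"
    return "Flow + Crew(s)"          # production: state, routing, error handling

print(choose_abstraction(needs_tools=False, multi_agent=False, needs_state=False))  # LLM.call()
print(choose_abstraction(needs_tools=True, multi_agent=True, needs_state=True))     # Flow + Crew(s)
```

Note that the questions are ordered: if you don't need tools, the answer is `LLM.call()` regardless of the later questions.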
2. LLM.call() — Direct LLM Invocation

Use `LLM.call()` for simple, single-turn tasks where you don't need tools or agent reasoning.

```python
from crewai import LLM
from pydantic import BaseModel

class EmailFields(BaseModel):
    sender: str
    subject: str
    urgency: str

llm = LLM(model="openai/gpt-4o")
```

Without `response_format` — returns a string:

```python
raw = llm.call(messages=[{"role": "user", "content": "Summarize this text..."}])
print(raw)  # str
```

With `response_format` — returns the Pydantic object directly:

```python
result = llm.call(
    messages=[{"role": "user", "content": f"Extract fields from this email: {email_text}"}],
    response_format=EmailFields
)
print(result.sender)   # str — access Pydantic fields directly
print(result.urgency)  # str
```

**When NOT to use:** If you need tools, multi-step reasoning, or retries — use an Agent instead.
---3. Agent.kickoff() — Single Agent Execution
3. Agent.kickoff() — 单个Agent执行
Use when you need one agent with tools and reasoning, but don't need multi-agent coordination.
```python
from crewai import Agent
from crewai_tools import SerperDevTool
from pydantic import BaseModel

class ResearchFindings(BaseModel):
    main_points: list[str]
    key_technologies: list[str]

researcher = Agent(
    role="AI Researcher",
    goal="Research the latest AI developments",
    backstory="Expert AI researcher with deep technical knowledge.",
    llm="openai/gpt-4o",  # Optional: defaults to OPENAI_MODEL_NAME env var or "gpt-4"
    tools=[SerperDevTool()],
)
```

Unstructured output:

```python
result = researcher.kickoff("What are the latest LLM developments?")
print(result.raw)            # str
print(result.usage_metrics)  # token usage
```

Structured output with `response_format`:

```python
result = researcher.kickoff(
    "Summarize latest AI developments",
    response_format=ResearchFindings,
)
print(result.pydantic.main_points)
```

> **Note:** `Agent.kickoff()` wraps results — access structured output via `result.pydantic`. This differs from `LLM.call()`, which returns the Pydantic object directly.

**When NOT to use:** If you need multiple agents passing context to each other — use a Crew.

---

4. CLI Scaffold Reference
As stated above: NEVER skip `crewai create flow`. This section documents what the CLI generates so you know what to modify — not so you can recreate it by hand.

```bash
crewai create flow my_project
```

Warning: Always use underscores in project names, not hyphens. `crewai create flow my-project` creates a directory that is not a valid Python identifier, causing `ModuleNotFoundError` on import. Use `my_project` instead.

This generates:

```
my_project/
├── src/my_project/
│   ├── crews/
│   │   └── my_crew/
│   │       ├── config/
│   │       │   ├── agents.yaml   # Agent definitions (role, goal, backstory)
│   │       │   └── tasks.yaml    # Task definitions (description, expected_output)
│   │       └── my_crew.py        # Crew class with @CrewBase
│   ├── tools/
│   │   └── custom_tool.py
│   ├── main.py                   # Flow class with @start/@listen
│   └── ...
├── .env                          # API keys (OPENAI_API_KEY, etc.)
└── pyproject.toml
```

Do not use `crewai create crew` unless you are certain you will never need routing, state, or multiple crews. Prefer `crewai create flow` as the default.
5. YAML Configuration (agents.yaml & tasks.yaml)
The scaffold uses YAML files for agent and task definitions. This separates configuration from code and supports `{variable}` interpolation.

agents.yaml

```yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering
    the latest developments in {topic}.
  # Optional overrides:
  # llm: openai/gpt-4o
  # max_iter: 20
  # max_rpm: 10

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} research findings
  backstory: >
    You're a meticulous analyst known for turning complex data
    into clear, actionable reports.
```

tasks.yaml

```yaml
research_task:
  description: >
    Conduct thorough research about {topic}.
    Identify key trends, breakthrough technologies,
    and potential industry impacts.
  expected_output: >
    A detailed report with analysis of the top 5
    developments in {topic}, with sources and implications.
  agent: researcher

reporting_task:
  description: >
    Review the research and create a comprehensive report about {topic}.
  expected_output: >
    A polished report formatted in markdown with sections
    for each key finding.
  agent: reporting_analyst
  output_file: output/report.md
```

Key rules:
- `{variable}` placeholders are replaced at runtime via `crew.kickoff(inputs={...})`
- `expected_output` is always a string (never a Pydantic class name)
- The `agent` value must match an agent key in `agents.yaml`
- In `Process.sequential`, each task auto-receives all prior task outputs as context
- For non-sequential dependencies, use `context=[other_task]` to explicitly pass output
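The "agent key must match" rule can be seen in a quick sketch — plain dicts stand in for the parsed YAML files, and the helper function is illustrative, not part of crewAI:

```python
# Parsed agents.yaml / tasks.yaml would yield dict structures shaped like these:
agents = {"researcher": {}, "reporting_analyst": {}}
tasks = {
    "research_task": {"agent": "researcher"},
    "reporting_task": {"agent": "reporting_analyst"},
    "broken_task": {"agent": "editor"},  # no "editor" key in agents.yaml
}

def undefined_agent_refs(agents: dict, tasks: dict) -> list[str]:
    """Return task names whose `agent` value matches no key in agents.yaml."""
    return [name for name, task in tasks.items() if task["agent"] not in agents]

print(undefined_agent_refs(agents, tasks))  # ['broken_task']
```

Running a check like this over your config before `crewai run` catches the mismatch earlier than a runtime error would.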
6. Wiring It Together — crew.py
The `@CrewBase` decorator auto-loads YAML config files and collects `@agent` and `@task` methods.

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class ResearchCrew:
    """Research and reporting crew."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config["researcher"],
            tools=[SerperDevTool()],
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config["reporting_analyst"],
        )

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config["reporting_task"],
            context=[self.research_task()],  # Explicit dependency (optional in sequential)
            output_file="output/report.md",
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # auto-collected by @agent
            tasks=self.tasks,    # auto-collected by @task
            process=Process.sequential,
            verbose=True,
        )
```

Important: Method names must match YAML keys. `def researcher(self)` maps to the `researcher:` key in `agents.yaml`.
7. Flows — The Production Foundation
Flows are the recommended way to build production crewAI applications. They provide state management, conditional routing, human-in-the-loop, and persistence — wrapping crews, agents, and LLM calls into a coherent workflow.
Basic Flow — main.py

```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
from .crews.research_crew.research_crew import ResearchCrew

class ResearchState(BaseModel):
    topic: str = ""
    report: str = ""

class ResearchFlow(Flow[ResearchState]):
    @start()
    def begin(self):
        print(f"Starting research on: {self.state.topic}")

    @listen(begin)
    def run_research(self):
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": self.state.topic}
        )
        self.state.report = result.raw

def kickoff():
    flow = ResearchFlow()
    flow.kickoff(inputs={"topic": "AI Agents"})

if __name__ == "__main__":
    kickoff()
```

Key points:
- `flow.kickoff(inputs={"topic": "AI Agents"})` populates `self.state.topic` (keys must match Pydantic field names). The YAML `{variable}` substitution happens later, when you call `crew.kickoff(inputs={"topic": self.state.topic})` inside a Flow step. The chain is: flow inputs → state → crew inputs → YAML substitution.
- Each `@listen` method runs after its dependency completes
- State persists across all Flow steps — use it to pass data between crews

State Management — Structured vs Unstructured

Structured (recommended for production):

```python
from pydantic import BaseModel

class MyState(BaseModel):
    topic: str = ""
    research: str = ""
    draft: str = ""
    approved: bool = False

class MyFlow(Flow[MyState]):
    ...
```

Unstructured (quick prototyping):

```python
class MyFlow(Flow):  # No type parameter — state is a dict
    @start()
    def begin(self):
        self.state["topic"] = "AI"  # dict-style access
```

Use structured state for type safety, IDE autocompletion, and validation. Use unstructured only for throwaway prototypes.
Using Agent.kickoff() Inside Flows (Common Pattern)
Many production Flows skip Crews entirely and orchestrate individual agents via `Agent.kickoff()`. This gives you fine-grained control — each Flow step calls a specific agent, passes state, and stores the result. The Flow handles orchestration; agents handle reasoning.

```python
from crewai import Agent
from crewai.flow.flow import Flow, listen, start
from crewai_tools import SerperDevTool, ScrapeWebsiteTool
from pydantic import BaseModel

class ResearchState(BaseModel):
    query: str = ""
    raw_research: str = ""
    analysis: str = ""
    report: str = ""

class DeepResearchFlow(Flow[ResearchState]):
    @start()
    def gather_research(self):
        """Agent with tools does the actual searching."""
        researcher = Agent(
            role="Senior Research Analyst",
            goal="Find comprehensive, factual information about the given topic",
            backstory="You're an expert researcher who always cites sources and flags uncertainty.",
            tools=[SerperDevTool(), ScrapeWebsiteTool()],
            llm="openai/gpt-4o",
        )
        result = researcher.kickoff(
            f"Research this topic thoroughly: {self.state.query}"
        )
        self.state.raw_research = result.raw

    @listen(gather_research)
    def analyze_findings(self):
        """A different agent analyzes the raw research — no tools needed."""
        analyst = Agent(
            role="Data Analyst",
            goal="Extract key insights, patterns, and actionable recommendations",
            backstory="You turn raw data into clear, structured analysis.",
            llm="openai/gpt-4o",
        )
        result = analyst.kickoff(
            f"Analyze these research findings and extract key insights:\n\n{self.state.raw_research}"
        )
        self.state.analysis = result.raw

    @listen(analyze_findings)
    def write_report(self):
        """A writer agent produces the final deliverable."""
        writer = Agent(
            role="Technical Writer",
            goal="Produce clear, actionable reports for non-technical readers",
            backstory="You specialize in making complex information accessible.",
            llm="openai/gpt-4o",
        )
        result = writer.kickoff(
            f"Write a comprehensive report based on this analysis:\n\n{self.state.analysis}"
        )
        self.state.report = result.raw
```

Why this pattern works well:
- Each agent is purpose-built for its step — narrow role, specific tools
- The Flow manages state and sequencing — no crew overhead
- Easy to add routing, human review, or retry logic between steps
- You can mix `Agent.kickoff()`, `LLM.call()`, and `Crew.kickoff()` freely

When to use Agent.kickoff() vs Crew.kickoff() in a Flow:

| Use `Agent.kickoff()` | Use `Crew.kickoff()` |
|---|---|
| Each step is a distinct agent with different tools | Multiple agents need to collaborate on ONE task |
| You want the Flow to control sequencing | Agents need to pass context to each other within a step |
| Steps are independent and don't need inter-agent delegation | You need a hierarchical process with a manager |
| You want maximum control over what data flows between steps | The sub-workflow is self-contained and reusable |

Agent.kickoff() with Structured Output in Flows

Combine `response_format` with state for typed data flow between agents:

```python
class Insights(BaseModel):
    key_points: list[str]
    recommendations: list[str]
    confidence: float

class AnalysisFlow(Flow[AnalysisState]):
    @start()
    def research(self):
        researcher = Agent(role="Researcher", goal="...", backstory="...", tools=[SerperDevTool()])
        result = researcher.kickoff(
            f"Research {self.state.topic}",
            response_format=Insights,
        )
        # result.pydantic gives you the typed Insights object
        self.state.key_points = result.pydantic.key_points
        self.state.recommendations = result.pydantic.recommendations
```
Mixing Abstractions in a Flow
A Flow can combine all crewAI abstractions in a single workflow:

```python
class ProductFlow(Flow[ProductState]):
    @start()
    def classify_request(self):
        # LLM.call() for simple classification
        llm = LLM(model="openai/gpt-4o")
        self.state.category = llm.call(
            messages=[{"role": "user", "content": f"Classify: {self.state.request}"}],
            response_format=Category
        ).category

    @router(classify_request)
    def route_by_category(self):
        if self.state.category == "simple":
            return "quick_answer"
        return "deep_research"

    @listen("quick_answer")
    def handle_simple(self):
        # Agent.kickoff() for single-agent work
        agent = Agent(role="Helper", goal="Answer quickly", backstory="...")
        result = agent.kickoff(self.state.request)
        self.state.answer = result.raw

    @listen("deep_research")
    def handle_complex(self):
        # Crew.kickoff() for multi-agent collaboration
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": self.state.request}
        )
        self.state.answer = result.raw
```

Flow Routing with @router

Use `@router` for conditional branching — return a string label, and `@listen("label")` binds to branches:

```python
from crewai.flow.flow import Flow, listen, router, start

class QualityFlow(Flow[QAState]):
    @start()
    def generate_content(self):
        result = WriterCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.draft = result.raw

    @router(generate_content)
    def check_quality(self):
        llm = LLM(model="openai/gpt-4o")
        score = llm.call(
            messages=[{"role": "user", "content": f"Rate 1-10: {self.state.draft}"}],
            response_format=QualityScore
        )
        if score.rating >= 7:
            return "approved"
        return "needs_revision"

    @listen("approved")
    def publish(self):
        self.state.published = True

    @listen("needs_revision")
    def revise(self):
        result = EditorCrew().crew().kickoff(
            inputs={"draft": self.state.draft}
        )
        self.state.draft = result.raw
```
Converging Branches with or_() and and_()

```python
from crewai.flow.flow import Flow, listen, start, or_, and_

class ParallelFlow(Flow[MyState]):
    @start()
    def fetch_data_a(self):
        ...

    @start()
    def fetch_data_b(self):
        ...

    # Runs when BOTH fetches complete
    @listen(and_(fetch_data_a, fetch_data_b))
    def merge_results(self):
        ...

    # Runs when EITHER source provides data
    @listen(or_(fetch_data_a, fetch_data_b))
    def process_first_available(self):
        ...
```
Flow Persistence with @persist

For long-running workflows that need to survive restarts:

```python
from crewai.flow.flow import Flow, start, listen, persist
from crewai.flow.persistence import SQLiteFlowPersistence

@persist(SQLiteFlowPersistence())  # Class-level: persists all methods
class LongRunningFlow(Flow[MyState]):
    @start()
    def step_one(self):
        self.state.data = "processed"

    @listen(step_one)
    def step_two(self):
        # If the process crashes here, restarting with the same
        # state ID will resume from after step_one
        ...
```
Human-in-the-Loop with @human_feedback

```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class ApprovalFlow(Flow[ReviewState]):
    @start()
    def generate_draft(self):
        result = WriterCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.draft = result.raw

    @human_feedback(
        message="Review the draft and provide feedback",
        emit=["approved", "needs_revision"],
        llm="openai/gpt-4o",
        default_outcome="approved"
    )
    @listen(generate_draft)
    def review_step(self):
        return self.state.draft

    @listen("approved")
    def publish(self):
        ...

    @listen("needs_revision")
    def revise(self):
        feedback = self.last_human_feedback
        # Use feedback.feedback_text for revision
        ...
```

Flow Visualization

```python
flow = MyFlow()
flow.plot()           # Display in notebook
flow.plot("my_flow")  # Save as my_flow.png
```
8. Variable Interpolation with inputs

The `{variable}` pattern is how you make crews reusable. Variables flow through: kickoff → YAML templates → agent/task prompts.

```python
crew.kickoff(inputs={
    "topic": "AI Agents",
    "current_year": "2025",
    "target_audience": "developers",
})
```

In YAML, `{topic}` and `{current_year}` get replaced:

```yaml
research_task:
  description: >
    Research {topic} trends for {current_year},
    targeting {target_audience}.
```

Common mistakes:
- Forgetting to pass a variable that's referenced in YAML → results in a literal `{variable}` in the prompt
- Using Jinja2 `{{ }}` syntax instead of single braces → crewAI uses single-brace `{ }`
- Passing variables that don't match any YAML placeholder → silently ignored
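All three mistakes fall out of the single-brace substitution semantics. A toy model — not crewAI's actual implementation — makes the behavior concrete:

```python
import re

def interpolate(template: str, inputs: dict) -> str:
    """Replace {name} with inputs[name]; unknown placeholders stay literal."""
    return re.sub(
        r"\{(\w+)\}",
        lambda m: str(inputs.get(m.group(1), m.group(0))),  # m.group(0) keeps "{name}" as-is
        template,
    )

desc = "Research {topic} trends for {current_year}."
# Missing variable -> the literal {current_year} survives into the prompt;
# the extra "unused" key is silently ignored.
print(interpolate(desc, {"topic": "AI Agents", "unused": "ignored"}))
# → Research AI Agents trends for {current_year}.
```

Spotting a literal `{...}` in an agent's prompt or output is therefore the telltale sign of a missing `inputs` key.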
9. Running Your Project

Install dependencies:

```bash
crewai install
```

Run the flow:

```bash
crewai run
```

Or run directly:

```bash
cd my_project
uv run src/my_project/main.py
```
10. Quick Diagnostic Checklist
| Symptom | Likely Cause | Fix |
|---|---|---|
| Literal `{variable}` appears in output | Missing `inputs` key | Pass the variable in `kickoff(inputs={...})` |
| `KeyError` when loading agent/task config | Method name doesn't match YAML key | Ensure method names and YAML keys are identical |
| `ModuleNotFoundError` on import | Wrong path or hyphens in project name | Use underscores; check `pyproject.toml` |
| Crew runs but Flow state is empty | Not writing results back to `self.state` | Assign crew output to a `self.state` field |
| `Process` enum error | Uppercase enum value | Use lowercase: `Process.sequential` |
| Agent ignores tools | Tools assigned to agent but task needs them | Move tools to task level or verify agent has the right tools |
| Agent fabricates search results | No tools assigned — agent can't actually search | Add a search tool such as `SerperDevTool()` |
| A `@listen("label")` branch never fires | Listener string doesn't match router return value, or passed a string instead of method reference | Make the listener label exactly match the router's returned string |
| Flow step runs twice unexpectedly | Multiple `@listen` sources trigger it | Use `and_(...)` if it should run only after all upstream steps complete |
| Authentication/API key errors | Missing env var | Set the provider key (e.g. `OPENAI_API_KEY`) in `.env` |
| Agent retries endlessly on structured output | Pydantic model too complex for the LLM | Simplify the model, reduce nesting, or use a more capable `llm` |
| Agent loops until `max_iter` | Task description too vague or conflicting with `expected_output` | Make `description` and `expected_output` specific and consistent |
| Flow state not updating across steps | Using unstructured state without proper key access | Switch to structured Pydantic state or ensure dict keys are consistent |
| Flow method never executes | Method not decorated with `@start`/`@listen`/`@router` | Use `@listen("label")` for branch methods |

References

For deeper dives into specific topics, see:
- Flow Routing, Persistence, Streaming & Human Feedback — complete `@router`, `or_()`, `and_()`, streaming, `@persist`, and `@human_feedback` patterns
- MCP Servers — prefer official MCP servers over native tools; setup, DSL integration, and known official servers
- Tools Catalog — all 80+ built-in tools with imports, env vars, and common combos (use as fallback when no MCP server exists)

For related skills:
- design-agent — agent Role-Goal-Backstory framework, parameter tuning, tool assignment, memory & knowledge configuration
- design-task — task description/expected_output best practices, guardrails, structured output, dependencies