CrewAI task design and configuration. Use when creating, configuring, or debugging crewAI tasks — writing descriptions and expected_output, setting up task dependencies with context, configuring output formats (output_pydantic, output_json, output_file), using guardrails for validation, enabling human_input, async execution, markdown formatting, or debugging task execution issues.
To install the skill:

```shell
npx skill4agent add crewaiinc/skills design-task
```

A well-designed task spells out steps, constraints, and downstream use in its `description`, and format, structure, and quality markers in its `expected_output`:

```yaml
research_task:
  description: >
    Conduct thorough research about {topic} for the year {current_year}.

    Your research should:
    1. Identify the top 5 key trends and breakthroughs
    2. For each trend, find at least 2 credible sources
    3. Note any controversies or competing viewpoints
    4. Assess potential industry impact (high/medium/low)

    Focus on developments from the last 6 months.
    Do NOT include speculation or unverified claims.
    The output will feed into a report for {target_audience}.
  expected_output: >
    A structured research brief with 5 sections, one per trend.
    Each section includes: trend name, 2-3 paragraph summary,
    source citations, impact assessment (high/medium/low),
    and a confidence level for your findings.
  agent: researcher
```

Contrast vague and specific `expected_output` values:

| Bad Expected Output | Good Expected Output |
|---|---|
| "A report about the topic" | "A structured research brief with 5 sections: trend name, 2-3 paragraph summary, source citations" |
| "A blog post" | "A 1000-1500 word blog post with introduction, main sections, conclusion, and code examples where relevant" |
| "Some recommendations" | "A prioritized list of 5 recommendations, each with rationale, estimated effort, and expected impact" |
```yaml
# DON'T do this — too many objectives in one task
research_and_write_task:
  description: >
    Research {topic}, analyze the findings, write a blog post,
    and proofread it for grammar errors.
  expected_output: >
    A polished blog post about {topic}.
```

Instead, split it into focused single-purpose tasks:

```yaml
research_task:
  description: >
    Research {topic} and identify the top 5 key developments.
  expected_output: >
    A research brief with 5 sections covering key trends.
  agent: researcher

writing_task:
  description: >
    Using the research findings, write a technical blog post about {topic}.
  expected_output: >
    A 1000-1500 word blog post with introduction, main sections,
    and conclusion. Include code examples where relevant.
  agent: writer

editing_task:
  description: >
    Review and edit the blog post for grammar, clarity, and consistency.
  expected_output: >
    The final edited blog post with all corrections applied.
    Include a brief editor's note listing what was changed.
  agent: editor
```
Every task requires a `description` and an `expected_output`:

```python
Task(
    description="...",       # Required: what to do
    expected_output="...",   # Required: what the result looks like
    agent=researcher,        # Optional for hierarchical process; required for sequential
)
```

Pass one task's output to another with `context`:

```python
analysis_task = Task(
    description="Analyze the research findings...",
    expected_output="...",
    agent=analyst,
    context=[research_task],  # Receives research_task's output as context
)
```
To get structured output instead of free text, parse the agent's answer into a schema with `output_pydantic` (or `output_json`):

```python
from pydantic import BaseModel

class ResearchReport(BaseModel):
    trends: list[str]
    confidence: float
    sources: list[str]

research_task = Task(
    description="...",
    expected_output="A structured report with trends, confidence score, and sources.",
    agent=researcher,
    output_pydantic=ResearchReport,  # Agent's output is parsed into this model
)
```

Keep `expected_output` a prose description of the fields: the agent sees that text, not the `output_pydantic` class. Read structured results after kickoff:

```python
result = crew.kickoff(inputs={...})
last_task_output = result.pydantic    # Pydantic model from the last task
all_outputs = result.tasks_output     # List of all TaskOutput objects
first_task = all_outputs[0].pydantic  # Pydantic from a specific task
```
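Conceptually, `output_pydantic` turns the agent's final JSON answer into a typed object. A minimal stdlib-only sketch of that parsing step — a `dataclass` stands in for the Pydantic model, and `parse_report` plus the sample answer are illustrative, not CrewAI API:

```python
import json
from dataclasses import dataclass

@dataclass
class ResearchReport:
    trends: list
    confidence: float
    sources: list

def parse_report(raw: str) -> ResearchReport:
    """Parse a raw LLM answer (JSON text) into a typed report."""
    data = json.loads(raw)
    return ResearchReport(
        trends=data["trends"],
        confidence=float(data["confidence"]),
        sources=data["sources"],
    )

raw_answer = '{"trends": ["agentic RAG"], "confidence": 0.8, "sources": ["example.com"]}'
report = parse_report(raw_answer)
print(report.confidence)  # → 0.8
```

Downstream code then reads typed fields (`report.trends`) instead of re-parsing raw text.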
To save a task's output to disk:

```python
Task(
    ...,
    output_file="output/report.md",  # Save output to file
    create_directory=True,           # Create directory if missing (default: True)
)
```

To run a task without blocking the one that follows:

```python
Task(
    ...,
    async_execution=True,  # Run without blocking the next task
)
```

A later task can wait for an async task's result by listing it in `context`.
To pause for human review before a task's output is finalized:

```python
Task(
    ...,
    human_input=True,  # Pause for human review before finalizing
)
```

To have markdown formatting instructions added to the task:

```python
Task(
    ...,
    markdown=True,  # Add markdown formatting instructions
)
```
To run a hook after a task finishes, attach a callback:

```python
def log_completion(output):
    print(f"Task completed: {output.description[:50]}...")
    save_to_database(output.raw)

Task(
    ...,
    callback=log_completion,  # Called after task completion
)
```
To validate output before it flows downstream, add a guardrail function:

```python
from typing import Any

def validate_word_count(output) -> tuple[bool, Any]:
    """Ensure output is between 500-2000 words."""
    word_count = len(output.raw.split())
    if word_count < 500:
        return (False, f"Output too short ({word_count} words). Expand to at least 500 words.")
    if word_count > 2000:
        return (False, f"Output too long ({word_count} words). Condense to under 2000 words.")
    return (True, output)

Task(
    ...,
    guardrail=validate_word_count,
    guardrail_max_retries=3,  # Max retry attempts (default: 3)
)
```

A guardrail returns a `(bool, Any)` tuple: `(True, result)` passes the output through; `(False, feedback)` sends the feedback back to the agent for a retry.
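The `(bool, Any)` contract drives a retry loop. A minimal sketch of that loop in plain Python — not CrewAI internals; `Output`, `attempt_task`, and `run_with_guardrail` are hypothetical stand-ins for the agent call and task runner:

```python
class Output:
    def __init__(self, raw: str):
        self.raw = raw

def validate_min_words(output, minimum=5):
    """Guardrail: (True, output) to pass, (False, feedback) to retry."""
    count = len(output.raw.split())
    if count < minimum:
        return (False, f"Too short ({count} words); expand to {minimum}+.")
    return (True, output)

def run_with_guardrail(attempt_task, guardrail, max_retries=3):
    feedback = None
    for _ in range(max_retries + 1):
        output = attempt_task(feedback)  # agent sees the feedback on retries
        ok, result = guardrail(output)
        if ok:
            return result                # validated output flows downstream
        feedback = result                # failure message drives the retry
    raise RuntimeError(f"Guardrail failed after retries: {feedback}")

# Stand-in "agent": improves only when given feedback
def attempt_task(feedback):
    return Output("short answer" if feedback is None else "a much longer and more complete answer")

result = run_with_guardrail(attempt_task, validate_min_words)
print(result.raw)  # → a much longer and more complete answer
```

The first attempt fails the word count, the feedback string triggers one retry, and the second attempt passes.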
You can also pass a string to get an LLM-evaluated guardrail:

```python
Task(
    ...,
    guardrail="Verify the output contains at least 3 source citations and no speculative claims.",
)
```

Multiple guardrails run in order and can mix functions with LLM checks:

```python
Task(
    ...,
    guardrails=[
        validate_word_count,  # Function: check length
        validate_no_pii,      # Function: check for PII
        "Ensure the tone is professional and appropriate for a business audience.",  # LLM check
    ],
    guardrail_max_retries=3,
)
```
A complete `tasks.yaml` for a three-task sequential crew:

```yaml
research_task:
  description: >
    Conduct thorough research about {topic} for {current_year}.
    Identify key trends, breakthrough technologies,
    and potential industry impacts.
    Focus on the last 6 months of developments.
  expected_output: >
    A structured research brief with 5 sections.
    Each section: trend name, 2-3 paragraph summary,
    source citations, and impact assessment.
  agent: researcher

analysis_task:
  description: >
    Analyze the research findings and create actionable recommendations
    for {target_audience}.
  expected_output: >
    A prioritized list of 5 recommendations with:
    rationale, estimated effort, and expected impact.
  agent: analyst
  context:
    - research_task

report_task:
  description: >
    Compile a final report combining research and analysis for {target_audience}.
  expected_output: >
    A polished markdown report with executive summary,
    detailed findings, recommendations, and appendices.
  agent: writer
  output_file: output/report.md
```
The matching crew class wires the YAML configs to `Task` objects:

```python
@CrewBase
class ResearchCrew:
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])

    @task
    def analysis_task(self) -> Task:
        return Task(
            config=self.tasks_config["analysis_task"],
            context=[self.research_task()],
        )

    @task
    def report_task(self) -> Task:
        return Task(
            config=self.tasks_config["report_task"],
            output_file="output/report.md",
        )
```

Each `@task` method name (`def research_task`) must match its YAML key (`research_task:`). With `Process.sequential`, context accumulates down the chain:

```
research_task → analysis_task → report_task
      ↓               ↓               ↓
   output 1      output 1 + 2   output 1 + 2 + 3
```
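The accumulation shown above can be sketched in plain Python; the `lambda` "agents" are hypothetical stand-ins for real task executions:

```python
def run_sequential(tasks):
    """Each task receives the outputs of all tasks before it."""
    outputs = []
    for name, run in tasks:
        context = list(outputs)          # output 1, then 1 + 2, then 1 + 2 + 3 ...
        outputs.append((name, run(context)))
    return outputs

tasks = [
    ("research_task", lambda ctx: f"research (saw {len(ctx)} prior outputs)"),
    ("analysis_task", lambda ctx: f"analysis (saw {len(ctx)} prior outputs)"),
    ("report_task",   lambda ctx: f"report (saw {len(ctx)} prior outputs)"),
]
for name, out in run_sequential(tasks):
    print(name, "->", out)
# research_task -> research (saw 0 prior outputs)
# analysis_task -> analysis (saw 1 prior outputs)
# report_task -> report (saw 2 prior outputs)
```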
Use `context=` when a task should depend on specific earlier tasks only:

```python
# Task C depends on A but NOT B
task_c = Task(
    ...,
    context=[task_a],  # Only receives task_a output, not task_b
)
```

```python
# Diamond dependency pattern
task_a = Task(...)                            # Entry point
task_b = Task(..., context=[task_a])          # Depends on A
task_c = Task(..., context=[task_a])          # Also depends on A
task_d = Task(..., context=[task_b, task_c])  # Depends on both B and C
```
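With explicit `context`, each task sees only the outputs it lists. A plain-Python sketch of the diamond above (the `lambda` functions are hypothetical stand-ins for agent executions, not CrewAI API):

```python
def run_with_context(plan):
    """plan: list of (name, deps, fn); each fn receives only its deps' outputs."""
    done = {}
    for name, deps, fn in plan:               # list order = execution order
        context = {d: done[d] for d in deps}  # selective, not cumulative
        done[name] = fn(context)
    return done

plan = [
    ("task_a", [], lambda ctx: "A"),
    ("task_b", ["task_a"], lambda ctx: "B<-" + ctx["task_a"]),
    ("task_c", ["task_a"], lambda ctx: "C<-" + ctx["task_a"]),
    ("task_d", ["task_b", "task_c"], lambda ctx: "D<-" + "+".join(sorted(ctx))),
]
results = run_with_context(plan)
print(results["task_d"])  # → D<-task_b+task_c
```

Note that `task_d` never sees `task_a`'s output directly — only what B and C produced.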
To run a task only when a condition on the previous output holds, use `ConditionalTask`:

```python
from crewai.task import ConditionalTask

def needs_more_data(output) -> bool:
    return len(output.pydantic.items) < 10

extra_research = ConditionalTask(
    description="Fetch additional data sources...",
    expected_output="...",
    agent=researcher,
    condition=needs_more_data,  # Only runs if previous output has < 10 items
)
```
Give data-gathering tasks the tools they need; without them the agent fabricates data:

```python
from crewai_tools import SerperDevTool, ScrapeWebsiteTool

Task(
    description="Search for and scrape the top 5 articles about {topic}...",
    expected_output="...",
    agent=researcher,
    tools=[SerperDevTool(), ScrapeWebsiteTool()],  # Task-specific tools
)
```

Any `{variable}` placeholder in a task config is interpolated at kickoff:

```yaml
research_task:
  description: >
    Research {topic} trends for {current_year},
    targeting {target_audience}.
  expected_output: >
    A report on {topic} suitable for {target_audience}.
```
```python
crew.kickoff(inputs={
    "topic": "AI Agents",
    "current_year": "2025",
    "target_audience": "developers",
})
```

Every `{variable}` in the config needs a matching key in `inputs`; to include literal braces, escape them as `{{ }}` instead of `{ }`.
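Assuming the interpolation behaves like Python's `str.format` (a simplification for illustration, not the actual CrewAI implementation), the substitution and brace escaping look like:

```python
description = (
    "Research {topic} trends for {current_year}, targeting {target_audience}. "
    "Return JSON like {{\"trends\": [...]}}."  # {{ }} escapes literal braces
)
inputs = {"topic": "AI Agents", "current_year": "2025", "target_audience": "developers"}

rendered = description.format(**inputs)
print(rendered)
# → Research AI Agents trends for 2025, targeting developers. Return JSON like {"trends": [...]}.
```

A missing key in `inputs` would raise a `KeyError` here, which is why every placeholder needs a matching input.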
Common task-design mistakes:

| Mistake | Impact | Fix |
|---|---|---|
| Vague description ("Research the topic") | Agent produces shallow, unfocused output | Add specific steps, constraints, and context |
| Vague expected_output ("A report") | Agent guesses at format and structure | Specify format, sections, length, quality markers |
| Multiple objectives in one task | Agent does all of them poorly | Split into focused single-purpose tasks |
| No context between dependent tasks | Agent lacks information from prior steps | Use `context` to pass prior task outputs |
| expected_output just names the `output_pydantic` model | Agent sees a class name string, not field names | Keep expected_output a prose description of the fields |
| Missing tools for data tasks | Agent fabricates data instead of fetching it | Add tools to the task or agent |
| No guardrails on critical output | Bad output flows downstream unchecked | Add function or LLM guardrails |
| Overly strict expected_output | Agent loops trying to match impossible criteria | Be specific but achievable; lower `guardrail_max_retries` |
| Description duplicates backstory | Wasted tokens and confused agent | Description = what to do; backstory = who you are |