Parallel Agent Dispatch
Overview
You delegate tasks to specialized agents with isolated contexts. Design each agent's instructions and context carefully so it stays focused and completes its task. Agents do not inherit your session context or history; you construct precisely everything they need. This also preserves your own context for coordinating the work.
When you encounter multiple unrelated failures (different test files, different subsystems, different bugs), troubleshooting them one by one wastes time. Each troubleshooting task is independent and can be done in parallel.
Core Principle: Assign one agent to each independent problem domain and let them work concurrently.
When to Use
```dot
digraph when_to_use {
  "Multiple failures exist?" [shape=diamond];
  "Are they independent?" [shape=diamond];
  "Single agent troubleshoots all issues" [shape=box];
  "One agent per problem domain" [shape=box];
  "Can they run in parallel?" [shape=diamond];
  "Execute agents sequentially" [shape=box];
  "Parallel dispatch" [shape=box];
  "Multiple failures exist?" -> "Are they independent?" [label="Yes"];
  "Are they independent?" -> "Single agent troubleshoots all issues" [label="No - Related"];
  "Are they independent?" -> "Can they run in parallel?" [label="Yes"];
  "Can they run in parallel?" -> "Parallel dispatch" [label="Yes"];
  "Can they run in parallel?" -> "Execute agents sequentially" [label="No - Shared state"];
}
```
Applicable Scenarios:
- 3 or more test files failing due to different root causes
- Multiple subsystems failing independently
- Each problem can be understood without context from other problems
- No shared state between troubleshooting tasks
Inapplicable Scenarios:
- Failures are related (fixing one may fix others)
- Need to understand the complete system state
- Agents will interfere with each other
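The decision flow above can be sketched as a small function. This is a minimal illustration; the type and input names are hypothetical, not part of any real API.

```typescript
// Mirrors the flowchart: related failures -> one agent;
// independent but shared-state -> sequential; otherwise -> parallel.
type Strategy =
  | "single agent troubleshoots all issues"
  | "execute agents sequentially"
  | "parallel dispatch";

function chooseStrategy(opts: {
  independent: boolean; // failures have unrelated root causes
  sharedState: boolean; // fixes would touch the same files or resources
}): Strategy {
  if (!opts.independent) return "single agent troubleshoots all issues";
  if (opts.sharedState) return "execute agents sequentially";
  return "parallel dispatch";
}
```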
Pattern
1. Identify Independent Problem Domains
Group failures by:
- File A Tests: Tool approval process
- File B Tests: Batch completion behavior
- File C Tests: Abort functionality
Each problem domain is independent—fixing the tool approval won't affect the abort tests.
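Grouping by test file is often a good first cut at problem domains. A sketch, assuming a hypothetical `Failure` shape:

```typescript
// Illustrative failure record; real test runners report richer data.
interface Failure {
  file: string;
  test: string;
  message: string;
}

// Bucket failures by file so each bucket can become one agent's scope.
function groupByDomain(failures: Failure[]): Map<string, Failure[]> {
  const domains = new Map<string, Failure[]>();
  for (const f of failures) {
    const group = domains.get(f.file) ?? [];
    group.push(f);
    domains.set(f.file, group);
  }
  return domains;
}
```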
2. Create Focused Agent Tasks
Each agent receives:
- Clear Scope: One test file or subsystem
- Clear Goal: Make these tests pass
- Constraints: Do not modify other code
- Expected Output: A summary of what you found and fixed
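The four fields above can be captured as a small task type. The shape is illustrative, not a framework API:

```typescript
// One task per independent problem domain.
interface AgentTask {
  scope: string;          // one test file or subsystem
  goal: string;           // what "done" means
  constraints: string[];  // what the agent must not touch
  expectedOutput: string; // what the agent should return
}

const abortTask: AgentTask = {
  scope: "src/agents/agent-tool-abort.test.ts",
  goal: "Make the 3 failing tests pass",
  constraints: ["Do not modify code outside the abort implementation and its tests"],
  expectedOutput: "A summary of what you found and fixed",
};
```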
3. Parallel Dispatch
```typescript
// In Claude Code / AI environment
Task("Fix failures in agent-tool-abort.test.ts")
Task("Fix failures in batch-completion-behavior.test.ts")
Task("Fix failures in tool-approval-race-conditions.test.ts")
// All three tasks run concurrently
```
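`Task()` above is specific to the agent environment. In plain TypeScript, the same fan-out/fan-in shape is a `Promise.all`; here `runAgent` is a stand-in stub for a real dispatch call:

```typescript
// Stand-in for a real agent invocation (assumption, not a real API).
async function runAgent(prompt: string): Promise<string> {
  return `done: ${prompt}`;
}

// All agents start immediately; we wait until every summary is back.
async function dispatchAll(prompts: string[]): Promise<string[]> {
  return Promise.all(prompts.map(runAgent));
}
```

The key property is that the agents start together and you block only once, on the slowest of them, rather than paying each agent's latency in sequence.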
4. Review and Integration
When agents return:
- Read each summary
- Verify there are no conflicts between fixes
- Run the full test suite
- Integrate all changes
Agent Prompt Structure
A good agent prompt should be:
- Focused - One clear problem domain
- Self-contained - Contains all context needed to understand the problem
- Clear Output Requirements - What should the agent return?
```markdown
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:

1. "should abort tool with partial output capture" - Expected message to contain 'interrupted at'
2. "should handle mixed completed and aborted tools" - Fast tool was aborted instead of completed
3. "should properly track pendingToolCount" - Expected 3 results but got 0

These are timing/race-condition issues. Your tasks:

1. Read the test file and understand what each test verifies
2. Find the root cause - is it a timing issue or an actual bug?
3. Apply the appropriate fix:
   - Replace arbitrary timeouts with event-based waits
   - Fix bugs in the abort implementation if found
   - Adjust test expectations if they test changed behavior

Don't just increase timeouts - find the real issue.

Return: A summary of what you found and fixed.
```
Common Mistakes
- Too broad: "Fix all tests" - agents will get lost. Be specific: "Fix agent-tool-abort.test.ts" keeps the scope focused.
- No context: "Fix race conditions" - agents don't know where to look. Paste the error messages and failing test names.
- No constraints: agents may refactor code far beyond the problem. Set limits: "Do not modify production code" or "Only fix tests".
- Vague output requirements: "Fix it" - you won't know what was changed. Be explicit: "Return a summary of root causes and changes made".
When Not to Use
- Related failures: fixing one may fix others - troubleshoot them together first
- Requires full context: understanding the problem means seeing the entire system
- Exploratory debugging: you don't yet know what's broken
- Shared state: agents will interfere with each other (editing the same file, using the same resource)
Real-World Case
Scenario: After a large-scale refactoring, 6 tests in 3 files are failing
Failures:
- agent-tool-abort.test.ts: 3 failures (timing issues)
- batch-completion-behavior.test.ts: 2 failures (tools not executing)
- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)
Decision: Independent problem domains—abort logic, batch completion, and race conditions are separate
Dispatch:
- Agent 1 → Fix agent-tool-abort.test.ts
- Agent 2 → Fix batch-completion-behavior.test.ts
- Agent 3 → Fix tool-approval-race-conditions.test.ts
Results:
- Agent 1: Replaced arbitrary timeouts with event-based waits
- Agent 2: Fixed an event structure bug (incorrect threadId position)
- Agent 3: Added logic to wait for asynchronous tool execution to complete
Integration: All fixes are independent, no conflicts, full test suite passes
Time Saved: 3 problems solved in parallel vs sequentially
Core Advantages
- Parallelization - Multiple troubleshooting tasks happen simultaneously
- Focus - Each agent has a narrow scope and less context to track
- Independence - Agents do not interfere with each other
- Speed - 3 problems solved in the time it takes to solve 1
Validation
After agents return:
- Review each summary - understand what was changed
- Check for conflicts - did agents edit the same code segment?
- Run the full suite - verify all fixes work together
- Spot check - agents may make systematic errors
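The conflict check in the list above can be mechanized: flag any file touched by more than one agent. The input shape is an assumption for illustration:

```typescript
// edits: agent name -> list of files that agent changed.
// Returns the files touched by two or more agents.
function findConflicts(edits: Record<string, string[]>): string[] {
  const touchedBy = new Map<string, string[]>();
  for (const [agent, files] of Object.entries(edits)) {
    for (const file of files) {
      const agents = touchedBy.get(file) ?? [];
      agents.push(agent);
      touchedBy.set(file, agents);
    }
  }
  return Array.from(touchedBy.entries())
    .filter(([, agents]) => agents.length > 1)
    .map(([file]) => file);
}
```

An empty result doesn't prove the fixes compose, so still run the full suite; it just catches the most common collision early.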
Actual Results
From a debugging session (2025-10-03):
- 6 failures across 3 files
- 3 agents dispatched in parallel
- All troubleshooting completed concurrently
- All fixes successfully integrated
- Zero conflicts between agent changes