Execute a GRACE development plan with multiple agents while keeping planning artifacts and shared context consistent.
## Prerequisites

- `docs/development-plan.xml` must exist with module contracts and implementation order
- The module graph artifact must exist
- If either is missing, tell the user to run the planning workflow first
- If the shell does not already have GRACE worker/reviewer presets, set them up before dispatching a large wave
- Prefer this skill only when module-local verification commands already exist or can be defined clearly
## Core Principle

Parallelize module implementation, not architectural truth.
- One controller owns shared artifacts: `docs/development-plan.xml`, the module graph, phase status, and the execution queue
- Worker agents own only their assigned module files and module-local tests
- Reviewers validate module outputs before the controller merges graph and plan updates
- Speed should come from better context packaging, batched shared-artifact work, and scoped reviews - not from letting workers invent architecture
If multiple agents edit the same module, the same shared XML file, or the same tightly coupled slice, this is not a multi-agent wave. Use sequential execution instead.
## Execution Profiles

Default to `balanced` unless the user asks otherwise.

### `safe`

- Ask for approval on the proposed waves before dispatch
- Run contract review and verification review for every module output
- Run targeted graph sync after each wave and a full refresh at each phase boundary
- Use when modules are novel, risky, or touch poorly understood integration surfaces

### `balanced` (default)

- Parse plan and graph once at the start of the run
- Ask for one approval on the execution schedule up front unless the plan changes mid-run
- Give workers compact execution packets instead of making each worker reread full XML artifacts
- Run module-local verification per worker, scoped gate reviews per module, and batched integrity checks per wave or phase
- Run targeted graph sync after each wave; run a full refresh only at phase boundaries, when drift is suspected, or at final wrap-up

### `fast`

- Use only for mature codebases with strong verification and stable architecture
- Ask for one approval for the whole run unless a blocker or plan change appears
- Keep worker packets compact and require only the minimum context needed for exact scope execution
- Block only on critical scoped review issues during a wave, then batch the deeper integrity audit at phase end or final wrap-up
- Reserve full refresh for phase completion or final reconciliation
Every module still gets a fresh worker. Do not optimize this workflow by reusing worker sessions across modules.
## Process

### Step 1: Build the Execution Waves Once

Read `docs/development-plan.xml` and the module graph once per run, then build the controller view of the execution queue.
- Parse pending step and module entries
- Group steps into parallel-safe waves
- A step is parallel-safe only if:
- all of its dependencies are already complete
- it has a disjoint write scope from every other step in the wave
- it does not require shared edits to the same integration surface
- Choose the execution profile: `safe`, `balanced`, or `fast`
- For each wave, prepare a compact execution packet for every module containing:
- module ID and purpose
- target file paths and exact write scope
- module contract excerpt from `docs/development-plan.xml`
- this module's entry excerpt from the module graph
- dependency contract summaries for every module it depends on
- module-local verification commands
- wave-level integration checks that will run after merge
- expected graph delta fields: imports, exports, annotations, and CrossLinks
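The execution packet described above can be sketched as a small record type. This is an illustrative shape only; the field names are assumptions, and the skill does not mandate any particular schema.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutionPacket:
    """Compact, self-contained brief handed to one worker agent.

    Field names are illustrative assumptions, not a mandated schema.
    """
    module_id: str
    purpose: str
    write_scope: list           # exact file paths the worker may edit
    contract_excerpt: str       # from docs/development-plan.xml
    graph_excerpt: str          # this module's entry in the module graph
    dependency_contracts: dict  # dependency module ID -> contract summary
    verification_commands: list # module-local checks the worker must run
    wave_checks: list           # integration checks run after merge
    # Expected graph delta: imports, exports, annotations, CrossLinks
    expected_graph_delta: dict = field(default_factory=dict)
```

Keeping the packet this small is the point: workers read one record instead of rereading full XML artifacts.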
Present the proposed waves, selected profile, and packet scopes to the user. In `safe`, wait for approval before each dispatch. In `balanced` and `fast`, one up-front approval is enough unless the plan changes.
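The wave-grouping rules above can be sketched as a greedy pass over the pending steps. This is a sketch under stated assumptions, not a mandated algorithm: it approximates the "shared integration surface" rule by treating overlapping write scopes as the only conflict signal.

```python
def build_waves(steps, deps, write_scopes):
    """Group pending steps into parallel-safe waves.

    steps: step IDs in plan order
    deps: step ID -> set of step IDs it depends on
    write_scopes: step ID -> set of file paths it writes

    A step joins a wave only if all its dependencies are already
    complete and its write scope is disjoint from every step
    already placed in that wave.
    """
    done, waves, remaining = set(), [], list(steps)
    while remaining:
        wave, claimed = [], set()
        for step in remaining:
            if deps[step] <= done and write_scopes[step].isdisjoint(claimed):
                wave.append(step)
                claimed |= write_scopes[step]
        if not wave:
            # No step is eligible: a dependency cycle or a missing step.
            raise ValueError("dependency cycle or unsatisfiable step")
        waves.append(wave)
        done |= set(wave)
        remaining = [s for s in remaining if s not in done]
    return waves
```

Note that a step depending on a member of the current wave is deferred to a later wave, since `done` only grows at wave boundaries.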
### Step 2: Assign Ownership
Before dispatching, define ownership explicitly:
- Controller:
  - owns `docs/development-plan.xml`
  - owns the module graph
  - owns wave packets, phase completion, and commits that touch shared artifacts
- Worker agent:
  - owns one module or one explicitly bounded slice
  - may edit only that module's source files and module-local tests
  - must not change shared planning artifacts directly
- Reviewer agent:
  - performs read-only validation of contract compliance, GRACE markup, imports, graph delta accuracy, and verification evidence
If a worker discovers that a missing module or new dependency is required, stop that worker and ask the user to revise the plan before proceeding. Do not allow silent architectural drift.
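The controller can enforce this ownership mechanically when a worker result comes back. The following check is a minimal sketch, assuming changed files and write scopes are plain path strings; the skill does not prescribe an enforcement mechanism.

```python
def check_ownership(changed_files, write_scope, shared_artifacts):
    """Return the files a worker touched that it does not own.

    A file is a violation if it is outside the worker's write scope
    or is a shared planning artifact (which only the controller owns).
    """
    return [
        f for f in changed_files
        if f not in write_scope or f in shared_artifacts
    ]
```

A non-empty result means the worker's output must be rejected before review, not merged and fixed later.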
### Step 3: Dispatch Fresh Worker Agents Per Wave
For each approved wave:
- Dispatch one fresh worker agent per module
- Give each worker only the execution packet and the files inside its write scope
- Require the worker to:
- generate or update code using the GRACE protocol
- preserve MODULE_CONTRACT, MODULE_MAP, CHANGE_SUMMARY, function contracts, and semantic blocks
- add or update module-local tests only
- run module-local verification only
- commit their work after module-local verification passes, using the format:

  ```
  grace(MODULE_ID): short description

  Wave N, Phase M
  ```
- return a result packet with changed files, verification evidence, graph delta proposal, commit hash, and any integration assumptions
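The result packet a worker returns can be assembled as a plain record mirroring the list above. The key names here are assumptions for illustration; the skill only fixes what the packet must contain, not its exact shape.

```python
def build_result_packet(module_id, changed_files, evidence, graph_delta,
                        commit_hash, assumptions=()):
    """Assemble the result packet a worker hands back to the controller.

    evidence: e.g. a mapping of verification command -> outcome
    graph_delta: proposed imports/exports/annotations/CrossLinks changes
    assumptions: any integration assumptions the worker made
    """
    return {
        "module_id": module_id,
        "changed_files": list(changed_files),
        "verification_evidence": evidence,
        "graph_delta": graph_delta,
        "commit": commit_hash,
        "integration_assumptions": list(assumptions),
    }
```

Because the controller, not the worker, applies graph deltas, the delta here is a proposal the reviewer can diff against actual imports.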
### Step 4: Review with the Smallest Safe Scope
After each worker finishes:
- Run a scoped contract review against the changed files and execution packet
- Run a scoped verification review against the module-local tests and verification evidence
- Escalate to a full audit only when:
- cross-module drift is suspected
- the graph delta contradicts the packet or actual imports
- verification is too weak for the chosen profile
- a phase boundary audit is due
- If issues are found:
- send the same worker back to fix them
- re-run only the affected reviews unless escalation is required
- Only approved module outputs may move to controller integration
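The escalation rule above can be expressed as a single predicate. The flag names and the numeric strength thresholds are illustrative assumptions; the point is that escalation is a deterministic decision, not reviewer mood.

```python
# Minimum acceptable verification strength per profile (illustrative scale).
MIN_STRENGTH = {"safe": 3, "balanced": 2, "fast": 1}

def needs_full_audit(result, profile):
    """Decide whether a module result escalates from scoped review
    to a full audit, per the four escalation triggers in Step 4."""
    return (
        result["drift_suspected"]                       # cross-module drift
        or not result["graph_delta_consistent"]         # delta vs. packet/imports
        or result["verification_strength"] < MIN_STRENGTH[profile]
        or result["phase_audit_due"]                    # phase boundary audit
    )
```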
### Step 5: Controller Integration and Batch Graph Sync
After all modules in the wave are approved:
- Integrate the accepted module outputs
- Apply graph delta proposals once, centrally, to the module graph
- Update `docs/development-plan.xml` step status once per wave
- Run a targeted graph refresh against the changed modules and touched dependency surfaces
- If targeted refresh reports wider drift, escalate to a full refresh before the next wave
- If the wave reveals weak or missing automated checks, strengthen them before continuing
### Step 6: Verify by Level
Run verification at the smallest level that still protects correctness.
- Worker level: module-local typecheck, lint, unit tests, and deterministic local assertions
- Wave level: integration checks only for the merged surfaces touched by the wave
- Phase level: full suite, full integrity audit, and final graph reconciliation before marking the phase done
Do not run full-repository tests and full-repository graph scans after every successful module unless the risk profile requires it.
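The three levels above can be encoded as a small dispatch table. The command names are placeholder assumptions; real commands come from the plan and the execution packets.

```python
def verification_plan(level, modules):
    """Return the checks to run at a given verification level.

    level: "worker", "wave", or "phase"
    modules: module IDs in scope at that level
    Command strings are placeholders, not real CLI invocations.
    """
    if level == "worker":
        # Module-local only: typecheck, lint, unit tests per module.
        return ([f"typecheck {m}" for m in modules]
                + [f"lint {m}" for m in modules]
                + [f"unit-tests {m}" for m in modules])
    if level == "wave":
        # Integration checks only for surfaces the wave touched.
        return [f"integration-check {m}" for m in modules]
    if level == "phase":
        # Full suite, full integrity audit, final graph reconciliation.
        return ["full-suite", "integrity-audit", "graph-reconciliation"]
    raise ValueError(f"unknown verification level: {level}")
```

The phase-level list is intentionally module-independent: it runs once per phase regardless of how many modules the phase contained.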
### Step 7: Controller Shared-Artifact Commits and Report
After each wave, the controller commits only shared artifacts that changed:
- Update the module graph and `docs/development-plan.xml` with wave results
- Commit with format:

  ```
  grace(graph): sync after wave N

  Modules: M-xxx, M-yyy
  ```
Worker implementation commits are already done per module in Step 3. Controller commits are only for shared planning artifacts.
After each wave, report:
```text
=== WAVE COMPLETE ===
Wave: N
Profile: safe / balanced / fast
Modules: M-xxx, M-yyy
Approved: count/count
Graph sync: targeted passed / targeted fixed / escalated to full refresh
Verification: module-local passed / wave checks passed / follow-up required
Remaining waves: count
```
## Dispatch Rules
- Parse shared XML artifacts once per run unless the plan changes
- Prefer controller-built execution packets over repeated raw XML reads by workers
- Parallelize only across independent modules, never across unknown coupling
- Do not let workers invent new architecture
- Do not let workers edit the same XML planning artifacts in parallel
- Do not reuse worker sessions across modules; keep workers fresh and packets compact
- Give every worker exact file ownership and exact success criteria
- Workers must commit their implementation after verification passes - do not wait for controller
- Controller commits only shared artifacts (graph, plan), not implementation files
- Prefer targeted refresh and scoped review during active waves
- Reserve full reviewer audits and full refresh scans for phase boundaries, drift suspicion, or critical failures
- If verification is weak, slow down and move to `safe` rather than pretending `fast` is safe
## When NOT to Use
- Only one module remains
- Steps are tightly coupled and share the same files
- The plan is still changing frequently
- The team has not defined reliable module-local verification yet
Use sequential execution when dependency risk is higher than the parallelism gain.