When this skill is activated, always start your first response with the 🧢 emoji.
Absolute-Human: AI-Native Development Lifecycle
Absolute-Human is a development lifecycle built from the ground up for AI agents. Traditional methods like Agile, Waterfall, and TDD were designed around human constraints - limited parallelism, context switching costs, communication overhead, and meetings. AI agents have none of these constraints. Absolute-Human exploits this by decomposing work into dependency-graphed sub-tasks, executing independent tasks in parallel waves, enforcing TDD verification at every step, and tracking everything on a persistent board that survives across sessions.
The model has 7 phases: INTAKE - DECOMPOSE - DISCOVER - PLAN - EXECUTE - VERIFY - CONVERGE.
Activation Banner
At the very start of every Absolute-Human invocation, before any other output, display this ASCII art banner:
███████╗██╗ ██╗██████╗ ███████╗██████╗ ██╗ ██╗██╗ ██╗███╗ ███╗ █████╗ ███╗ ██╗
██╔════╝██║ ██║██╔══██╗██╔════╝██╔══██╗██║ ██║██║ ██║████╗ ████║██╔══██╗████╗ ██║
███████╗██║ ██║██████╔╝█████╗ ██████╔╝███████║██║ ██║██╔████╔██║███████║██╔██╗ ██║
╚════██║██║ ██║██╔═══╝ ██╔══╝ ██╔══██╗██╔══██║██║ ██║██║╚██╔╝██║██╔══██║██║╚██╗██║
███████║╚██████╔╝██║ ███████╗██║ ██║██║ ██║╚██████╔╝██║ ╚═╝ ██║██║ ██║██║ ╚████║
╚══════╝ ╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═══╝
This banner is mandatory. It signals to the user that Absolute-Human mode is active.
Activation Protocol
Immediately after displaying the banner, enter plan mode before doing anything else:
- On platforms with native plan mode (e.g., Claude Code's plan mode, Gemini CLI's planning mode): invoke the native plan mode mechanism immediately.
- On platforms without native plan mode: simulate plan mode by completing all planning phases (INTAKE through PLAN) in full before making any code changes. Present the complete plan to the user for explicit approval before proceeding to EXECUTE.
This ensures that every Absolute-Human invocation begins with structured thinking. The first four phases (INTAKE, DECOMPOSE, DISCOVER, PLAN) are inherently planning work - no files should be created or modified until the user has approved the plan and execution begins in Phase 5.
Session Resume Protocol
When Absolute-Human is invoked and a board file already exists in the project root:
- Detect: Read the existing board and determine its status (in progress, completed, or blocked)
- Display: Print a compact status summary showing completed/in-progress/blocked/remaining tasks
- Resume: Pick up from the last incomplete wave - do NOT restart from INTAKE
- Reconcile: If the codebase has changed since the last session (e.g., manual edits, other commits), run a quick diff check against the board's expected state and flag any conflicts before resuming

If the board is marked completed, ask the user whether to start a new Absolute-Human session (archiving the old board) or review the completed work. Never blow away an existing board without explicit user confirmation.
Codebase Convention Detection
Before INTAKE begins, automatically detect the project's conventions by scanning for key files. This grounds all subsequent phases in reality rather than assumptions.
Auto-detect Checklist
| Signal | Files to Check |
|---|---|
| Package manager | `package-lock.json` (npm), `yarn.lock` (yarn), `pnpm-lock.yaml` (pnpm), `bun.lockb` (bun), `Cargo.toml` (cargo), `go.mod` (go) |
| Language/Runtime | `tsconfig.json` (TypeScript), `pyproject.toml` / `requirements.txt` (Python), `go.mod` (Go), `Cargo.toml` (Rust) |
| Test runner | `jest.config.*`, `vitest.config.*`, `pytest.ini`, `conftest.py`, test directory patterns |
| Linter/Formatter | `.eslintrc*`, `.prettierrc*`, `ruff.toml`, `biome.json`, `.golangci.yml` |
| Build system | `Makefile`, `vite.config.*`, `webpack.config.*`, `turbo.json`, `build.gradle` |
| CI/CD | `.github/workflows/`, `.gitlab-ci.yml`, `Jenkinsfile` |
| Available scripts | `scripts` section of `package.json`, `Makefile` targets |
| Directory conventions | `src/`, `lib/`, `tests/`, `docs/`, `scripts/`, `examples/` |
| Codedocs | `.codedocs.json`, `documentation/.codedocs.json`, or any `.codedocs.json` in the repo |
Codedocs Detection
If a `.codedocs.json` manifest is found, the repo has structured codedocs output. Record its location on the board and set a codedocs flag. This changes how DISCOVER and PLAN operate - see those phases for details.
When codedocs is available, read the codedocs index and architecture overview immediately during convention detection and append their key facts (tech stack, module map, entry points, dev commands) to the Conventions section of the board. This front-loads context that would otherwise require separate codebase exploration in DISCOVER.
Output
Write the detected conventions to the board under a Conventions section. Reference these conventions in every subsequent phase - particularly PLAN and the Mandatory Tail Tasks verification step.
When to Use This Skill
Use Absolute-Human when:
- Multi-step feature development touching 3+ files or components
- User says "build this end-to-end" or "plan and execute this"
- User says "break this into tasks" or "sprint plan this"
- Any task requiring planning + implementation + verification
- Greenfield projects, major refactors, or migrations
- Complex bug fixes that span multiple systems
Do NOT use Absolute-Human when:
- Single-file bug fixes or typo corrections
- Quick questions or code explanations
- Tasks the user wants to do manually with your guidance
- Pure research or exploration tasks
Key Principles
1. Dependency-First Decomposition
Every task is a node in a directed acyclic graph (DAG), not a flat list. Dependencies between tasks are explicit. This prevents merge conflicts, ordering bugs, and wasted work.
2. Wave-Based Parallelism
Tasks at the same depth in the dependency graph form a "wave". All tasks in a wave execute simultaneously via parallel agents. Waves execute in serial order. This maximizes throughput while respecting dependencies.
3. Test-First Verification
Every sub-task writes tests before implementation. A task is only "done" when its tests pass. No exceptions for "simple" changes - tests are the proof of correctness.
4. Persistent State
All progress is tracked in a board file in the project root. This file survives across sessions, enabling resume, audit, and handoff. The user chooses during INTAKE whether the board is git-tracked or gitignored.
5. Interactive Intake
Never assume. Scale questioning depth to task complexity - simple tasks get 3 questions, complex ones get 8-10. Extract requirements, constraints, and success criteria before writing a single line of code.
Core Concepts
The 7 Phases
INTAKE --> DECOMPOSE --> DISCOVER --> PLAN --> EXECUTE --> VERIFY --> CONVERGE
  |          |            |           |          |          |          |
gather     build DAG    research    detail    parallel    test +    merge +
context    + waves      per task    per task  waves       verify    close
Task Graph
A directed acyclic graph (DAG) where each node is a sub-task and edges represent dependencies. Tasks with no unresolved dependencies can execute in parallel. See references/dependency-graph-patterns.md.
Execution Waves
Groups of independent tasks assigned to the same depth level in the DAG. Wave 1 runs first (all tasks in parallel), then Wave 2 (all tasks in parallel), and so on. See references/wave-execution.md.
Board
The board file is the single source of truth. It contains the intake summary, task graph, wave assignments, per-task status, research notes, plans, and verification results. See references/board-format.md.
Sub-task Lifecycle
pending --> researching --> planned --> in-progress --> verifying --> done
                |                                            |
                +--- blocked                                 +--- failed (retry)
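The lifecycle above can be encoded as an allowed-transition table that guards board status updates. This is a sketch: exactly where blocked and failed re-enter the flow is an assumption read off the diagram, not a specification.

```python
# Allowed sub-task status transitions, derived from the lifecycle diagram.
# The blocked/unblock and failed/retry edges are assumptions.
TRANSITIONS = {
    "pending":     {"researching", "blocked"},
    "researching": {"planned", "blocked"},
    "planned":     {"in-progress", "blocked"},
    "in-progress": {"verifying", "blocked"},
    "verifying":   {"done", "failed"},
    "failed":      {"in-progress"},  # retry loops back to EXECUTE
    "blocked":     {"pending"},      # unblocked tasks re-enter the queue
    "done":        set(),            # terminal state
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a board status update is a legal lifecycle move."""
    return target in TRANSITIONS.get(current, set())
```

A guard like this catches illegal board updates (e.g., jumping straight from pending to done) before they corrupt the audit trail.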
Phase 1: INTAKE (Interactive Interview)
The intake phase gathers all context needed to decompose the task. Scale depth based on complexity.
Complexity Detection
- Simple (single component, clear scope): 3 questions
- Medium (multi-component, some ambiguity): 5 questions
- Complex (cross-cutting, greenfield, migration): 8-10 questions
Core Questions (always ask)
- Problem Statement: What exactly needs to be built or changed? What triggered this work?
- Success Criteria: How will we know this is done? What does "working" look like?
- Constraints: Are there existing patterns, libraries, or conventions we must follow?
Extended Questions (medium + complex)
- Existing Code: Is there related code already in the repo? Should we extend it or build fresh?
- Dependencies: Does this depend on external APIs, services, or other in-progress work?
Deep Questions (complex only)
- Edge Cases: What are the known edge cases or failure modes?
- Testing Strategy: Are there existing test patterns? Integration vs unit preference?
- Rollout: Any migration steps, feature flags, or backwards compatibility needs?
- Documentation: What docs need updating? API docs, README, architecture docs?
- Priority: Which parts are most critical? What can be deferred if needed?
Board Persistence Question (always ask)
Ask: "Should the board file be git-tracked (audit trail, resume across machines) or gitignored (local working state)?"
Output
Write the intake summary to the board with all answers captured. See references/intake-playbook.md for the full question bank organized by task type.
Phase 2: DECOMPOSE (Task Graph Creation)
Break the intake into atomic sub-tasks and build the dependency graph.
Sub-task Anatomy
Each sub-task must have:
- ID: Sequential identifier
- Title: Clear, action-oriented (e.g., "Create user authentication middleware")
- Description: 2-3 sentences on what this task does
- Type: Task category (implementation, test, infrastructure/config, or documentation)
- Complexity: S (< 50 lines) | M (50-200 lines) | L (200+ lines - consider splitting)
- Dependencies: List of task IDs this depends on
Decomposition Rules
- Every task should be S or M complexity. If L, decompose further
- Test tasks are separate from implementation tasks
- Infrastructure/config tasks come before code that depends on them
- Documentation tasks depend on the code they document
- Aim for 5-15 sub-tasks. Fewer means under-decomposed; more means over-engineered
- Every task graph MUST end with three mandatory tail tasks (see below)
- Apply the complexity budget (see below)
Complexity Budget
After decomposition, sanity-check total scope before proceeding:
- Count the total number of tasks by complexity: S (small), M (medium), L (large)
- If any L tasks remain, decompose them further - L tasks are not allowed
- If total estimated scope exceeds 15 M-equivalent tasks (where 1 L = 3 M, 1 S = 0.5 M), flag to the user that scope may be too large for a single Absolute-Human session
- Suggest splitting into multiple Absolute-Human sessions with clear boundaries (e.g., "Session 1: backend API; Session 2: frontend integration")
- The user can override and proceed, but they must explicitly acknowledge the scope
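The budget arithmetic above (1 L = 3 M, 1 S = 0.5 M, cap of 15 M-equivalents) can be sketched as a small check run after decomposition:

```python
# Weights from the budget rule: 1 L = 3 M, 1 S = 0.5 M.
WEIGHTS = {"S": 0.5, "M": 1.0, "L": 3.0}
BUDGET_M_EQUIVALENT = 15

def check_budget(task_sizes: list[str]) -> tuple[float, list[str]]:
    """Return total M-equivalent scope and any flags to raise to the user."""
    flags = []
    if "L" in task_sizes:
        flags.append("L tasks remain - decompose them further")
    total = sum(WEIGHTS[size] for size in task_sizes)
    if total > BUDGET_M_EQUIVALENT:
        flags.append(
            f"scope {total:.1f} M-equivalents exceeds budget of {BUDGET_M_EQUIVALENT}"
        )
    return total, flags
```

For example, a graph of one S and two M tasks totals 2.5 M-equivalents and raises no flags; six L tasks total 18 and raise both flags.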
Mandatory Tail Tasks
Every task graph MUST end with three mandatory tail tasks: Self Code Review, Requirements Validation, and Full Project Verification. For detailed descriptions and acceptance criteria of each, see references/execution-patterns.md.
Build the DAG
- List all sub-tasks
- For each task, identify which other tasks must complete first
- Draw edges from dependencies to dependents
- Verify no cycles exist (it's a DAG, not a general graph)
Assign Waves
Group tasks by depth level in the DAG:
- Wave 1: Tasks with zero dependencies (roots of the DAG)
- Wave 2: Tasks whose dependencies are all in Wave 1
- Wave N: Tasks whose dependencies are all in Waves 1 through N-1
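The wave rules above amount to a level-order (Kahn-style) traversal of the DAG. A minimal sketch, assuming tasks are given as a mapping from task ID to the set of IDs it depends on:

```python
def assign_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into waves: wave N contains tasks whose dependencies
    all live in waves 1..N-1. Raises ValueError on cycles (not a DAG)."""
    remaining = {task: set(d) for task, d in deps.items()}
    completed: set[str] = set()
    waves: list[list[str]] = []
    while remaining:
        # A task is ready when every dependency is already completed.
        ready = sorted(t for t, d in remaining.items() if d <= completed)
        if not ready:
            raise ValueError(f"cycle detected among: {sorted(remaining)}")
        waves.append(ready)
        completed.update(ready)
        for t in ready:
            del remaining[t]
    return waves
```

If `ready` ever comes back empty while tasks remain, the graph has a cycle, which is exactly the "verify no cycles exist" check from the Build the DAG step.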
Present for Approval
Generate an ASCII dependency graph and wave assignment table. Present to the user and wait for explicit approval before proceeding. See references/dependency-graph-patterns.md for common patterns, example graphs, and the wave assignment algorithm.
Phase 3: DISCOVER (Parallel Research)
Research each sub-task before planning implementation. This phase is parallelizable per wave.
Per Sub-task Research
For each sub-task, investigate in this order - docs first, source second:
- Codedocs Lookup (if the codedocs flag is set on the board)
  - Check the codedocs manifest to find which module doc covers the files relevant to this task
  - Read the relevant module doc for public API, internal structure, dependencies, and implementation notes
  - Check for any cross-cutting pattern docs (error handling, testing strategy, logging) that apply to this task
  - Use the architecture docs for context and to understand how this task's module fits into the system
  - Only proceed to Codebase Exploration below if the docs don't contain enough detail - flag any gaps in the docs as a staleness note on the board
  - Record which doc files were used in the task's research notes on the board
- Codebase Exploration (always run; use it to fill gaps left by docs or when codedocs is not available)
  - Find existing patterns, utilities, and conventions relevant to this task
  - Identify files that will be created or modified
  - Check for reusable functions, types, or components
  - Understand the testing patterns used in the project
- Web Research (when codebase context is insufficient)
  - Official documentation for libraries and APIs involved
  - Best practices and common patterns
  - Known gotchas or breaking changes
- Risk Assessment
  - Flag unknowns or ambiguities
  - Identify potential conflicts with other sub-tasks
  - Note any assumptions that need validation
Execution Strategy
- Launch parallel Explore agents for all tasks in Wave 1 simultaneously
- Once Wave 1 research completes, launch Wave 2 research, and so on
- Each agent writes its findings to the board under the respective task
Output
Append research notes to each sub-task on the board:
- Key files identified
- Reusable code/patterns found
- Risks and unknowns flagged
- External docs referenced
Phase 4: PLAN (Execution Planning)
Create a detailed execution plan for each sub-task based on research findings.
Per Sub-task Plan
For each sub-task, specify:
- Files to Create/Modify: Exact file paths
- Test Files: Test file paths (TDD - these are written first)
- Implementation Approach: Brief description of the approach
- Acceptance Criteria: Specific, verifiable conditions for "done"
- Test Cases: List of test cases to write
- Happy path tests
- Edge case tests
- Error handling tests
Planning Rules
- Tests are always planned before implementation
- Each plan must reference specific reusable code found in DISCOVER
- Plans must respect the project's existing conventions (naming, structure, patterns)
- If a plan reveals a missing dependency, update the task graph (re-approve with user)
Output
Update each sub-task on the board with its execution plan. The board now contains everything an agent needs to execute the task independently.
Phase 5: EXECUTE (Wave-Based Implementation)
Execute tasks wave by wave. Within each wave, spin up parallel agents for independent tasks.
Pre-Execution Snapshot
Before executing the first wave, create a git safety net:
- Ensure all current changes are committed or stashed
- Record the current commit hash on the board as the rollback point
- If execution goes catastrophically wrong (build broken after max retries, critical files corrupted), the user can roll back to this commit
- Remind the user of the rollback point hash when flagging unrecoverable failures
Wave Execution Loop
for each wave in [Wave 1, Wave 2, ..., Wave N]:
    for each task in wave (in parallel):
        1. Write tests (TDD - red phase)
        2. Implement code to make tests pass (green phase)
        3. Refactor if needed (refactor phase)
        4. Update board status: in-progress -> verifying
    wait for all tasks in wave to complete
    run wave boundary checks (conflict resolution, progress report)
    proceed to next wave
For agent context handoff format, wave boundary checks (conflict resolution and progress reports), scope creep handling, blocked task management, and failure recovery patterns, see references/execution-patterns.md.
Phase 6: VERIFY (Per-Task + Integration)
Every sub-task must prove it works before closing.
Per-Task Verification
For each completed sub-task, run:
- Tests: Run the task's test suite - all tests must pass
- Lint: Run the project's linter on modified files
- Type Check: Run type checker if applicable (TypeScript, mypy, etc.)
- Build: Verify the project still builds
Integration Verification
After each wave completes:
- Run tests for tasks that depend on this wave's output
- Check for conflicts between parallel tasks (file conflicts, API mismatches)
- Run the full test suite if available
Verification Loop
if all checks pass:
    mark task as "done"
    update board with verification report
else:
    mark task as "failed"
    loop back to EXECUTE for this task (max 2 retries)
    if still failing after retries:
        flag for user attention
        continue with other tasks
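The retry policy could be sketched as follows; `run_checks` and `reexecute` are hypothetical hooks standing in for the project's real test/lint/build commands and the EXECUTE loop, not part of this specification:

```python
MAX_RETRIES = 2

def verify_task(task_id: str, run_checks, reexecute) -> str:
    """Run verification; on failure, loop back to EXECUTE up to MAX_RETRIES
    times. run_checks(task_id) -> bool and reexecute(task_id) -> None are
    caller-supplied hooks (assumed interfaces for this sketch)."""
    for attempt in range(MAX_RETRIES + 1):
        if run_checks(task_id):
            return "done"
        if attempt < MAX_RETRIES:
            reexecute(task_id)  # failed: loop back to EXECUTE for a retry
    return "flag-for-user"      # still failing: continue with other tasks
```

A task that never passes its checks triggers exactly two re-executions before being flagged, matching the "max 2 retries" rule above.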
Output
Update each sub-task on the board with a verification report:
- Tests: pass/fail (with details on failures)
- Lint: clean/issues
- Type check: pass/fail
- Build: pass/fail
See references/verification-framework.md for the full verification protocol.
Phase 7: CONVERGE (Final Integration)
Merge all work and close out the board.
Steps
- Merge: If using worktrees or branches, merge all work into the target branch
- Full Test Suite: Run the complete project test suite
- Documentation: Update any docs that were part of the task scope
- Summary: Generate a change summary with:
- Files created/modified (with line counts)
- Tests added (with coverage if available)
- Key decisions made during execution
- Any deferred work or follow-ups
- Close Board: Mark the board as completed with a timestamp
- Suggest Commit: Propose a commit message summarizing the work
Board Finalization
The completed board serves as an audit trail:
- Full history of all 7 phases
- Every sub-task with its research, plan, and verification
- Timeline of execution
- Any issues encountered and how they were resolved
Gotchas
- Parallel agents modifying shared files without a lock strategy - Two agents in the same wave that both edit the same utility file or test fixture will produce a merge conflict at the wave boundary. During DECOMPOSE, identify shared files and assign ownership to one task; other tasks must treat those files as read-only until the owning task completes.
- Board marked completed but tests were never run - The mandatory tail task "Run full project verification suite" is frequently skipped when agents declare done based on subjective confidence. Never mark the board completed until the actual test/lint/build commands have been run and their output recorded on the board.
- DISCOVER phase skipped for "obvious" tasks - Agents assume they know the codebase well enough to skip research. Then they write code that duplicates existing utilities, uses the wrong import paths, or misses a convention that would have been visible in a 2-minute code scan. Always run DISCOVER, even for small tasks.
- Rollback point recorded after Wave 1 has already started - Recording the git commit hash mid-wave means the rollback point already includes partial changes. The pre-execution snapshot must be taken before any file is touched - before Wave 1 begins, not during it.
- Silently absorbed scope creep expands the DAG without user visibility - Agents frequently discover adjacent improvements during EXECUTE and absorb them into the current task without flagging them. This makes the wave take longer than planned, obscures what changed, and violates the scope agreement from INTAKE. Everything outside the original scope goes to the Deferred Work section.
Anti-Patterns and Common Mistakes
For the full anti-patterns table (16 patterns covering intake, decomposition, execution, scope management, and conventions), see references/execution-patterns.md.
References
For detailed guidance on specific phases, load these reference files:
- references/intake-playbook.md - Full question bank organized by task type (feature, bug, refactor, greenfield, migration), with scaling rules and example sessions
- references/dependency-graph-patterns.md - Common DAG patterns, ASCII rendering format, wave assignment algorithm, and example graphs
- references/wave-execution.md - Parallel agent orchestration, agent prompt templates, blocked task handling, error recovery
- references/verification-framework.md - TDD workflow per sub-task, verification signals, integration testing, failure handling
- references/board-format.md - Full board specification with format, status transitions, and example board
Companion check
On first activation of this skill in a conversation, check which companion skills are installed by running `ls ~/.claude/skills/ ~/.agent/skills/ ~/.agents/skills/ .claude/skills/ .agent/skills/ .agents/skills/ 2>/dev/null`. Compare the results against the companion skills listed in this file's frontmatter. For any that are missing, mention them once and offer to install via `npx skills add AbsolutelySkilled/AbsolutelySkilled --skill <name>`. Skip this check entirely if the frontmatter's companion list is empty or all companions are already installed.