argue
Run structured multi-agent debates using argue CLI for cross-examined, high-confidence answers. Use when facing strategic decisions, ambiguous trade-offs, architecture debates, or questions where multiple perspectives improve the answer. Triggers on: argue, debate, cross-examine, second opinion, multi-agent, 'Should we X or Y?' with real stakes, consensus-building, risk analysis, or confirmation-bias mitigation.
Source: onevcat/argue
Install: `npx skill4agent add onevcat/argue argue`
# Argue — Multi-Agent Debate Engine
Structured debates where AI agents analyze independently, cross-examine across rounds, and converge on consensus through voting. Higher-confidence answers than any single model alone.
## When to Use
✅ Strategic / architectural decisions with real trade-offs, "Should we X or Y?" with real stakes, risk analysis, confirmation-bias mitigation, pre-commit quality gates on big decisions.
❌ Simple factual lookups, time-critical tasks (debates take 3–7 minutes), open-ended creative generation, questions with obvious answers.
## Pre-flight
If `argue` is not on PATH, install it (confirm with the user first — this is a global install):

```bash
npm install -g @onevcat/argue-cli
```

Then verify and configure:

```bash
argue version              # verify installed (v0.2+)
argue config init --global # ~/.config/argue/config.json — recommended for agent use

# Add at least 2 agents — `--agent <id>` shorthand creates provider + agent in one shot
argue config add-provider --id codex --type cli --cli-type codex --model-id gpt-5.4 --agent codex-agent
argue config add-provider --id gemini --type cli --cli-type gemini --model-id gemini-3.1-pro-preview --agent gemini-agent
```

Why global by default: a global config is set up once and works from any cwd, and outputs go to `~/.argue/output/<requestId>/` instead of cluttering the current project tree. Use `argue config init --local` only when a specific project needs its own dedicated agent line-up — that writes `./argue.config.json` and outputs to `./out/<requestId>/`.

For API providers, SDK adapters, roles, and system prompts, see references/setup.md.
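Because the install is global, it is worth checking for the binary before touching the user's machine. The `have_cli` helper below is a hypothetical sketch (not part of argue) that only reports what it finds, leaving the install decision to the user:

```shell
# Hypothetical helper (not part of argue): report whether a CLI is on PATH
# without installing anything, so the global install stays a user decision.
have_cli() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1 (install it first)"
  fi
}

have_cli argue   # either "found: argue" or the install hint, depending on the machine
```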
## Running Debates
```bash
# Basic — 2 agents, 2-3 rounds, auto-consensus
argue run --task "Should we use a monorepo or polyrepo?" --verbose

# With a follow-up action: representative executes once consensus is reached
argue run \
  --task "Review the API design in docs/api.md" \
  --action "Implement the consensus recommendation and open a PR" \
  --verbose

# Open the rendered report in the hosted viewer when the run finishes
argue run --task "..." --view
```

Useful flags (full list: `argue --help`):

| Flag | Purpose |
|---|---|
| `--agents` | Pick which agents participate (default: `defaults.defaultAgents` from config) |
| *(see `argue --help`)* | Control debate depth (defaults: 2 / 3) |
| *(see `argue --help`)* | Consensus threshold (default: 1 = unanimous) |
| `--action` | Execute task after consensus |
| `--view` | Open report in the hosted viewer |
| *(see `argue --help`)* | JSON input for complex setups |
| `--verbose` | Stream agent reasoning live |
Debates typically take 3–7 minutes for 2 agents × 3 rounds. Defaults are 10 min per task and 20 min per round; bump them for heavy reviews.
## Viewing & Acting on Results
When a run finishes, argue prints the request id and a viewer hint. Open it any time:
```bash
argue view              # most recent run
argue view <request-id> # specific run
```

The hosted viewer renders `result.json` entirely client-side (gzip + base64url in the URL fragment — nothing is uploaded). Use `--viewer-url` to point at a self-hosted viewer.

To run a follow-up task using a debate result as context:
```bash
argue act --result ~/.argue/output/<requestId>/result.json --task "Write a summary blog post"
argue act --result ./out/<requestId>/result.json --task "Implement the changes" --agent codex-agent
```

## Output Files
After every run, argue writes to `~/.argue/output/<requestId>/` (global config) or `./out/<requestId>/` (project-local config):

- `result.json` — full structured result
- `summary.md` — markdown report (written on completion)
- `events.jsonl` — event stream (written live, survives crashes — parse it for partial results if a run is killed)
- `error.json` — error details (only on failure)
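If a run dies before `result.json` lands, the surviving `events.jsonl` can still be mined from the shell. The event fields below (`type`, `agent`, `round`) are illustrative assumptions, not argue's documented event schema:

```shell
# Fabricated sample standing in for a real events.jsonl from a killed run;
# the field names here are assumptions, not argue's actual event format.
cat > events.jsonl <<'EOF'
{"type":"round_started","round":1}
{"type":"claim","agent":"codex-agent","round":1}
{"type":"claim","agent":"gemini-agent","round":1}
{"type":"round_started","round":2}
{"type":"claim","agent":"codex-agent","round":2}
EOF

# The last event shows how far the run got before it was killed:
tail -n 1 events.jsonl

# Count how many claims each agent managed to emit:
grep -o '"agent":"[^"]*"' events.jsonl | sort | uniq -c
```

Adapt the patterns to whatever fields the real event stream actually contains.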
Result status: `consensus` | `partial_consensus` | `unresolved` | `failed`. If you need to parse `result.json` programmatically, the canonical schema lives at `packages/argue/src/contracts/result.ts`.

## Tips
- Frame as decisions, not topics. "Should we use SwiftUI or UIKit?" beats "Tell me about SwiftUI".
- Add context. "Should we use a monorepo? Context: 8 microservices, 3 teams, Node+Go" produces sharper claims.
- 2–3 agents is the sweet spot. Agents in the same round are dispatched in parallel, so wall-clock is dominated by rounds rather than agent count — adding more agents barely costs time. The real cost is tokens: every extra agent produces its own claims, plus every other agent has to read them as peer context, so token usage grows roughly with N². If the user's config has more than 3 agents, pass `--agents a,b,c` explicitly to pick a focused subset, or set `defaults.defaultAgents` in the config file once.
- Use `--action` when consensus should drive code changes or another real-world side-effect.
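When `--action` isn't used, the same gating can be scripted by hand. This sketch assumes `result.json` exposes the run status under a top-level `status` field, which is an assumption to verify against `packages/argue/src/contracts/result.ts`:

```shell
# Fabricated stand-in for a real result.json; the top-level "status"
# field is an assumed shape, not the verified schema.
cat > result.json <<'EOF'
{"status":"consensus","task":"Should we use a monorepo or polyrepo?"}
EOF

# Gate follow-up work on the debate outcome:
status=$(grep -o '"status":"[^"]*"' result.json | cut -d'"' -f4)
case "$status" in
  consensus|partial_consensus) echo "proceed with follow-up" ;;
  *)                           echo "needs human review" ;;
esac
```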
## Troubleshooting
For common errors and fixes, see references/troubleshooting.md.