
Codex Skill Guide


Running a Task


  1. Default to the gpt-5.2 model. Ask the user (via AskUserQuestion) which reasoning effort to use (xhigh, high, medium, or low). The user can override the model if needed (see Model Options below).
  2. Select the sandbox mode required for the task; default to --sandbox read-only unless edits or network access are necessary.
  3. Assemble the command with the appropriate options:
    • -m, --model <MODEL>
    • --config model_reasoning_effort="<xhigh|high|medium|low>"
    • --sandbox <read-only|workspace-write|danger-full-access>
    • --full-auto
    • -C, --cd <DIR>
    • --skip-git-repo-check
  4. Always use --skip-git-repo-check.
  5. When continuing a previous session, pipe the prompt via stdin to codex exec --skip-git-repo-check resume --last. When resuming, don't use any configuration flags unless the user explicitly requests them, e.g. by specifying the model or reasoning effort when asking to resume. Resume syntax: echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null. All flags must be inserted between exec and resume.
  6. IMPORTANT: By default, append 2>/dev/null to all codex exec commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
  7. Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
  8. After Codex completes, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
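
The assembly steps above can be sketched as a small helper that builds the full command line from the choices made in steps 1-3. The helper name and the sample values are illustrative; it only prints the command and never invokes codex itself:

```shell
# Hypothetical helper: assemble a codex exec command line from the user's
# model, reasoning effort, and sandbox choices. It prints the command only.
build_codex_cmd() {
  model="$1"    # e.g. gpt-5.2
  effort="$2"   # xhigh | high | medium | low
  sandbox="$3"  # read-only | workspace-write | danger-full-access
  printf 'codex exec -m %s --config model_reasoning_effort="%s" --sandbox %s --skip-git-repo-check 2>/dev/null' \
    "$model" "$effort" "$sandbox"
}

build_codex_cmd gpt-5.2 high read-only
```

The actual prompt would then be piped into the printed command via stdin, per step 5.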

Quick Reference


| Use case | Sandbox mode | Key flags |
| --- | --- | --- |
| Read-only review or analysis | read-only | --sandbox read-only 2>/dev/null |
| Apply local edits | workspace-write | --sandbox workspace-write --full-auto 2>/dev/null |
| Permit network or broad access | danger-full-access | --sandbox danger-full-access --full-auto 2>/dev/null |
| Resume recent session | Inherited from original | echo "prompt" \| codex exec --skip-git-repo-check resume --last 2>/dev/null (no configuration flags) |
| Run from another directory | Match task needs | -C <DIR> plus the other flags and 2>/dev/null |
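
The sandbox column above can be encoded as a small lookup; the task labels (review, edit, network) are hypothetical names for the three rows, not codex options:

```shell
# Hypothetical mapping from a task label to the sandbox flags in the table.
# The labels are illustrative; codex only ever sees the printed flags.
sandbox_flags() {
  case "$1" in
    review)  echo '--sandbox read-only' ;;
    edit)    echo '--sandbox workspace-write --full-auto' ;;
    network) echo '--sandbox danger-full-access --full-auto' ;;
    *)       echo "unknown task type: $1" >&2; return 1 ;;
  esac
}

sandbox_flags edit
```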

Model Options


| Model | Best for | Context window | Key features |
| --- | --- | --- | --- |
| gpt-5.2-max | Max model: ultra-complex reasoning, deep problem analysis | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| gpt-5.2 | Flagship model: software engineering, agentic coding workflows | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| gpt-5.2-mini | Cost-efficient coding (4x more usage allowance) | 400K input / 128K output | Near-SOTA performance, $0.25/$2.00 |
| gpt-5.1-thinking | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | Adaptive thinking depth, runs 2x slower on hardest tasks |
GPT-5.2 Advantages: 76.3% SWE-bench (vs 72.8% GPT-5), 30% faster on average tasks, better tool handling, reduced hallucinations, improved code quality. Knowledge cutoff: September 30, 2024.
Reasoning Effort Levels:
  • xhigh - Ultra-complex tasks (deep problem analysis, complex multi-step reasoning)
  • high - Complex tasks (refactoring, architecture, security analysis, performance optimization)
  • medium - Standard tasks (routine refactoring, code organization, feature additions, bug fixes)
  • low - Simple tasks (quick fixes, simple changes, code formatting, documentation)
Cached Input Discount: 90% off ($0.125/M tokens) for repeated context, cache lasts up to 24 hours.
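
As a worked example of the pricing above (assuming the listed figures are USD per million input/output tokens), a gpt-5.2 request with 200K cached input tokens, 50K fresh input tokens, and 20K output tokens would cost roughly:

```shell
# Cost sketch using the gpt-5.2 prices above: $1.25/M input, $10.00/M output,
# and $0.125/M cached input (the 90% discount). Token counts are made up.
total=$(awk 'BEGIN {
  cached = 200000 * 0.125 / 1000000   # cached input
  fresh  =  50000 * 1.25  / 1000000   # uncached input
  output =  20000 * 10.00 / 1000000   # output
  printf "%.4f", cached + fresh + output
}')
echo "$total"   # estimated cost in USD
```

Note how the cached portion, despite being four times larger than the fresh input, contributes less than half as much to the total.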

Following Up


  • After every codex command, immediately use AskUserQuestion to confirm next steps, collect clarifications, or decide whether to resume with codex exec --skip-git-repo-check resume --last.
  • When resuming, pipe the new prompt via stdin: echo "new prompt" | codex exec --skip-git-repo-check resume --last 2>/dev/null. The resumed session automatically uses the same model, reasoning effort, and sandbox mode as the original session.
  • Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.
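
A minimal dry-run sketch of the resume flow: the helper (codex_resume_cmd is a hypothetical name) prints the pipeline it would run, making the stdin piping and the absence of configuration flags explicit:

```shell
# Hypothetical dry-run helper for resuming: prints the exact pipeline the
# bullets above describe instead of executing it.
codex_resume_cmd() {
  printf 'echo "%s" | codex exec --skip-git-repo-check resume --last 2>/dev/null' "$1"
}

codex_resume_cmd "Summarize the remaining TODOs"
```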

Error Handling


  • Stop and report failures whenever codex --version or a codex exec command exits non-zero; request direction before retrying.
  • Before using high-impact flags (--full-auto, --sandbox danger-full-access, --skip-git-repo-check), ask the user for permission via AskUserQuestion unless it has already been given.
  • When output includes warnings or partial results, summarize them and use AskUserQuestion to ask how to adjust.
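
The stop-on-failure rule in the first bullet can be sketched as a wrapper (run_codex is a hypothetical name; any command can stand in for codex here):

```shell
# Minimal sketch: run a command, suppress stderr per the guide's default,
# and stop with a report instead of retrying when the exit code is non-zero.
run_codex() {
  "$@" 2>/dev/null
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "command failed with exit code $status; requesting user direction before retrying" >&2
    return "$status"
  fi
  return 0
}
```

In practice the caller would surface the reported failure to the user via AskUserQuestion before attempting anything further.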

CLI Version


Requires Codex CLI v0.57.0 or later for GPT-5.2 model support. The CLI defaults to gpt-5.2 on macOS, Linux, and Windows. Check the installed version with codex --version. Use the /model slash command within a Codex session to switch models, or configure the default in ~/.codex/config.toml.
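
The minimum-version requirement can be checked with a small gate (the helper name and the assumption that a bare version string like 0.57.0 is passed in are illustrative; sort -V does the comparison):

```shell
# Hypothetical check that an installed version meets the v0.57.0 minimum.
# The version string is passed as an argument so the sketch stays
# self-contained; in practice it would be parsed from `codex --version`.
version_ok() {
  required="0.57.0"
  [ "$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n 1)" = "$required" ]
}
```

If the oldest of the two versions (per sort -V) is the required one, the installed version is new enough.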