# Codex Skill Guide
## Running a Task
- Default to the `gpt-5.2` model. Ask the user (via `AskUserQuestion`) which reasoning effort to use (`xhigh`, `high`, `medium`, or `low`). The user can override the model if needed (see Model Options below).
- Select the sandbox mode required for the task; default to `--sandbox read-only` unless edits or network access are necessary.
- Assemble the command with the appropriate options: `-m, --model <MODEL>`, `--config model_reasoning_effort="<high|medium|low>"`, `--sandbox <read-only|workspace-write|danger-full-access>`, `--full-auto`, `-C, --cd <DIR>`, `--skip-git-repo-check`.
- Always use `--skip-git-repo-check`.
- When continuing a previous session, pass the prompt via stdin. When resuming, don't use any configuration flags unless the user explicitly requests them, e.g. by specifying the model or reasoning effort when asking to resume a session. Resume syntax: `echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null`. All flags have to be inserted between `exec` and `resume`.
- IMPORTANT: By default, append `2>/dev/null` to all `codex exec` commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
- Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
- After Codex completes, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
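Putting the steps above together, a one-shot read-only invocation might be assembled like this. The command is built as a string purely for illustration, and the model, effort, and prompt values are placeholders for whatever the user chose:

```shell
# Assemble a hypothetical read-only task command from the flags listed above.
MODEL="gpt-5.2"
EFFORT="high"
PROMPT="Review src/ for concurrency issues"
CMD="codex exec -m $MODEL --config model_reasoning_effort=\"$EFFORT\" --sandbox read-only --skip-git-repo-check \"$PROMPT\" 2>/dev/null"
echo "$CMD"
```

Note that `2>/dev/null` is part of the assembled command, per the stderr-suppression rule above.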
## Quick Reference
| Use case | Sandbox mode | Key flags |
|---|---|---|
| Read-only review or analysis | `read-only` | `--sandbox read-only --skip-git-repo-check` |
| Apply local edits | `workspace-write` | `--sandbox workspace-write --skip-git-repo-check` |
| Permit network or broad access | `danger-full-access` | `--sandbox danger-full-access --skip-git-repo-check` |
| Resume recent session | Inherited from original | `resume --last` |
| Run from another directory | Match task needs | `-C, --cd <DIR>` |
## Model Options
| Model | Best for | Context window | Key features |
|---|---|---|---|
| | Max model: ultra-complex reasoning, deep problem analysis | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| | Flagship model: software engineering, agentic coding workflows | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| | Cost-efficient coding (4x more usage allowance) | 400K input / 128K output | Near-SOTA performance, $0.25/$2.00 |
| | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | Adaptive thinking depth, runs 2x slower on hardest tasks |
GPT-5.2 Advantages: 76.3% SWE-bench (vs 72.8% GPT-5), 30% faster on average tasks, better tool handling, reduced hallucinations, improved code quality. Knowledge cutoff: September 30, 2024.
Reasoning Effort Levels:

- `xhigh` - Ultra-complex tasks (deep problem analysis, complex reasoning, deep understanding of the problem)
- `high` - Complex tasks (refactoring, architecture, security analysis, performance optimization)
- `medium` - Standard tasks (refactoring, code organization, feature additions, bug fixes)
- `low` - Simple tasks (quick fixes, simple changes, code formatting, documentation)
Cached Input Discount: 90% off ($0.125/M tokens) for repeated context, cache lasts up to 24 hours.
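To make the cached-input discount concrete, here is a quick back-of-the-envelope calculation at the listed rates ($1.25/M input, $0.125/M cached); the 2M-token workload is an arbitrary example:

```shell
# Cost of 2M input tokens at the listed rates: full price vs. 90%-off cached.
awk 'BEGIN {
  tokens_m = 2.0                             # millions of input tokens
  printf "uncached: $%.2f\n", tokens_m * 1.25
  printf "cached:   $%.2f\n", tokens_m * 0.125
}'
# uncached: $2.50
# cached:   $0.25
```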
## Following Up
- After every `codex` command, immediately use `AskUserQuestion` to confirm next steps, collect clarifications, or decide whether to resume with `codex exec resume --last`.
- When resuming, pipe the new prompt via stdin: `echo "new prompt" | codex exec resume --last 2>/dev/null`. The resumed session automatically uses the same model, reasoning effort, and sandbox mode from the original session.
- Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.
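The resume flow above can be sketched as a small guard. The prompt text is a placeholder, and the script degrades gracefully when the `codex` binary is not on `PATH`:

```shell
# Pipe a follow-up prompt into the most recent session, suppressing stderr.
PROMPT="Apply the review feedback from the previous run"
if command -v codex >/dev/null 2>&1; then
  echo "$PROMPT" | codex exec --skip-git-repo-check resume --last 2>/dev/null
else
  echo "codex CLI not found; install it before resuming" >&2
fi
```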
## Error Handling
- Stop and report failures whenever `codex --version` or a `codex exec` command exits non-zero; request direction before retrying.
- Before you use high-impact flags (`--full-auto`, `--sandbox danger-full-access`, `--skip-git-repo-check`), ask the user for permission using `AskUserQuestion` unless it was already given.
- When output includes warnings or partial results, summarize them and ask how to adjust using `AskUserQuestion`.
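A minimal sketch of the exit-code check described above, using `codex --version` as the health probe; the status labels and messages are illustrative:

```shell
# Stop-and-report pattern: capture the exit status instead of ignoring it.
if codex --version >/dev/null 2>&1; then
  STATUS=ok
else
  STATUS=failed   # stop here, report to the user, ask before retrying
fi
echo "codex health check: $STATUS"
```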
## CLI Version
Requires Codex CLI v0.57.0 or later for GPT-5.2 model support. The CLI defaults to `gpt-5.2` on both macOS/Linux and Windows. Check the version with `codex --version`. Use the `/model` slash command within a Codex session to switch models, or configure the default in `~/.codex/config.toml`.
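As one sketch of that configuration, defaults could be pinned in `~/.codex/config.toml`. The `model` key name is an assumption here; `model_reasoning_effort` matches the `--config` override shown earlier:

```toml
# Hypothetical defaults; adjust to the model and effort you actually use.
model = "gpt-5.2"
model_reasoning_effort = "high"
```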