Codex Agent

Delegate tasks to independent Codex sessions for execution via Codex CLI.

Prerequisites

  1. Install Codex CLI: `npm install -g @openai/codex`
  2. Ensure you have completed Codex login authentication (the first run of `codex` will guide you through login)
  3. It is recommended to run in the target project directory, or to explicitly pass `-C /path/to/project`

Usage Methods

Create a New Session

bash
codex exec --json --sandbox workspace-write --skip-git-repo-check --model gpt-5.4 "Your task description"
The command above is only suitable for short prompts. When a prompt exceeds roughly 500 characters or contains multi-line/special characters, do not use a positional argument; reading from stdin is more reliable.
Output is in JSONL format, with one event per line. Common current events:
jsonl
{"type":"thread.started","thread_id":"019d32fc-..."}
{"type":"turn.started"}
{"type":"item.completed","item":{"id":"item_0","type":"agent_message","text":"Response content"}}
{"type":"turn.completed","usage":{"input_tokens":46879,"cached_input_tokens":2432,"output_tokens":54}}
  • Extract `thread_id` from the `thread.started` event for subsequent multi-turn conversations
  • Extract `text` from the `item.completed` event (where `item.type == "agent_message"`) as Codex's response
  • Extract `usage` from `turn.completed` to record token consumption
Only parse JSON lines in automation scripts. If your runtime environment merges `stderr` warnings into `stdout`, first keep only the JSON lines starting with `{`, or redirect `stderr` away.
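As a minimal sketch of that extraction, using only portable shell tools (a live `codex` run is not assumed, so the sample events above are stubbed into a file; `jq` is the more robust parser when it is available):

```shell
# Stub of a saved event stream, shaped like the sample above
# (normally produced by: codex exec --json ... > /tmp/codex-out.jsonl)
cat > /tmp/codex-out.jsonl <<'EOF'
warning: example of non-JSON noise merged from stderr
{"type":"thread.started","thread_id":"019d32fc-demo"}
{"type":"turn.started"}
{"type":"item.completed","item":{"id":"item_0","type":"agent_message","text":"Response content"}}
{"type":"turn.completed","usage":{"input_tokens":46879,"cached_input_tokens":2432,"output_tokens":54}}
EOF

# Keep only the JSON lines (drops any merged stderr warnings)
grep '^{' /tmp/codex-out.jsonl > /tmp/codex-events.jsonl

# thread_id, for later `codex exec resume` calls
thread_id=$(sed -n 's/.*"thread.started".*"thread_id":"\([^"]*\)".*/\1/p' /tmp/codex-events.jsonl | head -n 1)

# The agent reply (naive extraction; assumes the text contains no escaped quotes)
reply=$(grep '"agent_message"' /tmp/codex-events.jsonl | sed -n 's/.*"text":"\([^"]*\)".*/\1/p')

echo "thread_id=$thread_id"
echo "reply=$reply"
```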

Use stdin Pipe for Long Prompts

When a prompt exceeds roughly 500 characters or contains multi-line/special characters, do not use a positional argument. In practice such input can hang at `Reading additional input from stdin...`, even if you close stdin manually. Instead, write the prompt to a file and have `codex exec` read it from stdin via `-`:
bash
codex exec --json --sandbox workspace-write --full-auto --model gpt-5.4 \
  --skip-git-repo-check - < /tmp/task-prompt.txt > /tmp/task-out.jsonl 2>&1
Positional arguments remain fine for short prompts, such as one-sentence Q&A, brief follow-up questions, or quick command-line experiments.

Resume a Session

bash
codex exec resume --json --model gpt-5.4 "thread_id" "Follow-up question"

Resume the Most Recent Session (Shortcut)

bash
codex exec resume --json --model gpt-5.4 --last "Follow-up question"
  • `--last` by default only looks at the most recent session recorded in the current directory
  • Add `--all` when you need to search across directories

⚠️ `--ephemeral`: Non-recoverable Session

After adding `--ephemeral`, the session is not written to disk (not persisted to `~/.codex/sessions/`), so it cannot be recovered afterwards by `codex exec resume` or `--last`. Use it only in the following scenarios:
  • One-time quick Q&A where you are certain no follow-up will be needed
  • Temporary calls in CI/scripts to avoid polluting the session list
  • Sensitive tasks where you do not want local records to be left behind
Do not add `--ephemeral` if you may need to follow up later. It is semantically equivalent to `claude -p --no-session-persistence`.

codex exec Parameters

Output and Results

| Flag | Description |
| --- | --- |
| `--json` | Output in JSONL format for easy event-stream parsing |
| `--output-schema FILE` | Use a JSON Schema to constrain the structure of the last message |
| `-o, --output-last-message FILE` | Write the last message directly to a file |
| `--color COLOR` | Color output: `always` / `never` / `auto` |

Execution Environment

| Flag | Description |
| --- | --- |
| `-s, --sandbox MODE` | Sandbox mode: `read-only` / `workspace-write` / `danger-full-access` |
| `--full-auto` | Currently a shortcut equivalent to `--sandbox workspace-write` |
| `--dangerously-bypass-approvals-and-sandbox` | Skip all confirmations and sandbox protections; extremely dangerous |
| `-C, --cd DIR` | Specify the working directory |
| `--skip-git-repo-check` | Allow running in non-git directories |
| `--add-dir DIR` | Additional writable directory (repeatable) |

Model and Configuration

| Flag | Description |
| --- | --- |
| `-m, --model MODEL` | Specify the model; explicit passing is recommended |
| `-p, --profile PROFILE` | Use a profile from `config.toml` |
| `-c, --config key=value` | Override configuration items in `config.toml` |
| `--enable FEATURE` | Enable a feature flag (repeatable) |
| `--disable FEATURE` | Disable a feature flag (repeatable) |
| `--oss` | Use a local open-source model provider |
| `--local-provider PROVIDER` | Specify a local provider (e.g., `lmstudio` / `ollama`) |

Input

| Flag / Method | Description |
| --- | --- |
| `-i, --image FILE` | Attach an image (repeatable) |
| `PROMPT` | Pass a short task directly as a command-line argument; recommended only for short prompts |
| `-` or stdin | Omit the prompt, or pass `-` as the prompt, to read from stdin; prefer this for long prompts, Markdown, or multi-line structures |

codex exec resume Parameters

| Flag | Description |
| --- | --- |
| `--json` | Output in JSONL format |
| `-m, --model MODEL` | Specify the model |
| `--full-auto` | Shortcut equivalent to the `workspace-write` sandbox |
| `--dangerously-bypass-approvals-and-sandbox` | Skip confirmations and sandbox protections |
| `--skip-git-repo-check` | Allow running in non-git directories |
| `--ephemeral` | Do not persist the session |
| `-i, --image FILE` | Attach an image |
| `-o, --output-last-message FILE` | Write the last message to a file |
| `-c, --config key=value` | Override configuration items in `config.toml` |
| `--enable FEATURE` | Enable a feature (repeatable) |
| `--disable FEATURE` | Disable a feature (repeatable) |
| `--last` | Resume the most recent session (no ID needed) |
| `--all` | Search all sessions (not limited to the current directory) |

codex exec review Parameters

Built-in code review subcommand for reviewing the current repository:
bash
codex exec review [OPTIONS] [PROMPT]
| Flag | Description |
| --- | --- |
| `--uncommitted` | Review staged, unstaged, and untracked changes |
| `--base BRANCH` | Compare against the specified base branch |
| `--commit SHA` | Review changes introduced by the specified commit |
| `--title TITLE` | Title displayed in the review summary |
| `-m, --model MODEL` | Specify the model |
| `--json` | Output in JSONL format |
| `--full-auto` | Shortcut equivalent to the `workspace-write` sandbox |
| `--ephemeral` | Do not persist the session |
| `-o, --output-last-message FILE` | Write the last message to a file |

Multi-turn Conversations

  1. Run `codex exec --json ...` the first time and capture the `thread_id`
  2. Use `codex exec resume --json "thread_id" "prompt"` for follow-up questions
  3. `thread_id` is tracked automatically; users do not need to manage it
  4. Create a new session for each distinct task; multiple `thread_id`s do not interfere with each other
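The flow above can be sketched as a small script. The `codex` invocations are left commented out and their output stubbed, since only the `thread_id` hand-off between the two steps is being illustrated:

```shell
#!/bin/sh
set -e

OUT=/tmp/step1.jsonl

# Step 1: new session. The live call would be:
#   codex exec --json --full-auto --model gpt-5.4 "Implement feature X" > "$OUT"
# Stub of the event codex would emit, for illustration only:
echo '{"type":"thread.started","thread_id":"demo-thread"}' > "$OUT"

# Capture thread_id from the JSONL event stream
thread_id=$(sed -n 's/.*"thread.started".*"thread_id":"\([^"]*\)".*/\1/p' "$OUT" | head -n 1)

# Step 2: follow-up questions reuse the same thread:
#   codex exec resume --json --model gpt-5.4 "$thread_id" "Now add unit tests"
echo "resuming thread: $thread_id"
```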

Model Selection

Explicitly specify `--model` based on task complexity:

| Task Complexity | Model | Applicable Scenarios |
| --- | --- | --- |
| High | `gpt-5.4` | Architecture design, complex refactoring, multi-file coding |
| Medium | `gpt-5.4-mini` | Single-file feature implementation, bug fixes |
| Low | `gpt-5.3-codex-spark` | Simple Q&A, code explanation |

Recommended Parameter Combinations

| Scenario | Model | Sandbox | Other Flags |
| --- | --- | --- | --- |
| Complex coding | `gpt-5.4` | `workspace-write` | `--full-auto` |
| General coding | `gpt-5.4-mini` | `workspace-write` | `--full-auto` |
| Read-only Q&A / analysis | `gpt-5.4-mini` | `read-only` | `--skip-git-repo-check` (in non-git directories) |
| Browser research / Computer Use | `gpt-5.4` | `read-only` | `-C "$PWD" -o /tmp/result.txt`; add `--json` if the event stream is needed |
| Code review | `gpt-5.4-mini` | `read-only` | `codex exec --json ... "Review ..."` |
| Repository review | `gpt-5.4-mini` | (default) | `codex exec review --base main` |
| Quick Q&A | `gpt-5.4-mini` | `read-only` | `--skip-git-repo-check --ephemeral` (⚠️ non-recoverable) |
| Structured output | `gpt-5.4-mini` | `read-only` | `--output-schema schema.json -o result.json` |

Usage Rules

  1. Always add `--json` for automated calls: ensure the output is parsable so `thread_id` and the response content can be extracted
  2. Always explicitly pass `--model`: avoid default-model drift
  3. Always run in the target project directory: prefer `cd /path/to/project` or `-C /path/to/project`
  4. Use `workspace-write` for coding tasks: usually just `--sandbox workspace-write` or `--full-auto`
  5. Use `read-only` or `codex exec review` for review tasks: prevents accidental modifications
  6. Maintain conversation continuity: reuse the `thread_id` for follow-ups on the same task; do not add `--ephemeral` if a follow-up may be needed
  7. Use `--output-schema` + `-o` when stable downstream parsing is needed: constrain the final result into a machine-consumable structure
  8. Use an stdin pipe for long prompts: do not use positional arguments when a prompt exceeds ~500 characters or contains multi-line/special characters; write it to a file first, then pass it via `- < file.txt` to avoid hanging at `Reading additional input from stdin...`
  9. Report results to the user: after each call, extract the final response from the JSONL and summarize it briefly
  10. Distinguish the responsibilities of `-o` and `--json`: `-o` writes the last message to a file; `--json` prints the entire event stream to stdout. Scripts commonly use both together.
  11. Prefer `read-only` for non-coding local browser tasks: if Codex is only using Computer Use to open Chrome, browse pages, and summarize content, `--full-auto` is not needed; add `Do not modify local files.` to the prompt as a second safeguard.

Prompt References

Load the corresponding reference on demand by task type; do not load every default prompt into the main context at once:
  • Coding / Diagnosis / Planning / Narrow Fixes: read references/task-prompt-recipes.md
  • Code Review / Challenging Review / Test Gap Check: read references/review-prompt-recipes.md
  • Local Browser Research / Reddit or Community Sampling / Evidence-based Summary: read references/browser-research-prompt-recipes.md
These references provide default prompt templates that can be reused as-is or lightly adapted; copy the closest template first, then delete the blocks you do not need.

Examples

Coding Task

User: Use Codex to implement a TODO API in the current project

Step 1 - Create a new session:
cd /path/to/project && codex exec --json --full-auto --model gpt-5.4 "Implement a REST API for TODO items with CRUD endpoints. Use Express.js."

→ Parse output to get thread_id: "xxx", response: "Implemented server.js ..."

User: Add unit tests

Step 2 - Resume the session:
cd /path/to/project && codex exec resume --json --model gpt-5.4-mini "xxx" "Add unit tests for all the TODO API endpoints using vitest."

Resume the Most Recent Session

bash
cd /path/to/project && codex exec resume --json --model gpt-5.4-mini --last "Continue the refactor and remove the dead helper functions."
Suitable for "continue the previous task" scenarios where you don't want to save the `thread_id` manually.

Code Review


General read-only review

cd /path/to/project && codex exec --json --sandbox read-only --model gpt-5.4-mini "Review the changes in git diff HEAD~1. Focus on correctness, security, and missing tests."

Built-in review: Compare with main

cd /path/to/project && codex exec review --json --model gpt-5.4-mini --base main

Review uncommitted changes

cd /path/to/project && codex exec review --json --model gpt-5.4-mini --uncommitted

Structured Output and Write to File

bash
cd /path/to/project && codex exec --json --sandbox read-only --model gpt-5.4-mini \
  --output-schema ./review-schema.json \
  -o /tmp/review-result.json \
  "Review src/todo.ts and output summary, risks, and suggested tests."
Suitable for scenarios where results need to be fed to scripts, CI, or other agents.
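For reference, a hypothetical minimal schema for the prompt above. The field names (`summary`, `risks`, `suggested_tests`) are illustrative assumptions, not a fixed Codex contract; adjust them to whatever your downstream consumer expects:

```shell
# Hypothetical minimal JSON Schema for the review prompt above
# (illustrative field names; path is also illustrative).
cat > /tmp/review-schema.json <<'EOF'
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "risks": { "type": "array", "items": { "type": "string" } },
    "suggested_tests": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["summary", "risks", "suggested_tests"]
}
EOF
```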

Local Browser Research (Final Answer Only)

bash
codex exec \
  -m gpt-5.4 \
  --sandbox read-only \
  --skip-git-repo-check \
  -C "$PWD" \
  -o /tmp/codex-last.txt \
  "Use Computer Use on my Mac. Open Google Chrome, go to Reddit, search for 'Duolingo review', open 3 representative posts (one positive, one negative, one long-term review), then summarize the findings in Chinese. Do not modify local files."
Suitable for manually viewing the final conclusion without caring about intermediate event streams.

Local Browser Research (Event Stream Plus Final Answer on Disk)

bash
codex exec \
  -m gpt-5.4 \
  --sandbox read-only \
  --skip-git-repo-check \
  -C "$PWD" \
  --json \
  -o /tmp/codex-last.txt \
  "Use Computer Use on my Mac. Open Google Chrome, search Reddit for Duolingo reviews, open a few representative posts, and then summarize them in Chinese. Do not modify local files."
Suitable for scripts or higher-level agents: read the JSONL event stream from stdout, and the final natural-language conclusion from `/tmp/codex-last.txt`.

Image Input

bash
cd /path/to/project && codex exec --json --sandbox read-only --model gpt-5.4-mini \
  -i ./screenshots/login-bug.png \
  "Describe the UI issue in this screenshot and propose a minimal fix plan."
Suitable for visual regression, error screenshot diagnosis, and design draft difference analysis.

Long Task Prompt (Recommended)


1. Write complex prompt to a file

cat > /tmp/task-prompt.txt <<'PROMPT_EOF'
Please complete the following tasks in the current repository:
1. First read the README and tests
2. Only modify files directly related to the issue
3. Add tests first, then change the implementation, and finally run verification
PROMPT_EOF

2. Read from stdin via `-` to avoid a long positional argument getting stuck

codex exec --json --sandbox workspace-write --full-auto --model gpt-5.4 \
  --skip-git-repo-check - < /tmp/task-prompt.txt > /tmp/task-out.jsonl 2>&1

Suitable for market research, multi-paragraph Markdown constraints, script-assembled prompts, or any task description longer than one screen.

Pass Long Prompt via stdin

bash
cat ./prompt.md | codex exec --json --sandbox workspace-write --model gpt-5.4 -
Suitable for long prompts, templated prompts, or dynamically assembled instructions piped straight into Codex. For long-term reuse in shell scripts, prefer the `- < file.txt` form from the previous section; it is more readable and easier to debug.