gpt-image-2

GPT Image 2 — Pro Pack on RunComfy


OpenAI GPT Image 2 (ChatGPT Images 2.0) hosted on the RunComfy Model API — no OpenAI key, async REST.
```bash
npx skills add agentspace-so/runcomfy-skills --skill gpt-image-2 -g
```

When to pick this model (vs siblings)


GPT Image 2's distinct strength is directive precision: it follows multi-element prompts, layout cues, and embedded-text instructions more reliably than its peers. Pick it when what's on the canvas matters more than how stylized it looks.
| You want | Use |
|----------|-----|
| Embedded text, logos, signage, multilingual typography | GPT Image 2 |
| Brand-safe, e-commerce / ad / UI mockup imagery | GPT Image 2 |
| Iterative refinement that holds composition stable | GPT Image 2 |
| Heavy stylization, painterly look | Flux 2 |
| Hyperrealistic portraits | Nano Banana Pro |
| Cinematic / aesthetic-first hero shots | Seedream 5 |
If the user explicitly asked for GPT Image 2 / ChatGPT Image 2 / Image 2, route here regardless — don't second-guess the model choice.

Prerequisites


  1. RunComfy CLI: `npm i -g @runcomfy/cli`
  2. RunComfy account: `runcomfy login` opens a browser device-code flow.
  3. CI / containers: set `RUNCOMFY_TOKEN=<token>` instead of running `runcomfy login`.
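
For CI, a minimal sketch (how the secret gets injected depends on your CI system; the prompt and output path are just placeholders):

```bash
# Non-interactive auth: the CLI reads RUNCOMFY_TOKEN from the environment,
# so no `runcomfy login` device-code flow is needed.
export RUNCOMFY_TOKEN="<token from your CI secret store>"
runcomfy run openai/gpt-image-2/text-to-image \
  --input '{"prompt": "smoke test", "size": "1024_1024"}' \
  --output-dir "$PWD/out"
```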
Endpoints + input schema


Two endpoints, same model.

openai/gpt-image-2/text-to-image


| Field | Type | Required | Default | Notes |
|-------|------|----------|---------|-------|
| `prompt` | string | yes | | The positive prompt |
| `size` | enum | no | `1024_1024` | `1024_1024` (1:1), `1024_1536` (2:3 portrait), `1536_1024` (3:2 landscape); only these three |

openai/gpt-image-2/edit


| Field | Type | Required | Default | Notes |
|-------|------|----------|---------|-------|
| `prompt` | string | yes | | Natural-language edit instruction |
| `images` | string[] | yes | | Up to 10 reference image URLs (publicly fetchable HTTPS) |
| `size` | enum | no | `auto` | `auto` (preserve input ratio), or one of the three fixed sizes above |

`size=auto` on edit preserves the input aspect ratio; strongly recommended unless the edit explicitly changes framing.

How to invoke


Text-to-image:

```bash
runcomfy run openai/gpt-image-2/text-to-image \
  --input '{"prompt": "<user prompt>", "size": "1024_1536"}' \
  --output-dir <absolute/path>
```

Edit (single ref):

```bash
runcomfy run openai/gpt-image-2/edit \
  --input '{
    "prompt": "<edit instruction>",
    "images": ["https://..."]
  }' \
  --output-dir <absolute/path>
```

Edit (multi-ref, up to 10):

```bash
runcomfy run openai/gpt-image-2/edit \
  --input '{
    "prompt": "compose subject from image 1 into the room from image 2; match the lighting of image 2",
    "images": ["https://...subject.jpg", "https://...room.jpg"]
  }' \
  --output-dir <absolute/path>
```

The CLI submits, polls every 2 s until the request reaches a terminal state, then downloads any `*.runcomfy.net` / `*.runcomfy.com` URL from the result into `--output-dir`. Stdout is the result JSON; stderr is progress.

For pipe-friendly usage:

```bash
runcomfy --output json run openai/gpt-image-2/text-to-image \
  --input '{"prompt":"..."}' --no-wait | jq -r .request_id
```

Prompting — what actually works


These are model-specific patterns that empirically improve output quality. Apply to text-to-image and edit alike.
**Be explicit on subject + setting + mood.** "A close-up of a matte ceramic water bottle on warm linen, soft window light, neutral background" — three concrete directives — beats "nice product photo of a bottle".
**Quote embedded text exactly. Keep it short.** GPT Image 2 is the strongest text-rendering model in this class, but only when you put the literal characters in quotes. Long blocks of text degrade. For multilingual text, name the script: "Japanese kana", "Cyrillic", "Arabic right-to-left".
**Use compositional cues directly.** "rule of thirds", "close-up", "aerial view", "centered subject", "shallow depth of field" — these have learned meaning to the model.
**Iterate one attribute at a time.** When refining, change one thing per iteration (lighting OR background OR pose OR text) and keep the rest of the prompt verbatim. The model holds composition stable across iterations when only one knob moves; see the sketch after this list.
**Don't conflict instructions.** "no text" + "the word 'AQUA+' on the label" is incoherent — the model will pick one and you don't control which.
**Don't pile up styles.** "ukiyo-e + watercolor + 8K + cinematic + minimalist" cancels out. Pick one or two style anchors max.
For the edit endpoint specifically:
  • **State preservation goals.** "keep the person's pose and face identity unchanged", "keep the brand mark and typography on the package", "keep the overall framing". The model needs to know what NOT to change.
  • **Use directional language for spatial edits.** "Move the headline from top-right to bottom-center", not "reposition the headline".
  • **Multi-ref:** number the images in the prompt — "subject from image 1, lighting and background from image 2" — and the model will route the cues correctly.
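
To make one-knob iteration concrete, a sketch using the CLI invocation documented above; only the lighting phrase changes between runs, everything else stays verbatim:

```bash
# Iteration 1: establish the composition.
runcomfy run openai/gpt-image-2/text-to-image \
  --input '{"prompt": "matte ceramic bottle on warm linen, soft window light, neutral background", "size": "1024_1024"}' \
  --output-dir ./v1

# Iteration 2: change ONLY the lighting phrase; keep every other word identical.
runcomfy run openai/gpt-image-2/text-to-image \
  --input '{"prompt": "matte ceramic bottle on warm linen, hard noon sunlight, neutral background", "size": "1024_1024"}' \
  --output-dir ./v2
```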

Where it shines


| Use case | Why GPT Image 2 |
|----------|-----------------|
| E-commerce product photography | Reliable text on labels, brand-safe lighting, consistent across SKUs |
| High-conversion ads | Headline + visual integration in one pass |
| Brand asset localization | One source asset → many language variants of the same headline |
| Signage, posters, packaging mock-ups | Text rendering accuracy at multiple scales |
| UI mockups, scientific illustrations | Layout precision and label legibility |

Sample prompts (verified to produce strong results)


Text-to-image — product hero:

```
A minimal hero product still life: a matte ceramic water bottle on warm linen,
soft window light, the word "AQUA+" in clean sans-serif on the label,
subtle rim highlights, e-commerce ready, 8K detail, neutral background
```

Text-to-image — multilingual signage:

```
A small Tokyo café storefront at dusk, warm interior glow,
the sign reads "コーヒー" in bold Japanese kana on a wooden plaque,
shallow depth of field, rule of thirds, cinematic
```

Edit — background swap with preservation:

```
Turn the background into a bright minimal white-to-soft-gray studio sweep
with gentle floor shadow; add a large headline in-image that reads
"OPEN STUDIO" in a bold clean sans-serif, high contrast, centered;
keep the main person or product, pose, and face identity unchanged
```

Limitations


  • Only 3 fixed sizes on text-to-image (and the same 3 + `auto` on edit). Extreme aspect ratios are auto-resized to the nearest supported one.
  • Prompt length is capped at roughly a few thousand tokens. Long blocks of embedded text degrade output.
  • Edit's multi-image support is "guidance from up to 10 refs", not ControlNet-style stacks. The first image is treated as the primary; the rest provide auxiliary cues.
  • Photorealism on portraits is not its strongest suit; Nano Banana Pro wins that head-to-head.

Exit codes


The `runcomfy` CLI uses sysexits-style codes:

| code | meaning |
|------|---------|
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch (e.g. `size: "2048_2048"` would 422) |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
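
These codes make scripted retries straightforward. A minimal sketch (the attempt count and backoff are illustrative, not CLI behavior):

```bash
#!/usr/bin/env bash
# Retry only on exit code 75 (timeout / 429); every other failure is permanent.
for attempt in 1 2 3; do
  runcomfy run openai/gpt-image-2/text-to-image \
    --input '{"prompt": "retry demo", "size": "1024_1024"}' \
    --output-dir ./out
  code=$?
  [ "$code" -eq 0 ] && exit 0
  [ "$code" -ne 75 ] && exit "$code"   # 64/65/69/77 will not improve on retry
  sleep $((attempt * 10))              # crude linear backoff
done
exit 75
```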

How it works


  1. The skill invokes `runcomfy run openai/gpt-image-2/<endpoint>` with a JSON body matching the schema above.
  2. The CLI POSTs to `https://model-api.runcomfy.net/v1/models/openai/gpt-image-2/<endpoint>` with the user's bearer token.
  3. The Model API returns a `request_id`; the CLI polls `GET .../requests/<id>/status` every 2 seconds.
  4. On terminal status, the CLI fetches `GET .../requests/<id>/result` and downloads any URL whose host ends with `.runcomfy.net` or `.runcomfy.com` into `--output-dir`. Other URLs are listed but not fetched.
  5. Ctrl-C while polling sends `POST .../requests/<id>/cancel` so you don't get billed for GPU you stopped.
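
The same flow can be driven over raw REST. A hedged sketch with curl: the submission URL is documented above, but the requests base path appears only elided ("..."), so `REQ_BASE`, the status field name, and the terminal status values below are all assumptions:

```bash
# Sketch only. REQ_BASE is deliberately a placeholder: this doc shows the
# polling paths elided as "GET .../requests/<id>/status".
MODEL=https://model-api.runcomfy.net/v1/models/openai/gpt-image-2
REQ_BASE="<requests base path, elided in the docs>"
AUTH="Authorization: Bearer $RUNCOMFY_TOKEN"

id=$(curl -sf -X POST "$MODEL/text-to-image" \
  -H "$AUTH" -H 'Content-Type: application/json' \
  -d '{"prompt": "a red cube on white", "size": "1024_1024"}' | jq -r .request_id)

# Poll every 2 seconds, as the CLI does.
while sleep 2; do
  status=$(curl -sf "$REQ_BASE/requests/$id/status" -H "$AUTH" | jq -r .status)  # .status is assumed
  case "$status" in completed|failed|cancelled) break ;; esac                    # values assumed
done

curl -sf "$REQ_BASE/requests/$id/result" -H "$AUTH"
```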

What this skill is not


Not a direct OpenAI API client. Not a capability grant — depends on a working RunComfy account. Not multi-tenant.

Security & Privacy


  • Token storage: `runcomfy login` writes the API token to `~/.config/runcomfy/token.json` with mode 0600 (owner-only read/write). Set the `RUNCOMFY_TOKEN` env var to bypass the file entirely in CI / containers.
  • Input boundary: the user prompt is passed as a JSON string to the CLI via `--input`. The CLI does NOT shell-expand the prompt; it transmits the JSON body directly to the Model API over HTTPS. No shell injection surface from prompt content (see the sketch after this list).
  • Third-party content: image / mask / video URLs you pass are fetched by the RunComfy model server, not by the CLI on your machine. Treat external URLs as untrusted; image-based prompt injection is a known risk for any image-edit / video-edit model.
  • Outbound endpoints: only `model-api.runcomfy.net` (request submission) and `*.runcomfy.net` / `*.runcomfy.com` (download whitelist for generated outputs). No telemetry, no callbacks.
  • Generated-file size cap: the CLI aborts any single download > 2 GiB to prevent disk-fill from a malicious or runaway model output.
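
While the CLI itself does not shell-expand the prompt, the shell that builds the `--input` string still parses it. A small sketch that sidesteps quoting bugs by letting jq produce the JSON (the prompt text is only an example):

```bash
# jq JSON-encodes arbitrary user text, so quotes or newlines in the prompt
# cannot break the --input argument at the shell level.
prompt='A poster that reads "OPEN STUDIO" in bold sans-serif'
input=$(jq -n --arg p "$prompt" '{prompt: $p, size: "1024_1536"}')

runcomfy run openai/gpt-image-2/text-to-image \
  --input "$input" \
  --output-dir /tmp/gpt-image-2-out
```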