character-design
Character design with genmedia
Use this skill when the user wants to create, refine, or preserve a character.

Load the reference files when needed:

- references/anchor-system.md
- references/prompt-patterns.md
- references/examples.md

Load model-routing alongside this skill for default endpoint choices.

The main objective is consistency. Keep the character anchor stable and change
only the requested scene, expression, outfit, camera, or action.
Inputs to collect
Only ask for missing inputs that affect identity or model routing.
- Character type: realistic human, stylized, anime, mascot, fantasy, sci-fi.
- Identity anchor: age range, face shape, hair, eyes, build, posture, marks.
- Style: photographic, 3D, illustration, manga, comic, game concept art.
- Needed outputs: portrait, full body, turnaround, expression sheet, outfit set, action still, video shot, edit of an existing character.
- References: source image, approved design, costume, pose, style board.
- Consistency level: exploratory, pitch-ready, production continuity.
- Model preference: use the model-routing defaults unless the user names a model or the job needs a quality/cost tradeoff decision.
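The checklist above can be sketched as a small record type. This is an illustrative sketch only; the field names and the `CharacterBrief` type are assumptions for clarity, not part of the genmedia CLI.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CharacterBrief:
    """Illustrative container for the inputs to collect; not a genmedia API."""
    character_type: str                 # e.g. "realistic human", "anime", "mascot"
    identity_anchor: str                # age range, face shape, hair, eyes, build, marks
    style: str                          # e.g. "photographic", "3D", "game concept art"
    outputs: list = field(default_factory=list)     # e.g. ["portrait", "turnaround"]
    references: list = field(default_factory=list)  # uploaded reference image URLs
    consistency_level: str = "exploratory"          # or "pitch-ready", "production"
    model_preference: Optional[str] = None          # None means use routed defaults

brief = CharacterBrief(
    character_type="stylized",
    identity_anchor="mid-20s, oval face, short silver hair, green eyes",
    style="game concept art",
    outputs=["portrait", "expression sheet"],
)
```

Only the fields that affect identity or model routing need to be asked for; everything else keeps its default.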
Genmedia workflow
- Start from routed endpoint IDs.

  ```bash
  genmedia models --endpoint_id openai/gpt-image-2 --json
  genmedia models --endpoint_id fal-ai/nano-banana-pro/edit --json
  genmedia models --endpoint_id bytedance/seedance-2.0/image-to-video --json
  genmedia models --endpoint_id veed/fabric-1.0 --json
  ```

  Use text search only as fallback discovery for an unsupported role:

  ```bash
  genmedia docs "consistent character generation" --json
  genmedia models "image editing character consistency" --json
  ```

- Inspect schema before each endpoint run.

  ```bash
  genmedia schema <endpoint_id> --json
  genmedia pricing <endpoint_id> --json
  ```

- Upload references.

  ```bash
  genmedia upload ./character-reference.png --json
  genmedia upload ./costume-reference.png --json
  ```

- Run stills or sheets with download.

  ```bash
  genmedia run <endpoint_id> \
    --prompt "<anchor + variable prompt>" \
    --image_url "<reference url if supported>" \
    --download "./outputs/characters/{request_id}_{index}.{ext}" \
    --json
  ```

- Run video async.

  ```bash
  genmedia run <endpoint_id> \
    --prompt "<anchor + shot action>" \
    --image_url "<approved character frame if supported>" \
    --async \
    --json

  genmedia status <endpoint_id> <request_id> \
    --download "./outputs/characters/{request_id}_{index}.{ext}" \
    --json
  ```

Use only schema-supported fields. If the model supports seed, reference image,
image strength, multiple image inputs, or negative prompt, use them deliberately
and record what was used.
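The "schema-supported fields only" rule can be sketched as a filter over the request payload. This is a sketch under assumptions: it treats the schema as a typical JSON-schema `properties` map, which is not a documented genmedia format.

```python
def filter_to_schema(payload: dict, schema_properties: dict) -> tuple[dict, dict]:
    """Keep only the fields the endpoint schema declares, and return the
    dropped fields separately so they can be logged. Schema shape is an
    assumed JSON-schema-style `properties` map, not a genmedia guarantee."""
    sent = {k: v for k, v in payload.items() if k in schema_properties}
    dropped = {k: v for k, v in payload.items() if k not in schema_properties}
    return sent, dropped

# Hypothetical schema for one endpoint: supports prompt, image_url, and seed.
schema = {"prompt": {}, "image_url": {}, "seed": {}}
payload = {"prompt": "anchor + shot", "seed": 42, "negative_prompt": "blurry"}
sent, dropped = filter_to_schema(payload, schema)
# negative_prompt is not in this endpoint's schema, so it is dropped and recorded.
```

Logging `dropped` alongside the run satisfies the "record what was used" requirement without silently sending unsupported fields.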
Character anchor
Create a short immutable anchor before generating.
```text
CHARACTER ANCHOR:
[name or codename], [age range], [face shape], [eye shape and color],
[nose and lips], [skin tone and distinguishing marks], [hair color, texture,
style], [body build and posture], [signature clothing or silhouette],
[style target]
```

Then add a variable block for the current shot.

```text
SHOT VARIABLE:
[expression], [pose/action], [outfit changes if allowed], [environment],
[camera/framing], [lighting], [mood]
```

Never rewrite the anchor casually. If a result changes identity, strengthen the
anchor or switch to a reference/edit workflow instead of adding more style
words.
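The anchor-plus-variable pattern can be sketched as simple string composition. The character details and helper name here are illustrative; only the anchor/variable structure comes from the templates above.

```python
# Hypothetical anchor for a character named "Kestrel"; the anchor string is
# frozen once approved and reused verbatim for every shot.
ANCHOR = (
    "CHARACTER ANCHOR: Kestrel, mid-20s, oval face, almond hazel eyes, "
    "straight nose and thin lips, olive skin with a scar over the left brow, "
    "black wavy shoulder-length hair, lean athletic build, upright posture, "
    "long grey coat silhouette, game concept art style"
)

def build_prompt(anchor: str, shot: dict) -> str:
    """Join the immutable anchor with a per-shot variable block.
    Only `shot` changes between generations; the anchor never does."""
    variable = ", ".join(f"{k}: {v}" for k, v in shot.items())
    return f"{anchor}\nSHOT VARIABLE: {variable}"

prompt = build_prompt(ANCHOR, {
    "expression": "wry smile",
    "pose/action": "leaning on a railing",
    "camera/framing": "medium close-up",
    "lighting": "golden hour rim light",
})
```

Keeping the anchor in one constant makes "strengthen the anchor" a single deliberate edit rather than drift across prompts.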
Model routing
- New character concept with maximum consistency: use `openai/gpt-image-2`.
- Premium but cheaper image option: use `fal-ai/nano-banana-pro` or `fal-ai/nano-banana-2`.
- Fast exploratory drafts: use `fal-ai/flux-2/klein/9b`.
- Consistent sheet from an approved character: use `openai/gpt-image-2` first; if editing an existing image, inspect `openai/gpt-image-2/edit`.
- Outfit variations and character edits: use `fal-ai/nano-banana-pro/edit`, then `openai/gpt-image-2/edit`, then `fal-ai/bytedance/seedream/v5/lite/edit`.
- Expression sheet: one approved face reference, multiple controlled expression prompts.
- Character video: approved still frame first, then `bytedance/seedance-2.0/image-to-video` for final quality.
- Fast video drafts: use `xai/grok-imagine-video/image-to-video`.
- Talking avatar or lip-sync: use `veed/fabric-1.0`, `veed/fabric-1.0/text`, or `fal-ai/creatify/aurora`.
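The routing rules above can be condensed into a lookup table. This is only a reference sketch: the task keys and the `route` helper are illustrative, and the real routing decision belongs to model-routing, not this dict.

```python
# Illustrative routing table; ordered lists give preferred endpoint first,
# then fallbacks where the rules above name more than one option.
ROUTES = {
    "new_concept": ["openai/gpt-image-2"],
    "premium_cheaper": ["fal-ai/nano-banana-pro", "fal-ai/nano-banana-2"],
    "fast_draft": ["fal-ai/flux-2/klein/9b"],
    "outfit_edit": ["fal-ai/nano-banana-pro/edit", "openai/gpt-image-2/edit",
                    "fal-ai/bytedance/seedream/v5/lite/edit"],
    "video_final": ["bytedance/seedance-2.0/image-to-video"],
    "video_draft": ["xai/grok-imagine-video/image-to-video"],
    "lip_sync": ["veed/fabric-1.0", "veed/fabric-1.0/text",
                 "fal-ai/creatify/aurora"],
}

def route(task: str, fallback_index: int = 0) -> str:
    """Return the preferred endpoint for a task, or a later fallback,
    clamped to the last available option."""
    options = ROUTES[task]
    return options[min(fallback_index, len(options) - 1)]
```

A caller would try `route("outfit_edit")` first and bump `fallback_index` only after a failed quality check.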
Quality bar
Reject or retry when:
- Face shape, eye spacing, hairstyle, marks, or body build drift.
- Outfit changes when the prompt says only expression or pose should change.
- The sheet mixes styles across panels.
- Hands or props distract from the requested design task.
- Video motion changes age, face, costume, or silhouette.
Return downloaded paths and include the anchor used so future prompts can reuse
the same identity.
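The rejection checklist and the paths-plus-anchor handoff can be sketched together. All names here are illustrative; how failures are actually detected (human review or automated checks) is outside this sketch.

```python
# Flag names mirror the quality-bar checklist above; they are assumptions,
# not genmedia output fields.
REJECT_REASONS = (
    "identity_drift",             # face shape, eye spacing, hair, marks, build
    "unrequested_outfit_change",  # outfit changed when only expression/pose should
    "mixed_styles",               # sheet panels disagree on style
    "distracting_artifacts",      # hands or props pull focus from the design task
    "video_identity_change",      # motion altered age, face, costume, silhouette
)

def review(flags: dict, paths: list, anchor: str) -> dict:
    """Retry when any quality-bar flag is set; otherwise hand back the
    downloaded paths together with the anchor so the identity can be reused."""
    failed = [r for r in REJECT_REASONS if flags.get(r)]
    if failed:
        return {"status": "retry", "reasons": failed}
    return {"status": "accept", "paths": paths, "anchor": anchor}
```

Returning the anchor with every accepted result is what lets the next prompt start from the exact same identity string.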