Video Batch Runner
Follow shared release-shell rules in:
Use this skill after image, script, voice, or prompt-planning work already exists.
This skill is for:
- generating talking-head videos from approved image and audio inputs
- generating Seedance videos from text, images, videos, audio references, or first/last frames
- storing render jobs as local assets with normalized manifests
- preserving traceability from final video back to persona, concept, script, and voice take
- keeping render providers replaceable behind a stable adapter contract
This skill is not for unconstrained video ideation.
Quality Default
When the goal is believable human video, default to the highest practical render quality the provider offers.
Default quality assumption:
- lower render quality can make already-imperfect source faces look more fake
- realism-sensitive talking-head jobs should start from the best available resolution before blaming script or voice
- only step down when the user is explicitly running a cheap draft, a latency test, or a provider-limited experiment
The selected resolution should always be persisted in the request and manifest.
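The quality default above can be sketched as a small helper. This is an illustrative sketch only: the function name, tier values, and `draft` flag are assumptions, not part of any real adapter contract.

```javascript
// Hypothetical sketch: pick the highest practical resolution tier by default,
// stepping down only for an explicitly cheap draft or latency test.
function pickResolution(availableTiers, { draft = false } = {}) {
  // Sort tiers highest-first by vertical resolution.
  const sorted = [...availableTiers].sort((a, b) => b - a);
  // Realism-sensitive jobs take the top tier; drafts take the bottom one.
  return draft ? sorted[sorted.length - 1] : sorted[0];
}

const resolution = pickResolution([480, 720, 1080]);
// Persist the chosen resolution in both the request and the manifest.
const requestRecord = { resolution };
const manifestRecord = { resolution };
```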
Core Idea
The video layer should be organized around render objects, not around one provider endpoint.
Treat a video render as:
- Render Job: one attempt to produce a video from approved upstream assets
- Render Manifest: the normalized local record of that attempt
- Video Asset: the downloaded output file(s) produced by the render
The main value is not "image plus audio becomes video". The main value is preserving the chain: persona → concept → script → voice take → final video.
Fact Rule
Video render inputs must be grounded in approved upstream assets or an explicit prompt plan.
Required upstream inputs depend on route:
- talking-head route:
  - approved image asset
  - approved voice take
  - script or concept reference
  - render purpose
  - local output directory
- Seedance routes:
  - prompt or prompt plan
  - any required media refs for the chosen mode
  - concept reference
  - render purpose
  - local output directory
Do not let the render stage silently redefine:
- who the persona is
- how the voice should sound
- what the concept is trying to test
If the request is experimenting with a new render style, record that as an explicit render variant.
Source Selection Rule
Start from the active project's approved upstream assets and manifests.
If a current task clearly belongs to one project or client folder, stay within that context first.
Do not assume one client directory is the default home for all renders.
Video Routes
Current routes:
- hosted talking-head route
  - model: hosted talking-head capability
  - category: image-to-video digital human
- hosted Seedance route
  - endpoint keys include video-seedance-2-image-turbo and video-seedance-2-text-turbo
  - category: text/image/reference-media to video
- direct Seedance workspace route
  - direct workspace route for internal video/audio workflows
  - category: text/image/video/audio to video
Read references/hosted-video-talking-head.md before implementation or request design.
Read references/hosted-video-generative.md before designing hosted Seedance requests.
Read references/volcengine-seedance-2.md before designing direct Seedance requests.
If the project should keep related image, audio, and video files under one asset root, use the shared asset model in ../image-batch-runner/references/unified-asset-contract-v1.md.
Hosted Boundary Rule
- keep request files, raw provider responses, and polling state under <work-folder>/.postplus/video-batch-runner/ when they are internal execution state
- keep only final user-facing renders outside that internal directory
- if hosted video capability is unavailable, unauthorized, or returns a stable network error, stop immediately instead of switching to ad hoc shell glue
Render Objects
1. Render Job
One request to a video provider.
2. Render Manifest
The normalized local handoff object for later review.
3. Video Asset
One downloaded output.
The required fields for each object are defined in references/tool-contracts.md.
Default Workflow
1. Lock the render brief
Before calling any provider, write down the render purpose, the source basis (which approved assets and concept the render draws on), and the local output directory.
2. Produce a normalized request record
The local request JSON should contain stable fields even if provider fields change later.
At minimum record:
- provider route
- model
- prompt or prompt plan
- media refs used by the route
- optional mask image
- resolution
- ratio when relevant
- duration or frames when relevant
- seed
- local output directory
When the provider exposes multiple resolution tiers, default to the highest practical tier for realism-sensitive renders.
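A minimal request record covering the fields above might look like the following. All key names and values here are illustrative assumptions; the real shapes live in references/tool-contracts.md.

```javascript
import { writeFileSync } from "node:fs";

// Sketch of a normalized request record with stable local fields,
// independent of any one provider's payload shape (field names assumed).
const request = {
  provider_route: "hosted-seedance",
  model: "video-seedance-2-image-turbo",
  prompt: "subject walks through rain, handheld camera",
  media_refs: [{ role: "first_frame", path: "assets/img/approved-001.png" }],
  mask_image: null,          // optional
  resolution: 1080,          // highest practical tier for realism-sensitive work
  ratio: "9:16",
  duration_seconds: 8,
  seed: 42,
  output_dir: "videos/job-0001/renders"
};

// The local request file, not the provider response, is the durable record.
writeFileSync("request.json", JSON.stringify(request, null, 2));
```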
3. Call the provider and save raw response
Always save:
- the raw provider response as a local file
- downloaded video files under the job's local output directory
Do not use the provider response alone as the durable store.
4. Normalize local outputs
Every run should end with a local manifest containing:
- stable upstream refs
- provider ids
- local asset paths
- source basis
- feedback history
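One way to normalize a finished run into such a manifest is sketched below. Field names are assumptions for illustration; consult references/tool-contracts.md for the actual contract.

```javascript
// Hypothetical sketch: collect the run's traceability data into one manifest.
function buildManifest({ upstream, providerIds, assetPaths, sourceBasis }) {
  return {
    upstream_refs: upstream,   // stable persona / concept / script / voice refs
    provider_ids: providerIds, // ids returned by the provider
    asset_paths: assetPaths,   // downloaded local files
    source_basis: sourceBasis,
    feedback: []               // QA feedback is appended later, never auto-filled
  };
}

const manifest = buildManifest({
  upstream: { persona: "persona-01", concept: "concept-03", voice_take: "take-07" },
  providerIds: ["pred_abc123"],
  assetPaths: ["videos/job-0001/renders/out.mp4"],
  sourceBasis: "approved image + approved voice take"
});
```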
5. Hand off to human QA
Do not auto-approve a render.
The next stage is human QA review, where a person may record:
- verdict
- what worked
- what failed
- which stage should be rerun
If there is no human feedback yet, the render remains unapproved.
Path Selection Rule
Write outputs into the active project's render structure when one already exists.
If no project structure exists yet, choose a clear workspace output path and make it visible in the task summary.
If the chosen location will become the long-term handoff point for a client, prefer confirming the destination with the user.
Example Persistence Convention
One possible project-local layout is:
```text
videos/<job-id>/
  request.json
  response.json
  manifest.json
  renders/
  qa/
```
Do not assume this example layout is the universal default.
Keep draft request files, raw provider responses, and polling state under <work-folder>/.postplus/video-batch-runner/ when they are internal execution artifacts rather than the final handoff.
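Materializing the example layout above could look like this. It is a sketch of one possible convention, not a universal default, and the helper name is hypothetical.

```javascript
import { mkdirSync, existsSync } from "node:fs";
import { join } from "node:path";

// Hypothetical sketch: create the per-job directories of the example layout.
function makeJobDirs(root, jobId) {
  const base = join(root, "videos", jobId);
  for (const sub of ["renders", "qa"]) {
    mkdirSync(join(base, sub), { recursive: true });
  }
  return base;
}

const base = makeJobDirs(".", "job-0001");
const ready = existsSync(join(base, "renders"));
```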
Tool Contract
This skill expects these adapters:
- generate_video_from_image_audio
- poll_prediction - for async providers
For Seedance, the same generate_video_from_image_audio script is used as the normalized submit entrypoint even though the request may be text-to-video or multimodal; the route is selected in the normalized request.
The normalized request and manifest shapes live in references/tool-contracts.md.
Review Rule
Before calling a video provider, verify:
- for the talking-head route, the persona is approved
- for the talking-head route, the image asset is approved
- for the talking-head route, the voice take is approved
- for hosted Seedance, the request mode is explicit and the required media exists
- for direct Seedance, the request mode is explicit
- for Seedance routes, media roles match the intended Seedance mode
- for Seedance routes, the prompt or prompt plan is concrete enough to constrain the generation
- the render request is tied to a real concept or asset purpose
- the local output path is explicit
After generation, review:
- lip sync acceptability
- persona continuity
- audio and image match
- TikTok-native feel
- ad-like drift
Failure Mode
Stop and say the request is under-specified if any of these are missing:
- for the talking-head route, no approved image asset
- for the talking-head route, no approved voice take
- for hosted Seedance, no prompt or no required first image
- for direct Seedance, no prompt or no usable media reference
- no asset purpose
- no source basis
- no local output path
Do not compensate for missing upstream approvals by letting the render model improvise.
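The stop-on-under-specification behavior can be sketched as a validator that names every missing field instead of letting the model improvise. Field and route names here are assumptions based on the failure modes listed above.

```javascript
// Hypothetical sketch: collect missing required fields for a render request.
function missingFields(req) {
  const missing = [];
  if (!req.purpose) missing.push("asset purpose");
  if (!req.source_basis) missing.push("source basis");
  if (!req.output_dir) missing.push("local output path");
  if (req.route === "talking-head") {
    if (!req.image_asset) missing.push("approved image asset");
    if (!req.voice_take) missing.push("approved voice take");
  }
  return missing;
}

// A run should stop immediately when anything is reported missing.
const problems = missingFields({ route: "talking-head", purpose: "hook test" });
```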
Seedance Prompt Rule
For Seedance 2.0 work, prefer a structured prompt plan over a single dense paragraph.
The adapter accepts a structured prompt plan and turns it into one compact prompt string in this order:
- subject + action
- scene / environment
- camera / shot / motion
- visual style / realism target
- sound intent
- continuity constraints
- must-keep
- must-avoid
- reference bindings
This is a better default than freehand adjective stacks.
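The ordered flattening above can be sketched as a tiny compiler over the plan object. The section key names are illustrative assumptions, not the adapter's real schema.

```javascript
// Hypothetical section keys, in the order listed above.
const SECTION_ORDER = [
  "subject_action", "scene", "camera", "style",
  "sound", "continuity", "must_keep", "must_avoid", "references"
];

// Sketch: flatten a structured prompt plan into one compact prompt string.
function compilePrompt(plan) {
  return SECTION_ORDER
    .map((key) => plan[key])
    .filter(Boolean) // skip sections the plan does not set
    .join(". ");
}

const prompt = compilePrompt({
  subject_action: "a courier sprints across a rooftop",
  camera: "handheld tracking shot, slight motion blur",
  must_avoid: "no text overlays"
});
```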
Core Scripts
scripts/generate_video_from_image_audio.mjs
scripts/poll_prediction.mjs
These scripts take normalized request JSON files and write the raw provider responses, downloaded renders, and normalized manifests described in the default workflow above.