Build multi-shot narrative image, video, and audio workflows with genmedia. Use this for storyboards, shot lists, multi-prompt video, first-frame to last-frame pipelines, social stories, brand films, and sequence continuity.
Install the skill:

npx skill4agent add fal-ai-community/skills storytelling

Bundled references: references/shot-planning.md, references/workflows.md, references/examples.md. See model-routing for endpoint selection.

Look up candidate models:

genmedia models --endpoint_id bytedance/seedance-2.0/text-to-video --json
genmedia models --endpoint_id bytedance/seedance-2.0/image-to-video --json
genmedia models --endpoint_id bytedance/seedance-2.0/reference-to-video --json
genmedia models --endpoint_id fal-ai/kling-video/v3/pro/text-to-video --json
genmedia models --endpoint_id alibaba/happy-horse/text-to-video --json
genmedia models --endpoint_id veed/fabric-1.0 --json
genmedia models "first frame last frame video generation" --json
genmedia docs "multi shot video generation" --json
genmedia schema <endpoint_id> --json
genmedia pricing <endpoint_id> --json

Upload local assets to get hosted URLs:

genmedia upload ./first-frame.png --json
genmedia upload ./character.png --json
genmedia upload ./product.png --json
genmedia upload ./voiceover.wav --json

Choose an endpoint via model-routing, then submit each shot asynchronously:

genmedia run <endpoint_id> \
--prompt "<shot or sequence prompt>" \
--async \
--json
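For a multi-shot sequence, the run command is typically issued once per shot. A minimal orchestration sketch in Python, assuming the genmedia CLI is on PATH and that its `--async --json` output contains a `request_id` field (both are assumptions to verify against your installed version):

```python
import json
import subprocess

def build_run_command(endpoint_id: str, prompt: str) -> list[str]:
    # Mirrors: genmedia run <endpoint_id> --prompt "..." --async --json
    return ["genmedia", "run", endpoint_id,
            "--prompt", prompt, "--async", "--json"]

def submit_shots(endpoint_id: str, prompts: list[str]) -> list[str]:
    """Submit each shot asynchronously and collect request ids.

    Assumes the CLI prints a JSON object with a "request_id" key;
    adjust the key if your CLI's JSON output differs.
    """
    request_ids = []
    for prompt in prompts:
        out = subprocess.run(build_run_command(endpoint_id, prompt),
                             capture_output=True, text=True, check=True).stdout
        request_ids.append(json.loads(out)["request_id"])
    return request_ids
```

Submitting shot by shot in order keeps the returned request ids aligned with the shot list, which makes downstream status polling and file naming straightforward.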
genmedia status <endpoint_id> <request_id> \
--download "./outputs/story/{request_id}_{index}.{ext}" \
--json

Shot prompt template:

SHOT [number], [duration]:
[story purpose]. [subject and action]. [location and time]. [camera framing].
[camera movement]. [lighting and color]. [continuity anchor]. [transition or
relationship to previous shot].

Supported endpoints:

bytedance/seedance-2.0/text-to-video
bytedance/seedance-2.0/image-to-video
bytedance/seedance-2.0/reference-to-video
xai/grok-imagine-video/text-to-video
xai/grok-imagine-video/image-to-video
fal-ai/kling-video/v3/pro/text-to-video
fal-ai/kling-video/v3/pro/image-to-video
alibaba/happy-horse/text-to-video
alibaba/happy-horse/image-to-video
openai/gpt-image-2 (quality=high)
veed/fabric-1.0
veed/fabric-1.0/text
fal-ai/creatify/aurora
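The SHOT template lends itself to a small helper so every shot in a sequence carries the same fields in the same order; a sketch whose parameter names simply mirror the template's bracketed slots:

```python
def shot_prompt(number, duration, purpose, subject_action, location_time,
                framing, movement, lighting, anchor, transition):
    """Render one shot in the SHOT template format shown above."""
    return (
        f"SHOT {number}, {duration}:\n"
        f"{purpose}. {subject_action}. {location_time}. {framing}.\n"
        f"{movement}. {lighting}. {anchor}. {transition}."
    )
```

Repeating the continuity anchor (e.g. a wardrobe detail or prop) verbatim across shots is what lets a sequence read as one story rather than disconnected clips.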