Found 3 Skills
Use-case-driven multi-step pipelines on fal.ai. Trigger when the user asks for a specific kind of content production rather than a single endpoint call: "make a commercial", "ad creative", "product photography", "cinematic shot", "film look", "character design", "consistent character", "anchor system", "storyboard", "multi-shot", "narrative video", "talking head", "lip sync", "make this person talk", "virtual try-on", "garment transfer", "restore image", "deblur", "denoise", "fix face", "old photo restore", "add audio to video", "video sound effects", "product shot", "photoreal", "realistic photo", "candid photo", "editorial portrait", "documentary photo", "looks like a real photograph", "iPhone-style photo", "film photo", "archival photo". Each recipe describes inputs, the genmedia call sequence, and quality checks.
Choose the right fal.ai endpoint for a given task. Modality-organized catalog of production endpoint defaults covering text-to-image, image-to-image, text-to-video, image-to-video, and more. Use when the user has not named a specific model, or asks "which model for X", "best endpoint for Y", or "what should I use for Z".
Use the genmedia CLI to search, inspect, run, and manage 1200+ fal.ai model endpoints. Trigger when the user mentions "genmedia", "fal CLI", or asks to "search models", "run a model", "fetch schema", "check pricing", "upload to fal", "queue async job", "track request", or any direct interaction with the fal.ai endpoint catalog. This is the foundational skill. Every other fal.ai-related skill in this repo executes its work through genmedia commands. Use `--json` whenever the output will be parsed by an agent.
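The description above implies a search → inspect → run workflow against the fal.ai endpoint catalog. The sketch below illustrates that shape; the exact subcommand names, arguments, and endpoint ID are assumptions for illustration (only the `--json` flag is confirmed above), so consult `genmedia --help` for the real interface:

```shell
# Find candidate endpoints for a task (subcommand name assumed)
genmedia search "image to video" --json

# Inspect an endpoint's input schema before calling it (assumed)
genmedia schema fal-ai/example-endpoint --json

# Run the endpoint, or queue it as an async job and track the request (assumed)
genmedia run fal-ai/example-endpoint --input '{"prompt": "..."}' --json
```

Per the note above, pass `--json` whenever another agent will parse the output.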