Found 52 Skills
Review storyboard prompts and video-request prompts before generation. Use this when a prompt draft already exists and you need to catch weak first frames, drift risk, missing constraints, bad product timing, or generic ad-like language before spending model credits.
Download TikTok video samples for selected music or sounds, extract local audio references, and preserve manifests for reproducible music research archives.
Research TikTok metadata, creators, comments, trends, and benchmark data for organic platform analysis.
Research TikTok Creative Center or ad-library-style datasets for winning ad patterns, regions, objectives, hook language, and creative signals, without mixing paid-ad research into organic creator discovery.
Prepare and, after explicit approval, publish social posts through the PostPlus platform-owned Postiz workspace.
Route audio, video, transcript, subtitle, and edit-prep requests into the right media-understanding workflow before execution. Use this when the user wants transcription, subtitle generation, beat mapping, B-roll planning, or edit-ready outputs and the first question is which skill and model chain should run.
Transcribe video files directly into timed transcripts and subtitle-ready artifacts using hosted Whisper video-to-text. Use this when the input is a video and the goal is speech extraction, caption generation, or edit-prep timing.
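The subtitle-ready output this skill describes usually means converting timed transcript segments into SRT blocks. A minimal sketch of that conversion step, assuming the transcription API returns Whisper-style segments as dicts with `start`, `end`, and `text` keys (the exact response shape of the hosted service is an assumption):

```python
def srt_timestamp(seconds: float) -> str:
    # Format a float of seconds as the SRT timestamp "HH:MM:SS,mmm".
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def segments_to_srt(segments) -> str:
    # segments: iterable of dicts with "start", "end", "text" keys --
    # the common shape of Whisper-style transcription output.
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)
```

The same timed segments can feed edit-prep timing directly, so it is worth keeping the raw segment list alongside the rendered SRT.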
Build fact-grounded short-form video personas and visual consistency packs from validated benchmark research. Use this when you need to define a repeatable creator archetype, image prompt pack, or persona lock for batch video production. This skill must derive personas from real benchmark evidence such as creator types, protagonist descriptions, visual styles, hooks, and audience language. Do not invent personas or visual traits without source support.
Local execution tools for X/Twitter hosted collection workflows, including actor runs, dataset normalization, tweet ranking, account ranking, audience graph construction, and language clustering.
Run fact-grounded image generation batches for short-form video production, especially persona images, first-frame candidates, and light consistency edits. Use this when persona and concept inputs already exist and you need local image assets, prompt records, and reusable model-call metadata. This skill should stay anchored to benchmark-backed persona locks and should save both raw provider responses and normalized local asset manifests.
Lay out an existing draft or script into a Xiaohongshu image-post or long-image package without rewriting the source. Use this when the user wants pagination, hierarchy, image placement, and renderable HTML/CSS while preserving the original wording, information density, and voice of the source text.
Match spoken edit beats to candidate B-roll assets using a normalized transcript, subtitle chunking, optional A-roll analysis, and a reusable B-roll catalog. Use this when the goal is to decide what B-roll should support each beat, not just to list assets or describe the video.
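The beat-to-asset matching above can be sketched as a simple relevance scoring pass. This is an illustrative baseline only, assuming beats carry normalized transcript text and catalog entries carry tag lists (field names here are hypothetical, not the skill's actual schema); a real implementation would likely use richer signals than keyword overlap:

```python
def match_broll(beats, catalog):
    # beats:   [{"id": ..., "text": ...}]  -- subtitle-chunked edit beats
    # catalog: [{"asset": ..., "tags": [...]}]  -- reusable B-roll catalog
    # Score each asset by keyword overlap with the beat text; pick the best.
    picks = {}
    for beat in beats:
        words = set(beat["text"].lower().split())
        best = max(
            catalog,
            key=lambda a: len(words & {t.lower() for t in a["tags"]}),
        )
        picks[beat["id"]] = best["asset"]
    return picks
```

Keeping the per-beat scores (not just the winning asset) makes it easier to flag beats where no catalog asset matched well and B-roll should be sourced instead.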