This skill should be used when a developer wants to autonomously execute all tasks under a fully-specified Epic or Feature — for example "go", "start building", "implement everything", "run the loop", "execute the feature", "build it all", "kick it off". Requires that the Epic/Feature/Task tree is fully written before starting. Chains implement → verify → PR for every task in dependency order, with targeted human-in-the-loop gates for contradictions and ambiguities.
Use when generating a Dockerfile for deploying a project to Zeabur. Use when the user needs help writing a Dockerfile for Node.js, Python, Go, Rust, PHP, Ruby, Java, .NET, or Elixir projects. Use when troubleshooting Dockerfile build failures on Zeabur.
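For illustration, a minimal sketch of the multi-stage Node.js Dockerfile such a skill might generate; the Node version, build script, output directory, and port below are generic assumptions, not Zeabur requirements:

```dockerfile
# Build stage: install all dependencies and compile (assumes a standard npm project)
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ship only production dependencies and the build output
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]
```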
Ultra-lightweight channel for feature workflows: no design docs, no checklists, no phased reviews. The AI writes code directly, as it normally would, but is first told where the project's CodeStable knowledge base lives and how to search it, so the code it produces hits fewer pitfalls and better matches project conventions. Trigger scenarios: the user says "fast mode", "fastforward", "skip all those steps", "just start coding", or "help me make xxx", and the requirement is too small to justify the full design process.
Discussion entry point for ideas that are still vague. First triage through 1-2 rounds of dialogue to determine which downstream process the discussion should feed: if the idea is already clear, proceed directly to feature-design; if the direction of a small requirement is settled, continue the discussion within the feature and document it in `{slug}-brainstorm.md`; if a large requirement cannot fit into a single feature, hand it to roadmap for decomposition. The AI acts as a thinking partner, not a recorder: it digs out the real problem the user wants to solve, evaluates any solution the user brings, and proposes alternative directions when warranted. Trigger scenarios: the user says "I have an idea that's not clear yet", "Let's brainstorm first", "I want to do something but it's still vague", "Let's talk about this area", or "The function direction is still undecided", or the user arrives with a specific solution but wants to hear other options first. Bugs (go to issue) and refactoring (go to refactor) are not handled here.
Ultra-lightweight channel for refactoring, used when changes are clearly too small for the full scan → design → apply three-stage workflow. The AI identifies 1-3 low-risk optimization points, confirms with the user once, modifies the code in place using classic refactoring methods, and validates the result by running the tests. No scan checklist, no design document, no multi-step human verification. Trigger scenarios: the user says "quick refactor", "small refactor", "just optimize the XX function", "modify directly", or "skip the extra steps", and the change is clearly localized to a single function or component with test coverage for self-validation.
Document pitfalls encountered or good practices discovered during the work as searchable learning documents that both AI and humans can consult when similar tasks arise later. Two tracks: the pitfall track records experiences where things should have worked but didn't (bugs, configuration traps, environment issues, integration failures); the knowledge track records findings that should become the default approach going forward (best practices, workflow improvements, reusable patterns). Trigger scenarios: proactively prompt at the end of feature-acceptance or issue-fix workflows, or when the user says "document knowledge", "learning", "document learnings", or "record this experience". Spec documents record what was done; learning documents record what pitfalls were hit and what was learned. The two complement each other and are not interchangeable.
Package beat-level edit enhancement instructions from a B-roll plan, subtitle chunks, and optional style context. Use this when the goal is to turn B-roll matching into editor-ready guidance for keyword emphasis, micro-animation hints, A-roll stay-on-face logic, B-roll coverage style, and subtitle interaction.
Transcribe video files directly into timed transcripts and subtitle-ready artifacts using hosted Whisper video-to-text. Use this when the input is a video and the goal is speech extraction, caption generation, or edit-prep timing.
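A minimal sketch of that flow, assuming OpenAI's hosted Whisper endpoint as the backend (the skill does not name a provider) and ffmpeg on PATH; file names are illustrative:

```python
import subprocess
from openai import OpenAI  # assumes the official openai Python package

def transcribe_video(video_path: str, srt_path: str) -> None:
    audio_path = "audio.mp3"
    # Extract the audio track first to keep the upload small.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-acodec", "libmp3lame", audio_path],
        check=True,
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(audio_path, "rb") as audio_file:
        # response_format="srt" returns subtitle-ready timed text as a string.
        srt = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file, response_format="srt"
        )
    with open(srt_path, "w", encoding="utf-8") as f:
        f.write(srt)

transcribe_video("input.mp4", "captions.srt")
```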
Build fact-grounded short-form video personas and visual consistency packs from validated benchmark research. Use this when you need to define a repeatable creator archetype, image prompt pack, or persona lock for batch video production. This skill must derive personas from real benchmark evidence such as creator types, protagonist descriptions, visual styles, hooks, and audience language. Do not invent personas or visual traits without source support.
Match spoken edit beats to candidate B-roll assets using a normalized transcript, subtitle chunking, optional A-roll analysis, and a reusable B-roll catalog. Use this when the goal is to decide what B-roll should support each beat, not just to list assets or describe the video.
Plan short-form post-edit decisions from A-roll, B-roll, scripts, and reference videos. Use this when the goal is not generic video analysis or rendering, but deciding how to cut a social video beat by beat, including where to stay on face, where to insert proof B-roll, how to use reference patterns, and how to package an actionable edit plan for a human editor or downstream timeline tooling.
Extract useful frames from local video files based on task intent, such as persona research, shot breakdown, product visibility, UI walkthroughs, visual-style review, or CTA/compliance checks. Use this when the goal is not generic video analysis, but selecting the right still frames and contact sheets for a specific downstream need.
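As a baseline, a minimal interval-sampling sketch with OpenCV; real frame selection for persona research or UI review would be steered by task intent rather than a fixed period, and the paths and sampling rate here are illustrative assumptions:

```python
import os
import cv2  # pip install opencv-python

def extract_frames(video_path: str, out_dir: str, every_n_seconds: float = 2.0) -> int:
    """Save one frame every `every_n_seconds`; return the number of frames saved."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(1, round(fps * every_n_seconds))
    saved = frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{frame_idx:06d}.jpg"), frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

print(extract_frames("input.mp4", "frames"))
```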