Found 219 Skills
Use when writing instructions that guide Claude behavior - skills, CLAUDE.md files, agent prompts, system prompts. Covers token efficiency, compliance techniques, and discovery optimization.
Expert guidance for creating Claude Code slash commands. Use when working with slash commands, creating custom commands, understanding command structure, or learning YAML configuration.
Seedance 2.0: an integrated tool for professional storyboard prompt generation and video creation. Triggered when users want to create storyboard videos, generate videos via Seedance/Jimeng, or need professional storyboard prompts followed by direct video generation. Supports multi-image reference, storyboard guidance, API-based video generation, and automatic download.
Use the @steipete/oracle CLI to bundle a prompt plus the right files and get a second-model review (API or browser) for debugging, refactors, design checks, or cross-validation.
Generate Ralph-compatible prompts for research, analysis, and planning tasks. Creates prompts with systematic research phases, synthesis requirements, and deliverable specifications. Use when analyzing codebases, creating migration plans, researching technologies, auditing security, or any task requiring investigation before action.
Optimizes text, prompts, and documentation for LLM token efficiency. Applies 41 research-backed rules across 6 categories: Claude behavior, token efficiency, structure, reference integrity, perception, and LLM comprehension. Use when optimizing prompts, reducing tokens, compressing verbose docs, or improving LLM instruction quality.
Expert prompt engineering for Google Veo 3.2 (Artemis engine). Use when the user wants to generate a video with Veo 3.2, needs help crafting cinematic prompts, or mentions Veo, Google video generation, or Artemis engine.
Offers the user an informed choice about how much response depth to consume before answering. Use this skill when the user explicitly wants to control response length, depth, or token budget. TRIGGER when: "token budget", "token count", "token usage", "token limit", "response length", "answer depth", "short version", "brief answer", "detailed answer", "exhaustive answer", "respuesta corta vs larga", "cuántos tokens", "ahorrar tokens", "responde al 50%", "dame la versión corta", "quiero controlar cuánto usas", or clear variants where the user is explicitly asking to control answer size or depth. DO NOT TRIGGER when: user has already specified a level in the current session (maintain it), the request is clearly a one-word answer, or "token" refers to auth/session/payment tokens rather than response size.
Uncle Huang's Private Advisory Board — a business-decision think tank composed of 12 top thinkers. Through a structured private advisory board process, different cognitive frameworks collide to produce the best possible decision. Trigger scenarios: users say "start a private advisory board", "invite the think tank", or "help me make decisions"; users say "Private Advisory Board: [topic]" or "Let the experts discuss this matter"; users use the /私董会 or /advisory-board command; users face major business decisions requiring multi-perspective analysis. Non-trigger scenarios: simple operational issues (how to publish, how to typeset); pure technical issues (code debugging); daily content creation (use article-writer / topic-partner); personal emotional counseling (use life-coach).
Activates a brutally honest, skeptical architectural partner that stress-tests ideas, architectures, and assumptions. Use when the user says "reality check", wants their design challenged, asks whether their idea is sound, wants a devil's-advocate review, or wants architectural critique without hand-holding.
Generate HeyGen presenter videos via the v3 Video Agent pipeline — handles Frame Check (aspect ratio correction), prompt engineering, avatar resolution, and voice selection. Required for any HeyGen video generation. Replaces deprecated endpoints with v3. Use when: (1) generating any HeyGen video (via API or otherwise), (2) sending a personalized video message (outreach, update, announcement, pitch, knowledge), (3) creating a HeyGen presenter-led explainer, tutorial, or product demo with a human face, (4) "make a video of me saying...", "send a video to my leads", "record an update for my team", "create a video pitch", "make a loom-style message", "I want to appear in this video", "generate a HeyGen video", "make a talking head video". Accepts avatar_id from heygen-avatar for identity-first HeyGen videos, or uses a stock presenter. Returns video share URL + HeyGen session URL for iteration. Chain signal: when the user wants to create/design an avatar AND make a video in the same request, run heygen-avatar first, then return here. Conjunctions to watch: "and then", "and immediately", "first...then", "X and make a video", "design [presenter] and record" = always CHAIN. If the user provides a photo AND wants a video, route to heygen-avatar first. NOT for: avatar creation or identity setup (use heygen-avatar first), cinematic footage or b-roll without a presenter, translating videos, TTS-only, or streaming avatars.
Single-image generation skill for posters, key art, and editorial illustrations. Defaults to gpt-image-2 but is provider-agnostic — the same workflow drives Flux, Imagen, or Midjourney via the active upstream tooling. Output is one or more PNG/JPEG files saved to the project folder.