Showing 12 of 45,283 skills
TypeGPU is type-safe WebGPU in TypeScript. Use whenever the user writes, debugs, or designs TypeGPU code: 'use gpu' shader functions, tgpu.fn, buffers, textures, bind groups, compute and render pipelines, vertex layouts, slots, accessors, and any TypeGPU API. Shader logic and CPU-side resources are tightly coupled, so handle both sides here even if the user only mentions one (e.g. "how do I write a shader", "how do I create a buffer"). Trigger on any mention of typegpu, tgpu, "use gpu", TypedGPU, or WebGPU code written using TypeGPU's schema API (d.*, tgpu.*, std.*). Do NOT trigger for raw WebGPU (using GPUDevice/GPURenderPipeline directly without tgpu), WGSL-only questions, Three.js, Babylon.js, or WebGL.
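For orientation, here is a minimal sketch of the shader/CPU coupling the description refers to. It assumes the published typegpu package entry points (tgpu, typegpu/data, typegpu/std) and mirrors names from the library's own examples; treat it as illustrative, not as this skill's content.

```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
import * as std from 'typegpu/std';

// Shader-side logic: a typed GPU function built from data schemas (d.*)
// and the shader standard library (std.*).
const getGradientColor = tgpu.fn([d.f32], d.vec4f)((ratio) =>
  std.mix(d.vec4f(0.114, 0.447, 0.941, 1), d.vec4f(1, 0.447, 0.114, 1), ratio),
);

// CPU-side resources: tgpu.init() acquires a device, and buffers are created
// from the same d.* schemas, which is why buffer questions and shader
// questions are handled together. (Top-level await assumes an ES module.)
const root = await tgpu.init();
const counter = root.createBuffer(d.u32, 0).$usage('storage');
```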
Configure human-in-the-loop gating for AI agent review actions in Claude Code. Use when setting up a project where an agent may post PR reviews or comments, merge PRs, or edit CI configuration, and you want a cryptographically auditable approval trail with Cedar-enforced gates.
GANG skeptic skill. You act as the devil's advocate for orch, challenging its task breakdowns, VAL coverage, and progress assessments on critical decisions.
GANG entry skill. When a user types /gang, they want to upgrade the current pane to a GANG orchestrator and start the GANG closed loop. The skill runs `hive gang init`, moves the current pane to a new window, sets up the board and skeptic, and automatically dispatches /gang-orch to take over.
GANG orchestrator skill. You are the orch, driving the GANG closed loop: splitting features, assigning them to peers, collecting verdicts, updating the board, performing integration validation, and reporting to humans.
GANG worker skill. You are a worker: you receive features assigned by orch, perform minimal self-checks, and hand off to validator-N, which issues a verdict upstream.
Fork a GitHub repo and clone it locally with proper remote setup
Create a persistent HeyGen avatar — a reusable face + voice identity for the agent, the user, or any named character — powered by HeyGen Avatar V technology. Prompt-based creation by default (description → HeyGen builds it); photo upload is optional for real-person digital twins. Use when: (1) giving the agent a face + voice so it can present videos ("bring yourself to life", "create your avatar", "give yourself an avatar", "design a presenter", "set up an avatar", "let's make an avatar"), (2) the user wants to appear in videos as themselves ("create my avatar", "I want my face in a video", "digital twin of me", "build me an avatar"), (3) building a named character presenter ("create an avatar called Cleo", "design a character named X"), (4) establishing HeyGen identity before making videos — the correct FIRST step when no avatar exists yet. Chain signal: when the user says both an identity/avatar action AND a video action in the same request ("create an avatar AND make a video", "set up identity THEN create a video", "design a presenter AND immediately record"), run heygen-avatar first, then heygen-video. Returns avatar_id + voice_id — pass directly to heygen-video to create HeyGen videos. NOT for: generating videos (use heygen-video), translating videos, or TTS-only tasks.
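The chain signal above is a plain data handoff: heygen-avatar produces the identifiers heygen-video consumes. A purely hypothetical sketch (runSkill and the argument shapes are illustrative names, not the skills' real interface):

```ts
// Hypothetical helper; runSkill is an illustrative stand-in, not part of
// the HeyGen skills' actual interface.
declare function runSkill(name: string, args: object): Promise<Record<string, string>>;

async function avatarThenVideo(description: string, script: string) {
  // Identity first: heygen-avatar returns avatar_id + voice_id.
  const { avatar_id, voice_id } = await runSkill('heygen-avatar', { description });
  // Then pass both IDs directly to heygen-video.
  return runSkill('heygen-video', { avatar_id, voice_id, script });
}
```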
MSW search integration — (1) vector search for API docs and implementation guides (msw-guide-mcp or curl against mlua_Document_Retriever / mlua_API_Retriever), (2) REST API search for resources (sprite / animation / sound / resource pack / avatar). Use for 'find details, examples, or related APIs not in .d.mlua', 'need a SpriteRUID', 'monster sprite', 'background image', 'find a sound', 'avatar rendering', etc. Keywords: document search, API details, examples, guide, retriever, resource, sprite, animation, sound, RUID, resource pack, avatar.
This skill guides development of full-stack features on EdgeOne Pages — Edge Functions, Cloud Functions (Node.js / Go / Python runtimes), Middleware, KV Storage, and local dev workflows. It should be used when the user wants to create APIs, serverless functions, middleware, WebSocket endpoints, or full-stack features specifically on EdgeOne Pages — e.g. "create an API", "add a serverless function", "write middleware", "build a full-stack app", "add WebSocket support", "set up edge functions", "use KV storage", "create a Go API", "build a Python backend", "use Flask/FastAPI/Gin on EdgeOne Pages". Do NOT trigger for framework-native features (Next.js API routes, Next.js middleware, Nuxt server routes) or generic Express/Koa development outside an EdgeOne Pages project. Do NOT trigger for deployment — use edgeone-pages-deploy instead. Do NOT trigger for other platforms (Cloudflare Workers, Vercel Functions, AWS Lambda).
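As a reference point, a minimal sketch of an EdgeOne Pages edge function, assuming the file-based functions/ routing and onRequest handler convention; the exact context type is an assumption here, so check the EdgeOne Pages docs before relying on it.

```ts
// functions/api/hello.ts
// Assumption: files under functions/ map to routes (here /api/hello) and
// export onRequest handlers that receive a fetch-style Request.
export function onRequest({ request }: { request: Request }): Response {
  const name = new URL(request.url).searchParams.get('name') ?? 'world';
  return new Response(JSON.stringify({ hello: name }), {
    headers: { 'content-type': 'application/json' },
  });
}
```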
This skill helps agents use Figma's use_figma MCP tool in the FigJam context. It can be used alongside figma-use, which provides foundational context for using the use_figma tool.
Event prospecting skill. Takes a conference / event speakers URL, extracts the people, filters their companies against the user's ICP, then deep-researches only the speakers at ICP-fit companies. Outputs a person-first HTML report where each card answers "why should the AE talk to this person?" with all public links and a one-click DM opener. Use when the user wants to: (1) find leads at a specific conference, (2) prep for an event, (3) research event speakers, (4) build a target list from a sponsor/exhibitor page, (5) scrape conference speakers and rank by ICP fit. Triggers: "find leads at {event}", "research speakers at", "prospect this conference", "stripe sessions leads", "ai engineer summit prospects", "event prospecting", "scrape conference speakers", "who should I meet at".
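The pipeline reduces to four stages: extract, filter, research, render. A hypothetical sketch in which every helper name is illustrative rather than the skill's real interface:

```ts
// All helpers below are hypothetical stand-ins for the skill's stages.
declare function extractSpeakers(eventUrl: string): Promise<{ name: string; company: string }[]>;
declare function fitsICP(company: string, icp: string): Promise<boolean>;
declare function deepResearch(s: { name: string; company: string }): Promise<object>;
declare function renderReport(cards: object[]): string; // person-first HTML

async function prospectEvent(eventUrl: string, icp: string): Promise<string> {
  const speakers = await extractSpeakers(eventUrl);
  const cards = [];
  for (const s of speakers) {
    // Filter first so deep research runs only on speakers at ICP-fit companies.
    if (await fitsICP(s.company, icp)) cards.push(await deepResearch(s));
  }
  return renderReport(cards);
}
```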