Found 1,688 Skills
AI-native lead intelligence and outreach pipeline. Replaces Apollo, Clay, and ZoomInfo with agent-powered signal scoring, mutual ranking, warm path discovery, and personalized outreach. Use when the user wants to find, qualify, and reach high-value contacts.
Post-task review. Extract learnings, classify, write to memory layers, and reconcile GitHub issues.
Convene a four-voice council for ambiguous decisions, tradeoffs, and go/no-go calls. Use when multiple valid paths exist and you need structured disagreement before choosing.
Structured self-debugging workflow for AI agent failures using capture, diagnosis, contained recovery, and introspection reports.
Implement features from a validated RootSpec specification — test-driven and autonomous. Use this when a user wants to build, code, or implement features from their spec, or when they want to make failing tests pass.
Agent skill for coordinator-swarm-init - invoke with $agent-coordinator-swarm-init
dontbesilent Slow is Fast. Slow-is-fast diagnosis: helps entrepreneurs find methods that seem slower but deliver faster results in the long run, building assets through friction. Triggers: /dbs-slowisfast, /slow-is-fast, "Is there a slower way", "Am I going too fast".
Use when the user wants to update, refresh, or reinstall the CopilotKit agent SKILLS (the SKILL.md files that teach this agent about CopilotKit). NOT for updating the CopilotKit codebase or project — this is specifically about refreshing the skills/knowledge this agent has loaded. Triggers on "update copilotkit skills", "update skills", "refresh skills", "skills are stale", "skills are outdated", "get latest skills", "my copilotkit knowledge is wrong", "copilotkit APIs changed", "skills seem old", "wrong API names", "reinstall skills", "skills not working right", "update your copilotkit knowledge".
MUST be used whenever creating an AtlasTool (client-side tool) for an Atlas agent. Do NOT manually write AtlasTool definitions or wire them into useAtlasChat — this skill handles the TypeBox schema, execute function, and hook wiring. This includes tools that fetch data, render UI, call APIs, show charts, query local state, or perform any browser-side action. Triggers: AtlasTool, client tool, add tool, create tool, new tool, tool definition, agent tool.
End-to-end deep research and analysis pipeline. Takes a raw idea or market question, conducts deep web research, builds a competitive landscape, runs multi-framework intelligence analysis (/think), stress-tests it (/red-team), researches the red-team findings, re-thinks with adversarial data, re-red-teams, and iterates until divergence between think and red-team is low (conviction stabilizes). Then generates a comprehensive single-file HTML report with all findings: market landscape, competitive analysis, intelligence briefs, red-team results, how to win, and how you could lose. Use when the user says "/deepthink", "deep think", "deep research", or wants a comprehensive research-to-report pipeline on any idea, market, or strategic question.
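The iterate-until-divergence-is-low loop described above can be sketched as a simple convergence loop. This is a hypothetical illustration only: the function names (`run_think`, `run_red_team`, `divergence`), the brief/report structure, and the toy divergence metric are all placeholder assumptions, not the skill's actual API.

```python
# Hypothetical sketch of the /deepthink convergence loop.
# run_think and run_red_team are caller-supplied stand-ins for the
# /think and /red-team stages; nothing here is the real skill API.

def divergence(brief: dict, red_team: dict) -> float:
    """Toy metric: fraction of red-team kill shots the brief has not addressed."""
    shots = set(red_team["kill_shots"])
    addressed = set(brief["addressed_risks"])
    if not shots:
        return 0.0
    return len(shots - addressed) / len(shots)

def deepthink(idea, run_think, run_red_team, threshold=0.2, max_rounds=5):
    # Initial intelligence brief with no adversarial evidence yet.
    brief = run_think(idea, extra_evidence=None)
    red = run_red_team(brief)
    for _ in range(max_rounds):
        if divergence(brief, red) <= threshold:
            break  # conviction has stabilized
        # Re-think with the red team's kill shots folded in, then re-red-team.
        brief = run_think(idea, extra_evidence=red["kill_shots"])
        red = run_red_team(brief)
    return brief, red
```

In this sketch the loop terminates either when the red team's strongest objections are mostly absorbed into the brief (low divergence) or after a fixed round cap, matching the "iterate until conviction stabilizes" behavior the description promises.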
Nassim Taleb's Antifragility framework applied to a business idea, system, or portfolio position. Spawns a team of specialist agents — Fat-Tail Detector, Fragility Auditor, Optionality Scout, Iatrogenics Checker, Skin-in-the-Game Auditor — who each apply a distinct lens from Taleb's Incerto to evaluate whether the subject is fragile, robust, or antifragile. The lead synthesizes these into a convexity assessment: what the payoff structure looks like under disorder, where the hidden tail risks sit, and an honest Taleb verdict. Use when the user says "taleb this", "is this fragile", "antifragility analysis", "what would Taleb think", "tail risk check", or proposes a business/system and wants structural risk analysis. Works standalone or after /munger for complementary analysis.
Adversarial stress-test of a /think intelligence brief. Reads the think output markdown, then deploys 5-7 of the same analytical frameworks — but each one is hunting exclusively for reasons the recommendation is wrong, the conviction is unearned, and the idea will fail. Every framework becomes a prosecutor, not a judge. Surfaces the strongest kill shots, identifies which parts of the original brief are load-bearing but unverified, and produces a Red Team Report with a survival verdict. Use when the user says "red-team this", "attack this", "poke holes", "steel-man the opposition", "why is this a bad idea", "/red-team", or presents a /think brief they want stress-tested.