Found 686 Skills
Designs production-grade RAG pipelines with chunking optimization, retrieval evaluation, and pipeline architecture. Use when building a RAG system, selecting a chunking strategy, choosing a vector database, optimizing retrieval quality, designing embedding pipelines, or evaluating RAG performance with RAGAS metrics.
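As a minimal sketch of one chunking strategy this skill covers — fixed-size character chunks with overlap, the usual baseline before semantic or structure-aware chunking — the function and its parameter names below are illustrative, not part of the skill itself:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks, each sharing
    `overlap` characters with the previous chunk so that sentences
    cut at a boundary still appear whole in at least one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping the overlap
    return chunks
```

Tuning `chunk_size` and `overlap` against retrieval metrics (e.g. RAGAS context precision/recall) is exactly the kind of optimization loop the skill addresses.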
Grafana Cloud infrastructure monitoring — Kubernetes monitoring, cloud provider integrations (AWS, Azure, GCP), host and container monitoring, infrastructure dashboards, and collector setup. Use when setting up Kubernetes monitoring, connecting cloud provider metrics, configuring node exporter or cAdvisor, setting up infrastructure dashboards, or using the k8s-monitoring Helm chart.
Sending telemetry data to Grafana Cloud — metrics via Prometheus remote write or OTLP, logs via Loki push or Alloy, traces via OTLP to Tempo, profiles via Pyroscope. Covers Alloy-based pipelines, direct SDK/agent integrations, cloud integrations catalog, and credentials management. Use when connecting an application or infrastructure to Grafana Cloud, setting up data ingestion, configuring remote write, or choosing between ingestion methods.
Influencer marketing strategy across platforms — creator discovery, audience vetting, campaign tracking, ROI measurement, outreach, gifting, and affiliate programs. Covers platform comparison (Modash, influData, Creator.co, Hypefy, LTK, Influencity, Meltwater, Skeepers, Heepsy, CreatorIQ, NeoReach, Afluencer, Collabstr, Cloutboost, House of Marketers, Famesters, inBeat, HypeAuditor, Aspire, Influencer Hero, InstaJet, Open Influence, Viral Nation, trendHERO, Upfluence, GRIN, PlutoBa), discovery workflow (search, filters, lookalikes, audience analysis), vetting framework (fake followers, engagement authenticity, audience demographics), campaign setup and tracking (EMV, ROAS, CPM, sales attribution), creator outreach (email templates, negotiation, rate cards), product gifting and seeding, influencer affiliate programs, and creator payments. Use when unsure which influencer platform to pick, can't find the right creators for your niche, worried about fake followers or inflated metrics, campaign results aren't being tracked properly, creators aren't responding to outreach, or influencer spend isn't showing ROI. Do NOT use for platform-specific config (use /sales-modash, /sales-infludata, /sales-creatorco, /sales-hypefy, /sales-ltk, /sales-influencity, /sales-meltwater, /sales-brandwatch, /sales-skeepers, /sales-heepsy, /sales-creatoriq, /sales-neoreach, /sales-afluencer, /sales-collabstr, /sales-cloutboost, /sales-houseofmarketers, /sales-famesters, /sales-inbeat, /sales-hypeauditor, /sales-aspire, /sales-influencer-hero, /sales-instajet, /sales-openinfluence, /sales-viralnation, /sales-trendhero, /sales-upfluence, /sales-grin, or /sales-plutoba), TikTok marketing strategy (use /sales-tiktok-marketing), gaming influencer marketing strategy (use /sales-gaming-marketing), ad campaign strategy (use /sales-retargeting), or email marketing to subscribers (use /sales-email-marketing).
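The tracking metrics named above (ROAS, CPM, engagement rate for vetting) reduce to simple ratios; a sketch, with function and parameter names of my own choosing rather than from any of the listed platforms:

```python
def roas(revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue generated per unit of spend."""
    return revenue / ad_spend

def cpm(spend: float, impressions: int) -> float:
    """Cost per thousand impressions."""
    return spend / impressions * 1000

def engagement_rate(likes: int, comments: int, followers: int) -> float:
    """Engagement as a share of follower count; abnormally low values
    alongside a large following are a common fake-follower signal."""
    return (likes + comments) / followers
```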
A methodology for iteratively improving agent-facing text instructions (skills / slash commands / task prompts / CLAUDE.md sections / code-generation prompts) by having a bias-free executor actually run them and evaluating the results from both sides (executor self-report plus instruction-side metrics). Keep iterating until improvements plateau. Use it right after creating or substantially revising a prompt or skill, or when you want to attribute an agent's unexpected behavior to ambiguity on the instruction side.
Applies Eric Ries's Lean Startup methodology for building products under extreme uncertainty. Use when iterating toward product/market fit, designing MVPs, deciding whether to pivot or persevere, setting up actionable metrics, or accelerating the Build-Measure-Learn loop. Triggers include 'how do we test this idea fast', 'what should our MVP look like', 'our metrics look good but we're not growing', 'should we pivot', 'we're building features no one uses', 'how do we measure validated learning', 'vanity metrics vs real metrics', 'how to do innovation accounting'. NOT for companies with proven product/market fit scaling a known playbook (use Crossing the Chasm), not for determining Market Type (use Four Steps), not for sales methodology (use SPIN Selling), not for pricing strategy (use Monetizing Innovation).
Browser automation and testing using chrome-devtools MCP server. Use when automating web browsers, taking screenshots, inspecting console logs, monitoring network requests, testing responsive layouts, collecting performance metrics, or debugging web applications. Critical for visual testing workflows and browser-based automation tasks.
Single-file horizontal-swipe slide deck for a weekly team update — shipped, in flight, blocked, metrics, asks. 6–8 slides. Use when the brief mentions "weekly update", "team update slides", "weekly status", or "周报演示" (Chinese for "weekly status presentation").
Expert product management guidance for day-to-day PM work. Use when creating roadmaps, prioritizing features, managing stakeholders, planning sprints, grooming backlogs, scoping features, planning releases, defining OKRs, managing technical debt, or coordinating go-to-market. Covers RICE, ICE, MoSCoW frameworks, cross-functional collaboration, and product metrics.
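The RICE framework mentioned above is a straightforward formula — Reach × Impact × Confidence ÷ Effort — so a minimal scorer is easy to sketch (units are a convention, commonly users/quarter for reach, a 0.25–3 scale for impact, 0–1 for confidence, and person-months for effort):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE prioritization score: (Reach * Impact * Confidence) / Effort.

    reach:      e.g. users affected per quarter
    impact:     relative impact per user (commonly 0.25 to 3)
    confidence: 0..1, how sure you are of the estimates above
    effort:     cost, e.g. person-months
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return reach * impact * confidence / effort
```

Ranking the backlog by this score (and ICE, which drops the reach term) is the core of the prioritization workflow the skill describes.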
Defines and tracks UX success through metrics, measurement frameworks, and experimentation. Part of the Intent design strategy system. Connects design decisions to observable evidence — did the thing we built actually help? Guards against measurement becoming manipulation. Trigger when: defining success metrics, designing A/B tests, building measurement frameworks, analyzing funnels, reviewing metric dashboards, questioning whether the right things are being measured, or when someone says "how do we know if this worked," "what should we measure," "let's run a test," or "the numbers look good but something feels off." Also trigger for ethical measurement reviews and counter-metric definition.
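For the A/B testing part of this skill, the standard significance check on a conversion-rate experiment is a two-proportion z-test; a self-contained sketch (the skill itself doesn't prescribe a specific test, so treat this as one common choice):

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference in conversion rates between
    variant A (control) and variant B, using a pooled proportion.
    |z| > 1.96 corresponds to p < 0.05 (two-sided)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

A statistically significant z says nothing about whether the metric was the right one to move — the counter-metric and "numbers look good but something feels off" reviews above are the guard against that.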
Set up and run experiments in LaunchDarkly. Create experiments with metrics and treatments, start iterations to collect data, and monitor results.
Explains business financial terms and frameworks for engineering managers — produces term definitions (ARR, COGS, CAC, LTV, gross margin, burn rate, EBITDA, AARRR), translation formulas for making engineering work visible in business language, and a three-layer framework for building business credibility. Use when the user says "business terms," "EBITDA," "burn rate," "CAC," "LTV," "gross margin," "ARR," "how do I speak to business people," "I don't understand finance," "make the case for engineering work," "connect engineering to business outcomes," "talk to the P&L owner," or "business impact." Do NOT use when the user wants to connect engineering metrics (DORA, velocity) to business metrics — use developer-productivity instead.
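The LTV and CAC terms above combine into the ratio most often quoted to P&L owners; a sketch using one common simplification (LTV as margin-adjusted monthly revenue divided by monthly churn — other LTV formulations exist, and the parameter names here are illustrative):

```python
def ltv(arpu_monthly: float, gross_margin: float, monthly_churn: float) -> float:
    """Customer lifetime value: margin-adjusted revenue per month times
    expected customer lifetime in months (1 / churn rate)."""
    if monthly_churn <= 0:
        raise ValueError("monthly_churn must be positive")
    return arpu_monthly * gross_margin / monthly_churn

def ltv_cac_ratio(ltv_value: float, cac: float) -> float:
    """LTV:CAC ratio; a common rule of thumb treats ~3:1 as healthy."""
    return ltv_value / cac
```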