Production-safe Drizzle migration workflow for schema changes that require data backfills or constraint tightening. Use when changing enums/check constraints/defaults, removing status values, or sequencing custom and generated migrations in Drizzle. Trigger on requests about Drizzle migration safety, deployment-safe backfills, migration ordering, and rollback planning.
Use when creating new skills, editing existing skills, or verifying skills work before deployment
Check Custom SCAPI (B2C/SFCC/Demandware) endpoint registration status with the `b2c` CLI. Always reference when using the CLI to check custom API endpoint status, verify custom API deployment, or debug "endpoint not found" errors. For creating new custom APIs, use the b2c-custom-api-development skill instead.
Phoenix operations and deployment: releases, runtime configuration, clustering, libcluster, telemetry/logging, secrets, assets, background jobs, and production hardening on the BEAM.
Generates comprehensive operational runbooks for any system or process. Reads codebase, infrastructure config, and deployment scripts to produce structured runbook.md files formatted for on-call engineers. Use when you need operations documentation, incident response guides, deployment procedures, or disaster recovery plans.
Create, manage, and deploy Power BI semantic models inside Microsoft Fabric workspaces via `az rest` CLI against Fabric and Power BI REST APIs. Use when the user wants to: (1) create a semantic model from TMDL definition files, (2) retrieve or download semantic model definitions, (3) update a semantic model definition with modified TMDL, (4) trigger or manage dataset refresh operations, (5) configure data sources, parameters, or permissions, (6) deploy semantic models between pipeline stages. Covers Fabric Items API (CRUD) and Power BI Datasets API (refresh, data sources, permissions). For read-only DAX queries, use `powerbi-consumption-cli`. For fine-grained modeling changes, route to `powerbi-modeling-mcp`. Triggers: "create semantic model", "upload TMDL", "download semantic model TMDL", "refresh dataset", "semantic model deployment pipeline", "dataset permissions", "list dataset users", "semantic model authoring".
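For a sense of what the create-from-TMDL flow does under the hood, here is a minimal sketch of the same Fabric Items API call issued with plain HTTP instead of `az rest`. The workspace id, token source, and the definition part paths are all assumptions to verify against the Fabric REST documentation.

```python
# Hypothetical sketch: create a semantic model from a local TMDL folder via
# the Fabric Items API. WORKSPACE_ID, the token, and the part paths are
# assumptions, not values fixed by this skill.
import base64
import pathlib

import requests

FABRIC = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<workspace-guid>"
TOKEN = "<bearer token, e.g. from `az account get-access-token`>"

def b64(p: pathlib.Path) -> str:
    # Fabric item definitions ship each file as a base64-encoded "part".
    return base64.b64encode(p.read_bytes()).decode()

parts = [
    {"path": f"definition/{p.name}", "payload": b64(p), "payloadType": "InlineBase64"}
    for p in sorted(pathlib.Path("definition").glob("*.tmdl"))
]

resp = requests.post(
    f"{FABRIC}/workspaces/{WORKSPACE_ID}/items",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"displayName": "Sales Model", "type": "SemanticModel",
          "definition": {"parts": parts}},
    timeout=60,
)
resp.raise_for_status()  # creates may return 202 plus an operation URL to poll
```

Real semantic-model definitions usually need more than the `.tmdl` files (for example a `.platform` or `definition.pbism` part), so treat the part list above as a placeholder.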
Deploy vLLM to Kubernetes (K8s) with GPU support, health probes, and OpenAI-compatible API endpoint. Use this skill whenever the user wants to deploy, run, or serve vLLM on a Kubernetes cluster, including creating deployments, services, checking existing deployments, or managing vLLM on K8s.
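Because vLLM serves an OpenAI-compatible API, a fresh deployment can be smoke-tested with the stock `openai` client. The Service name, namespace, port, and model id below are assumptions, not values fixed by this skill.

```python
# Smoke test for a running vLLM deployment, assuming a Service named `vllm`
# on port 8000 in the `default` namespace.
from openai import OpenAI

client = OpenAI(
    base_url="http://vllm.default.svc.cluster.local:8000/v1",  # in-cluster DNS
    api_key="unused",  # ignored unless the server was started with --api-key
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=8,
)
print(resp.choices[0].message.content)
```

vLLM also exposes a `/health` route, which is the natural target for the liveness and readiness probes mentioned above.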
Optimizes LLM inference with NVIDIA TensorRT-LLM for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than eager PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
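As a sketch of what the optimized path looks like, TensorRT-LLM ships a high-level `LLM` API modeled on vLLM's. The model id is illustrative, the first run compiles a TensorRT engine (which is slow), and this API has moved quickly between releases, so check the docs for your installed version.

```python
# Minimal sketch of TensorRT-LLM's high-level LLM API; model id is an
# assumption, and engine compilation happens on first load.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(max_tokens=64, temperature=0.8)

for out in llm.generate(["Explain in-flight batching in one sentence."], params):
    print(out.outputs[0].text)
```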
MuleSoft platform help — Anypoint Platform, API-led connectivity, Design Center, Anypoint Studio, Code Builder, Exchange, Runtime Manager, API Manager, Flex Gateway, Composer, RPA, IDP, DataWeave, CloudHub, 450+ connectors. Use when Anypoint Studio crashes or gives misleading errors, DataWeave transformation isn't working, CloudHub deployment fails or runs out of CPU credits, API policies aren't enforcing correctly, connectors won't authenticate to SAP or Salesforce, vCore pricing is spiraling and you need to optimize, or MuleSoft implementation is stalling. Do NOT use for general CRM platform config (use /sales-salesforce) or simple Zapier/Make integrations (use /sales-integration).
Use when you need maximum precision on a critical task — production deployments, security-sensitive code, financial calculations, or any work where mistakes are unacceptable.
Guide for creating effective skills that extend Claude's capabilities with specialized knowledge, workflows, or tool integrations. Use when creating new skills, editing existing skills, or verifying skills work before deployment. Applies TDD to process documentation: test with subagents before writing, then iterate until the skill is bulletproof against rationalization.
Develop Microsoft Fabric Spark/data engineering workflows with intelligent routing to specialized resources. Provides core workspace/lakehouse management and routes to: data engineering patterns, development workflow, or infrastructure orchestration. Use when the user wants to: (1) manage Fabric workspaces and resources, (2) develop notebooks and PySpark applications, (3) design data pipelines and orchestration, (4) provision infrastructure as code. Triggers: "develop notebook", "data engineering", "workspace setup", "pipeline design", "infrastructure provisioning", "Delta Lake patterns", "Spark development", "lakehouse configuration", "organize lakehouse tables", "create Livy session", "notebook deployment".
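As one hedged example of the "Delta Lake patterns" this skill routes to, a bronze-layer load in a Fabric notebook might look like the cell below. The source path and table name are assumptions, `spark` is pre-bound in Fabric notebook sessions, and `Files/...` resolves against the notebook's default lakehouse.

```python
# Land raw CSV files from the lakehouse Files area as a managed Delta table.
df = (
    spark.read
    .option("header", "true")
    .csv("Files/raw/orders/*.csv")
)

(
    df.write
    .format("delta")                    # Delta is Fabric's default table format
    .mode("overwrite")
    .option("overwriteSchema", "true")  # tolerate column changes on reload
    .saveAsTable("orders_bronze")       # managed table under Tables/
)
```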