Found 916 Skills
Fine-tune LLMs with Unsloth using GRPO or SFT. Supports FP8, vision models, mobile deployment, Docker, packing, GGUF export. Use when: train with GRPO, fine-tune, reward functions, SFT training, FP8 training, vision fine-tuning, phone deployment, docker training, packing, export to GGUF.
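For orientation, a minimal sketch of the SFT path this skill covers, assuming Unsloth's FastLanguageModel API together with TRL's SFTTrainer; the checkpoint name, dataset file, and hyperparameters are illustrative placeholders, not the skill's prescribed defaults:

```python
# Sketch of LoRA SFT with Unsloth + TRL. Checkpoint, dataset file, and
# hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # any Unsloth 4-bit checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # assumed local file with a "text" column

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL versions name this processing_class
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        packing=True,  # the sequence-packing feature mentioned above
        per_device_train_batch_size=2,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```

GRPO (reward-function) training and GGUF export follow the same pattern with different trainer and export calls; see the skill itself for those paths.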
Expert guide for configuring, customizing, and creatively leveraging OpenClaw — the self-hosted AI gateway that connects LLMs to messaging channels (Telegram, WhatsApp, Discord, Slack, iMessage, etc.). Use when the user wants to: (1) Set up or modify their openclaw.json configuration, (2) Write or edit bootstrap files (SOUL.md, USER.md, AGENTS.md, IDENTITY.md, TOOLS.md), (3) Configure messaging channels, (4) Set up models and providers, (5) Create multi-agent routing, (6) Build skills, hooks, or cron jobs, (7) Troubleshoot OpenClaw issues, (8) Get creative ideas for leveraging OpenClaw in non-obvious ways. Triggers on: openclaw, gateway, SOUL.md, USER.md, AGENTS.md, IDENTITY.md, channels setup, agent routing, heartbeat, cron jobs, openclaw hooks, openclaw skills, openclaw config, openclaw.json, personal assistant setup.
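For a feel of the shape only, a purely hypothetical openclaw.json sketch — every key name here is invented for illustration and should be checked against OpenClaw's actual configuration reference:

```json
{
  "_note": "Hypothetical sketch only; key names are illustrative, not OpenClaw's real schema.",
  "model": { "provider": "anthropic", "name": "claude-sonnet" },
  "channels": {
    "telegram": { "enabled": true, "token": "<bot-token>" },
    "discord": { "enabled": false }
  },
  "bootstrap": ["SOUL.md", "USER.md", "AGENTS.md", "IDENTITY.md", "TOOLS.md"]
}
```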
Synthesize outputs from multiple AI models into a comprehensive, verified assessment. Use when: (1) User pastes feedback/analysis from multiple LLMs (Claude, GPT, Gemini, etc.) about code or a project, (2) User wants to consolidate model outputs into a single reliable document, (3) User needs conflicting model claims resolved against actual source code. This skill verifies model claims against the codebase, resolves contradictions with evidence, and produces a more reliable assessment than any single model.
Backend development agent for Resume Matcher. Handles FastAPI endpoints, Pydantic schemas, TinyDB operations, LiteLLM integration, and Python service logic. Use when creating or modifying backend code.
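A sketch of the stack this agent touches (FastAPI endpoint, Pydantic schema, TinyDB persistence), with routes, fields, and the db filename invented for illustration rather than taken from Resume Matcher:

```python
# Illustrative only: routes, schema fields, and db path are invented,
# not Resume Matcher's actual code.
from fastapi import FastAPI
from pydantic import BaseModel
from tinydb import TinyDB, Query

app = FastAPI()
db = TinyDB("resumes.json")  # hypothetical storage file

class Resume(BaseModel):
    name: str
    skills: list[str]

@app.post("/resumes")
def create_resume(resume: Resume) -> dict:
    doc_id = db.insert(resume.model_dump())
    return {"id": doc_id}

@app.get("/resumes/search")
def search(skill: str) -> list[dict]:
    R = Query()
    return db.search(R.skills.any([skill]))  # resumes listing this skill
```

Run with `uvicorn main:app --reload` during development.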
Add new LLM model pricing entries to Langfuse's default-model-prices.json. Use when adding model prices, updating model pricing, creating model entries, adding Claude/OpenAI/Anthropic/Google/Gemini/AWS Bedrock/Azure/Vertex AI model pricing, working with matchPattern regex, pricingTiers, or model cost configuration. Covers model price JSON structure, regex patterns for multi-provider matching, tiered pricing with conditions, cache pricing, and validation rules.
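As an illustration of the shape only — built from the field names this description mentions (matchPattern, pricingTiers) with everything else assumed — verify the real structure against default-model-prices.json before adding entries:

```json
{
  "_note": "Illustrative shape only; field layout and units are assumptions, not Langfuse's verified schema.",
  "modelName": "example-provider-model",
  "matchPattern": "(?i)^(example-provider-model)(-[0-9]{8})?$",
  "pricingTiers": [
    { "prices": { "input": 0.000001, "output": 0.000004 } }
  ]
}
```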
A module covering how to connect internal and up-to-date data to LLM responses via RAG, embeddings, and vector search, and the criteria for choosing among these approaches.
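A minimal sketch of the retrieval pattern this module covers — embed a corpus, retrieve by cosine similarity — assuming the sentence-transformers library; the model name and corpus are illustrative:

```python
# Minimal RAG retrieval step: embed documents, then pick the nearest one
# for a query by cosine similarity. Model name and corpus are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = ["Internal HR policy on leave.", "Q3 sales summary.", "VPN setup guide."]
doc_vecs = model.encode(corpus, normalize_embeddings=True)

query_vec = model.encode(["How do I request vacation?"], normalize_embeddings=True)
scores = doc_vecs @ query_vec.T          # cosine similarity, since vectors are unit-normalized
best = int(np.argmax(scores))
print(corpus[best])                      # retrieved context to prepend to the LLM prompt
```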
Recursive Language Models (RLM) CLI - enables LLMs to recursively process large contexts by decomposing inputs and calling themselves over parts. Use for code analysis, diff reviews, codebase exploration. Triggers on "rlm ask", "rlm complete", "rlm search", "rlm index".
Generate an LLM-optimized project profile for any git repository. Outputs docs/{project-name}.md covering architecture, core abstractions, usage guide, design decisions, and recommendations. Trigger: "/project-profiler", "profile this project", "為專案建側寫"
Build MCP servers in Python with FastMCP. Workflow: define tools and resources, build server, test locally, deploy to FastMCP Cloud or Docker. Use when creating MCP servers, exposing tools/resources/prompts to LLMs, building Claude integrations, or troubleshooting FastMCP module-level server, storage, lifespan, middleware, OAuth, or deployment errors.
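The workflow in miniature — a minimal FastMCP server exposing one tool and one resource; the server name, tool, and resource URI are placeholders:

```python
# Minimal FastMCP server: one tool, one resource. All names are placeholders.
from fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@mcp.resource("data://greeting")
def greeting() -> str:
    """A static resource clients can read."""
    return "hello from the server"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, suitable for local testing
```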
Build a structured taxonomy of failure modes from open-coded trace annotations. Use this skill whenever the user has freeform annotations from reviewing LLM traces and wants to cluster them into a coherent, non-overlapping set of binary failure categories (axial coding). Also use when the user mentions "failure modes", "error taxonomy", "axial coding", "cluster annotations", "categorize errors", "failure analysis", or wants to go from raw observation notes to structured evaluation criteria. This skill covers the full pipeline: grouping open codes, defining failure modes, re-labeling traces, and quantifying error rates.
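The end of that pipeline — binary re-labeling against the taxonomy, then per-mode error rates — sketched below with invented failure modes and traces:

```python
# Sketch of the quantification step: each trace gets a binary label per
# failure mode, then rates are computed. Modes and traces are invented.
from collections import Counter

failure_modes = ["hallucinated_fact", "ignored_instruction", "format_error"]

traces = [
    {"id": 1, "labels": {"hallucinated_fact": True,  "ignored_instruction": False, "format_error": False}},
    {"id": 2, "labels": {"hallucinated_fact": False, "ignored_instruction": True,  "format_error": True}},
    {"id": 3, "labels": {"hallucinated_fact": False, "ignored_instruction": False, "format_error": False}},
]

counts = Counter()
for trace in traces:
    for mode in failure_modes:
        counts[mode] += trace["labels"][mode]

for mode in failure_modes:
    print(f"{mode}: {counts[mode] / len(traces):.0%} of traces")
```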
Use this skill when crafting, reviewing, or improving prompts for LLM pipelines — including task prompts, system prompts, and LLM-as-Judge prompts. Triggers include: requests to write or refine a prompt, diagnose why an LLM produces inconsistent or incorrect outputs, bridge the gap between intent and model behavior, reduce ambiguity in instructions, add few-shot examples, structure complex prompts, or improve output formatting. Also use when the user needs help distinguishing specification failures (unclear instructions) from generalization failures (model limitations), or when iterating on prompts based on observed failure modes. Do NOT use for general coding tasks, document creation, or non-LLM writing.
Generate a custom trace annotation web app for open coding during LLM error analysis. Use when the user wants to review LLM traces, annotate failures with freeform comments, and do first-pass qualitative labeling (open coding). Also use when the user mentions "annotate traces", "trace review tool", "open coding tool", "label traces", "build an annotation interface", "review LLM outputs", or wants to manually inspect pipeline traces before building a failure taxonomy. This skill produces a tailored Python web application using FastHTML, TailwindCSS, and HTMX.
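A minimal sketch of the kind of app this skill generates, assuming FastHTML's fast_app/serve API with HTMX form posts (default styling rather than the Tailwind setup the skill produces); the trace data and routes are invented:

```python
# Minimal trace-annotation sketch in FastHTML: show one trace, save a
# freeform open-coding note via an HTMX post. Data and routes are invented.
from fasthtml.common import *

app, rt = fast_app()
traces = ["Trace 1: model output ...", "Trace 2: model output ..."]  # stand-in data
notes: dict[int, str] = {}

@rt("/")
def get():
    return Titled("Trace Annotation",
        P(traces[0]),
        Form(Textarea(name="comment", placeholder="Open-coding note"),
             Button("Save"),
             hx_post="/annotate/0", hx_target="#status"),
        Div(id="status"))

@rt("/annotate/{idx}")
def post(idx: int, comment: str):
    notes[idx] = comment
    return P(f"Saved note for trace {idx}")

serve()
```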