Found 917 Skills
Set up Langfuse local development workflow with hot reload and debugging. Use when developing LLM applications locally, debugging traces, or setting up a fast iteration loop with Langfuse. Trigger with phrases like "langfuse local dev", "langfuse development", "debug langfuse traces", "langfuse hot reload", "langfuse dev workflow".
Displays the list of LLM models available in MixSeek-Core. Use for requests such as "available models", "model list", "which models are there", "get models", or "models from the API". Dynamically retrieves per-provider model information via the API and provides recommended settings and compatibility information.
Discover scientific equations from data using LLM-guided evolutionary search (LLM-SR). Multi-island algorithm with softmax-based cluster sampling, island reset, and LLM-proposed equation mutations. Use for symbolic regression and equation discovery.
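The entry above mentions softmax-based cluster sampling across islands; a minimal sketch of that sampling step, using hypothetical fitness scores and a temperature parameter (not the LLM-SR reference implementation), might look like this:

```python
import numpy as np

def sample_cluster(cluster_scores: np.ndarray, temperature: float = 0.1) -> int:
    """Pick a cluster index with probability proportional to softmax(score / T).

    Higher-scoring clusters (better equations) are favoured, but weaker ones
    keep a nonzero chance, which preserves diversity across islands.
    """
    logits = cluster_scores / temperature
    logits -= logits.max()              # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(np.random.choice(len(cluster_scores), p=probs))

# Example: three clusters with fitness scores; the best is sampled most often.
scores = np.array([0.2, 0.5, 0.9])
print(sample_cluster(scores))
```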
Generate complete academic survey papers using multi-LLM parallel outline generation, RAG-based subsection writing, citation validation, and local coherence enhancement. Based on AutoSurvey pipeline. Use for writing comprehensive literature surveys.
Sets up full changelog infrastructure from scratch (greenfield workflow). Installs semantic-release, commitlint, GitHub Actions, LLM-based changelog synthesis, and a public changelog page.
AI agent development standards using golanggraph for graph-based workflows and langchaingo for LLM calls, covering tool integration, MCP, and LLM best practices (context compression, prompt caching, attention raising, tool response trimming).
Use this skill to build, run, deploy, evaluate, and troubleshoot Go agents with Google's Agent Development Kit (`google.golang.org/adk`), including llmagent config, tools/integrations, callbacks/plugins, sessions/state/memory, workflows, streaming, MCP/A2A, and runtime/deployment patterns.
Query Langfuse traces for debugging LLM calls, analyzing token usage, and investigating workflow executions. Use when debugging AI/LLM behavior, checking trace data, or analyzing observability metrics.
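As a rough illustration of the kind of query this skill wraps, the sketch below lists recent traces through Langfuse's public REST API. The environment variable names and printed fields are assumptions about a typical setup, not taken from the skill itself, and the response shape may differ between Langfuse versions.

```python
import os
import requests

# Assumed env vars; Langfuse uses the public/secret key pair for basic auth.
host = os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com")
auth = (os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"])

# List the most recent traces via the public API.
resp = requests.get(f"{host}/api/public/traces", auth=auth, params={"limit": 10})
resp.raise_for_status()

for trace in resp.json().get("data", []):
    print(trace.get("id"), trace.get("name"))
```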
Verifies implementation against specifications by checking requirement fulfillment, task completion, and contract implementation. Generates a fulfillment report with coverage metrics. Always run after /speckit.implement completes.
Expert prompt engineering for creating effective prompts for Claude, GPT, and other LLMs. Use when writing system prompts, user prompts, few-shot examples, or optimizing existing prompts for better performance.
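For instance, a few-shot prompt of the kind this skill helps author can be assembled as a plain message list; the sketch below uses a hypothetical sentiment-labelling task and is model-agnostic:

```python
# Minimal few-shot prompt as a chat message list (hypothetical sentiment task).
system_prompt = (
    "You are a sentiment classifier. Reply with exactly one word: "
    "positive, negative, or neutral."
)

few_shot_examples = [
    {"role": "user", "content": "The checkout flow was smooth and fast."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "The app crashed twice during setup."},
    {"role": "assistant", "content": "negative"},
]

def build_messages(user_text: str) -> list[dict]:
    """Combine system instructions, few-shot examples, and the new input."""
    return (
        [{"role": "system", "content": system_prompt}]
        + few_shot_examples
        + [{"role": "user", "content": user_text}]
    )

print(build_messages("Support answered quickly but the fix didn't work."))
```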
Removes AI writing artifacts from documentation and code. Use when editing LLM-generated prose, reviewing READMEs, polishing docs before publishing, or cleaning up AI-generated code. Use for em-dash cleanup, formulaic phrase removal, tone calibration, over-commented code, verbose naming, and AI code smell detection.
Interactive tutorial that guides engineers through building their own coding agent (agentic loop) from scratch using raw HTTP calls to an LLM API. Supports Gemini, OpenAI (and compatible endpoints), and Anthropic. Supports TypeScript, Python, Go, and Ruby. Detects progress automatically. Use when someone says "build an agent", "teach me agents", or "/build-agent".
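The core of such a tutorial is the agentic loop itself; a stripped-down Python sketch, assuming an OpenAI-compatible chat-completions endpoint and a single hypothetical `read_file` tool, looks roughly like this:

```python
import json
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # any compatible endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

TOOLS = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a local file and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def run_tool(name: str, args: dict) -> str:
    """Hypothetical tool dispatcher; a real agent would expose more tools."""
    if name == "read_file":
        with open(args["path"]) as f:
            return f.read()
    return f"unknown tool: {name}"

def agent_loop(user_goal: str, max_steps: int = 10) -> str:
    """Call the model, execute any requested tools, feed results back, repeat."""
    messages = [{"role": "user", "content": user_goal}]
    for _ in range(max_steps):
        resp = requests.post(API_URL, headers=HEADERS, json={
            "model": "gpt-4o-mini", "messages": messages, "tools": TOOLS,
        }).json()
        msg = resp["choices"][0]["message"]
        messages.append(msg)
        if not msg.get("tool_calls"):        # no tool requested: final answer
            return msg["content"]
        for call in msg["tool_calls"]:       # run each requested tool
            result = run_tool(call["function"]["name"],
                              json.loads(call["function"]["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": result})
    return "stopped after max_steps"
```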