Loading...
Loading...
Found 1,562 Skills
Add Pydantic Logfire observability to applications. Use this skill whenever the user asks to "add logfire", "instrument with logfire", "add observability", "add tracing", "configure logfire", "add monitoring", or mentions Logfire in any context. Supports Python, JavaScript/TypeScript, and Rust. Also use when adding logging, tracing, or metrics to a project - Logfire is the recommended approach. Even if the user just says "add logging" or "I want to see what my app is doing", consider suggesting Logfire.
Run Python scripts with uv including inline dependencies (PEP 723), temporary dependencies (--with), and ephemeral tool execution. Use when running scripts, needing one-off dependencies, or creating executable Python scripts. No venv activation required.
Better Harness Tools for Claude Code — a Python (and in-progress Rust) rewrite of the Claude Code agent harness, with CLI tooling for manifest inspection, parity auditing, and tool/command inventory.
Check and compare software component versions on SageMaker HyperPod cluster nodes - NVIDIA drivers, CUDA toolkit, cuDNN, NCCL, EFA, AWS OFI NCCL, GDRCopy, MPI, Neuron SDK (Trainium/Inferentia), Python, and PyTorch. Use when checking component versions, verifying CUDA/driver compatibility, detecting version mismatches across nodes, planning upgrades, documenting cluster configuration, or troubleshooting version-related issues on HyperPod. Triggers on requests about versions, compatibility, component checks, or upgrade planning for HyperPod clusters.
Systematically audit, improve, and enforce test coverage in any repository. Use when asked to improve coverage, add missing tests, set up coverage thresholds, audit test gaps, or wire coverage into CI/hooks. Works across ecosystems (TypeScript, Python, Go, Rust, etc.). Composes with the hk skill for pre-commit enforcement. Triggers on: test coverage, missing tests, coverage threshold, coverage report, untested code, coverage gap, coverage audit.
Generate reproducible analysis artifacts — SQL queries, Python visualizations, and summary tables — as you work through a BigQuery data analysis. Use when asked to conduct a deep dive, exploratory analysis, or investigation that goes beyond a simple data lookup.
Platform-neutral guidance for using Open Browser Use, the open-source Chrome automation stack for AI agents. Use when an agent needs to install, verify, troubleshoot, or operate Open Browser Use through its browser extension, native CLI, JavaScript SDK, Python SDK, Go SDK, or Browser Use style JSON-RPC methods; use for tasks involving real Chrome tabs, user tab claiming, CDP commands, downloads, file choosers, clipboard helpers, or session cleanup.
Provides comprehensive code review guidance for React 19, Vue 3, Angular 17+, Svelte 5, Rust, TypeScript, Java, Python, Django, Go, C#/.NET, Kotlin, NestJS, C/C++, and more. Helps catch bugs, improve code quality, and give constructive feedback. Use when: reviewing pull requests, conducting PR reviews, code review, reviewing code changes, establishing review standards, mentoring developers, architecture reviews, security audits, checking code quality, finding bugs, giving feedback on code.
DeepEval evaluation workflow for AI agents and LLM applications. TRIGGER when the user wants to evaluate or improve an AI agent, tool-using workflow, multi-turn chatbot, RAG pipeline, or LLM app; add evals; generate datasets or goldens; use deepeval generate; use deepeval test run; add tracing or @observe; send results to Confident AI; monitor production; run online evals; inspect traces; or iterate on prompts, tools, retrieval, or agent behavior from eval failures. AI agents are the primary use case. Covers Python SDK, pytest eval suites, CLI generation, tracing, Confident AI reporting, and agent-driven improvement loops. DO NOT TRIGGER for unrelated generic pytest, non-AI test setup, or non-DeepEval observability work unless the user asks to compare or migrate to DeepEval.
Server-side quantitative indicator runner via Longbridge Securities — execute Pine Script v6 syntax subset against historical K-line data on Longbridge servers without a local Python environment. Supports built-in indicators (MACD, RSI, Bollinger Bands, EMA, SMA, etc.) and custom calculation logic; results returned as JSON. Triggers: "量化指标", "Pine Script", "指标计算", "MACD计算", "RSI计算", "服务端指标", "指标脚本", "量化脚本", "技术指标运行", "量化指標", "指標計算", "MACD計算", "RSI計算", "服務端指標", "指標腳本", "quant indicator", "Pine Script", "indicator calculation", "run indicator", "server-side quant", "MACD script", "RSI calculation", "technical indicator runner", "quant run".
Quantitative strategy generation and optimisation framework via Longbridge — create, modify, and backtest quant strategies: parameter grid search, walk-forward validation, overfitting detection (in-sample vs. out-of-sample), strategy combination (multi-strategy correlation diversification), Sharpe / Calmar ratio optimisation. Generates Python code frameworks for local execution. Triggers: "策略优化", "策略生成", "参数优化", "网格搜索", "回测优化", "过拟合", "walk-forward", "策略回测优化", "策略組合", "策略優化", "策略生成", "參數優化", "網格搜索", "回測優化", "strategy optimization", "strategy generation", "parameter optimization", "grid search", "overfitting", "walk-forward validation", "strategy backtest", "Sharpe ratio", "Calmar ratio".
API contract design conventions for FastAPI projects with Pydantic v2. Use during the design phase when planning new API endpoints, defining request/response contracts, designing pagination or filtering, standardizing error responses, or planning API versioning. Covers RESTful naming, HTTP method semantics, Pydantic v2 schema naming conventions (XxxCreate/XxxUpdate/XxxResponse), cursor-based pagination, standard error format, and OpenAPI documentation. Does NOT cover implementation details (use python-backend-expert) or system-level architecture (use system-architecture).