Found 151 Skills
Identifies architectural components in codebases and calculates size metrics for decomposition planning. Use when analyzing codebase structure, planning monolithic decomposition, identifying oversized components, calculating component statistics, or when the user asks about component analysis, codebase sizing, or architectural decomposition.
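A minimal sketch of the kind of size metric this describes, assuming a hypothetical `src/` layout with one top-level directory per component and an arbitrary 5,000-line threshold for "oversized":

```python
from pathlib import Path

def component_sizes(root: str = "src", threshold: int = 5_000) -> dict[str, int]:
    # Count non-blank Python source lines per top-level component directory.
    sizes = {}
    for component in Path(root).iterdir():
        if not component.is_dir():
            continue
        sizes[component.name] = sum(
            sum(1 for line in f.read_text(errors="ignore").splitlines() if line.strip())
            for f in component.rglob("*.py")
        )
    # Flag components above the (illustrative) threshold as decomposition candidates.
    for name, loc in sorted(sizes.items(), key=lambda kv: -kv[1]):
        flag = "  <- oversized, decomposition candidate" if loc > threshold else ""
        print(f"{name:30s} {loc:8d} LOC{flag}")
    return sizes
```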
Analyze ML experiment results, compute statistics, generate comparison tables and insights. Use when user says "analyze results", "compare", or needs to interpret experimental data.
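A small sketch of the comparison table this implies, assuming a results CSV with `model` and `accuracy` columns (the file path and column names are placeholders):

```python
import pandas as pd

runs = pd.read_csv("experiments/results.csv")

# Aggregate over repeated runs to get mean and spread per model configuration.
summary = (
    runs.groupby("model")["accuracy"]
        .agg(["count", "mean", "std", "min", "max"])
        .sort_values("mean", ascending=False)
)
print(summary)
```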
Use this skill for processing and analyzing large tabular datasets (billions of rows) that exceed available RAM. Vaex excels at out-of-core DataFrame operations, lazy evaluation, fast aggregations, efficient visualization of big data, and machine learning on large datasets. Apply when users need to work with large CSV/HDF5/Arrow/Parquet files, perform fast statistics on massive datasets, create visualizations of big data, or build ML pipelines that don't fit in memory.
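An illustrative Vaex snippet showing the out-of-core, lazy style the description refers to; the file name and column names are placeholders:

```python
import numpy as np
import vaex

# HDF5/Arrow files are memory-mapped, so the data never has to fit in RAM.
df = vaex.open("big_data.hdf5")

# Expressions are lazy: this creates a virtual column, not a copy of the data.
df["log_amount"] = np.log(df["amount"])

# Aggregations run in a single out-of-core pass over the file.
print(df.count(), df.mean("amount"), df.std("amount"))
```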
Deep-dive data profiling for a specific table. Use when the user asks to profile a table, wants statistics about a dataset, asks about data quality, or needs to understand a table's structure and content. Requires a table name.
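A minimal per-column profile of the kind this suggests, assuming the table has been exported to a CSV (a real profile would more likely read straight from the database):

```python
import pandas as pd

df = pd.read_csv("orders.csv")  # "orders" is a placeholder table name

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "non_null": df.notna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile)
print(df.describe(include="all").T)  # per-column summary statistics
```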
T-SQL query optimization techniques for SQL Server and Azure SQL Database. Use this skill when: (1) User needs to optimize slow queries, (2) User asks about SARGability or index seeks, (3) User needs help with query hints, (4) User has parameter sniffing issues, (5) User needs to understand execution plans, (6) User asks about statistics and cardinality estimation.
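A sketch of the SARGability point from a Python client, using pyodbc; the connection string, table, and column names are placeholders, not part of the skill itself:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=SalesDB;Trusted_Connection=yes"  # hypothetical connection
)
cursor = conn.cursor()

# Non-SARGable: wrapping the column in a function forces a scan of every row.
non_sargable = "SELECT OrderID FROM Sales.Orders WHERE YEAR(OrderDate) = 2024;"

# SARGable rewrite: a bare column compared to a range lets the optimizer
# use an index seek on OrderDate instead of a scan.
sargable = """
SELECT OrderID
FROM Sales.Orders
WHERE OrderDate >= '20240101' AND OrderDate < '20250101';
"""
cursor.execute(sargable)
rows = cursor.fetchall()
```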
A fast, extensible progress bar for Python and CLI. Instantly makes your loops show a smart progress meter with ETA, iterations per second, and customizable statistics. Minimal overhead. Use for monitoring long-running loops, simulations, data processing, ML training, file downloads, I/O operations, command-line tools, pandas operations, parallel tasks, and nested progress bars.
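A minimal usage sketch, including the nested-bar case mentioned above:

```python
import time
from tqdm import tqdm

# Wrapping any iterable is enough to get a live bar with ETA and it/s.
for _ in tqdm(range(1_000), desc="processing"):
    time.sleep(0.001)

# Nested bars: the inner loop gets its own bar; leave=False clears it when done.
for epoch in tqdm(range(3), desc="epochs"):
    for _ in tqdm(range(200), desc="batches", leave=False):
        time.sleep(0.001)
```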
Use this skill to analyze an existing PostgreSQL database and identify which tables should be converted to Timescale/TimescaleDB hypertables.

**Trigger when user asks to:**
- Analyze database tables for hypertable conversion potential
- Identify time-series or event tables in an existing schema
- Evaluate if a table would benefit from Timescale/TimescaleDB
- Audit PostgreSQL tables for migration to Timescale/TimescaleDB/TigerData
- Score or rank tables for hypertable candidacy

**Keywords:** hypertable candidate, table analysis, migration assessment, Timescale, TimescaleDB, time-series detection, insert-heavy tables, event logs, audit tables

Provides SQL queries to analyze table statistics, index patterns, and query patterns. Includes scoring criteria (8+ points = good candidate) and pattern recognition for IoT, events, transactions, and sequential data.
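An illustrative sketch of this kind of analysis using `pg_stat_user_tables`; the connection parameters are placeholders, and the simple insert-dominance ratio below is not the skill's actual scoring rubric:

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=postgres host=localhost")  # placeholder DSN
with conn.cursor() as cur:
    cur.execute("""
        SELECT relname,
               n_tup_ins,
               n_tup_upd + n_tup_del AS mutations,
               n_live_tup
        FROM pg_stat_user_tables
        ORDER BY n_tup_ins DESC;
    """)
    for relname, inserts, mutations, live in cur.fetchall():
        ratio = inserts / (mutations + 1)
        # Insert-heavy, rarely updated tables are the classic hypertable candidates.
        if ratio > 100 and live > 1_000_000:
            print(f"{relname}: insert/mutation ratio {ratio:.0f}, {live} live rows -> candidate")
```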
Retrieve stock price change statistics across multiple time periods using Octagon MCP. Use when analyzing short-term and long-term returns, comparing performance across timeframes, and evaluating momentum and historical growth.
Database operations for Supabase: query/write/migration/logs/type generation. Triggers: query/statistics/export/insert/update/delete/fix/backfill/migrate/logs/alerts/type generation. Does not trigger for: pure architecture discussion or code planning. Write operations require confirmation; UPDATE/DELETE without WHERE is refused. MCP is optional — works with CLI/Console too.
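A minimal sketch of the query/write pattern using the supabase-py client; the project URL, key, and `orders` table are placeholders:

```python
from supabase import create_client

supabase = create_client(
    "https://xyzcompany.supabase.co",   # placeholder project URL
    "SUPABASE_SERVICE_ROLE_KEY",        # placeholder key
)

# Read: select with a filter.
rows = supabase.table("orders").select("id, status, total").eq("status", "pending").execute()

# Write: updates carry an explicit filter (the equivalent of a WHERE clause).
supabase.table("orders").update({"status": "shipped"}).eq("id", 42).execute()
```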
Use context-mode tools (ctx_execute, ctx_execute_file) instead of Bash/cat when processing large outputs. Triggers: "analyze logs", "summarize output", "process data", "parse JSON", "filter results", "extract errors", "check build output", "analyze dependencies", "process API response", "large file analysis", "page snapshot", "browser snapshot", "DOM structure", "inspect page", "accessibility tree", "Playwright snapshot", "run tests", "test output", "coverage report", "git log", "recent commits", "diff between branches", "list containers", "pod status", "disk usage", "fetch docs", "API reference", "index documentation", "call API", "check response", "query results", "find TODOs", "count lines", "codebase statistics", "security audit", "outdated packages", "dependency tree", "cloud resources", "CI/CD output". Also triggers on ANY MCP tool output that may exceed 20 lines. Subagent routing is handled automatically via PreToolUse hook.
SendGrid platform help — transactional email via Email API (REST + SMTP), Marketing Campaigns (drag-and-drop editor, automations, A/B testing, signup forms, segmentation), Email Validation API, Dynamic Templates (Handlebars), Event Webhooks, Inbound Parse, domain authentication (SPF/DKIM/DMARC), dedicated IPs, suppressions, Design Library, email testing, statistics. Use when asking 'how do I do X in SendGrid', sending transactional email with SendGrid, setting up Marketing Campaigns, configuring Event Webhooks, managing SendGrid domain authentication, using Dynamic Templates, or troubleshooting SendGrid deliverability. Do NOT use for general email marketing strategy (use /sales-email-marketing), cross-platform email deliverability (use /sales-deliverability), or email open/click tracking strategy (use /sales-email-tracking).
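A minimal transactional-send sketch with the SendGrid Python library; sender, recipient, and content are placeholders, and the API key is read from the environment:

```python
import os
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

message = Mail(
    from_email="no-reply@example.com",      # placeholder, must be an authenticated sender
    to_emails="customer@example.com",
    subject="Your order has shipped",
    html_content="<p>Tracking number: 12345</p>",
)
sg = SendGridAPIClient(os.environ["SENDGRID_API_KEY"])
response = sg.send(message)
print(response.status_code)
```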
Conduct Exploratory Data Analysis (EDA) using descriptive statistics, visualizations, and data quality checks. Use this skill when the user has a dataset and needs to understand its structure, find patterns, detect anomalies, or prepare data for further analysis — even if they say 'what does this data look like', 'find interesting patterns', 'clean this data', or 'summarize this dataset'.
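A compact EDA pass of the kind described, assuming a placeholder CSV; structure, missingness, duplicates, distributions, and correlations in a few lines:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dataset.csv")  # placeholder file name

print(df.shape)
print(df.dtypes)
print(df.describe(include="all").T)                     # per-column summary statistics
print(df.isna().mean().sort_values(ascending=False))    # missingness by column
print(df.duplicated().sum(), "duplicate rows")

df.hist(figsize=(12, 8))                                # quick distribution check
plt.tight_layout()
plt.show()

print(df.corr(numeric_only=True).round(2))              # pairwise numeric correlations
```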