Found 1,230 Skills
MetricFire integration. Manage data, records, and automate workflows. Use when the user wants to interact with MetricFire data.
Bearer integration. Manage data, records, and automate workflows. Use when the user wants to interact with Bearer data.
Better Stack integration. Manage Incidents, Users, Teams. Use when the user wants to interact with Better Stack data.
NannyML integration. Manage data, records, and automate workflows. Use when the user wants to interact with NannyML data.
Validate, lint, audit, or fix PromQL queries and alerting rules, detecting anti-patterns along the way.
Check GPU usage on remote servers. Connects to servers via SSH and displays video memory usage, running processes, and associated containers for each GPU card. Use this when the user asks to check GPU, graphics card, or video memory usage.
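The remote query behind a check like this can be sketched as SSH plus `nvidia-smi` in CSV mode. This is a minimal illustration, not the skill's actual implementation; the `gpu_memory_usage` and `parse_gpu_csv` helper names and the host argument are assumptions:

```python
import subprocess

def gpu_memory_usage(host: str) -> list[dict]:
    """Query per-GPU memory usage on a remote host via SSH + nvidia-smi."""
    cmd = [
        "ssh", host,
        "nvidia-smi",
        "--query-gpu=index,memory.used,memory.total",
        "--format=csv,noheader,nounits",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return parse_gpu_csv(out)

def parse_gpu_csv(csv_text: str) -> list[dict]:
    """Parse nvidia-smi CSV rows into dicts (memory values are in MiB)."""
    gpus = []
    for line in csv_text.strip().splitlines():
        index, used, total = (field.strip() for field in line.split(","))
        gpus.append({"index": int(index), "used_mib": int(used), "total_mib": int(total)})
    return gpus
```

Mapping processes to containers would take an extra step (e.g. cross-referencing `nvidia-smi` PIDs against the container runtime), which is omitted here.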
Uptime Robot integration. Manage data, records, and automate workflows. Use when the user wants to interact with Uptime Robot data.
Audit competitors using ScaleBrick's 3-surface framework (social, web/pages, SEO). Categorizes their pricing, features, and landing pages. Identifies gaps you can exploit, positioning angles no one is claiming, and specific moves you can make this week.
Generate a time-windowed pulse report on what users experienced and how the product performed: usage, quality, errors, and signals worth investigating. Use when the user says 'run a pulse', 'show me the pulse', 'how are we doing', 'weekly recap', 'launch-day check', or passes a time window like '24h' or '7d'. Configured via .compound-engineering/config.local.yaml; saves reports to docs/pulse-reports/.
Execute and manage Athena SQL queries across default and federated catalogs (Glue, S3 Tables, Redshift). Triggers on phrases like: query data, run SQL, athena query, analyze table, SQL query, workgroup status, profile table, query Redshift catalog, query S3 Tables. Do NOT use for finding specific data assets (use finding-data-lake-assets), full catalog audits (use exploring-data-catalog), importing data (use ingesting-into-data-lake).
Data engineering skill for building scalable data pipelines, ETL/ELT systems, and data infrastructure. Expertise in Python, SQL, Spark, Airflow, dbt, Kafka, and modern data stack. Includes data modeling, pipeline orchestration, data quality, and DataOps. Use when designing data architectures, building data pipelines, optimizing data workflows, implementing data governance, or troubleshooting data issues.
Quick data freshness check. Use when the user asks if data is up to date, when a table was last updated, if data is stale, or needs to verify data currency before using it.