Trigger: Invoke when starting from scratch with extremely limited resources and you need to find the minimum viable entry point to build a stable base before scaling up. Common signals include bootstrap, MVP, pilot, first foothold, and small-team startup. Use this skill to establish a durable base, start small, and grow from a validated nucleus instead of scattering effort.
Expert-level guidance on the Databricks platform: Apache Spark, Delta Lake, MLflow, notebooks, and cluster management.
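To give a flavor of the workflows this skill covers, here is a minimal sketch combining a Delta table write with MLflow run logging on Databricks. The catalog/schema/table name, run name, and metric are illustrative placeholders, not part of any specific project.

```python
# Minimal sketch: Delta write plus MLflow logging on Databricks.
# Table name, run name, and metric value are hypothetical.
import mlflow
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-created in Databricks notebooks

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
df.write.format("delta").mode("overwrite").saveAsTable("main.demo.events")

with mlflow.start_run(run_name="demo"):
    mlflow.log_param("rows", df.count())
    mlflow.log_metric("quality_score", 0.98)  # placeholder metric
```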
Implement end-to-end Medallion Architecture (Bronze/Silver/Gold) lakehouse patterns in Microsoft Fabric using PySpark, Delta Lake, and Fabric Pipelines. Use when the user wants to: (1) design a Bronze/Silver/Gold data lakehouse, (2) set up multi-layer workspace with lakehouses for each tier, (3) build ingestion-to-analytics pipelines with data quality enforcement, (4) optimize Spark configurations per medallion layer, (5) orchestrate Bronze-to-Silver-to-Gold flows via notebooks. Triggers: "medallion architecture", "bronze silver gold", "lakehouse layers", "e2e data pipeline", "end-to-end lakehouse", "data lakehouse pattern", "multi-layer lakehouse", "build medallion", "setup medallion".
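As a sketch of the Bronze-to-Silver-to-Gold flow this skill implements, the PySpark snippet below runs all three layers in one place; a real Fabric setup would split them across notebooks and lakehouses, and the table names, source path, and quality rule are hypothetical.

```python
# Sketch of a Bronze -> Silver -> Gold medallion flow with Delta tables.
# All table names, the source path, and the quality rule are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw files as-is, adding ingestion metadata.
bronze = (spark.read.json("Files/raw/orders/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("bronze_orders")

# Silver: cleanse and deduplicate, enforcing a basic quality rule.
silver = (spark.table("bronze_orders")
          .filter(F.col("order_id").isNotNull())
          .dropDuplicates(["order_id"]))
silver.write.format("delta").mode("overwrite").saveAsTable("silver_orders")

# Gold: aggregate into an analytics-ready model.
gold = (spark.table("silver_orders")
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("lifetime_value")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold_customer_ltv")
```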
Transform raw data into analytical assets using ETL/ELT patterns, SQL (dbt), Python (pandas/polars/PySpark), and orchestration (Airflow). Use when building data pipelines, implementing incremental models, migrating from pandas to polars, or orchestrating multi-step transformations with testing and quality checks.
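One of the migrations this skill names, pandas to polars, can be sketched side by side; the data and column names are illustrative, and the polars code assumes a recent version (where the method is group_by).

```python
# Side-by-side sketch of the same aggregation in pandas and polars.
# DataFrame contents and column names are illustrative.
import pandas as pd
import polars as pl

data = {"region": ["eu", "eu", "us"], "revenue": [10.0, 20.0, 5.0]}

# pandas version: eager evaluation
pdf = pd.DataFrame(data)
pandas_out = pdf.groupby("region", as_index=False)["revenue"].sum()

# polars version: lazy evaluation lets the engine optimize the plan
ldf = pl.LazyFrame(data)
polars_out = (ldf.group_by("region")
                 .agg(pl.col("revenue").sum())
                 .collect())
```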
When the user wants to create UGC ad campaigns, recruit UGC creators, generate AI UGC content, or scale with user-generated content. Also use when the user mentions 'UGC,' 'user-generated content,' 'creator ads,' 'Spark Ads,' 'whitelisting,' 'AI UGC,' 'Arcads,' 'Creatify,' 'creator brief,' or 'UGC testing.' This skill covers the UGC growth framework from creator recruitment through AI-powered scaling.
Databricks SQL query optimizer: analyzes a slow SQL query, rewrites it for speed using SQL-level optimizations only, validates byte-for-byte result equivalence, and benchmarks both versions with statistical significance testing. Use this skill whenever the user wants to optimize, speed up, tune, or benchmark a SQL query on Databricks. Trigger on: "/databricks-sql-autotuner", "optimize this SQL", "make this query faster", "tune my Databricks query", "benchmark SQL on Databricks", "speed up this spark SQL", "SQL performance on Databricks", "EXPLAIN this query", "why is my query slow on Databricks", "SQL query optimization Databricks", or whenever a user pastes a SQL query and mentions performance, slowness, or runtime.
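A minimal sketch of the validate-then-benchmark loop this skill describes follows; the two queries are toy stand-ins (an IN-subquery versus a semi-join rewrite), and the equivalence check here compares multisets of rows rather than literal bytes.

```python
# Sketch: validate result equivalence, then time both query variants.
# The queries are toy examples; exceptAll + isEmpty needs PySpark 3.3+.
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

baseline = "SELECT t.id FROM range(1000000) AS t WHERE t.id IN (SELECT id FROM range(500))"
rewrite  = "SELECT t.id FROM range(1000000) AS t LEFT SEMI JOIN range(500) AS s ON t.id = s.id"

def run(sql: str) -> float:
    start = time.perf_counter()
    spark.sql(sql).collect()  # force full execution
    return time.perf_counter() - start

# Equivalence: both set differences must be empty (multiset equality,
# ignoring row order).
a, b = spark.sql(baseline), spark.sql(rewrite)
assert a.exceptAll(b).isEmpty() and b.exceptAll(a).isEmpty()

baseline_times = [run(baseline) for _ in range(5)]
rewrite_times  = [run(rewrite) for _ in range(5)]
print(sum(baseline_times) / 5, sum(rewrite_times) / 5)
```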
Develop Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables) on Databricks. Use when building batch or streaming data pipelines with Python or SQL. Invoke BEFORE starting implementation.
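For orientation, here is a minimal Python sketch of a declarative pipeline in the DLT style; the dlt module only resolves inside a pipeline run, and the source path, dataset names, and expectation rule are hypothetical.

```python
# Minimal declarative pipeline sketch (Lakeflow / former Delta Live Tables).
# "import dlt" only works inside a pipeline run; path and names are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events ingested as a streaming table")
def raw_events():
    return (spark.readStream.format("cloudFiles")       # Auto Loader ingestion
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/demo/landing/"))       # hypothetical path

@dlt.table(comment="Cleansed events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # drop rows failing the rule
def clean_events():
    return (dlt.read_stream("raw_events")
            .withColumn("processed_at", F.current_timestamp()))
```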
Data engineering skill for building scalable data pipelines, ETL/ELT systems, and data infrastructure. Expertise in Python, SQL, Spark, Airflow, dbt, Kafka, and modern data stack. Includes data modeling, pipeline orchestration, data quality, and DataOps. Use when designing data architectures, building data pipelines, optimizing data workflows, implementing data governance, or troubleshooting data issues.
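As a sketch of the orchestration side, the snippet below wires a three-step extract/transform/load DAG in Airflow; the dag_id, schedule, and task bodies are placeholders, and the schedule keyword assumes Airflow 2.4 or later.

```python
# Sketch of a small Airflow DAG wiring extract -> transform -> load.
# Task bodies are placeholders; dag_id and schedule are hypothetical.
# The "schedule" keyword assumes Airflow 2.4+.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    # placeholder: pull from a source system
    ...

def transform():  # placeholder: clean and model the data
    ...

def load():       # placeholder: write to the warehouse
    ...

with DAG(dag_id="etl_demo", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```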
Generate ASCII mini charts (sparkline, bar, simple line) for plain-text trend inspection, with minimal and annotated variants and normalization notes.
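A minimal sparkline of this kind can be sketched in a few lines: min-max normalize each value, then map it onto a ramp of Unicode block characters (commonly, if loosely, called ASCII sparklines). The sample input is arbitrary.

```python
# Minimal sparkline: map each value onto a block-character ramp
# after min-max normalization. Handles the flat-series edge case.
BLOCKS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:  # flat series: render mid-height bars
        return BLOCKS[3] * len(values)
    return "".join(BLOCKS[round((v - lo) / span * (len(BLOCKS) - 1))]
                   for v in values)

print(sparkline([1, 5, 22, 13, 5, 17, 9]))  # prints ▁▂█▅▂▆▄
```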
Guide users through the Amore CLI for macOS app distribution — setup, releasing, code signing, notarization, DMG creation, S3 hosting, Sparkle updates, licensing, and configuration. Use this skill whenever the user mentions Amore, amore CLI, macOS app distribution outside the App Store, Sparkle updater setup, appcast.xml, notarization workflows, DMG creation, or self-publishing macOS apps. Also use when the user asks about release automation, S3 bucket hosting for app updates, EdDSA signing keys, or licensing with Stripe for macOS apps.
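To illustrate the appcast side of this workflow, here is a sketch that emits one Sparkle appcast.xml item in Python; the version, URL, length, and signature are placeholders, a real feed wraps items in an rss channel that declares the sparkle XML namespace, and in practice the Amore CLI or Sparkle's own tooling generates and signs this for you.

```python
# Sketch: emit a minimal Sparkle appcast.xml <item>.
# Version, URL, length, and signature are placeholders; real feeds are
# generated and EdDSA-signed by the release tooling, and live inside an
# <rss> channel that declares the sparkle namespace.
from xml.sax.saxutils import escape

def appcast_item(version: str, url: str, ed_signature: str, length: int) -> str:
    return f"""  <item>
    <title>Version {escape(version)}</title>
    <enclosure url="{escape(url)}"
               sparkle:version="{escape(version)}"
               sparkle:edSignature="{escape(ed_signature)}"
               length="{length}"
               type="application/octet-stream"/>
  </item>"""

print(appcast_item("1.2.0", "https://example.com/MyApp-1.2.0.dmg",
                   "BASE64_SIGNATURE_PLACEHOLDER", 12345678))
```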
Migrate Databricks workloads from classic compute to serverless compute. Scans code for serverless compatibility issues, provides concrete fixes for the serverless Spark Connect architecture, and guides the full migration to serverless environments. Use for classic-to-serverless migrations, serverless code compatibility checks, or writing new serverless-compatible notebooks and jobs. Not for classic DBR version upgrades or cluster configuration changes within classic compute.
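One common class of fix this skill handles: serverless compute runs on the Spark Connect architecture, which does not expose SparkContext or the RDD API, so RDD-based code is rewritten against the DataFrame API. A minimal sketch with illustrative data:

```python
# Classic-compute pattern (fails on serverless / Spark Connect,
# which exposes no SparkContext or RDD API):
#   rdd = spark.sparkContext.parallelize([(1, "a"), (2, "b")])
#   df = rdd.toDF(["id", "label"])

# Serverless-compatible rewrite using the DataFrame API directly:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
df.show()
```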