Found 29 Skills
Creates dbt models following project conventions. Use when working with dbt models for: (1) Creating new models (any layer - discovers project's naming conventions first) (2) Task mentions "create", "build", "add", "write", "new", or "implement" with model, table, or SQL (3) Modifying existing model logic, columns, joins, or transformations (4) Implementing a model from schema.yml specs or expected output requirements Discovers project conventions before writing. Runs dbt build (not just compile) to verify.
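As context for the kind of model this skill generates, here is a minimal sketch of a dbt Python model (Python models are supported on adapters such as Snowflake, BigQuery, and Databricks). The model, column, and upstream ref names are placeholders, and the `.to_pandas()` call reflects the Snowpark adapter; other adapters expose a different dataframe type.

```python
# models/marts/fct_orders.py -- hypothetical model; names are illustrative only.
# dbt Python models use the model(dbt, session) entry point.

def model(dbt, session):
    # Declare materialization the same way a .sql model would via config().
    dbt.config(materialized="table")

    # dbt.ref() resolves an upstream model; "stg_orders" is an assumed name.
    # On the Snowflake/Snowpark adapter the returned DataFrame has .to_pandas();
    # other adapters return different dataframe types.
    orders = dbt.ref("stg_orders").to_pandas()

    # Example transformation: keep completed orders and derive a revenue column.
    completed = orders[orders["status"] == "completed"].copy()
    completed["revenue"] = completed["quantity"] * completed["unit_price"]
    return completed
```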
Structured data extraction from web pages using claude-in-chrome MCP with sequential-thinking planning. Focus on READ operations, data transformation, and pagination handling for multi-page extraction.
dbt (data build tool) patterns for model organization, incremental strategies, and testing.
Comprehensive toolkit for developing with the CocoIndex library. Use when users need to create data transformation pipelines (flows), write custom functions, or operate flows via CLI or API. Covers building ETL workflows for AI data processing, including embedding documents into vector databases, building knowledge graphs, creating search indexes, or processing data streams with incremental updates.
Convert between physical units (length, mass, temperature, time, etc.). Use for scientific calculations, data transformation, or unit standardization.
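As an illustration of the conversions this covers, here is a minimal Python sketch; the factors are the standard ones, but the helper names are invented for the example.

```python
# Minimal unit-conversion helpers; function names are illustrative.

def meters_to_feet(m: float) -> float:
    return m * 3.28084          # 1 m = 3.28084 ft

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32       # affine conversion, not a simple scale factor

def kg_to_pounds(kg: float) -> float:
    return kg * 2.20462         # 1 kg = 2.20462 lb

if __name__ == "__main__":
    print(meters_to_feet(10))          # 32.8084
    print(celsius_to_fahrenheit(37))   # 98.6
    print(kg_to_pounds(70))            # 154.3234
```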
Prefect Flow Builder - Auto-activating skill for Data Pipelines. Triggers on: prefect flow builder. Part of the Data Pipelines skill category.
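A minimal Prefect sketch of the flow/task structure this skill builds; the extract/transform/load breakdown and the names are illustrative assumptions, not part of the skill.

```python
# A small Prefect flow with three tasks; names and logic are placeholders.
from prefect import flow, task

@task
def extract() -> list[dict]:
    return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

@task
def transform(rows: list[dict]) -> list[dict]:
    # Double each value as a stand-in for a real transformation.
    return [{**r, "value": r["value"] * 2} for r in rows]

@task
def load(rows: list[dict]) -> None:
    print(f"loaded {len(rows)} rows")

@flow(name="example-etl")
def etl():
    load(transform(extract()))

if __name__ == "__main__":
    etl()
```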
JSON querying, filtering, and transformation with jq command-line tool. Use when working with JSON data, parsing JSON files, filtering JSON arrays/objects, or transforming JSON structures.
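A small sketch of driving the jq CLI from Python via subprocess; it assumes jq is installed on PATH, and both the sample document and the filter are made up for the example.

```python
# Run a jq filter over an in-memory JSON document; assumes jq is on PATH.
import json
import subprocess

doc = {"items": [{"name": "a", "active": True}, {"name": "b", "active": False}]}

# Select only active items and keep just their names:
#   .items[] | select(.active) | .name
result = subprocess.run(
    ["jq", "-r", ".items[] | select(.active) | .name"],
    input=json.dumps(doc),
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # -> a
```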
Create and manage Infrahub transforms. Use when building data transformations, config generation, or any workflow that converts Infrahub data into a different format (JSON, text, CSV, device configs) using Python or Jinja2 templates.
Automated data quality and transformation capabilities for Dataform/dbt/BigQuery pipelines. Processes data sourced from BigQuery or Cloud Storage (GCS), applying best practices for data ingestion, movement, schema mapping, and comprehensive data cleaning.
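As a rough illustration of pushing one cleaning step down to BigQuery, here is a sketch using the official Python client; the project, dataset, table, and column names are placeholders, and authentication is assumed to come from Application Default Credentials.

```python
# One deduplication/typing pass over a raw table, executed in BigQuery.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

sql = """
CREATE OR REPLACE TABLE `my_project.clean.orders` AS
SELECT
  CAST(order_id AS INT64)            AS order_id,
  TRIM(LOWER(customer_email))        AS customer_email,
  SAFE_CAST(order_total AS NUMERIC)  AS order_total
FROM `my_project.raw.orders`
WHERE order_id IS NOT NULL
QUALIFY ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY updated_at DESC) = 1
"""

client.query(sql).result()  # blocks until the job finishes
```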
Guide for creating Nushell plugins in Rust using nu_plugin and nu_protocol crates. Use when users want to build custom Nushell commands, extend Nushell with new functionality, create data transformations, or integrate external tools/APIs into Nushell. Covers project setup, command implementation, streaming data, custom values, and testing.
Coaches users to transform messy data into clean, analysis-ready formats using Power Query UI. Diagnoses data problems, visualizes goals, and guides step-by-step transformations.
Ingest and transform data files (CSV/JSON/Parquet/Arrow IPC) into Elasticsearch with stream processing, custom transforms, and cross-version reindexing. Use when loading files, batch importing data, or migrating indices across versions — not for general ingest pipeline design or bulk API patterns.
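A minimal sketch of streaming a CSV file into Elasticsearch with the official Python client's bulk helper; the host, index name, and file layout are assumptions rather than part of the skill.

```python
# Stream a CSV into Elasticsearch row by row using helpers.bulk.
import csv

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # placeholder host

def actions(path: str, index: str):
    # Yield one bulk action per CSV row so the file is streamed,
    # not loaded into memory all at once.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"_index": index, "_source": row}

# helpers.bulk consumes the generator and batches requests for us.
ok, errors = helpers.bulk(es, actions("events.csv", "events-v1"), raise_on_error=False)
print(f"indexed {ok} docs, {len(errors)} errors")
```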