Found 946 Skills
Query blockchain data via Allium APIs. Supports API-key auth, x402 micropayments, and Tempo auth. Covers prices, wallets, tokens, and SQL analytics.
factory_boy test data generation specialist. Covers Factory, DjangoModelFactory, SQLAlchemyModelFactory, all field declarations (Faker, LazyAttribute, Sequence, SubFactory, RelatedFactory, post_generation, Trait, Maybe, Dict, List), batch creation, pytest integration, and Celery task testing patterns. USE WHEN: user mentions "factory_boy", "test factory", "DjangoModelFactory", "SQLAlchemyModelFactory", or asks about "test data generation", "factory traits", "SubFactory", "factory fixtures". DO NOT USE FOR: pytest internals - use `pytest`; Django setup - use `pytest-django`; Hypothesis property testing - use `pytest` with Hypothesis.
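For orientation, a minimal sketch of the declarations this entry lists. It uses a stand-in `dict` model rather than a real ORM class, so the names here are illustrative, not the skill's own fixtures:

```python
import factory


class TeamFactory(factory.Factory):
    class Meta:
        model = dict  # stand-in; a real project would point at an ORM model

    name = factory.Sequence(lambda n: f"team-{n}")


class UserFactory(factory.Factory):
    class Meta:
        model = dict

    username = factory.Faker("user_name")
    # LazyAttribute sees already-resolved sibling fields.
    email = factory.LazyAttribute(lambda o: f"{o.username}@example.com")
    team = factory.SubFactory(TeamFactory)

    class Params:
        # A Trait toggles a bundle of overrides on demand.
        admin = factory.Trait(is_staff=True)


users = UserFactory.build_batch(3)     # batch creation
admin = UserFactory.build(admin=True)  # Trait applied: is_staff=True
```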
Use when writing SQL queries, building analytics dashboards, tracking metrics, designing data pipelines, or analyzing user behavior and product usage.
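As a concrete illustration of the kind of metric query this covers, here is a daily-active-users query against an in-memory SQLite `events` table; the table, columns, and data are invented for the example:

```python
import sqlite3

# Hypothetical events table for illustration; real pipelines differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id INTEGER, event_name TEXT, occurred_at TEXT);
INSERT INTO events VALUES
  (1, 'login', '2024-05-01'), (2, 'login', '2024-05-01'),
  (1, 'login', '2024-05-02');
""")

# Daily active users: a common dashboard metric.
dau = conn.execute("""
    SELECT occurred_at AS day, COUNT(DISTINCT user_id) AS dau
    FROM events
    GROUP BY occurred_at
    ORDER BY day
""").fetchall()
print(dau)  # [('2024-05-01', 2), ('2024-05-02', 1)]
```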
Salesforce Data Cloud Retrieve phase. TRIGGER when: user runs Data Cloud SQL, describe, async queries, vector search, search-index workflows, or metadata introspection for Data Cloud objects. DO NOT TRIGGER when: the task is standard CRM SOQL (use sf-soql), segment creation or calculated insight design (use sf-datacloud-segment), or STDM/session tracing/parquet analysis (use sf-ai-agentforce-observability).
Salesforce Data Cloud Segment phase. TRIGGER when: user creates or publishes segments, manages calculated insights, inspects segment counts or membership, or troubleshoots audience SQL in Data Cloud. DO NOT TRIGGER when: the task is DMO/mapping/identity-resolution work (use sf-datacloud-harmonize), activation work (use sf-datacloud-act), query/search-index work (use sf-datacloud-retrieve), or STDM/session tracing (use sf-ai-agentforce-observability).
Toolkit-first AIClient patterns for generation, text-to-sql, and response parsing.
Use when user requests involve dataset queries, SQL creation, or BFF development for the Lovrabet/Yuntoo platform. Trigger words: dataset, data table, custom SQL, filter, sql.execute, bff.execute, get_dataset_detail, validate_sql_content, save_or_update_custom_sql, save_or_update_bff_script, @lovrabet/sdk, MCP SQL workflow, multi-table association, lovrabet development.
Expert-level guidance on the Snowflake data warehouse platform: virtual warehouses, data sharing, streams, tasks, and SQL optimization.
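A hedged sketch of the streams-plus-tasks pattern, using the snowflake-connector-python driver; the connection parameters and table names (`raw_orders`, `clean_orders`) are placeholders, not anything this skill prescribes:

```python
import snowflake.connector

# Placeholder credentials; the streams/tasks SQL is standard Snowflake syntax.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="DEMO", schema="PUBLIC",
)
cur = conn.cursor()

# A stream captures row-level changes on the source table...
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders")

# ...and a scheduled task drains the stream into a clean table.
cur.execute("""
    CREATE OR REPLACE TASK load_orders
      WAREHOUSE = ANALYTICS_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      INSERT INTO clean_orders SELECT * FROM orders_stream
""")
cur.execute("ALTER TASK load_orders RESUME")  # tasks are created suspended
```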
Manage daily check-in records stored in a local SQLite database: add check-ins, view records, run statistical analysis, query consecutive check-in streaks, and delete or modify records. Use this skill proactively when users mention checking in, signing in, or recording daily habits such as exercise, reading, learning, fitness, meditation, running, or cycling, or want to know how many days they have kept up a habit, how much time they spent exercising this week, or what they checked in today. It applies even when the user does not explicitly say "check-in", as long as the request involves daily habit tracking or activity logging. English phrasings also trigger it: check in, log my workout, track my reading, how many days in a row, streak, habits.
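A minimal sketch of how such a skill might store and query check-ins; the `checkins` schema and the `running` habit are illustrative assumptions, not the skill's actual layout:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect("checkins.db")
conn.execute("""CREATE TABLE IF NOT EXISTS checkins (
    habit   TEXT NOT NULL,
    day     TEXT NOT NULL,        -- ISO date, e.g. '2024-05-01'
    minutes INTEGER DEFAULT 0,
    UNIQUE (habit, day)           -- at most one check-in per habit per day
)""")

# Add today's check-in; the UNIQUE constraint makes repeats a no-op.
conn.execute("INSERT OR IGNORE INTO checkins VALUES ('running', date('now'), 30)")
conn.commit()

# Current streak: count back from today while consecutive days exist.
days = {row[0] for row in conn.execute(
    "SELECT day FROM checkins WHERE habit = 'running'")}
streak, d = 0, date.today()
while d.isoformat() in days:
    streak, d = streak + 1, d - timedelta(days=1)
print(streak)
```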
Use this skill for any PostgreSQL database work — table design, indexing, data types, constraints, extensions (pgvector, PostGIS, TimescaleDB), search, and migrations.
**Trigger when user asks to:**
- Design or modify PostgreSQL tables, schemas, or data models
- Choose data types, constraints, indexes, or partitioning strategies
- Work with pgvector embeddings, semantic search, or RAG
- Set up full-text search, hybrid search, or BM25 ranking
- Use PostGIS for spatial/geographic data
- Set up TimescaleDB hypertables for time-series data
- Migrate tables to hypertables or evaluate migration candidates
**Keywords:** PostgreSQL, Postgres, SQL, schema, table design, indexes, constraints, pgvector, PostGIS, TimescaleDB, hypertable, semantic search, hybrid search, BM25, time-series
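For example, the pgvector side of this work reduces to a table with a `vector` column and a distance-ordered query. The sketch below uses psycopg 3 with placeholder connection details and a toy 3-dimensional embedding, and assumes the pgvector extension can be installed in the target database:

```python
import psycopg  # psycopg 3; connection string is a placeholder

with psycopg.connect("dbname=demo user=postgres") as conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            id bigserial PRIMARY KEY,
            body text,
            embedding vector(3)   -- toy dimension; real embeddings are larger
        )
    """)
    cur.execute(
        "INSERT INTO docs (body, embedding) VALUES (%s, %s::vector)",
        ("hello", "[0.1, 0.2, 0.3]"),
    )
    # Nearest neighbours by L2 distance; <-> is pgvector's distance operator.
    cur.execute(
        "SELECT body FROM docs ORDER BY embedding <-> %s::vector LIMIT 5",
        ("[0.1, 0.2, 0.25]",),
    )
    print(cur.fetchall())
```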
Diagnose, compare, and optimize Apache Spark applications and SQL queries using Spark History Server data. Use this skill whenever the user wants to understand why a Spark app is slow, compare two benchmark runs or TPC-DS results, find performance bottlenecks (skew, GC pressure, shuffle spill, straggler tasks), get tuning recommendations, or optimize Spark/Gluten configurations. Also trigger when the user mentions 'diagnose', 'compare runs', 'why is this query slow', 'tune my Spark job', 'benchmark comparison', 'performance regression', or asks about executor skew, shuffle overhead, AQE effectiveness, or Gluten offloading issues.
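Tuning recommendations from this kind of diagnosis usually land as Spark SQL configuration changes. The PySpark sketch below shows the AQE and skew-join knobs such an analysis often touches; the values are illustrative, not recommendations for any specific job:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("tuned-job")
    # Adaptive Query Execution: re-plans at runtime using shuffle statistics.
    .config("spark.sql.adaptive.enabled", "true")
    # Splits oversized partitions in joins to tame stragglers caused by skew.
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    # Coalesces tiny post-shuffle partitions to cut scheduling overhead.
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Baseline shuffle parallelism; AQE adjusts downward from here.
    .config("spark.sql.shuffle.partitions", "400")
    .getOrCreate()
)
```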
A skill for querying data by executing SQL directly against the development DB. Triggered by the keywords "check the DB", "query data", "query a table", "run SQL", and "validate data". Used when an AI agent needs to inspect development-DB data immediately during backend/frontend development.