Found 1,529 Skills
Design/review HTTP APIs for FastAPI, Express, NestJS: REST, OpenAPI, pagination, OAuth/JWT. Triggers: 'fastapi', 'express', 'nestjs', 'openapi', 'pagination', 'idempotency'. Not for schemas (use databases).
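A minimal sketch of the limit/offset pagination pattern this entry names, assuming a FastAPI app; the `/items` route, the in-memory data, and the parameter names are illustrative, not taken from the skill itself:

```python
# Minimal sketch of offset/limit pagination in FastAPI (hypothetical endpoint and data).
from fastapi import FastAPI, Query

app = FastAPI()

# Stand-in data store; a real service would query a database.
ITEMS = [{"id": i, "name": f"item-{i}"} for i in range(100)]

@app.get("/items")
def list_items(
    limit: int = Query(20, ge=1, le=100),   # page size, capped to keep responses bounded
    offset: int = Query(0, ge=0),           # number of records to skip
):
    page = ITEMS[offset : offset + limit]
    # Returning total + paging metadata lets clients build "next" links.
    return {"total": len(ITEMS), "limit": limit, "offset": offset, "items": page}
```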
Run the Upstash CLI (`upstash`) against the Upstash Developer API for Redis, Vector, Search, QStash, and teams. Use when listing or managing databases, backups, vector/search indexes, QStash instances, team members, stats, or any non-interactive Upstash automation with JSON output and terminal commands.
trendHERO platform help — Instagram influencer analytics (95M+ profiles, 20+ filters), Account Quality Score (AQS 1-100, fake follower detection), Audience Analysis, Daily Tracking, Ads Database (10M+ posts), Audience Overlap, REST API (Bearer auth, webhooks). Covers discovery search, AQS interpretation, audience vetting, tracking setup, ads database research, API integration, and pricing (Free/Lite/Pro/Advanced). Use when you suspect an influencer has fake followers, trendHERO search results aren't matching your niche, you need to monitor an influencer's metrics over time, you want to see which influencers your competitors are using, the trendHERO API isn't working as expected, you're unsure which trendHERO plan fits your budget, or you're deciding between trendHERO, HypeAuditor, and Heepsy. Do NOT use for influencer strategy across platforms (use /sales-influencer-marketing), TikTok marketing (use /sales-tiktok-marketing), gaming influencer marketing (use /sales-gaming-marketing), or ad campaigns (use /sales-retargeting).
Execute KQL management commands (table management, ingestion, policies, functions, materialized views) against Fabric Eventhouse and KQL Databases via CLI. Use when the user wants to:
1. Create or alter KQL tables, columns, or functions
2. Ingest data into an Eventhouse (inline, from storage, streaming)
3. Configure retention, caching, or partitioning policies
4. Create or manage materialized views and update policies
5. Manage data mappings for ingestion pipelines
6. Deploy KQL schema via scripts
Triggers: "create kql table", "kql ingestion", "ingest into eventhouse", "kql function", "materialized view", "kql retention policy", "eventhouse schema", "kql authoring", "create eventhouse table", "kql mapping"
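For orientation, a minimal sketch of issuing such management commands programmatically, here via the `azure-kusto-data` Python client rather than the CLI the skill wraps; the cluster URI, database, and table names are placeholders:

```python
# Sketch: run KQL control commands (table creation, retention policy) against a hypothetical cluster.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://example.kusto.fabric.microsoft.com"   # placeholder Eventhouse query URI
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)
database = "SampleDB"                                     # placeholder KQL database name

# Create a table, then set a 30-day retention policy on it.
client.execute_mgmt(database, ".create table Logs (Timestamp: datetime, Level: string, Message: string)")
client.execute_mgmt(database, ".alter-merge table Logs policy retention softdelete = 30d")
```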
Automated cost estimation from BIM models using DDC CWICR database with 55,719 work items. AI classification + vector search for accurate pricing.
Finds and inspects data assets within Google Cloud. Relevant when any of the following conditions are true:
1. The user request involves finding, exploring, or inspecting data assets in Google Cloud, such as:
   - BigQuery datasets, tables, or views
   - BigLake catalog or tables
   - Spanner instances, databases, or tables
   - etc.
2. You need to retrieve the schema, metadata, or governance policies for a GCP data asset.
3. You have a keyword or topic (e.g., "sales data") but lack the specific table or resource ID.
4. You are attempting to find data using `bq ls`, as this skill offers a superior approach.
Don't use when:
- Assets are outside Google Cloud
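A minimal sketch of the keyword-driven discovery flow this entry describes, using the `google-cloud-bigquery` client; the project ID and keyword are placeholders:

```python
# Sketch: find tables whose IDs mention a keyword, then inspect each match's schema.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # placeholder project ID
keyword = "sales"                                # topic to search for

for dataset in client.list_datasets():
    for table in client.list_tables(dataset.dataset_id):
        if keyword in table.table_id.lower():
            full_id = f"{table.project}.{table.dataset_id}.{table.table_id}"
            schema = client.get_table(full_id).schema
            print(full_id, [(field.name, field.field_type) for field in schema])
```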
**STOP AND VERIFY**: Before running any command or tool that results in irreversible data loss, you MUST obtain explicit user consent. When in doubt, ask. It is better to wait for confirmation than to accidentally delete production data or critical project assets. Use this for:
- SQL: DROP TABLE/VIEW/SCHEMA/DATABASE, TRUNCATE, or broad DELETE (missing WHERE or using 1=1).
- Cloud Storage: gsutil rm or gcloud storage rm targeting production data or critical buckets.
- Infrastructure: gcloud projects delete, deleting Spanner/BigQuery/Dataproc resources, deleting secrets, or KMS key destruction.
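To make the rule concrete, a hypothetical confirmation gate for SQL statements; the pattern list and helper names are illustrative only and not part of any named tool:

```python
# Hypothetical guard: refuse to run a SQL statement that matches destructive patterns
# unless the user has explicitly confirmed.
import re

DESTRUCTIVE = [
    r"\bDROP\s+(TABLE|VIEW|SCHEMA|DATABASE)\b",
    r"\bTRUNCATE\b",
    r"\bDELETE\b(?!.*\bWHERE\b)",          # DELETE with no WHERE clause
    r"\bDELETE\b.*\bWHERE\s+1\s*=\s*1\b",  # DELETE ... WHERE 1=1
]

def requires_confirmation(sql: str) -> bool:
    return any(re.search(p, sql, re.IGNORECASE | re.DOTALL) for p in DESTRUCTIVE)

def run_sql(sql: str, execute, user_confirmed: bool = False):
    if requires_confirmation(sql) and not user_confirmed:
        raise RuntimeError("Destructive statement detected; ask the user for explicit consent first.")
    return execute(sql)
```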
Create, implement, deploy, and debug Adobe Runtime actions with consistent layout, validation, and error handling. Use this skill whenever the user needs to add actions to an App Builder project, understand action structure (params, response format, web/raw actions), configure actions in the manifest, use App Builder SDKs (State, Files, Events, database), deploy and invoke actions via CLI, debug action issues, or implement patterns such as webhook receivers, custom event providers, journaling consumers, large payload redirects, action sequence pipelines, and Asset Compute workers. Also trigger when users mention serverless functions in an Adobe context, action logging, IMS authentication for actions, or cron-style scheduled actions.
Debug production render failures in telecine. Inspect render state in the database, Valkey queues, and Cloud Run logs. Restart failed renders, trace the render pipeline flow, and diagnose fragment-level failures.
Creates Taubyte resources non-interactively via `tau new` for domain, website, library, function, application, database, storage, messaging, and service. Encodes the project-vs-application scope rule, the database `min < max` constraint, the website/library `--generate-repository` + import sequence, and the forbidden `--generated-fqdn-prefix` flag. Use when adding any resource to a Taubyte project's config repo.
Programmatic security management in Neo4j — RBAC/ABAC, user lifecycle (CREATE/ALTER/DROP USER), role lifecycle (CREATE/GRANT ROLE/DROP ROLE), privilege grants and denies (GRANT/DENY/REVOKE on graph, database, DBMS), property-level access control, sub-graph access control, SHOW PRIVILEGES inspection, and auth provider config reference (LDAP, OIDC/SSO). Use when an agent needs to manage users, roles, or privileges programmatically via Cypher on the system database. Does NOT handle Cypher query writing — use neo4j-cypher-skill. Does NOT handle cluster ops or backups — use neo4j-cli-tools-skill. Property-level security and ABAC require Enterprise Edition.
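These user/role/privilege operations are plain Cypher run against the `system` database; a minimal sketch with the official Neo4j Python driver, where the URI, credentials, and the `alice`/`analyst` names are placeholders:

```python
# Sketch: create a user, create a role, grant a read privilege, and assign the role.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))  # placeholders

admin_commands = [
    "CREATE USER alice IF NOT EXISTS SET PASSWORD 'changeMe1!' CHANGE REQUIRED",
    "CREATE ROLE analyst IF NOT EXISTS",
    "GRANT MATCH {*} ON GRAPH * NODES * TO analyst",   # read access to all node properties
    "GRANT ROLE analyst TO alice",
]

with driver.session(database="system") as session:     # security commands target the system database
    for command in admin_commands:
        session.run(command).consume()
driver.close()
```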
Import structured data into Neo4j — LOAD CSV, CALL IN TRANSACTIONS, neo4j-admin database import full (offline bulk), apoc.load.csv/json, apoc.periodic.iterate, driver batch writes. Covers method selection, header file format, type coercion, null handling, ON ERROR modes, CONCURRENT TRANSACTIONS, pre-import constraint setup, and post-import validation. Use when importing CSV/JSON/Parquet files, migrating relational data to graph, or bulk-loading large datasets. Does NOT handle unstructured document/PDF/vector chunking pipelines — use neo4j-document-import-skill. Does NOT handle live app write patterns (MERGE/CREATE) — use neo4j-cypher-skill. Does NOT handle neo4j-admin backup/restore/config — use neo4j-cli-tools-skill.
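A minimal sketch of the LOAD CSV + CALL { ... } IN TRANSACTIONS path with a pre-import uniqueness constraint, again via the Python driver; the file name, label, and property names are placeholders:

```python
# Sketch: create a uniqueness constraint, then batch-load a CSV with CALL { ... } IN TRANSACTIONS.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))  # placeholders

constraint = (
    "CREATE CONSTRAINT person_id IF NOT EXISTS "
    "FOR (p:Person) REQUIRE p.id IS UNIQUE"
)

load_csv = """
LOAD CSV WITH HEADERS FROM 'file:///people.csv' AS row
CALL {
    WITH row
    MERGE (p:Person {id: toInteger(row.id)})   // coerce the CSV string to an integer
    SET p.name = row.name
} IN TRANSACTIONS OF 1000 ROWS
"""

with driver.session(database="neo4j") as session:
    session.run(constraint).consume()
    # CALL ... IN TRANSACTIONS needs an implicit (auto-commit) transaction, hence session.run here.
    session.run(load_csv).consume()
driver.close()
```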