Found 94 Skills
Explore and query data on S3, Cloudflare R2, GCS, MinIO, or any S3-compatible storage. Use when the user mentions an s3://, r2://, gs://, or gcs:// URL, asks "what's in this bucket", wants to list remote files, preview remote Parquet/CSV/JSON, or query data on object storage without downloading it. Also triggers when the user wants to know the size, schema, or row count of remote datasets.
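A minimal sketch of this kind of no-download exploration, using DuckDB's httpfs extension; the bucket, path, and region are placeholders:

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")  # lets DuckDB read s3:// URLs directly;
con.execute("LOAD httpfs")     # R2/MinIO work via an s3_endpoint override
con.execute("SET s3_region = 'us-east-1'")

# Schema and row count come from Parquet metadata, so no full download.
path = "s3://my-bucket/events/2024/*.parquet"  # hypothetical dataset
print(con.sql(f"DESCRIBE SELECT * FROM '{path}'"))
print(con.sql(f"SELECT count(*) FROM '{path}'"))
```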
Execute and manage Athena SQL queries across default and federated catalogs (Glue, S3 Tables, Redshift). Triggers on phrases like: query data, run SQL, athena query, analyze table, SQL query, workgroup status, profile table, query Redshift catalog, query S3 Tables. Do NOT use for finding specific data assets (use finding-data-lake-assets), full catalog audits (use exploring-data-catalog), importing data (use ingesting-into-data-lake).
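As a hedged sketch of the underlying flow (database, workgroup, and query are placeholders), Athena work through boto3 is submit, poll, then fetch:

```python
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT status, count(*) AS n FROM events GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},  # placeholder database
    WorkGroup="primary",
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```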
Import data into the AWS data lake from S3 files, local uploads, JDBC databases (Oracle, SQL Server, PostgreSQL, MySQL, RDS, Aurora), Amazon Redshift, Snowflake, BigQuery, DynamoDB, or existing Glue catalog tables (migration). Default target is S3 Tables; standard Iceberg on a general purpose bucket is supported where S3 Tables is not adopted. Handles one-time loads, recurring pipelines, migrations. Triggers on: import data, load data, ingest, sync database, migrate table, move data to AWS, set up pipeline, ETL, pull from Snowflake, query BigQuery into S3, export DynamoDB, CTAS, convert to Iceberg. Do NOT use for setting up or troubleshooting Glue connections (use connecting-to-data-source), creating empty tables (use creating-data-lake-table), running queries (use querying-data-lake), finding tables by fuzzy name (use finding-data-lake-assets), catalog audit (use exploring-data-catalog), or SaaS platforms like Salesforce, ServiceNow, SAP, MongoDB, Kafka.
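One of the listed triggers, CTAS conversion to Iceberg, reduces to a single Athena statement; a sketch with placeholder database, table, and location names:

```python
import boto3

athena = boto3.client("athena")

# Convert a CSV-backed Glue table to an Iceberg table via CTAS.
ctas = """
CREATE TABLE analytics.orders_iceberg
WITH (table_type = 'ICEBERG',
      is_external = false,
      location = 's3://my-lake-bucket/orders_iceberg/')
AS SELECT * FROM raw.orders_csv
"""
athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "analytics"},
    WorkGroup="primary",
)
```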
Exports Amazon RDS or Aurora database snapshots to Amazon S3 in Apache Parquet format for analytics, backup, or data migration. Handles snapshot selection or creation, IAM role setup, KMS encryption, S3 bucket preparation, export task execution, progress monitoring, and data verification. Use when exporting RDS/Aurora data to S3 for Athena, Glue, or Redshift Spectrum consumption.
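The core of that workflow is a single RDS API call plus monitoring; a sketch where every ARN, bucket, and identifier is a placeholder:

```python
import boto3

rds = boto3.client("rds")

task_id = "orders-export-2024-06-01"
rds.start_export_task(
    ExportTaskIdentifier=task_id,
    SourceArn="arn:aws:rds:us-east-1:123456789012:snapshot:orders-snap",
    S3BucketName="my-export-bucket",
    S3Prefix="rds-exports/orders",
    IamRoleArn="arn:aws:iam::123456789012:role/rds-s3-export-role",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/example-key-id",  # KMS key is required
)

# The export lands as Parquet under the prefix; poll for completion.
for t in rds.describe_export_tasks(ExportTaskIdentifier=task_id)["ExportTasks"]:
    print(t["Status"], t.get("PercentProgress"))
```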
Configures EC2 instances to securely call AWS services by creating and attaching IAM roles via instance profiles, eliminating hardcoded credentials. Use when an EC2 instance needs permissions to access AWS services like S3, DynamoDB, SQS, or CloudWatch through temporary credentials.
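A sketch of the role-to-instance wiring with boto3; role, profile, policy, and instance ID are placeholders, and IAM changes may take a few seconds to propagate before the association succeeds:

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy allowing EC2 to assume the role for temporary credentials.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-server-role",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(RoleName="app-server-role",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
iam.create_instance_profile(InstanceProfileName="app-server-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-server-profile",
                                 RoleName="app-server-role")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-server-profile"},
    InstanceId="i-0123456789abcdef0",
)
```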
Scaffolds new Terraform modules with standardized structure including main.tf, variables.tf, outputs.tf, versions.tf, and README.md. This skill should be used when users want to create a new Terraform module, set up module structure, or need templates for common infrastructure patterns like VPC, ECS, S3, or RDS modules.
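A minimal scaffolder in that spirit; file contents are intentionally skeletal, and the provider pin is an assumption:

```python
from pathlib import Path

FILES = {
    "main.tf": "# Module resources\n",
    "variables.tf": "# Input variables\n",
    "outputs.tf": "# Output values\n",
    "versions.tf": (
        "terraform {\n"
        '  required_version = ">= 1.5"\n'
        "  required_providers {\n"
        '    aws = { source = "hashicorp/aws", version = "~> 5.0" }\n'
        "  }\n"
        "}\n"
    ),
    "README.md": "# module-name\n\nDocument usage, inputs, and outputs here.\n",
}

def scaffold(module_dir: str) -> None:
    """Create the standard module layout under module_dir."""
    root = Path(module_dir)
    root.mkdir(parents=True, exist_ok=True)
    for name, body in FILES.items():
        (root / name).write_text(body)

scaffold("modules/vpc")  # hypothetical target directory
```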
Search data using vector similarity, full-text keywords, or hybrid methods with Reciprocal Rank Fusion (RRF). Use when setting up embeddings for search, configuring full-text indexing, writing vector_search/text_search/rrf SQL queries, using the /v1/search HTTP API, or configuring vector engines like S3 Vectors.
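RRF itself is a small algorithm: each document scores the sum of 1/(k + rank) across the input rankings, with k = 60 as the conventional constant. A pure-Python sketch with hypothetical document IDs:

```python
from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Fuse ranked ID lists; a higher fused score ranks better."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: -kv[1])

vector_hits = ["d3", "d1", "d7"]   # vector-similarity order (hypothetical)
keyword_hits = ["d1", "d9", "d3"]  # full-text order (hypothetical)
print(rrf([vector_hits, keyword_hits]))  # d1 and d3 rise to the top
```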
Manage CloudSync Master WordPress plugin via WP-CLI. This skill should be used when configuring, testing, monitoring, or operating CloudSync Master (media offloading to cloud storage). Covers account setup for S3/R2/GCS/DigitalOcean/Backblaze/Wasabi/Vultr/Linode, plugin settings, queue management, object management, license activation, migration from competitors, and wp-config.php configuration export. Use when the user mentions CloudSync Master, cloud media offloading, wp cloudsync commands, or needs to set up WordPress media on cloud storage.
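Since this is all driven through WP-CLI, a thin wrapper is enough to script it; nothing beyond the `cloudsync` command namespace is assumed here, since `wp help <command>` lists the plugin's real subcommands:

```python
import subprocess

def wp(*args: str) -> str:
    """Run a WP-CLI command and return its stdout."""
    result = subprocess.run(["wp", *args], capture_output=True,
                            text=True, check=True)
    return result.stdout

# Standard WP-CLI help lookup; use it to discover the actual
# subcommand tree before scripting specific operations.
print(wp("help", "cloudsync"))
```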
Generate or create Loki configs: ingester, querier, compactor, ruler, and S3/GCS/Azure storage backends.
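A sketch of generating a minimal single-binary config with an S3 backend; the values are illustrative, and real deployments need tuning (retention, limits, ring topology):

```python
import yaml  # PyYAML

config = {
    "auth_enabled": False,
    "server": {"http_listen_port": 3100},
    "common": {
        "path_prefix": "/loki",
        "replication_factor": 1,
        "ring": {"kvstore": {"store": "inmemory"}},
    },
    "schema_config": {
        "configs": [{
            "from": "2024-01-01",
            "store": "tsdb",
            "object_store": "s3",
            "schema": "v13",
            "index": {"prefix": "index_", "period": "24h"},
        }]
    },
    "storage_config": {
        "aws": {"s3": "s3://us-east-1/my-loki-chunks"},  # placeholder bucket
    },
}
print(yaml.safe_dump(config, sort_keys=False))
```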
Manages MongoDB Atlas Stream Processing (ASP) workflows. Handles workspace provisioning, data source/sink connections, processor lifecycle operations, debugging diagnostics, and tier sizing. Supports Kafka, Atlas clusters, S3, HTTPS, and Lambda integrations for streaming data workloads and event processing. NOT for general MongoDB queries or Atlas cluster management. Requires MongoDB MCP Server with Atlas API credentials.
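A stream processor is defined as an aggregation pipeline that starts with a $source stage and ends in a sink stage such as $merge; a sketch where the connection names are placeholders that must already exist in the instance's connection registry:

```python
# Read from a Kafka connection, filter, and write into an Atlas collection.
processor_pipeline = [
    {"$source": {"connectionName": "kafka_prod", "topic": "orders"}},
    {"$match": {"status": "completed"}},
    {"$merge": {"into": {
        "connectionName": "atlas_cluster",
        "db": "sales",
        "coll": "completed_orders",
    }}},
]
# Typically registered from mongosh with:
#   sp.createStreamProcessor("completed_orders", processor_pipeline)
```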
Build and deploy new Goldsky Turbo pipelines from scratch. Triggers on: 'build a pipeline', 'index X on Y chain', 'set up a pipeline', 'track transfers to postgres', or any request describing data to move from a chain/contract to a destination (postgres, clickhouse, kafka, s3, webhook). Covers the full workflow: requirements → dataset selection → YAML generation → validation → deploy. Not for debugging (use /turbo-doctor) or syntax lookups (use /turbo-pipelines).
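For orientation only, the general shape a pipeline definition takes (source dataset, optional SQL transform, sink); the field names here are illustrative, not the authoritative Turbo schema, so generate real YAML through the validated workflow above:

```python
import yaml

pipeline = {
    "name": "erc20-transfers-to-postgres",  # placeholder pipeline name
    "sources": {
        "transfers": {"dataset_name": "ethereum.erc20_transfers"},  # placeholder dataset
    },
    "transforms": {
        "big_transfers": {"sql": "SELECT * FROM transfers WHERE value > 1000000"},
    },
    "sinks": {
        "pg": {"type": "postgres", "from": "big_transfers", "table": "big_transfers"},
    },
}
print(yaml.safe_dump(pipeline, sort_keys=False))
```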
Lobstr.io platform help — no-code web scraping platform with 50+ ready-made scrapers for Google Maps, LinkedIn Sales Navigator, Twitter, YouTube, and more. Features cookie-based login sync, scheduled automation, multi-threading, and a full API with Python SDK and MCP Server. Use when configuring a Lobstr scraper, exporting data to Google Sheets or S3, setting up scheduled scraping, working with the Lobstr API or Python SDK, or managing credits. Do NOT use for general prospect list strategy (use /sales-prospect-list), cross-platform enrichment strategy (use /sales-enrich), or integration strategy (use /sales-integration).
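A hedged sketch of calling the API from Python with plain requests; the base URL, endpoint path, auth header format, and response shape are all assumptions to verify against the Lobstr API docs:

```python
import requests

API = "https://api.lobstr.io/v1"                   # assumed base URL
headers = {"Authorization": "Token YOUR_API_KEY"}  # placeholder key

resp = requests.get(f"{API}/squids", headers=headers, timeout=30)  # assumed endpoint
resp.raise_for_status()
print(resp.json())
```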