Found 62 Skills
Fathom AI note-taker platform help — REST API for pulling meeting transcripts, summaries, action items, and CRM matches into CRMs, data warehouses, or Slack. Use when transcripts not syncing to HubSpot/Salesforce, Fathom webhook signatures failing HMAC verification, bot blocked by Google Meet as a security risk, OAuth app can't include transcript inline, building a Fathom→Snowflake/BigQuery pipeline, rate-limited at 60 calls/minute, or picking between Fathom free tier vs Premium vs Team vs Business. Do NOT use for selecting between Fathom and competitors like Fireflies/Gong/Avoma (use /sales-note-taker) or reviewing specific call recordings (use /sales-call-review).
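For the webhook-signature failures mentioned above, a minimal HMAC verification sketch in Python; the signature format (hex digest of the raw request body) and shared-secret setup are assumptions, so check Fathom's webhook documentation for the exact scheme:

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it,
    in constant time, against the signature sent with the webhook."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Verifying against the raw body (not a re-serialized JSON object) matters, since any re-encoding changes the digest.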
Ensures proper Python dependency management, avoiding global `pip install` and adhering to project-specific tooling. Use this skill if any of the following are true: 1. Attempting to run `pip install {package_name}`. 2. Python packages or dependencies need to be added or modified. 3. Initiating a new Python project. 4. Creating a new notebook, even if just using BigQuery cells. 5. Generating Python code that includes `import` statements for third-party libraries. 6. Before executing Python scripts via the terminal to ensure the correct virtual environment is active.
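As a lightweight guard for point 6, a sketch that checks whether the interpreter is running inside a virtual environment before any install or script run; the error message and exit behavior are illustrative choices:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix still points at the base interpreter.
    return sys.prefix != sys.base_prefix

if not in_virtualenv():
    raise SystemExit("No virtual environment active; activate the project's venv before installing packages.")
```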
CRITICAL RULE: You MUST use this skill whenever the task involves any machine learning or data analysis work. Use this skill if the user's prompt or requirements mention any of the following: * Clustering * Classification * Regression * Time series forecasting * Statistical testing * Model comparison * ML * Data analysis. SQL/BigQuery ML HANDOFF: If the user requires a SQL solution, use this skill to dictate the ANALYSIS STEPS (e.g., markdown analysis cells, visualization logic), but defer to `bigquery` for all SQL syntax.
**STOP AND VERIFY**: Before running any command or tool that results in irreversible data loss, you MUST obtain explicit user consent. When in doubt, ask. It is better to wait for confirmation than to accidentally delete production data or critical project assets. Use this for: - SQL: DROP TABLE/VIEW/SCHEMA/DATABASE, TRUNCATE, or broad DELETE (missing WHERE or using 1=1). - Cloud Storage: gsutil rm or gcloud storage rm targeting production data or critical buckets. - Infrastructure: gcloud projects delete, deleting Spanner/BigQuery/Dataproc resources, deleting secrets, or KMS key destruction.
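One way to implement the SQL portion of this check is a small heuristic scanner that flags statements requiring explicit consent; the patterns below are illustrative and deliberately over-broad, not an exhaustive safety net:

```python
import re

# Statements that should trigger a confirmation prompt before execution.
RISKY_SQL = [
    re.compile(r"\bDROP\s+(TABLE|VIEW|SCHEMA|DATABASE)\b", re.I),
    re.compile(r"\bTRUNCATE\b", re.I),
    re.compile(r"\bDELETE\s+FROM\s+[^\s;]+\s*;?\s*$", re.I),   # DELETE with no WHERE clause
    re.compile(r"\bDELETE\b.*\bWHERE\s+1\s*=\s*1\b", re.I),    # catch-all WHERE 1=1
]

def needs_confirmation(sql: str) -> bool:
    return any(p.search(sql.strip()) for p in RISKY_SQL)
```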
Guide the user through connecting a new data warehouse source — Postgres, MySQL, Stripe, HubSpot, MongoDB, Salesforce, BigQuery, Snowflake, and so on. Use when the user wants to "connect Stripe", "import data from Postgres", "add a new data source", "sync my warehouse tables", or wants to pick sync methods for each table. Walks through source-type discovery, credential validation, table discovery, per-table sync_type selection, and the final create call. Also covers picking a good prefix and what to do right after creation.
Google Cloud Platform CLI (gcloud, gcloud storage, bq). Use when: managing GCP resources, deploying to Cloud Run/Cloud Functions/GKE/App Engine, working with Cloud Storage, BigQuery, IAM, Compute Engine, Cloud SQL, Pub/Sub, Secret Manager, Artifact Registry, Cloud Build, Cloud Scheduler, Cloud Tasks, Vertex AI, VPC/networking, DNS, logging/monitoring, or any GCP service. Also covers: authentication, project/config management, CI/CD integration, serverless deployments, container registry, docker push to GCP, managing secrets, Workload Identity Federation, and infrastructure automation.
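As an illustration of scripting these commands from Python rather than running them interactively, a hedged sketch that shells out to `gcloud run deploy`; the service name, image path, region, and project are placeholders:

```python
import subprocess

def deploy_cloud_run(service: str, image: str, region: str, project: str) -> None:
    # Equivalent to running `gcloud run deploy` by hand; check=True raises
    # CalledProcessError if the deployment fails.
    subprocess.run(
        [
            "gcloud", "run", "deploy", service,
            "--image", image,
            "--region", region,
            "--project", project,
            "--allow-unauthenticated",
        ],
        check=True,
    )

deploy_cloud_run("api", "us-docker.pkg.dev/my-proj/app/api:latest", "us-central1", "my-proj")
```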
Use when "data pipelines", "ETL", "data warehousing", "data lakes", or asking about "Airflow", "Spark", "dbt", "Snowflake", "BigQuery", "data modeling"
Use this skill when architecting on Google Cloud Platform, selecting GCP services, or implementing data and compute solutions. Triggers on Cloud Run, BigQuery, Pub/Sub, GKE, Cloud Functions, Cloud Storage, Firestore, Spanner, Cloud SQL, IAM, VPC, and any task requiring GCP architecture decisions or service selection.
Implement applications using Google Cloud Platform (GCP) services. Use when building on GCP infrastructure, selecting compute/storage/database services, designing data analytics pipelines, implementing ML workflows, or architecting cloud-native applications with BigQuery, Cloud Run, GKE, Vertex AI, and other GCP services.
Use this skill when building real-time or near-real-time data pipelines. Covers Kafka, Flink, Spark Streaming, Snowpipe, BigQuery streaming, materialized views, and batch-vs-streaming decisions. Common phrases: "real-time pipeline", "Kafka consumer", "streaming vs batch", "low latency ingestion". Do NOT use for batch integration patterns (use integration-patterns-skill) or pipeline orchestration (use data-orchestration-skill).
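For the BigQuery streaming path mentioned here, a minimal sketch using the `google-cloud-bigquery` client's legacy streaming API; the table ID and row schema are placeholders, and high-volume pipelines would more likely use the Storage Write API:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials are configured

table_id = "my-project.analytics.events"  # placeholder project.dataset.table
rows = [
    {"event_id": "abc-123", "user_id": "u42", "ts": "2024-01-01T00:00:00Z"},
]

# insert_rows_json streams rows via tabledata.insertAll and returns a list
# of per-row errors (empty on success).
errors = client.insert_rows_json(table_id, rows)
if errors:
    raise RuntimeError(f"Streaming insert failed: {errors}")
```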
Query GA4 reports (users, sessions, conversions, funnels, realtime), manage properties / data streams / key events / custom dimensions / audiences / access bindings, and send Measurement Protocol events via the `ga4` CLI. Use this skill whenever the user mentions GA4, Google Analytics, property IDs starting with `properties/`, tracking events, engagement or traffic metrics, attribution, conversions, key events, audiences, BigQuery links, access roles, or realtime users — even if they don't explicitly say "GA4". Do not use for Google Search Console (see google-search-console skill) or generic web analytics where the source isn't GA4 (ask first).
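For the Measurement Protocol piece, a direct HTTP sketch (independent of the `ga4` CLI, whose flags aren't shown here); the measurement ID, API secret, and event name are placeholders:

```python
import requests

MEASUREMENT_ID = "G-XXXXXXXXXX"   # from the GA4 web data stream
API_SECRET = "your_api_secret"    # created under the data stream's Measurement Protocol API secrets

payload = {
    "client_id": "555.1234567890",  # stable pseudonymous identifier for the user/device
    "events": [{"name": "tutorial_begin", "params": {"source": "backend"}}],
}

resp = requests.post(
    "https://www.google-analytics.com/mp/collect",
    params={"measurement_id": MEASUREMENT_ID, "api_secret": API_SECRET},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
# The endpoint returns 2xx even for malformed events; use the
# /debug/mp/collect endpoint to validate payloads during development.
```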
Expert-level guidance on Google Cloud Platform services and cloud architecture.