Expert guidance for building production-ready FastAPI applications with a modular architecture in which each business domain is an independent module with its own routes, models, schemas, services, cache, and migrations. Uses UV + pyproject.toml for modern Python dependency management, a project-name subdirectory for clean workspace organization, structlog (JSON + colored logging), pydantic-settings configuration, an auto-discovery module loader, async SQLAlchemy with PostgreSQL, per-module Alembic migrations, Redis/memory cache with module-specific namespaces, a central httpx client, OpenTelemetry/Prometheus observability, conversation ID tracking (X-Conversation-ID header + cookie), conditional Keycloak/app-based RBAC authentication, DDD/clean-code principles, and automation scripts for rapid module development. Use when the user requests FastAPI project setup, modular architecture, independent module development, microservice architecture, async database operations, caching strategies, logging patterns, configuration management, authentication systems, observability implementation, or enterprise Python web services. Supports a maximum route nesting depth of 3-4, cache invalidation patterns, inter-module communication via the service layer, and comprehensive error-handling workflows.
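For orientation, a minimal sketch of the auto-discovery loader idea, assuming domain modules live in a top-level `modules` package and each exposes an APIRouter named `router` (both names are illustrative assumptions, not part of the skill):

```python
# Minimal sketch of an auto-discovery module loader.
# Assumptions: a "modules" package holds the domain modules,
# and each one exposes a FastAPI APIRouter named "router".
import importlib
import pkgutil

from fastapi import FastAPI

import modules  # hypothetical top-level package of domain modules


def include_discovered_routers(app: FastAPI) -> None:
    """Import every subpackage of `modules` and mount its router."""
    for info in pkgutil.iter_modules(modules.__path__):
        module = importlib.import_module(f"modules.{info.name}")
        router = getattr(module, "router", None)
        if router is not None:
            app.include_router(router, prefix=f"/{info.name}")


app = FastAPI()
include_discovered_routers(app)
```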
Bun implementation guide for PMA-managed backend and full-stack projects. Covers project layout (src/modules), strict linting with ESLint + @antfu/eslint-config, database access (Drizzle ORM + bun:sqlite or PostgreSQL), HTTP patterns (OpenAPIHono + Bun.serve), layered config with environment variables, dual logging (consola + pino), single-binary compilation with embedded assets, and CI quality gates.
Automatically discover database skills when working with SQL, PostgreSQL, MongoDB, Redis, database schema design, query optimization, migrations, connection pooling, ORMs, or database selection. Activates for database design, optimization, and implementation tasks.
Database operations including querying, schema exploration, and data analysis. Activates for tasks involving PostgreSQL, MySQL, MariaDB, SQLite, MongoDB, Redis, Elasticsearch, or ClickHouse databases.
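As an illustration of the schema-exploration side, a tiny helper for SQLite, one of the engines listed; PostgreSQL, MySQL, and MariaDB expose the equivalent through information_schema:

```python
# Illustrative only: list user tables from SQLite's built-in
# sqlite_master catalog, skipping the engine's internal tables.
import sqlite3


def list_tables(path: str) -> list[str]:
    """Return user table names from the sqlite_master catalog."""
    with sqlite3.connect(path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'"
        ).fetchall()
    return [name for (name,) in rows]


print(list_tables("app.db"))  # e.g. ['users', 'orders']
```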
This skill should be used when the user asks to "create a new FastAPI project", "setup a fastapi api", "new fastapi project", "scaffold a fastapi app", "initialize a fastapi backend", or "start a new python api". Scaffolds a complete production-ready FastAPI project with SQLAlchemy, PostgreSQL, JWT auth, Pydantic v2 settings, and uv package management.
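A hedged sketch of the Pydantic v2 settings pattern such a scaffold typically generates; the field names and the APP_ prefix are illustrative assumptions:

```python
# Sketch of Pydantic v2 settings (pip install pydantic-settings).
# Values are read from the environment and an optional .env file.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", env_prefix="APP_")

    database_url: str = "postgresql+asyncpg://localhost/app"
    jwt_secret: str = "change-me"       # override via APP_JWT_SECRET
    jwt_algorithm: str = "HS256"


settings = Settings()  # reads APP_DATABASE_URL, APP_JWT_SECRET, ...
```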
Add a Docker dev service to this project. Supported services: Redis, RabbitMQ, PostgreSQL, MySQL/MariaDB, MongoDB. Writes Docker Compose and Taskfile configs to .devtools/.
Use when deploying a database to Zeabur. Use when user needs MySQL, PostgreSQL, MongoDB, or Redis. Use when user says "I need a database", "add database", "deploy postgres", "set up MySQL", "add Redis", "add MongoDB", or "connect to database". Also use when user mentions data persistence issues like "data lost after restart", "data not saved", "data disappears", "need persistent storage for data", or "how to persist data". Also use when integrating a database with an existing service.
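Once deployed, the application typically consumes the database through an injected connection string. A minimal sketch, assuming a DATABASE_URL-style variable (the exact variable names Zeabur exposes may differ; check the service's variables panel):

```python
# Sketch of reading an injected connection string from the
# environment. DATABASE_URL is an assumed variable name.
import os

import psycopg2  # third-party driver; pip install psycopg2-binary

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])  # confirms the database is reachable
conn.close()
```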
GEOFlow: an open-source GEO/SEO content production system with AI generation, a review workflow, and a publishing pipeline, built on PHP and PostgreSQL.
Implement AI Coaching best practices on AnalyticDB for PostgreSQL (ADBPG): leverage Supabase projects (for training data management) plus ADBPG instances with vector optimization to build RAG-driven coaching systems that guide users through domain-specific workflows, decision-making, or skill development. Use when: the user wants to create Supabase projects (spb-xxx), ADBPG instances (gp-xxx), vector knowledge bases, or RAG-driven coaching systems on ADBPG. Triggers: "Supabase", "ADBPG", "vector database", "knowledge base", "RAG", "AI coaching", "coaching system", "spb-xxx", "gp-xxx"
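A hedged sketch of the retrieval step in such a RAG flow: rank knowledge-base rows by vector distance to the query embedding. The table and column names and the pgvector-style `<=>` operator are assumptions; ADBPG's actual vector SQL syntax may differ:

```python
# Hedged sketch of RAG retrieval over a vector knowledge base.
# Assumed schema: coaching_kb(chunk_text text, embedding vector).
import psycopg2  # pip install psycopg2-binary


def retrieve(conn, question_embedding: list[float], k: int = 5) -> list[str]:
    """Return the k knowledge-base chunks nearest the query vector."""
    literal = "[" + ",".join(map(str, question_embedding)) + "]"
    with conn.cursor() as cur:
        cur.execute(
            "SELECT chunk_text FROM coaching_kb "
            "ORDER BY embedding <=> %s::vector LIMIT %s",
            (literal, k),
        )
        return [row[0] for row in cur.fetchall()]
```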
Import data into the AWS data lake from S3 files, local uploads, JDBC databases (Oracle, SQL Server, PostgreSQL, MySQL, RDS, Aurora), Amazon Redshift, Snowflake, BigQuery, DynamoDB, or existing Glue catalog tables (migration). The default target is S3 Tables; standard Iceberg on a general-purpose bucket is supported where S3 Tables is not adopted. Handles one-time loads, recurring pipelines, and migrations. Triggers on: import data, load data, ingest, sync database, migrate table, move data to AWS, set up pipeline, ETL, pull from Snowflake, query BigQuery into S3, export DynamoDB, CTAS, convert to Iceberg. Do NOT use for setting up or troubleshooting Glue connections (use connecting-to-data-source), creating empty tables (use creating-data-lake-table), running queries (use querying-data-lake), finding tables by fuzzy name (use finding-data-lake-assets), catalog audit (use exploring-data-catalog), or SaaS platforms like Salesforce, ServiceNow, SAP, MongoDB, Kafka.
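For a recurring pipeline, the underlying mechanics usually come down to a Glue job run. A minimal boto3 sketch; the job name and arguments are hypothetical:

```python
# Illustrative boto3 call for kicking off a recurring import job.
# JobName and Arguments are placeholders, not part of the skill.
import boto3

glue = boto3.client("glue")
response = glue.start_job_run(
    JobName="ingest-orders-from-snowflake",  # hypothetical job
    Arguments={"--target_database": "lake", "--target_table": "orders"},
)
print(response["JobRunId"])  # track this ID to monitor the run
```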
Create and troubleshoot AWS Glue connections to JDBC databases (Oracle, SQL Server, PostgreSQL, MySQL, RDS), Redshift, Snowflake, and BigQuery. Gathers connection hints from the user, discovers existing connections and RDS/Redshift candidates, registers credentials in Secrets Manager or uses IAM DB auth, configures the VPC, and tests the connection. Triggers on: connect to database, set up Glue connection, register data source, connect to Snowflake/BigQuery/RDS, connection timeout, test connection, troubleshoot connection. Do NOT use for moving data (use ingesting-into-data-lake), creating tables (use creating-data-lake-table), queries (use querying-data-lake), catalog exploration (use exploring-data-catalog), or SaaS (Salesforce, ServiceNow, SAP, MongoDB, Kafka).
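A hedged boto3 sketch of registering a JDBC connection; the connection name, URL, secret ID, subnet, and security group are placeholders (the actual skill also discovers candidates and tests the result):

```python
# Sketch of registering a Glue JDBC connection with credentials
# held in Secrets Manager. All identifiers below are placeholders.
import boto3

glue = boto3.client("glue")
glue.create_connection(
    ConnectionInput={
        "Name": "pg-orders",  # hypothetical connection name
        "ConnectionType": "JDBC",
        "ConnectionProperties": {
            "JDBC_CONNECTION_URL": "jdbc:postgresql://db.example.com:5432/orders",
            "SECRET_ID": "prod/orders/pg",  # Secrets Manager secret
        },
        "PhysicalConnectionRequirements": {
            "SubnetId": "subnet-0123456789abcdef0",
            "SecurityGroupIdList": ["sg-0123456789abcdef0"],
        },
    },
)
```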
Complete E2E (end-to-end) and integration testing skill for TypeScript/NestJS projects using Jest, real infrastructure via Docker, and the GWT pattern. ALWAYS use this skill when the user needs to:

**SETUP** - Initialize or configure E2E testing infrastructure:
- Set up E2E testing for a new project
- Configure docker-compose for testing (Kafka, PostgreSQL, MongoDB, Redis)
- Create jest-e2e.config.ts or E2E Jest configuration
- Set up test helpers for database, Kafka, or Redis
- Configure .env.e2e environment variables
- Create the test/e2e directory structure

**WRITE** - Create or add E2E/integration tests:
- Write, create, add, or generate E2E or integration tests
- Test API endpoints, workflows, or complete features end-to-end
- Test with real databases, message brokers, or external services
- Test Kafka consumers/producers and event-driven workflows
- Work on any file ending in .e2e-spec.ts or in the test/e2e/ directory
- Use the GWT (Given-When-Then) pattern for tests

**REVIEW** - Audit or evaluate E2E tests:
- Review existing E2E tests for quality
- Check test isolation and cleanup patterns
- Audit GWT pattern compliance
- Evaluate assertion quality and specificity
- Check for anti-patterns (multiple WHEN actions, conditional assertions)

**RUN** - Execute or analyze E2E test results:
- Run E2E tests
- Start/stop Docker infrastructure for testing
- Analyze E2E test results
- Verify Docker services are healthy
- Interpret test output and failures

**DEBUG** - Fix failing or flaky E2E tests:
- Fix failing E2E tests
- Debug flaky tests or test isolation issues
- Troubleshoot connection errors (database, Kafka, Redis)
- Fix timeout issues or async operation failures
- Diagnose race conditions or state leakage
- Debug Kafka message consumption issues

**OPTIMIZE** - Improve E2E test performance:
- Speed up slow E2E tests
- Optimize Docker infrastructure startup
- Replace fixed waits with smart polling
- Reduce beforeEach cleanup time
- Improve test parallelization where safe

Keywords: e2e, end-to-end, integration test, e2e-spec.ts, test/e2e, Jest, supertest, NestJS, Kafka, Redpanda, PostgreSQL, MongoDB, Redis, docker-compose, GWT pattern, Given-When-Then, real infrastructure, test isolation, flaky test, MSW, nock, waitForMessages, fix e2e, debug e2e, run e2e, review e2e, optimize e2e, setup e2e