Entry-point router for the P1 injection-testing category. Use when routing between XSS, SQLi, SSRF, XXE, SSTI, command injection, and NoSQL injection workflows based on how attacker-controlled input is consumed.
WAF bypass methodology and generic evasion techniques. Use when a web application firewall blocks injection payloads (SQLi, XSS, RCE) and you need to craft bypasses using encoding, protocol-level tricks, or WAF-specific weaknesses.
NTLM relay and authentication coercion playbook. Use when capturing and relaying NTLM authentication to escalate privileges via SMB, LDAP, HTTP, or MSSQL relay targets, combined with PetitPotam, PrinterBug, and other coercion methods.
Develop Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables) on Databricks. Use when building batch or streaming data pipelines with Python or SQL. Invoke BEFORE starting implementation.
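For orientation, a minimal sketch of the kind of pipeline this skill targets, written against the classic `dlt` Python module (Lakeflow Declarative Pipelines also accepts newer imports). The table names and storage path are hypothetical, and `spark` is the session the pipeline runtime provides:

```python
import dlt
from pyspark.sql.functions import col

# Hypothetical bronze table: incrementally ingest raw JSON with Auto Loader.
@dlt.table(comment="Raw orders landed from cloud storage")
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/landing/orders")  # hypothetical path
    )

# Silver table with a data-quality expectation: drop rows missing an order id.
@dlt.table(comment="Cleaned orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_clean():
    return dlt.read_stream("orders_raw").where(col("amount") > 0)
```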
Bun implementation guide for PMA-managed backend and full-stack projects. Covers project layout (src/modules), strict linting with ESLint + @antfu/eslint-config, database access (Drizzle ORM + bun:sqlite or PostgreSQL), HTTP patterns (OpenAPIHono + Bun.serve), layered config with environment variables, dual logging (consola + pino), single-binary compilation with embedded assets, and CI quality gates.
Routes Snowflake-related operations to Cortex Code CLI for specialized Snowflake expertise. Use when user asks about Snowflake databases, data warehouses, SQL queries on Snowflake, Cortex AI features, Snowpark, dynamic tables, data governance in Snowflake, Snowflake security, or mentions "Cortex" explicitly. Do NOT use for general programming, local file operations, non-Snowflake databases, web development, or infrastructure tasks unrelated to Snowflake.
Authoritative reference for Odoo 19 syntax conventions across Python ORM, XML views, OWL/JavaScript, controllers, manifests, and SCSS. Use this skill BEFORE writing or modifying any Odoo code, whenever the user mentions Odoo, an Odoo module, an Odoo model, an XML view, an OWL component, or any file under an Odoo addons directory. Odoo 19 introduces breaking changes (130 model renames, res.groups privilege refactor, hr.contract→hr.version, models.Constraint, continued attrs removal, type='jsonrpc') that older training data does NOT reflect. Always consult this skill before generating code, even if the request looks routine: a model that "obviously" works in Odoo 18 may be wrong in Odoo 19. Trigger this skill on phrases like "create an Odoo module", "add a field to res.partner", "write a controller", "make an OWL component", "fix this Odoo view", "_sql_constraints", "tree view", or any time you see a `__manifest__.py`, `models/*.py`, `views/*.xml`, or `static/src/**/*.js` file in an Odoo project.
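As a quick illustration of one of the breaking changes this entry names, here is a minimal sketch of the class-level `models.Constraint` style that replaces the old `_sql_constraints` list. The model and field are hypothetical, and the exact constructor signature should be verified against the skill itself:

```python
from odoo import fields, models


class LibraryBook(models.Model):  # hypothetical model for illustration
    _name = "library.book"
    _description = "Library Book"

    isbn = fields.Char(required=True)

    # Odoo 19 style: a class attribute replaces the old
    # _sql_constraints = [("isbn_uniq", "unique(isbn)", "...")] list.
    _isbn_uniq = models.Constraint(
        "unique(isbn)",
        "The ISBN must be unique.",
    )
```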
This skill helps the agent generate or update Google Cloud Composer orchestration pipeline definitions that coordinate various data pipelines, such as dbt pipelines, notebooks, Spark jobs, Dataform, Python scripts, or inline BigQuery SQL queries. It also helps deploy and trigger those orchestration pipelines.
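For context, a minimal sketch of the kind of Composer (Airflow) DAG such a definition might produce, here wrapping a dbt run in a BashOperator. The DAG id, schedule, and command path are hypothetical:

```python
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily orchestration DAG that runs a dbt project.
with DAG(
    dag_id="dbt_daily_orchestration",
    schedule="@daily",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    run_dbt = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /home/airflow/gcs/dags/dbt",
    )
```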
Import data into the AWS data lake from S3 files, local uploads, JDBC databases (Oracle, SQL Server, PostgreSQL, MySQL, RDS, Aurora), Amazon Redshift, Snowflake, BigQuery, DynamoDB, or existing Glue catalog tables (migration). Default target is S3 Tables; standard Iceberg on a general-purpose bucket is supported where S3 Tables is not adopted. Handles one-time loads, recurring pipelines, and migrations. Triggers on: import data, load data, ingest, sync database, migrate table, move data to AWS, set up pipeline, ETL, pull from Snowflake, query BigQuery into S3, export DynamoDB, CTAS, convert to Iceberg. Do NOT use for setting up or troubleshooting Glue connections (use connecting-to-data-source), creating empty tables (use creating-data-lake-table), running queries (use querying-data-lake), finding tables by fuzzy name (use finding-data-lake-assets), catalog audit (use exploring-data-catalog), or SaaS platforms like Salesforce, ServiceNow, SAP, MongoDB, Kafka.
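One of the trigger phrases above is "CTAS, convert to Iceberg"; as a hedged sketch under assumed database, table, and bucket names, this converts an existing CSV-backed Glue table to Iceberg with an Athena CTAS statement submitted through boto3:

```python
import boto3

athena = boto3.client("athena")

# Hypothetical CTAS: rewrite a CSV-backed Glue table as an Iceberg table.
ctas = """
CREATE TABLE lake.orders_iceberg
WITH (
    table_type = 'ICEBERG',
    location = 's3://example-lake-bucket/orders_iceberg/',
    is_external = false
)
AS SELECT * FROM staging.orders_csv
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "lake"},
    WorkGroup="primary",
)
```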
Creates a complete Amazon Aurora database cluster with instances, handling cluster creation, instance provisioning, and Secrets Manager password management in the proper sequence. Use when setting up new Aurora MySQL or PostgreSQL clusters with production-ready configuration.
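A rough sketch of the sequencing this skill handles: create the cluster first (with the master password managed in Secrets Manager via `ManageMasterUserPassword`), then provision instances inside it. Identifiers, versions, and instance sizes are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Step 1: create the cluster. ManageMasterUserPassword stores the master
# password in Secrets Manager instead of passing it in plaintext.
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora",  # hypothetical
    Engine="aurora-postgresql",
    EngineVersion="15.4",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,
    DatabaseName="app",
)

# Step 2: provision a writer instance inside the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-writer",  # hypothetical
    DBClusterIdentifier="app-aurora",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-postgresql",
)
```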
Build type-safe D1 databases with Drizzle ORM. Includes schema definition, migrations with Drizzle Kit, relations, and D1 batch API patterns. Prevents 18 errors including SQL BEGIN failures, cascade data loss, 100-parameter limits, and foreign key issues. Use when: defining D1 schemas, managing migrations, bulk inserts, or troubleshooting D1_ERROR, BEGIN TRANSACTION, foreign keys, "too many SQL variables".
Build AI agents with Cloudflare Agents SDK on Workers + Durable Objects. Provides WebSockets, state persistence, scheduling, and multi-agent coordination. Prevents 23 documented errors. Use when: building WebSocket agents, RAG with Vectorize, MCP servers, or troubleshooting "Agent class must extend", "new_sqlite_classes", binding errors, WebSocket payload limits.