Found 3,980 Skills
Use when reviewing embedded or firmware code changes, especially in C/C++, bare-metal, RTOS, driver, ISR, DMA, boot, NFC, or other hardware-facing paths where cross-review by independent agents can catch correctness and safety issues
Use pgmicro — an in-process PostgreSQL reimplementation backed by SQLite-compatible storage, embeddable as a library or CLI
Alibaba Cloud OSS scheduled local-folder sync skill using aliyun CLI, including integrated ossutil commands for incremental upload. Use when the user wants to schedule recurring local-to-OSS uploads, validate OSS backup prerequisites, set up cron or Task Scheduler for OSS sync, or clearly separate what stays on aliyun CLI from what remains OS-local or manual. Conditional write operations: creates the target bucket (PutBucket) only when the user confirms the bucket does not exist yet; optionally deletes test objects (DeleteObject) only when the user explicitly requests cleanup after verification. Triggers: "OSS scheduled sync", "定时同步到OSS", "aliyun ossutil cp --max-age", "aliyun ossutil cp -u", "cron upload to OSS", "Task Scheduler OSS upload", "aliyun CLI OSS sync", "本地目录增量上传 OSS".
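A minimal sketch of the incremental upload this entry describes, assuming a hypothetical source folder and bucket name; the `aliyun ossutil cp -u` invocation comes from the entry's own trigger list, with `-u` skipping objects whose OSS copy is already up to date:

```shell
#!/bin/sh
# Hypothetical paths and bucket name -- adjust to your environment.
SRC_DIR="$HOME/backup"
BUCKET="oss://my-backup-bucket/daily/"

# Per the entry's triggers: `aliyun ossutil cp -u` uploads only files that
# changed locally, giving incremental local-to-OSS sync.
CMD="aliyun ossutil cp -u --recursive $SRC_DIR $BUCKET"
echo "$CMD"

# A crontab entry to run the sync daily at 02:00 (edit with `crontab -e`):
#   0 2 * * * /usr/local/bin/oss-sync.sh >> /var/log/oss-sync.log 2>&1
```

The crontab line is the OS-local part the entry says stays outside the aliyun CLI; on Windows the equivalent is a Task Scheduler job wrapping the same command.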
Crowdin integration. Manage data, records, and automate workflows. Use when the user wants to interact with Crowdin data.
A skill that generates code for periodically executed cron jobs: a node-cron based TypeScript service, plus log recording and server.ts integration. Triggered by the keywords "크론 작업" (cron job), "주기적 실행" (periodic execution), and "스케줄러 생성" (create scheduler).
Microcks integration. Manage data, records, and automate workflows. Use when the user wants to interact with Microcks data.
Create, manage, and deploy Power BI semantic models inside Microsoft Fabric workspaces via `az rest` CLI against Fabric and Power BI REST APIs. Use when the user wants to: (1) create a semantic model from TMDL definition files, (2) retrieve or download semantic model definitions, (3) update a semantic model definition with modified TMDL, (4) trigger or manage dataset refresh operations, (5) configure data sources, parameters, or permissions, (6) deploy semantic models between pipeline stages. Covers Fabric Items API (CRUD) and Power BI Datasets API (refresh, data sources, permissions). For read-only DAX queries, use `powerbi-consumption-cli`. For fine-grained modeling changes, route to `powerbi-modeling-mcp`. Triggers: "create semantic model", "upload TMDL", "download semantic model TMDL", "refresh dataset", "semantic model deployment pipeline", "dataset permissions", "list dataset users", "semantic model authoring".
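The `az rest` call behind item (1) can be sketched as follows; the workspace GUID and payload file are placeholders, and the definition payload (base64-encoded TMDL parts) is assumed to have been prepared separately:

```shell
#!/bin/sh
# Placeholder workspace ID -- substitute your Fabric workspace GUID.
WORKSPACE_ID="00000000-0000-0000-0000-000000000000"

# Create a SemanticModel item via the Fabric Items API. payload.json is
# assumed to hold the displayName, type "SemanticModel", and a definition
# whose parts carry the base64-encoded TMDL files.
CMD="az rest --method post \
  --url https://api.fabric.microsoft.com/v1/workspaces/$WORKSPACE_ID/items \
  --resource https://api.fabric.microsoft.com \
  --body @payload.json"
echo "$CMD"
```

The `--resource` flag makes `az rest` request a token for the Fabric API rather than the default ARM audience; refresh and permissions operations would target the Power BI Datasets API instead, as the entry notes.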
Cross-market stock technical-indicator service. Supports 27 technical indicators for A-shares, Hong Kong stocks, and US stocks, including MA/EMA/RSI/MACD/KDJ/Bollinger Bands/ADX/ATR/CCI/VWAP, etc. Triggered when the user asks about "RSI", "MACD", "布林带" (Bollinger Bands), "技术指标" (technical indicators), "KDJ", "OBV", and similar terms. ⚠️ For cryptocurrency technical indicators, use the crypto-indicators skill.
Microsoft Power BI integration. Manage Reports, Workspaces, Apps, Users. Use when the user wants to interact with Microsoft Power BI data.
Scroll areas inside a layout should be avoided wherever possible. When unavoidable, allow only one scroll axis at a time and always keep the user in control. Use when designing layouts, data tables, panels, or any component that might introduce an inner scroll container.
Execute authoring T-SQL (DDL, DML, data ingestion, transactions, schema changes) against Microsoft Fabric Data Warehouse and SQL endpoints from agentic CLI environments. Use when the user wants to: (1) create/alter/drop tables from terminal, (2) insert/update/delete/merge data via CLI, (3) run COPY INTO or OPENROWSET ingestion, (4) manage transactions or stored procedures, (5) perform schema evolution, (6) use time travel or snapshots, (7) generate ETL/ELT shell scripts, (8) create views/functions/procedures on Lakehouse SQLEP. Triggers: "create table in warehouse", "insert data via T-SQL", "load from ADLS", "COPY INTO", "run ETL with T-SQL", "alter warehouse table", "upsert with T-SQL", "merge into warehouse", "create T-SQL procedure", "warehouse time travel", "recover deleted warehouse data", "create warehouse schema", "deploy warehouse", "transaction conflict", "snapshot isolation error".
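As a hedged sketch of item (3)'s `COPY INTO` ingestion from an agentic CLI, assuming hypothetical server, warehouse, table, and storage-account names (sqlcmd's `-G` flag enables Microsoft Entra ID authentication):

```shell
#!/bin/sh
# Hypothetical endpoint and warehouse names -- replace with your own.
SERVER="myworkspace.datawarehouse.fabric.microsoft.com"
DB="MyWarehouse"

# COPY INTO loads Parquet files from ADLS into an existing warehouse table.
SQL="COPY INTO dbo.sales FROM 'https://myaccount.dfs.core.windows.net/data/sales/*.parquet' WITH (FILE_TYPE = 'PARQUET');"

# -G authenticates with Entra ID; -Q runs the statement and exits.
echo "sqlcmd -S $SERVER -d $DB -G -Q \"$SQL\""
```

The same wrapper shape covers the entry's other DDL/DML triggers (CREATE TABLE, MERGE, ALTER) by swapping the T-SQL string.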
Analyze lakehouse data interactively using Fabric Livy sessions and PySpark/Spark SQL for advanced analytics, DataFrames, cross-lakehouse joins, Delta time-travel, and unstructured/JSON data. Use when the user explicitly asks for PySpark, Spark DataFrames, Livy sessions, or Python-based analysis — NOT for simple SQL queries. Triggers: "PySpark", "Spark SQL", "analyze with PySpark", "Spark DataFrame", "Livy session", "lakehouse with Python", "PySpark analysis", "PySpark data quality", "Delta time-travel with Spark".