Map migration-relevant Megatron changes onto the official MindSpeed repository by resolving branch alignment, locating affected subsystems, and identifying concrete adaptation points. Use when Codex has structured Megatron change events and needs to decide whether MindSpeed already covers them, which MindSpeed files are likely affected, and whether patch generation is safe.
Debugging and Root-Cause Localization for AscendC Operator Precision Issues. Use when operator precision tests fail (allclose failure, result deviation, all-zero or NaN output, etc.). Process: Error Distribution Analysis → Code Error-Prone Point Review → Experimental Isolation → printf/DumpTensor Instrumentation → Fix Verification. Keywords: precision debugging, precision issue, result inconsistency, error localization, allclose failure, output deviation, NaN, all-zero, precision debug.
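The first step of the process above, error distribution analysis, can be sketched as follows; the function name, thresholds, and returned fields are illustrative choices, not part of the skill itself:

```python
import numpy as np

def error_distribution(actual, expected, rtol=1e-3, atol=1e-5):
    """Summarize where and how an operator's output deviates from the reference."""
    actual = np.asarray(actual, dtype=np.float64)
    expected = np.asarray(expected, dtype=np.float64)
    abs_err = np.abs(actual - expected)
    rel_err = abs_err / np.maximum(np.abs(expected), atol)
    mismatch = ~np.isclose(actual, expected, rtol=rtol, atol=atol)
    return {
        "max_abs_err": float(abs_err.max()),
        "max_rel_err": float(rel_err.max()),
        "mismatch_ratio": float(mismatch.mean()),   # uniform vs. localized errors
        "nan_count": int(np.isnan(actual).sum()),   # NaN output often means uninitialized reads
        "zero_count": int((actual == 0).sum()),     # all-zero output often means a skipped compute/copy
    }
```

As a rule of thumb, a mismatch ratio near 1.0 with uniform relative error points toward a formula or dtype problem, while a small cluster of mismatches points toward tiling-boundary bugs.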
Triage a daily msverl regression run: read the baseline comparison log and stop if it passes; otherwise, extract the most relevant training-failure evidence from the daily training log, collect recent commits from verl main and MindSpeed master, and rank the most likely culprit commits with concise fix-direction guidance.
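The stop-on-success and evidence-extraction steps can be sketched like this; the log markers and hint patterns are assumed wording, since the actual msverl log format is not specified here:

```python
import re

# Assumed log phrasing; adjust to the real baseline-comparison output.
SUCCESS_MARKER = re.compile(r"baseline comparison:? (?:PASS|passed)", re.IGNORECASE)
FAILURE_HINTS = re.compile(r"(Traceback|Error|assert|diverge)", re.IGNORECASE)

def triage(baseline_log: str, training_log: str, max_evidence: int = 5):
    """Stop early on a passing baseline; otherwise pull likely failure lines."""
    if SUCCESS_MARKER.search(baseline_log):
        return {"status": "pass", "evidence": []}
    evidence = [ln for ln in training_log.splitlines() if FAILURE_HINTS.search(ln)]
    return {"status": "fail", "evidence": evidence[-max_evidence:]}  # keep the last (most recent) matches
```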
Create Docker containers for Huawei Ascend NPU development with proper device mappings and volume mounts. Use when setting up Ascend development environments in Docker, running CANN applications in containers, or creating isolated NPU development workspaces. Supports privileged mode (default), basic mode, and full mode with profiling/logging. Auto-detects available NPU devices.
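The device-mapping portion of such a setup can be sketched as a command builder; the device nodes (`/dev/davinci*`, `davinci_manager`, `devmm_svm`, `hisi_hdc`) and the driver mount are the standard Ascend mappings, while the image name and the exact mount set are placeholders:

```python
def ascend_docker_args(device_ids, image="ubuntu:22.04", privileged=True):
    """Assemble `docker run` arguments for Ascend NPU access (sketch)."""
    args = ["docker", "run", "-it"]
    if privileged:
        args.append("--privileged")  # privileged mode is this skill's default
    for i in device_ids:
        args += ["--device", f"/dev/davinci{i}"]       # one node per NPU card
    for dev in ("/dev/davinci_manager", "/dev/devmm_svm", "/dev/hisi_hdc"):
        args += ["--device", dev]                      # shared management/SVM nodes
    # Mount the host driver and npu-smi tooling read-only into the container.
    for vol in ("/usr/local/Ascend/driver", "/usr/local/dcmi", "/usr/local/bin/npu-smi"):
        args += ["-v", f"{vol}:{vol}:ro"]
    args.append(image)
    return args
```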
Migrate simple vector-type Triton operators from GPU to Ascend NPU. Use when users need to migrate Triton code to NPU, or mention GPU-to-NPU migration, Triton migration, or Ascend adaptation. Note: operators with existing compilation problems cannot be migrated automatically.
Generate Triton kernel code for Ascend NPU based on operator design documents. Use when users need to implement Triton operator kernels and turn requirement documents into executable code. Core capabilities: (1) parse requirement documents to confirm the computation logic (2) design the tiling partitioning strategy (3) generate high-performance kernel code (4) generate test code to verify correctness.
Track and normalize change requests against the official Megatron-LM repository by branch, PR, commit, commit range, or time window. Use when Codex needs to collect the exact upstream change set before deeper analysis, especially for branch-aware Megatron and MindSpeed migration work, daily/periodic tracking, or preparing inputs for change analysis and migration generation.
Complete AscendC Operator Verification Testcase Generation: helps users design testcases. Use this skill when users mention testcase design, generalized testcase generation, operator benchmarks, UT testcases, precision testcases, or performance testcases.
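Generalized testcase generation of the kind described above typically expands a shape-by-dtype grid; a minimal sketch (the edge shapes and naming scheme are illustrative assumptions):

```python
from itertools import product

def generate_testcases(shapes, dtypes, include_edge_cases=True):
    """Expand a shape x dtype grid into named testcase specs."""
    shapes = list(shapes)
    if include_edge_cases:
        shapes += [(1,), (1, 1)]  # minimal shapes often expose tiling/boundary bugs
    return [
        {"shape": shape, "dtype": dtype,
         "name": f"{'x'.join(map(str, shape))}_{dtype}"}
        for shape, dtype in product(shapes, dtypes)
    ]
```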
Ascend C Code Inspection Skill. Performs security-specification inspection of code using a hypothesis-testing methodology. When invoking, you must provide both the code snippets and the inspection rule descriptions. TRIGGER when: users request code inspection or code review, ask code-security questions, check coding specifications, or need specific code issues checked (such as memory leaks, integer overflows, null pointers, etc.). Keywords: Ascend C, code inspection, code review, security specification, memory, pointer, overflow, leak, coding specification.
Accepts Triton operator implementations, automatically invokes Torch small-operator implementations (CPU or NPU) for precision comparison, and generates precision reports. Use when users need to verify the correctness and precision of Triton operator implementations, compare them against PyTorch implementations, and generate standardized precision reports.
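The per-case comparison behind such a report can be sketched as follows; numpy stands in for the Torch reference here (with torch installed, the equivalent check is `torch.allclose(a, b, rtol=rtol, atol=atol)`), and the record fields are an assumed report format:

```python
import numpy as np

def precision_report(case_name, triton_out, reference_out, rtol=1e-3, atol=1e-5):
    """Compare a Triton result against a reference and emit a standardized record."""
    a = np.asarray(triton_out, dtype=np.float64)
    b = np.asarray(reference_out, dtype=np.float64)
    return {
        "case": case_name,
        "passed": bool(np.allclose(a, b, rtol=rtol, atol=atol)),
        "rtol": rtol,
        "atol": atol,
        "max_abs_err": float(np.max(np.abs(a - b))) if a.size else 0.0,
    }
```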
Task Orchestration for Full-Process Development of Ascend Triton Operators. Used when users need to develop Triton Operators, covering the complete workflow of environment configuration → requirement design → code generation → static inspection → precision verification → performance evaluation → document generation → performance optimization.
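A staged workflow like the one above can be driven by a minimal orchestrator; the stage names mirror the listed phases, but the stage callables here are placeholders, not the skill's actual implementation:

```python
def run_pipeline(stages, context=None):
    """Run ordered development stages; each stage receives and returns a shared context."""
    context = dict(context or {})
    for name, stage in stages:
        context = stage(context)
        context.setdefault("completed", []).append(name)  # record progress per stage
    return context

# Hypothetical stage list in the skill's workflow order.
stages = [
    ("env_setup",          lambda ctx: {**ctx, "env": "ready"}),
    ("requirement_design", lambda ctx: ctx),
    ("code_generation",    lambda ctx: ctx),
    ("static_inspection",  lambda ctx: ctx),
    ("precision_check",    lambda ctx: ctx),
]
```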
In-process ClickHouse SQL engine for Python — run ClickHouse SQL queries directly on local files, remote databases, and cloud storage without a server. Use when the user wants to write SQL queries against Parquet/CSV/JSON files, use ClickHouse table functions (mysql(), s3(), postgresql(), iceberg(), deltaLake() etc.), build stateful analytical pipelines with Session, use parametrized queries, window functions, or other advanced ClickHouse SQL features. Also use when the user explicitly mentions chdb.query(), ClickHouse SQL syntax, or wants cross-source SQL joins. Do NOT use for pandas-style DataFrame operations — use chdb-datastore instead.