A skill for migrating the GENERator DNA sequence generation model to Huawei Ascend NPUs. It applies to porting HuggingFace Transformers based causal LMs from CUDA to Ascend, covering environment setup, dependency installation, code adaptation, multi-process handling, and sequence-recovery validation.
## Installation

```bash
npx skill4agent add ascend-ai-coding/awesome-ascend-skills ai-for-science-generator
```

## Requirements

| Item | Requirement |
|---|---|
| Hardware | Ascend 910 series (at least 1 card) |
| CANN | ≥ 8.2 (validated on 8.3.RC1) |
| Python | 3.11 |
| PyTorch | 2.5.1 |
| torch_npu | 2.5.1 |
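The pinned versions above can be verified programmatically after installation. A minimal sketch; the `check_versions` helper and the package subset are illustrative, not part of the skill:

```python
from importlib import metadata

# Pinned versions from the requirements table (subset for illustration).
REQUIRED = {
    "torch": "2.5.1",
    "torch_npu": "2.5.1",
    "numpy": "1.26.4",
    "transformers": "4.49.0",
}

def check_versions(required):
    """Return {package: (expected, installed-or-None)} for every mismatch."""
    problems = {}
    for pkg, expected in required.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != expected:
            problems[pkg] = (expected, installed)
    return problems

if __name__ == "__main__":
    for pkg, (want, got) in check_versions(REQUIRED).items():
        print(f"{pkg}: expected {want}, found {got}")
```

An empty result means every pinned package matches; missing packages show up with `None` as the installed version.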
## Environment Setup

```bash
source /usr/local/Ascend/ascend-toolkit/set_env.sh
export PIP_INDEX_URL=https://repo.huaweicloud.com/repository/pypi/simple/

conda create -n GENERator python=3.11 -y

pip install torch==2.5.1 -i https://repo.huaweicloud.com/repository/pypi/simple/
pip install torch_npu==2.5.1  # install from a local wheel or the Huawei mirror
pip install numpy==1.26.4 pyyaml decorator attrs psutil absl-py cloudpickle ml-dtypes scipy tornado
pip install transformers==4.49.0 huggingface_hub 'datasets<3.0.0' scikit-learn pandas tqdm pyarrow
```

## Code Adaptation

Add the NPU imports at the top of every entry script, before any other torch code runs:

```python
import torch_npu
from torch_npu.contrib import transfer_to_npu
```

Then apply the usual CUDA-to-NPU replacements:

| Original code | Replace with |
|---|---|
| `"cuda"` | `"npu"` |
| `.cuda()` | `.npu()` |
| `torch.cuda.is_available()` | `torch.npu.is_available()` |
| `torch.cuda.set_device(i)` | `torch.npu.set_device(i)` |
| `dtype=dtype` | `torch_dtype=dtype` |

`AutoModelForCausalLM.from_pretrained` takes the dtype through the `torch_dtype` keyword:

```python
model = AutoModelForCausalLM.from_pretrained(
    args.model_path,
    trust_remote_code=True,
    torch_dtype=dtype,  # was dtype=dtype
).to(device)
```

## Multi-Process Data Handling

Each worker process must import torch_npu and bind its own device before touching any tensors:

```python
def process_data_shard(shard_id, ...):
    import torch_npu
    from torch_npu.contrib import transfer_to_npu
    torch.npu.set_device(shard_id)
    device = f"npu:{shard_id}"
    ...
```
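Upstream of the workers, the data must be split into one shard per card. A minimal index-partitioning sketch; the `split_shards` helper is illustrative and not part of the GENERator repo:

```python
def split_shards(n_items, n_devices):
    """Partition item indices 0..n_items-1 into n_devices contiguous shards.

    Earlier shards absorb the remainder, so shard sizes differ by at most one.
    """
    base, rem = divmod(n_items, n_devices)
    shards, start = [], 0
    for d in range(n_devices):
        size = base + (1 if d < rem else 0)
        shards.append(range(start, start + size))
        start += size
    return shards
```

Each `process_data_shard(shard_id, ...)` worker then iterates only over its own `shards[shard_id]`.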
Set the HuggingFace mirror endpoint if the hub is unreachable from the host:

```python
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
```

## Running Sequence Recovery

```bash
source /usr/local/Ascend/ascend-toolkit/set_env.sh
export ASCEND_RT_VISIBLE_DEVICES=0
conda activate GENERator
cd /root/GENERator
python src/tasks/downstream/sequence_recovery.py --bf16
```

On success the script reports:

```
✅ Completed
📊 Results saved: ./sequence_recovery_results/GENERator-v2-eukaryote-1.2b-base_bfloat16.parquet
```

Adjust `ASCEND_RT_VISIBLE_DEVICES` to select which cards are used. To validate the environment against a local model:

```bash
python scripts/validate_generator_env.py --model-path /path/to/model
```

See `references/runtime-adaptation.md` for runtime adaptation details.
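The number of worker processes should follow the card list in `ASCEND_RT_VISIBLE_DEVICES`. A small parsing sketch; the helper name `visible_npu_ids` is illustrative, and it assumes the standard comma-separated card-id format:

```python
import os

def visible_npu_ids(default="0"):
    """Parse ASCEND_RT_VISIBLE_DEVICES into a list of integer card ids."""
    raw = os.environ.get("ASCEND_RT_VISIBLE_DEVICES", default)
    return [int(tok) for tok in raw.split(",") if tok.strip()]

# One shard / one worker per visible card:
# n_workers = len(visible_npu_ids())
```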