# Modal Compute
Use the `modal` CLI for serverless GPU workloads. There is no pod lifecycle to manage: write a decorated Python script and run it.
## Setup

```bash
pip install modal
modal setup
```

## Commands
| Command | Description |
|---|---|
| `modal run script.py` | Run a script on Modal (ephemeral) |
| `modal run --detach script.py` | Run detached (background) |
| `modal deploy script.py` | Deploy persistently |
| `modal serve script.py` | Serve with hot-reload (dev) |
| `modal shell --gpu=A100` | Interactive shell with GPU |
| `modal app list` | List deployed apps |
## GPU types
T4, L4, A10G, L40S, A100, A100-80GB, H100, H200, B200

Multi-GPU: pass `"H100:4"` for 4x H100s.
"H100:4"Script pattern
脚本模板
```python
import modal

app = modal.App("experiment")
image = modal.Image.debian_slim(python_version="3.11").pip_install("torch==2.8.0")

@app.function(gpu="A100", image=image, timeout=600)
def train():
    import torch  # import inside the function so it resolves in the remote image
    # training code here

@app.local_entrypoint()
def main():
    train.remote()
```
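The pattern works because `@app.function(...)` returns a handle object that exposes `.remote()` (and `.local()`) rather than the bare function. A stdlib-only toy sketch of that shape, to show the dispatch mechanics — this is not Modal's implementation, and it runs everything locally:

```python
import functools

class FunctionHandle:
    """Toy stand-in for the object a Modal-style function decorator returns."""

    def __init__(self, fn, **config):
        self._fn = fn
        self.config = config  # e.g. gpu="A100", timeout=600
        functools.update_wrapper(self, fn)

    def local(self, *args, **kwargs):
        # Run in the current process.
        return self._fn(*args, **kwargs)

    def remote(self, *args, **kwargs):
        # Real Modal would serialize this call and run it in a cloud
        # container; the toy version just calls the function directly.
        return self._fn(*args, **kwargs)

def function(**config):
    def decorate(fn):
        return FunctionHandle(fn, **config)
    return decorate

@function(gpu="A100", timeout=600)
def train():
    return "done"
```

Calling `train()` directly would bypass the handle; `train.remote()` is the explicit dispatch point, which is why Modal scripts always invoke functions through it.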
## When to use
- Stateless burst GPU jobs (training, inference, benchmarks)
- No persistent state needed between runs
- Check availability: `command -v modal`
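The same availability check can be done from Python when a launcher script decides whether to dispatch to Modal (a sketch; `run_on_modal` is a hypothetical wrapper, not part of the `modal` package):

```python
import shutil
import subprocess
import sys

def have_modal() -> bool:
    """True if the `modal` CLI is on PATH (equivalent to `command -v modal`)."""
    return shutil.which("modal") is not None

def run_on_modal(script: str) -> None:
    """Dispatch a script via `modal run`, failing fast if the CLI is missing."""
    if not have_modal():
        sys.exit("modal CLI not found: pip install modal && modal setup")
    subprocess.run(["modal", "run", script], check=True)
```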