docker


Docker Sandbox


Run research code inside Docker containers while Feynman stays on the host. The container gets the project files, runs the commands, and results sync back.

When to use


  • The user selects "Docker Sandbox" as the execution environment in `/replicate` or `/autoresearch`
  • Running untrusted code from a paper's repository
  • Experiments that install packages or modify system state
  • Any time the user asks to run something "safely" or "isolated"

How it works


  1. Build or pull an appropriate base image for the research code
  2. Mount the project directory into the container
  3. Run experiment commands inside the container
  4. Results write back to the mounted directory

Running commands in a container


For Python research code (most common):
```bash
docker run --rm -v "$(pwd)":/workspace -w /workspace python:3.11 bash -c "
  pip install -r requirements.txt &&
  python train.py
"
```
For projects with a Dockerfile:
```bash
docker build -t feynman-experiment .
docker run --rm -v "$(pwd)/results":/workspace/results feynman-experiment
```
For GPU workloads:
```bash
docker run --rm --gpus all -v "$(pwd)":/workspace -w /workspace pytorch/pytorch:latest bash -c "
  pip install -r requirements.txt &&
  python train.py
"
```

Choosing the base image


| Research type | Base image |
| --- | --- |
| Python ML/DL | `pytorch/pytorch:latest` or `tensorflow/tensorflow:latest-gpu` |
| Python general | `python:3.11` |
| Node.js | `node:20` |
| R / statistics | `rocker/r-ver:4` |
| Julia | `julia:1.10` |
| Multi-language | `ubuntu:24.04` with manual installs |
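As a sketch of the table above, the choice can be encoded in a small shell helper. The type keys (`python-ml`, `node`, ...) are illustrative assumptions, not identifiers Feynman defines:

```bash
#!/usr/bin/env bash
# Hypothetical helper mirroring the base-image table above.
# The type keys are assumptions chosen for this sketch.
pick_base_image() {
  case "$1" in
    python-ml) echo "pytorch/pytorch:latest" ;;
    python)    echo "python:3.11" ;;
    node)      echo "node:20" ;;
    r)         echo "rocker/r-ver:4" ;;
    julia)     echo "julia:1.10" ;;
    *)         echo "ubuntu:24.04" ;;  # multi-language fallback; install tools manually
  esac
}

pick_base_image python-ml   # prints pytorch/pytorch:latest
```

A lookup like this keeps the image decision in one place if the sandbox setup is ever scripted.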

Persistent containers


For iterative experiments (like `/autoresearch`), create a named container instead of using `--rm`. Choose a descriptive name based on the experiment:
```bash
docker create --name <name> -v "$(pwd)":/workspace -w /workspace python:3.11 tail -f /dev/null
docker start <name>
docker exec <name> bash -c "pip install -r requirements.txt"
docker exec <name> bash -c "python train.py"
```
This preserves installed packages across iterations. Clean up with:
```bash
docker stop <name> && docker rm <name>
```
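The repeated `docker exec` step can be wrapped in a tiny helper. This is a dry-run sketch that only prints the command it would issue; the helper name and container name are assumptions:

```bash
#!/usr/bin/env bash
# Hypothetical wrapper: build the `docker exec` line for a named sandbox.
# Printing instead of executing keeps this a dry run.
sandbox_exec() {
  local name="$1"; shift
  printf 'docker exec %s bash -c "%s"\n' "$name" "$*"
}

sandbox_exec my-experiment "python train.py"
# prints: docker exec my-experiment bash -c "python train.py"
```

Swapping `printf` for the real `docker exec` call turns the dry run into the actual invocation.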

Notes


  • The mounted workspace syncs results back to the host automatically
  • Containers have network access by default; add `--network none` for full isolation
  • For GPU access, Docker must be configured with the NVIDIA Container Toolkit
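To make the isolation flag concrete, here is the earlier `python:3.11` invocation rebuilt with `--network none`, as a dry-run helper that only prints the command (the helper name is an assumption):

```bash
#!/usr/bin/env bash
# Dry-run sketch: print an isolated variant of the earlier python:3.11 run.
# Appending --network none is the only change from the non-isolated form.
build_isolated_run() {
  printf 'docker run --rm --network none -v "%s":/workspace -w /workspace %s bash -c "%s"\n' \
    "$PWD" "$1" "$2"
}

build_isolated_run python:3.11 "python train.py"
```

Note that with networking disabled, `pip install` inside the container will fail, so dependencies must be baked into the image or installed before isolation is applied.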