CorridorKey Green Screen Keying

Skill by ara.so — Daily 2026 Skills collection.
CorridorKey is a neural network that solves the color unmixing problem in green screen footage. For every pixel — including semi-transparent ones from motion blur, hair, or out-of-focus edges — it predicts the true straight (un-premultiplied) foreground color and a clean linear alpha channel. It reads/writes 16-bit and 32-bit EXR files for VFX pipeline integration.
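In standard matting terms, each observed pixel is modeled as C = αF + (1 − α)B; CorridorKey estimates the per-pixel α and the straight foreground color F, rather than producing a hard binary matte.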

How It Works

Two inputs required per frame:
  1. RGB green screen image — sRGB or linear gamma, sRGB/REC709 gamut
  2. Alpha Hint — rough coarse B&W mask (doesn't need to be precise)
The model fills in fine detail from the hint; it's trained on blurry/eroded masks.

Installation

Prerequisites

  • uv package manager (handles Python automatically)
  • NVIDIA GPU with CUDA 12.8+ drivers (for GPU), or Apple M1+ (for MLX), or CPU fallback

Windows

Double-click or run from terminal:

```bat
Install_CorridorKey_Windows.bat
```

Optional heavy modules:

```bat
Install_GVM_Windows.bat
Install_VideoMaMa_Windows.bat
```

Linux / macOS

Install uv:
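The command itself was omitted above; uv's standard installer one-liner (per astral.sh; verify against uv's own docs) is:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```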

Install dependencies — pick one:

```bash
uv sync               # CPU / Apple MPS (universal)
uv sync --extra cuda  # NVIDIA GPU (Linux/Windows)
uv sync --extra mlx   # Apple Silicon MLX
```

Download the required model (~300MB):

```bash
mkdir -p CorridorKeyModule/checkpoints
```

Place the downloaded CorridorKey_v1.0.pth as:

```
CorridorKeyModule/checkpoints/CorridorKey.pth
```

Model download: https://huggingface.co/nikopueringer/CorridorKey_v1.0/resolve/main/CorridorKey_v1.0.pth
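Equivalently, one way to fetch it directly into place (assuming curl is available):

```bash
curl -L -o CorridorKeyModule/checkpoints/CorridorKey.pth \
  "https://huggingface.co/nikopueringer/CorridorKey_v1.0/resolve/main/CorridorKey_v1.0.pth"
```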

Optional Alpha Hint Generators

GVM (automatic, ~80GB VRAM, good for people):

```bash
uv run hf download geyongtao/gvm --local-dir gvm_core/weights
```

VideoMaMa (requires mask hint, <24GB VRAM with community tweaks):

```bash
uv run hf download SammyLim/VideoMaMa \
  --local-dir VideoMaMaInferenceModule/checkpoints/VideoMaMa
uv run hf download stabilityai/stable-video-diffusion-img2vid-xt \
  --local-dir VideoMaMaInferenceModule/checkpoints/stable-video-diffusion-img2vid-xt \
  --include "feature_extractor/" "image_encoder/" "vae/*" "model_index.json"
```

Key CLI Commands

Run inference on prepared clips:

```bash
uv run python main.py run_inference --device cuda
uv run python main.py run_inference --device cpu
uv run python main.py run_inference --device mps   # Apple Silicon
```

List available clips/shots:

```bash
uv run python main.py list
```

Interactive setup wizard:

```bash
uv run python main.py wizard
uv run python main.py wizard --win_path /path/to/ClipsForInference
```

Docker (Linux + NVIDIA GPU)

Build:

```bash
docker build -t corridorkey:latest .
```

Run inference:

```bash
docker run --rm -it --gpus all \
  -e OPENCV_IO_ENABLE_OPENEXR=1 \
  -v "$(pwd)/ClipsForInference:/app/ClipsForInference" \
  -v "$(pwd)/Output:/app/Output" \
  -v "$(pwd)/CorridorKeyModule/checkpoints:/app/CorridorKeyModule/checkpoints" \
  corridorkey:latest run_inference --device cuda
```

Docker Compose:

```bash
docker compose build
docker compose --profile gpu run --rm corridorkey run_inference --device cuda
docker compose --profile gpu run --rm corridorkey list
```

Pin to a specific GPU on multi-GPU systems:

```bash
NVIDIA_VISIBLE_DEVICES=0 docker compose --profile gpu run --rm corridorkey run_inference --device cuda
```

Directory Structure

```
CorridorKey/
├── ClipsForInference/          # Input shots go here
│   └── my_shot/
│       ├── frames/             # Green screen RGB frames (PNG/EXR)
│       ├── alpha_hints/        # Coarse alpha masks (grayscale)
│       └── VideoMamaMaskHint/  # Optional: hand-drawn hints for VideoMaMa
├── Output/                     # Processed results
│   └── my_shot/
│       ├── foreground/         # Straight RGBA EXR frames
│       └── alpha/              # Linear alpha channel frames
├── CorridorKeyModule/
│   └── checkpoints/
│       └── CorridorKey.pth     # Required model weights
├── gvm_core/weights/           # Optional GVM weights
└── VideoMaMaInferenceModule/
    └── checkpoints/            # Optional VideoMaMa weights
```

Python Usage Examples

Basic Inference Pipeline

```python
import torch
from pathlib import Path
from CorridorKeyModule.model import CorridorKeyModel  # adjust to actual module path
from CorridorKeyModule.inference import run_inference

# Load model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CorridorKeyModel()
model.load_state_dict(torch.load("CorridorKeyModule/checkpoints/CorridorKey.pth"))
model.to(device)
model.eval()

# Run inference on a shot folder
run_inference(
    shot_dir=Path("ClipsForInference/my_shot"),
    output_dir=Path("Output/my_shot"),
    device=device,
)
```

Reading/Writing EXR Files

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2

import cv2
import numpy as np

# Read a 32-bit linear EXR frame
frame = cv2.imread("frame_0001.exr", cv2.IMREAD_UNCHANGED | cv2.IMREAD_ANYCOLOR)
# frame is float32, linear light, BGR channel order

# Convert BGR -> RGB for processing
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

# Write output EXR (straight RGBA)
# Assume `foreground` is float32 HxWx4 (RGBA, linear, straight alpha)
foreground_bgra = cv2.cvtColor(foreground, cv2.COLOR_RGBA2BGRA)
cv2.imwrite("output_0001.exr", foreground_bgra.astype(np.float32))
```

Generating a Coarse Alpha Hint with OpenCV

```python
import cv2
import numpy as np

def generate_chroma_key_hint(image_bgr: np.ndarray, erode_px: int = 5) -> np.ndarray:
    """
    Quick-and-dirty green screen hint for CorridorKey input.
    Returns grayscale mask (0=background, 255=foreground).
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

    # Tune these ranges for your specific green screen
    lower_green = np.array([35, 50, 50])
    upper_green = np.array([85, 255, 255])

    green_mask = cv2.inRange(hsv, lower_green, upper_green)
    foreground_mask = cv2.bitwise_not(green_mask)

    # Erode to pull mask away from edges (CorridorKey handles edge detail)
    kernel = np.ones((erode_px, erode_px), np.uint8)
    eroded = cv2.erode(foreground_mask, kernel, iterations=2)

    # Optional: slight blur to soften hint
    blurred = cv2.GaussianBlur(eroded, (15, 15), 5)
    return blurred
```

Usage:

```python
frame = cv2.imread("greenscreen_frame.png")
hint = generate_chroma_key_hint(frame, erode_px=8)
cv2.imwrite("alpha_hint.png", hint)
```

Batch Processing Frames

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2

from pathlib import Path
import cv2
import numpy as np

def prepare_shot_folder(
    raw_frames_dir: Path,
    output_shot_dir: Path,
    hint_generator_fn=None,
):
    """
    Prepares a CorridorKey shot folder from raw green screen frames.
    """
    frames_out = output_shot_dir / "frames"
    hints_out = output_shot_dir / "alpha_hints"
    frames_out.mkdir(parents=True, exist_ok=True)
    hints_out.mkdir(parents=True, exist_ok=True)

    frame_paths = sorted(raw_frames_dir.glob("*.png")) + \
                  sorted(raw_frames_dir.glob("*.exr"))

    for frame_path in frame_paths:
        frame = cv2.imread(str(frame_path), cv2.IMREAD_UNCHANGED | cv2.IMREAD_ANYCOLOR)

        # Copy frame
        cv2.imwrite(str(frames_out / frame_path.name), frame)

        # Generate hint (generate_chroma_key_hint is defined in the section above)
        if hint_generator_fn:
            hint = hint_generator_fn(frame)
        else:
            hint = generate_chroma_key_hint(frame)

        hint_name = frame_path.stem + ".png"
        cv2.imwrite(str(hints_out / hint_name), hint)

    print(f"Prepared {len(frame_paths)} frames in {output_shot_dir}")


prepare_shot_folder(
    raw_frames_dir=Path("raw_footage/shot_01"),
    output_shot_dir=Path("ClipsForInference/shot_01"),
)
```

Using clip_manager.py Alpha Hint Generators

GVM (automatic — no extra input needed):

```python
from clip_manager import generate_alpha_hints_gvm

generate_alpha_hints_gvm(shot_dir="ClipsForInference/my_shot", device="cuda")
```

VideoMaMa (place a rough mask in VideoMamaMaskHint/ first):

```python
from clip_manager import generate_alpha_hints_videomama

generate_alpha_hints_videomama(shot_dir="ClipsForInference/my_shot", device="cuda")
```

BiRefNet (lightweight option, no large VRAM needed):

```python
from clip_manager import generate_alpha_hints_birefnet

generate_alpha_hints_birefnet(shot_dir="ClipsForInference/my_shot", device="cuda")
```

Alpha Hint Best Practices

GOOD: An eroded, slightly blurry hint that pulls away from the edges; the model fills in edge detail from the hint:

```python
kernel = np.ones((10, 10), np.uint8)
good_hint = cv2.erode(raw_mask, kernel, iterations=3)
good_hint = cv2.GaussianBlur(good_hint, (21, 21), 7)
```

BAD: An expanded/dilated hint; the model is worse at subtracting, so don't push the mask outward past the true subject boundary:

```python
bad_hint = cv2.dilate(raw_mask, kernel, iterations=3)  # avoid this
```

ACCEPTABLE: A rough binary chroma key as-is; even a hard binary mask works, just not an expanded one:

```python
acceptable_hint = raw_chroma_key_mask  # no dilation
```

Output Integration (Nuke / Fusion / Resolve)

CorridorKey outputs straight (un-premultiplied) RGBA EXRs in linear light:

  • In Nuke: read as EXR, set colorspace to "linear"
  • The alpha is already clean — no need for an Unpremult node
  • Connect straight to a Merge (over) node with your background plate

Verify output is straight alpha (not premultiplied):

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # before importing cv2

import cv2
import numpy as np

result = cv2.imread(
    "Output/shot_01/foreground/frame_0001.exr",
    cv2.IMREAD_UNCHANGED | cv2.IMREAD_ANYCOLOR,
)

# result[..., 3]  = alpha channel (linear 0.0-1.0)
# result[..., :3] = straight color (not multiplied by alpha)

# Check a semi-transparent pixel
h, w = result.shape[:2]
sample_alpha = result[h // 2, w // 2, 3]
sample_color = result[h // 2, w // 2, :3]
print(f"Alpha: {sample_alpha:.3f}, Color: {sample_color}")

# Color values should be full-strength even where alpha < 1.0 (straight alpha)
```
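As a quick sanity comp in Python (a sketch, not part of the CLI: result is the RGBA frame loaded above, and background stands in for a hypothetical float32 HxWx3 linear plate of the same size), the straight-alpha "over" operation is:

```python
# over: comp = fg * alpha + bg * (1 - alpha), everything in linear light
alpha = result[..., 3:4]                    # keep the last axis for broadcasting
comp = result[..., :3] * alpha + background * (1.0 - alpha)
```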

Troubleshooting

CUDA not detected / falling back to CPU

Check the CUDA version requirement — the driver must support CUDA 12.8+:

```bash
nvidia-smi   # shows max supported CUDA version
```

Reinstall with the explicit CUDA extra:

```bash
uv sync --extra cuda
```

Verify PyTorch sees the GPU:

```bash
uv run python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
```

OpenEXR read/write fails

The environment variable must be set before importing cv2:

```bash
export OPENCV_IO_ENABLE_OPENEXR=1
uv run python your_script.py
```

Or set it in Python (must come BEFORE import cv2):

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"
import cv2
```

Out of VRAM

Use the CPU fallback:

```bash
uv run python main.py run_inference --device cpu
```

Or reduce the batch size / use tiled inference if supported. The engine dynamically scales to 2048x2048 tiles; for 4K, ensure at least 6-8GB of VRAM.

On Apple Silicon, use MPS:

```bash
uv run python main.py run_inference --device mps
```

Model file not found

Verify the exact filename and location:

```bash
ls CorridorKeyModule/checkpoints/
```

The file must be named exactly CorridorKey.pth, not CorridorKey_v1.0.pth:

```bash
mv CorridorKeyModule/checkpoints/CorridorKey_v1.0.pth \
   CorridorKeyModule/checkpoints/CorridorKey.pth
```

Docker GPU passthrough fails

Test the NVIDIA container toolkit:

```bash
docker run --rm --gpus all nvidia/cuda:12.6.3-runtime-ubuntu22.04 nvidia-smi
```

If it fails, install/reconfigure nvidia-container-toolkit.
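On Debian/Ubuntu that typically looks like the following (commands per NVIDIA's container toolkit docs; adjust the package manager for your distribution):

```bash
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
```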

Then restart the Docker daemon:

```bash
sudo systemctl restart docker
```

Poor keying results

  • Hint too expanded: Erode your alpha hint more — CorridorKey is better at adding edge detail than removing unwanted mask area
  • Wrong color space: Ensure input is sRGB/REC709 gamut; don't pass log-encoded footage directly
  • Green spill: The model handles color unmixing, but extreme green spill in source may degrade results; consider a despill pass before inference (a minimal sketch follows this list)
  • Static subjects: GVM works best on people; try VideoMaMa with a hand-drawn hint for props/objects
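A despill pass isn't part of CorridorKey itself; as a minimal sketch, the classic "limit green to the average of red and blue" heuristic looks like this (float RGB input assumed; tune or swap the method per shot):

```python
import numpy as np

def despill_green_average(rgb: np.ndarray) -> np.ndarray:
    """Clamp the green channel to the mean of red and blue wherever it exceeds it."""
    out = rgb.astype(np.float32).copy()
    limit = (out[..., 0] + out[..., 2]) / 2.0   # average of R and B
    out[..., 1] = np.minimum(out[..., 1], limit)
    return out
```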

Community & Resources
