typegpu

TypeGPU / WebGPU for HyperFrames

HyperFrames supports TypeGPU and raw WebGPU through its typegpu runtime adapter. The adapter does not own your pipeline. It publishes HyperFrames time and dispatches a seek event so your composition can render the exact GPU frame.

Contract

  • Initialize WebGPU asynchronously (await navigator.gpu.requestAdapter()), but register all GSAP tweens synchronously — before any await. The HyperFrames player reads the timeline immediately at page load.
  • Render from HyperFrames time, not performance.now().
  • Listen for the hf-seek event and re-render at exactly that time.
  • Guard against environments where WebGPU is unavailable — the adapter does not check for you.
  • For video renders, call await device.queue.onSubmittedWorkDone() after submitting GPU work to ensure the canvas is flushed before the frame is captured.
The adapter sets window.__hfTypegpuTime and dispatches new CustomEvent("hf-seek", { detail: { time } }) on each seek.

Basic Pattern

html
<canvas id="gpu-layer"></canvas>
<script>
  (async () => {
    if (!navigator.gpu) return;
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) return;
    const device = await adapter.requestDevice();
    const canvas = document.getElementById("gpu-layer");
    canvas.width = 1920;
    canvas.height = 1080;
    const ctx = canvas.getContext("webgpu");
    const fmt = navigator.gpu.getPreferredCanvasFormat();
    ctx.configure({ device, format: fmt, alphaMode: "opaque" });

    // Build your pipeline, buffers, bind groups here (the `pipeline` and
    // `bindGroup` used below come from this step)...
    const timeUniform = new Float32Array([0]);
    const timeBuf = device.createBuffer({
      size: 16, // padded to 16 bytes for uniform-buffer alignment
      usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
    });

    function render(t) {
      timeUniform[0] = t;
      device.queue.writeBuffer(timeBuf, 0, timeUniform);
      const enc = device.createCommandEncoder();
      const pass = enc.beginRenderPass({
        colorAttachments: [
          {
            view: ctx.getCurrentTexture().createView(),
            loadOp: "clear",
            clearValue: { r: 0, g: 0, b: 0, a: 1 },
            storeOp: "store",
          },
        ],
      });
      pass.setPipeline(pipeline);
      pass.setBindGroup(0, bindGroup);
      pass.draw(3);
      pass.end();
      device.queue.submit([enc.finish()]);
    }

    render(0);
    window.addEventListener("hf-seek", (e) => render(e.detail.time));
  })();
</script>

Timeline Registration

GSAP tweens that drive text, captions, or HTML elements must be registered synchronously — before any await:
js
const tl = gsap.timeline({ paused: true });

// Caption tweens: synchronous, added before WebGPU init
gsap.set(".cap", { opacity: 0 });
tl.to("#cap-1", { opacity: 1, duration: 0.3 }, 1.0);
tl.to("#cap-1", { opacity: 0, duration: 0.2 }, 3.5);

window.__timelines["my-comp"] = tl;

// GPU-dependent tweens can go inside the async IIFE
(async () => {
  // ... WebGPU init ...
  const proxy = { value: 0 };
  tl.to(proxy, { value: 1, duration: 2, onUpdate: render }, 0.5);
})();
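Because registration is synchronous and playback is driven by seeks, a tween's state is a pure function of timeline time. A dependency-free sketch of that evaluation (linear easing only; `start`/`duration` mirror the (position, duration) pairs passed to `tl.to()` above, and all names here are hypothetical):

```javascript
// Evaluate a tween's value at absolute timeline time t (linear easing).
function tweenValueAt(t, { start, duration, from, to }) {
  const p = Math.min(1, Math.max(0, (t - start) / duration)); // clamp progress to [0, 1]
  return from + (to - from) * p;
}
```

For example, the caption fade added at position 1.0 with duration 0.3 is about halfway done at t = 1.15, no matter how the player arrived at that time.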

Video-Backed Effects (Liquid Glass, Distortion)

To use a <video> as the GPU input texture:
js
const videoEl = document.getElementById("aroll");

// Wait for video metadata before creating the texture
await new Promise((r) => {
  if (videoEl.readyState >= 1) r();
  else videoEl.addEventListener("loadedmetadata", r, { once: true });
});

// Create texture at the video's NATIVE resolution
const vw = videoEl.videoWidth,
  vh = videoEl.videoHeight;
const bgTex = device.createTexture({
  size: [vw, vh],
  format: "rgba8unorm",
  usage:
    GPUTextureUsage.COPY_DST | GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.RENDER_ATTACHMENT,
});

function render(t) {
  try {
    device.queue.copyExternalImageToTexture({ source: videoEl }, { texture: bgTex }, [vw, vh]);
  } catch (_) {
    /* frame not decoded yet */
  }
  // ... draw ...
}
Render-mode caveat: headless Chrome may fail copyExternalImageToTexture for video elements. For production renders, pre-extract key frames via FFmpeg as PNGs and load them as image textures instead.
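When frames are pre-extracted as PNGs, the renderer needs a deterministic mapping from HyperFrames time to a frame file. A sketch under stated assumptions (a fixed extraction fps and FFmpeg's 1-based %04d numbering; the file-name pattern and helper are hypothetical):

```javascript
// Pick the pre-extracted PNG nearest to HyperFrames time t, clamped to range.
// Assumes FFmpeg output named frame-0001.png, frame-0002.png, ... at a fixed fps.
function frameFileFor(t, fps, frameCount) {
  const n = Math.min(frameCount, Math.max(1, Math.round(t * fps) + 1));
  return `frame-${String(n).padStart(4, "0")}.png`;
}
```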

Frosted Blur via Downsample Pass

A single-pass Gaussian kernel is too weak for glass-like frosted blur. Use a two-pass approach:
  1. Pass 1 — Downsample: render the full-res texture to a small texture (1/6 resolution). Bilinear filtering during the downsample naturally averages pixels.
  2. Pass 2 — Glass composite: sample the small texture for the frosted interior (bilinear upscale = heavy smooth blur) and the full-res texture for sharp areas and chromatic refraction.
This matches TypeGPU's
textureSampleBias
mip-level approach without generating mipmaps.
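The only sizing decision in pass 1 is the downsample target. A minimal sketch of that computation (the 1/6 factor comes from the text above; the clamping to 1 texel is an added safety assumption):

```javascript
// Size of the downsample target at 1/factor of the source resolution.
function downsampleSize(w, h, factor = 6) {
  return [Math.max(1, Math.round(w / factor)), Math.max(1, Math.round(h / factor))];
}
```

At the 1920x1080 canvas used earlier this yields a 320x180 target, small enough that bilinear upscaling in pass 2 reads as a heavy blur.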

Transparent vs Opaque Canvas

  • alphaMode: 'opaque' — the GPU canvas renders the full frame (video + effect). Use when the GPU pipeline handles all visual content.
  • alphaMode: 'premultiplied' — the GPU canvas is transparent where alpha = 0, letting HTML elements below show through. Use for overlays (particles, path animations) on top of a regular <video> element.
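One gotcha with 'premultiplied': the fragment shader must output color channels already multiplied by alpha, or overlay edges will fringe bright. The same convention, sketched CPU-side for reference:

```javascript
// Premultiply an RGBA color: 'premultiplied' canvases expect rgb scaled by a.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}
```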

WGSL Full-Screen Triangle

The standard vertex shader for full-screen effects (no vertex buffer needed):
wgsl
struct Vo { @builtin(position) pos: vec4f, @location(0) uv: vec2f }

@vertex fn vs(@builtin(vertex_index) vi: u32) -> Vo {
  let ps = array<vec2f, 3>(vec2f(-1., -1.), vec2f(3., -1.), vec2f(-1., 3.));
  let ts = array<vec2f, 3>(vec2f(0., 1.), vec2f(2., 1.), vec2f(0., -1.));
  return Vo(vec4f(ps[vi], 0., 1.), ts[vi]);
}
Draw with pass.draw(3) — one triangle that covers the viewport.
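Why three vertices suffice: the oversized triangle contains the entire clip-space square [-1, 1]². A quick CPU-side check using barycentric coordinates of that specific triangle (helper name hypothetical):

```javascript
// True when clip-space point (x, y) lies inside the triangle
// (-1,-1), (3,-1), (-1,3) emitted by the vertex shader above.
function insideBigTriangle(x, y) {
  const u = (x + 1) / 4; // barycentric weight toward (3, -1)
  const v = (y + 1) / 4; // barycentric weight toward (-1, 3)
  return u >= 0 && v >= 0 && u + v <= 1;
}
```

All four clip-space corners satisfy the test, so every fragment of the viewport is rasterized in a single draw.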

Rounded-Rect SDF (Liquid Glass Pill)

wgsl
fn sdf_box(p: vec2f, half_size: vec2f, corner_radius: f32) -> f32 {
  let d = abs(p) - half_size + vec2f(corner_radius);
  return length(max(d, vec2f(0.))) + min(max(d.x, d.y), 0.) - corner_radius;
}
Use this to define inside/ring/outside zones for glass effects. Negative values are inside the shape.
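A direct CPU-side port of sdf_box is handy for hit-testing or for sizing the pill before touching the shader. This is a JS translation of the WGSL above, with the vec2 unrolled into scalars:

```javascript
// JS port of the WGSL sdf_box above. Negative inside, zero on the edge.
function sdfBox(px, py, halfW, halfH, r) {
  const dx = Math.abs(px) - halfW + r;
  const dy = Math.abs(py) - halfH + r;
  const outside = Math.hypot(Math.max(dx, 0), Math.max(dy, 0));
  return outside + Math.min(Math.max(dx, dy), 0) - r;
}
```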

Deterministic Rendering

  • No Math.random() — use a seeded PRNG.
  • No requestAnimationFrame for the render loop — render only in response to hf-seek.
  • No performance.now() for animation time — read window.__hfTypegpuTime or e.detail.time.
  • After GPU submit, call await device.queue.onSubmittedWorkDone() for render-mode frame capture.
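For the first bullet, any small seeded PRNG works; mulberry32 is a common 32-bit choice, shown here as an example rather than something the adapter provides:

```javascript
// mulberry32: a tiny seeded PRNG. Same seed, same sequence, every render.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}
```

Seed per composition (or per particle system) so every hf-seek replay produces pixel-identical visuals.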