TypeGPU / WebGPU for HyperFrames
HyperFrames supports TypeGPU and raw WebGPU through its runtime adapter. The adapter does not own your pipeline. It publishes HyperFrames time and dispatches a seek event so your composition can render the exact GPU frame.
Contract
- Initialize WebGPU asynchronously (`await navigator.gpu.requestAdapter()`), but register all GSAP tweens synchronously — before any `await`. The HyperFrames player reads the timeline immediately at page load.
- Render from HyperFrames time, not `performance.now()`.
- Listen for the `hf-seek` event and re-render at exactly that time.
- Guard against environments where WebGPU is unavailable — the adapter does not check for you.
- For video renders, call `await device.queue.onSubmittedWorkDone()` after submitting GPU work to ensure the canvas is flushed before the frame is captured.

The adapter sets `window.__hfTypegpuTime` and dispatches `new CustomEvent("hf-seek", { detail: { time } })` on each seek.
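For reference, the dispatch side amounts to something like this (a minimal sketch of the behavior described above; the function name is hypothetical, and the real adapter ships with HyperFrames — you do not write it yourself):

```js
// Illustrative only: what the runtime adapter does on each seek.
function publishSeek(time) {
  // Pull path: consumers can read the current time at any moment.
  window.__hfTypegpuTime = time;
  // Push path: consumers re-render exactly at this time.
  window.dispatchEvent(new CustomEvent("hf-seek", { detail: { time } }));
}
```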
Basic Pattern
```html
<canvas id="gpu-layer"></canvas>
<script>
  (async () => {
    if (!navigator.gpu) return;
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) return;
    const device = await adapter.requestDevice();

    const canvas = document.getElementById("gpu-layer");
    canvas.width = 1920;
    canvas.height = 1080;
    const ctx = canvas.getContext("webgpu");
    const fmt = navigator.gpu.getPreferredCanvasFormat();
    ctx.configure({ device, format: fmt, alphaMode: "opaque" });

    // Build your pipeline, buffers, bind groups...

    const timeUniform = new Float32Array([0]);
    const timeBuf = device.createBuffer({
      size: 16,
      usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
    });

    function render(t) {
      timeUniform[0] = t;
      device.queue.writeBuffer(timeBuf, 0, timeUniform);
      const enc = device.createCommandEncoder();
      const pass = enc.beginRenderPass({
        colorAttachments: [
          {
            view: ctx.getCurrentTexture().createView(),
            loadOp: "clear",
            clearValue: { r: 0, g: 0, b: 0, a: 1 },
            storeOp: "store",
          },
        ],
      });
      pass.setPipeline(pipeline); // created in the elided setup above
      pass.setBindGroup(0, bindGroup); // created in the elided setup above
      pass.draw(3);
      pass.end();
      device.queue.submit([enc.finish()]);
    }

    render(0);
    window.addEventListener("hf-seek", (e) => render(e.detail.time));
  })();
</script>
```
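The elided setup step could look like this (a sketch, not a prescribed API: `shaderCode`, the `vs`/`fs` entry points, and the single-binding layout are assumptions that line up with the `timeBuf` above and the WGSL full-screen triangle shown later):

```js
// Minimal full-screen pipeline wired to the time uniform.
const module = device.createShaderModule({ code: shaderCode });
const pipeline = device.createRenderPipeline({
  layout: "auto",
  vertex: { module, entryPoint: "vs" }, // no vertex buffers needed
  fragment: { module, entryPoint: "fs", targets: [{ format: fmt }] },
  primitive: { topology: "triangle-list" },
});
const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer: timeBuf } }],
});
```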
Timeline Registration
GSAP tweens that drive text, captions, or HTML elements must be registered synchronously — before any `await`:

```js
const tl = gsap.timeline({ paused: true });

// Caption tweens: synchronous, added before WebGPU init
gsap.set(".cap", { opacity: 0 });
tl.to("#cap-1", { opacity: 1, duration: 0.3 }, 1.0);
tl.to("#cap-1", { opacity: 0, duration: 0.2 }, 3.5);
window.__timelines["my-comp"] = tl;

// GPU-dependent tweens can go inside the async IIFE
(async () => {
  // ... WebGPU init ...
  const proxy = { value: 0 };
  tl.to(proxy, { value: 1, duration: 2, onUpdate: render }, 0.5);
})();
```
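To sanity-check the registration in a browser, you can scrub the paused timeline yourself on each seek (a local-preview sketch; in render mode the HyperFrames player drives the registered timeline for you):

```js
// suppressEvents=false so onUpdate callbacks (e.g. render) still fire.
window.addEventListener("hf-seek", (e) => tl.seek(e.detail.time, false));
```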
Video-Backed Effects (Liquid Glass, Distortion)
To use a `<video>` element as the GPU input texture:

```js
const videoEl = document.getElementById("aroll");

// Wait for video metadata before creating the texture
await new Promise((r) => {
  if (videoEl.readyState >= 1) r();
  else videoEl.addEventListener("loadedmetadata", r, { once: true });
});

// Create texture at the video's NATIVE resolution
const vw = videoEl.videoWidth,
  vh = videoEl.videoHeight;
const bgTex = device.createTexture({
  size: [vw, vh],
  format: "rgba8unorm",
  usage:
    GPUTextureUsage.COPY_DST |
    GPUTextureUsage.TEXTURE_BINDING |
    GPUTextureUsage.RENDER_ATTACHMENT,
});

function render(t) {
  try {
    device.queue.copyExternalImageToTexture({ source: videoEl }, { texture: bgTex }, [vw, vh]);
  } catch (_) {
    /* frame not decoded yet */
  }
  // ... draw ...
}
```

Render-mode caveat: `copyExternalImageToTexture` may fail for video elements in headless Chrome. For production renders, pre-extract key frames via FFmpeg as PNGs and load them as image textures instead.
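A sketch of that fallback (the FFmpeg command and frame path are examples; `createImageBitmap` and `copyExternalImageToTexture` with an `ImageBitmap` source are standard APIs):

```js
// Frames pre-extracted with e.g.: ffmpeg -i aroll.mp4 frames/%04d.png
async function loadFrameTexture(url) {
  const bitmap = await createImageBitmap(await (await fetch(url)).blob());
  // Upload into the same texture the video would have filled.
  device.queue.copyExternalImageToTexture(
    { source: bitmap },
    { texture: bgTex },
    [bitmap.width, bitmap.height]
  );
  bitmap.close();
}
```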
Frosted Blur via Downsample Pass
A single-pass Gaussian kernel is too weak for glass-like frosted blur. Use a two-pass approach (a setup sketch follows this list):

- Pass 1 — Downsample: render the full-res texture into a small texture (1/6 resolution). Bilinear filtering during the downsample naturally averages pixels.
- Pass 2 — Glass composite: sample the small texture for the frosted interior (bilinear upscale = heavy smooth blur) and the full-res texture for sharp areas and chromatic refraction.

This matches TypeGPU's `textureSampleBias` mip-level approach without generating mipmaps.
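In the sketch below, `downsamplePipeline` and `downsampleBindGroup` are hypothetical names for a full-screen-triangle pipeline that samples the full-res texture; the essentials are the 1/6-size target and the bilinear sampler:

```js
// Pass 1 target: 1/6 of the 1920x1080 frame.
const smallTex = device.createTexture({
  size: [Math.ceil(1920 / 6), Math.ceil(1080 / 6)],
  format: fmt,
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});
// Bilinear filtering averages pixels on the way down (pass 1)
// and smooths heavily on the bilinear upscale (pass 2).
const linearSampler = device.createSampler({
  magFilter: "linear",
  minFilter: "linear",
});

function downsamplePass(enc) {
  const pass = enc.beginRenderPass({
    colorAttachments: [{
      view: smallTex.createView(),
      loadOp: "clear",
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      storeOp: "store",
    }],
  });
  pass.setPipeline(downsamplePipeline); // hypothetical
  pass.setBindGroup(0, downsampleBindGroup); // hypothetical
  pass.draw(3); // full-screen triangle
  pass.end();
}
```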
Transparent vs Opaque Canvas
- `alphaMode: 'opaque'` — the GPU canvas renders the full frame (video + effect). Use when the GPU pipeline handles all visual content.
- `alphaMode: 'premultiplied'` — the GPU canvas is transparent where alpha = 0, letting HTML elements below show through. Use for overlays (particles, path animations) on top of a regular `<video>` element. A configuration sketch follows this list.
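For the overlay case, the only changes from the basic pattern are the `alphaMode` and a fully transparent clear. Note that `'premultiplied'` means the compositor expects premultiplied colors, so a fragment shader should output `rgb * a`:

```js
ctx.configure({ device, format: fmt, alphaMode: "premultiplied" });
// Clear to transparent so HTML below (e.g. the <video>) shows through.
const clearValue = { r: 0, g: 0, b: 0, a: 0 };
```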
WGSL Full-Screen Triangle
The standard vertex shader for full-screen effects (no vertex buffer needed):
```wgsl
struct Vo { @builtin(position) pos: vec4f, @location(0) uv: vec2f }

@vertex fn vs(@builtin(vertex_index) vi: u32) -> Vo {
  let ps = array<vec2f, 3>(vec2f(-1., -1.), vec2f(3., -1.), vec2f(-1., 3.));
  let ts = array<vec2f, 3>(vec2f(0., 1.), vec2f(2., 1.), vec2f(0., -1.));
  return Vo(vec4f(ps[vi], 0., 1.), ts[vi]);
}
```

Draw with `pass.draw(3)` — one triangle that covers the viewport.
Rounded-Rect SDF (Liquid Glass Pill)
```wgsl
fn sdf_box(p: vec2f, half_size: vec2f, corner_radius: f32) -> f32 {
  let d = abs(p) - half_size + vec2f(corner_radius);
  return length(max(d, vec2f(0.))) + min(max(d.x, d.y), 0.) - corner_radius;
}
```

Use this to define inside/ring/outside zones for glass effects. Negative values are inside the shape.
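On the JavaScript side, the pill geometry can ride in a uniform buffer just like the time uniform (a sketch; the field order and names are hypothetical):

```js
// center.xy, half_size.xy, corner_radius + 3 floats of padding
// (uniform data is padded to 16-byte alignment).
const pill = new Float32Array([960, 540, 300, 80, 80, 0, 0, 0]);
const pillBuf = device.createBuffer({
  size: pill.byteLength,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(pillBuf, 0, pill);
```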
Deterministic Rendering
- No `Math.random()` — use a seeded PRNG (see the sketch after this list).
- No `requestAnimationFrame` for the render loop — render only in response to `hf-seek`.
- No `performance.now()` for animation time — read `e.detail.time` or `window.__hfTypegpuTime`.
- After GPU submit, call `await device.queue.onSubmittedWorkDone()` for render-mode frame capture.
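Any deterministic generator works; mulberry32 is one compact option (shown as an example, not a HyperFrames requirement):

```js
// mulberry32: tiny seeded PRNG returning floats in [0, 1).
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const rand = mulberry32(42); // same seed => identical frames every render
```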