Compressing JavaScript
Compress your JavaScript and keep an eye on your chunk sizes for optimal performance. Very high JavaScript bundle granularity can help with deduplication and caching, but suffers from poorer compression and slower loading in the 50-100 chunk range (due to browser processes, cache checks, etc.). Ultimately, pick the compression strategy that works best for you.
JavaScript is the second biggest contributor to page size and the second most requested web resource on the internet after images. We use patterns that reduce the transfer, load, and execution time for JavaScript to improve website performance. Compression can help reduce the time needed to transfer scripts over the network.
When to Use
- Use this when you need to reduce JavaScript payload sizes for faster page loads
- This is helpful when optimizing network transfer times, especially for users on slower connections
- Use this alongside minification, code-splitting, and caching strategies
Instructions
- Prefer Brotli compression over Gzip for better compression ratios at similar speed
- Use static compression for assets that don't change frequently and dynamic compression for frequently changing content
- Enable compression at the server or CDN level (e.g., Nginx, Vercel, Netlify)
- Always minify JavaScript before applying compression
- Be mindful of the granularity trade-off: larger bundles compress better, but smaller chunks cache better
Details
You can combine compression with other techniques such as minification, code-splitting, bundling, caching, and lazy-loading to reduce the performance impact of large amounts of JavaScript. However, the goals of these techniques can sometimes be at odds with each other. This section explores JavaScript compression techniques and discusses the nuances you should consider when deciding on a code-splitting and compression strategy.
- Gzip and Brotli are the most common ways to compress JavaScript and are widely supported by modern browsers.
- Brotli offers a better compression ratio at similar compression levels.
- Next.js provides Gzip compression by default but recommends enabling it on an HTTP proxy like Nginx.
- If you use Webpack to bundle your code, you can use the CompressionPlugin for Gzip compression or the BrotliWebpackPlugin for Brotli compression.
- Oyo saw a 15-20% reduction, and Wix saw a 21-25% reduction in file sizes after switching to Brotli compression instead of Gzip.
- compress(a + b) <= compress(a) + compress(b) - A single large bundle will give better compression than multiple smaller ones. This causes the granularity trade-off where de-duplication and caching are at odds with browser performance and compression. Granular chunking can help deal with this trade-off.
HTTP compression
Compression reduces the size of documents and files, so they take up less disk space than the originals. Smaller documents consume lower bandwidth and can be transferred over a network quickly. HTTP compression uses this simple concept to compress website content, reduce page weights, lower bandwidth requirement, and improve performance.
HTTP data compression may be categorized in different ways. One of them is lossy vs. lossless.
Lossy compression implies that the compression-decompression cycle results in a slightly altered document while retaining its usability. The change is mostly imperceptible to the end-user. The most common example of lossy compression is JPEG compression for images.
With lossless compression, the data recovered after compression and subsequent decompression matches the original exactly. PNG images are an example of lossless compression. Lossless compression is relevant to text transfers and should be applied to text-based formats such as HTML, CSS, and JavaScript.
Since the browser needs every byte of valid JS code intact, you should use lossless compression algorithms for JavaScript. Before compressing the JS, minification helps eliminate unnecessary syntax and reduces the file to only the code required for execution.
Minification
To reduce payload sizes, you can minify JavaScript before compression. Minification complements compression by removing whitespace and any unnecessary code to create a smaller but perfectly valid code file. When writing code, we use line breaks, indentation, spaces, well-named variables, and comments to improve code readability and maintainability. However, these elements contribute to the overall JavaScript size and are not necessary for execution on the browser. Minification reduces the JavaScript code to the minimum required for successful execution.
Minification is a standard practice for JS and CSS optimization. It's common for JavaScript library developers to provide minified versions of their files for production deployments, usually denoted with a .min.js name extension (e.g., jquery.js and jquery.min.js).
Multiple tools are available for the minification of HTML, CSS, and JS resources. Terser is a popular JavaScript compression tool for ES6+, and Webpack v4 includes a plugin for this library by default to create minified build files. You can also use the TerserWebpackPlugin with older versions of Webpack, or use Terser as a CLI tool without a module bundler.
Static vs. dynamic compression
Minification reduces file sizes significantly, but compressing the JS can provide even greater gains. You can implement server-side compression in two ways.
Static compression: Use static compression to pre-compress resources and save them ahead of time as part of the build process. In this case, you can afford higher compression levels to improve download times, because the longer build time does not affect the website's runtime performance. Use static compression for files that do not change very often.
Dynamic compression: With this process, compression takes place on the fly when the browser requests resources. Dynamic compression is easier to implement, but you are restricted to lower compression levels: higher levels take more time, and you would lose the advantage gained from the smaller content sizes. Use dynamic compression for content that changes frequently or is generated by the application.
You can use static or dynamic compression depending on the type of application content. Both can be enabled with the popular compression algorithms, but the recommended compression levels differ in each case.
Compression algorithms
压缩算法
Gzip
The Gzip compression format has been around for almost 30 years and is a lossless algorithm based on the Deflate algorithm. The deflate algorithm itself uses a combination of the LZ77 algorithm and Huffman coding on blocks of data in an input data stream.
The LZ77 algorithm identifies duplicate strings and replaces them with a backreference, which is a pointer to the place where it previously appeared, followed by the length of the string. Subsequently, Huffman coding identifies the commonly used references and replaces them with references with shorter bit sequences. Longer bit sequences are used to represent infrequently used references.
All major browsers support Gzip. The Zopfli compression algorithm is a slower but improved version of Deflate/Gzip that produces smaller Gzip-compatible files. It is most suitable for static compression, where it can provide more significant gains.
Brotli
In 2015, Google introduced the Brotli algorithm and the Brotli compressed data format. Like Gzip, Brotli is a lossless algorithm based on the LZ77 algorithm and Huffman coding. Additionally, it uses 2nd-order context modeling to yield denser compression at similar speeds. Context modeling is a feature that allows multiple Huffman trees for the same alphabet in the same block. Brotli also supports a larger window size for backreferences and has a built-in static dictionary. These features increase its efficiency as a compression algorithm.
Comparing Gzip and Brotli
Here are a few insights from Chrome research into compression of JS using Gzip and Brotli:
- Gzip 9 has the best compression rate with a good compression speed, and you should consider using it before other levels of Gzip.
- With Brotli, consider levels 6-11. Otherwise, we can achieve a similar compression rate much faster with Gzip.
- Over all size ranges Brotli 9-11 performs much better than Gzip, but it's pretty slow.
- The bigger the bundle, the better the compression rate and speed you get.
- The relationships between algorithms are similar for all bundle sizes (for example, Brotli 7 is better than Gzip 9 for every bundle size, and Gzip 9 is faster than Brotli 5 for all size ranges).
Enabling compression
You can enable static compression as part of the build. If you use Webpack to bundle your code, you can use the CompressionPlugin for Gzip compression or the BrotliWebpackPlugin for Brotli compression. The plugin can be included in the Webpack config file as follows.
```js
module.exports = {
  //...
  plugins: [
    //...
    new CompressionPlugin(),
  ],
};
```

Next.js provides Gzip compression by default but recommends enabling it on an HTTP proxy like Nginx. Both Gzip and Brotli are supported on the Vercel platform at the proxy level.
You can enable dynamic lossless compression on servers (including Node.js) that support different compression algorithms. The browser communicates the compression algorithms it supports through the Accept-Encoding HTTP header in the request, for example, Accept-Encoding: gzip, br. This indicates that the browser supports Gzip and Brotli. You can enable different types of compression on your server by following the instructions for the specific server type (Apache, for example, documents how to enable Brotli). Express is a popular web framework for Node and provides a compression middleware library; use it to compress any asset as it gets requested.
Brotli is recommended over other compression algorithms because it generates smaller file sizes. You can enable Gzip as a fallback for browsers that don't support Brotli. If successfully configured, the server returns the Content-Encoding HTTP response header to indicate the compression algorithm used in the response, e.g., Content-Encoding: br.
Auditing compression
You can check if the server compressed the downloaded scripts or text in Chrome DevTools → Network → Headers. DevTools displays the content-encoding used in the response.
The Lighthouse report includes a performance audit for "Enable Text Compression" that checks for text-based resource types received without the content-encoding header set to 'br', 'gzip' or 'deflate'. Lighthouse uses Gzip to compute the potential savings for the resource.
JavaScript compression and loading granularity
To fully grasp the effects of JavaScript compression, you must also consider other aspects of JavaScript optimization, such as route-based splitting, code-splitting, and bundling.
Modern web applications with large amounts of JavaScript code often use different code-splitting and bundling techniques to load code efficiently. Apps use logical boundaries to split the code, such as route level splitting for Single Page Applications or incrementally serving JavaScript on interaction or viewport visibility. You can configure bundlers to recognize these boundaries.
Bundling terminology
Following are some of the key terms relevant to our discussion.
- Module: Modules are discrete chunks of functionality designed to provide solid abstractions and encapsulation. See Module pattern for more detail.
- Bundle: Group of distinct modules that contain the final versions of source files and have already undergone the loading and compilation process in the bundler.
- Bundle splitting: The process utilized by bundlers to split the application into multiple bundles such that each bundle can be isolated, published, downloaded, or cached independently.
- Chunk: Adopted from Webpack terminology, a chunk is the final output of the bundling and code-splitting process. Webpack can split bundles into chunks based on the entry configuration, SplitChunksPlugin, or dynamic imports.
If modules are contained in source files, then the final output of the build process after code or bundle splitting is known as a chunk. Note that both the source files and the chunks may be dependent on each other.
The output size for JavaScript refers to the size of the chunks, or the raw size after optimization by a JavaScript bundler or compiler. Large JS applications can be deconstructed into chunks of independently loadable JavaScript files. Loading granularity refers to the number of output chunks: the higher the number of chunks, the smaller each chunk and the higher the granularity.
Some chunks are more critical than others because they are loaded more frequently or are part of more impactful code paths (e.g., loading the 'checkout' widget). Knowing which chunks matter most requires application knowledge, though it is safe to assume that the 'base' chunk is always essential.
Every byte of the chunks required by a page needs to be downloaded and parsed/executed by user devices. This is the code that directly affects the application performance. Since chunks are the code that will be eventually downloaded, compressing chunks can lead to better download speeds.
The granularity trade-off
In an ideal world, the granularity and chunking strategy should aim to achieve the following goals, which are at odds with each other.
- Improve download speed: As seen in the previous sections, download speeds can be improved using compression. However, compressing one large chunk will yield a better result or smaller file size than compressing multiple small chunks with the same code.
compress(a + b) <= compress(a) + compress(b)
Limited local data suggests a 5% to 10% compression loss for smaller chunks; the extreme case of completely unbundled chunks shows a 20% increase in size. Each chunk also carries additional IPC, I/O, and processing costs, which larger chunks amortize. The V8 engine has a 30 KB streaming/parsing threshold, which means every chunk smaller than 30 KB is parsed on the critical loading path even if it is non-critical.
For the above reasons, larger chunks may prove to be more efficient than smaller chunks for the same code for optimizing download and browser performance.
- Improve cache hits and caching efficiency: Smaller chunks result in better caching efficiency, especially for apps that load JS incrementally.
  - With smaller chunks, changes are isolated to fewer chunks. If the code changes, only the affected chunks need to be re-downloaded, and the code size corresponding to these is likely small. The remaining chunks can be served from the cache, increasing the number of cache hits.
  - With larger chunks, a code change likely affects a large amount of code and requires a re-download.
  Thus, smaller chunks are desirable to take advantage of the caching mechanism.
- Execute fast: For code to execute fast, it should satisfy the following.
  - All required dependencies are readily available: they have been downloaded together or are available in the cache. This argues for bundling all related code together as a larger chunk.
  - Only the code needed by the page/route should execute, with no extra code downloaded or executed. A commons chunk that bundles shared dependencies may include dependencies required by most, but not all, pages; de-duplicating that code requires smaller independent chunks.
  - Long tasks on the main thread can block it for a long time, so work needs to be broken up into smaller chunks.
A loading granularity that tries to optimize one of the above goals can take you away from the other goals. This is the problem of granularity trade-off.
De-duplication and caching are at odds with browser performance and compression.
As a result of this trade-off, the maximum number of chunks used today by most production apps is around 10. This limit needs to be increased to support better caching and de-duplication for apps with large amounts of JavaScript.
SplitChunksPlugin and granular chunking
A potential solution for the granularity trade-off would address the following requirements.
- Allow a larger number of chunks (40 to 100) with smaller chunk sizes for better caching and de-duplication without affecting performance.
- Address performance overhead for multiple smaller chunks due to IPC, I/O, and processing costs for many script tags.
- Address compression loss in case of multiple smaller chunks.
A potential solution that addresses these requirements is still in the works. However, Webpack v4's SplitChunksPlugin and a granular chunking strategy can help increase the loading granularity to some extent.
Earlier versions of Webpack used the CommonsChunkPlugin to bundle common dependencies or shared modules into a single chunk. This could lead to an unnecessary increase in download and execution times for pages that did not use these common modules. To allow better optimization for such pages, Webpack introduced the SplitChunksPlugin in v4: multiple split chunks are created based on defaults or configuration to prevent fetching duplicated code across routes.
Next.js adopted the SplitChunksPlugin and implemented the following granular chunking strategy to generate Webpack chunks that address the granularity trade-off.
- Any sufficiently sizable third-party module (greater than 160 KB) is split into an individual chunk.
- A separate frameworks chunk is created for framework dependencies (react, react-dom, and so on).
- As many shared chunks as needed are created (up to 25).
- The minimum size for a chunk to be generated is changed to 20 KB.
Emitting multiple shared chunks instead of a single one minimizes the amount of unnecessary (or duplicate) code downloaded or executed on different pages. Generating independent chunks for large third-party libraries improves caching as they are unlikely to change frequently. A minimum chunk size of 20 kB ensures that compression loss is reasonably low.
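A webpack.config.js sketch loosely modeled on this strategy — the thresholds are the ones cited above, but the cache-group names and test patterns are illustrative, not Next.js's exact configuration:

```javascript
// Illustrative SplitChunksPlugin configuration for granular chunking.
// Thresholds mirror the strategy described above; names are made up.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      maxInitialRequests: 25, // allow many shared chunks
      minSize: 20000,         // 20 KB floor keeps compression loss low
      cacheGroups: {
        framework: {
          // framework dependencies get their own long-lived chunk
          test: /[\\/]node_modules[\\/](react|react-dom)[\\/]/,
          name: 'framework',
          priority: 40,
        },
        lib: {
          // sufficiently large third-party modules split individually
          test: /[\\/]node_modules[\\/]/,
          minSize: 160000,
          priority: 30,
        },
      },
    },
  },
};
```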
The granular chunking strategy helped several Next.js apps reduce the total JavaScript used by the site. The strategy was also implemented in Gatsby, with similar benefits observed.
Conclusion
Compression alone cannot solve all JavaScript performance issues, but understanding how browsers and bundlers work behind the scenes can help create a better bundling strategy that will support better compression. The loading granularity problem needs to be addressed across different platforms in the ecosystem. Granular chunking may be one step in that direction, but we have a long way to go.