PRPL Pattern

Making our applications globally accessible can be a challenge! We have to make sure the application performs well on low-end devices and in regions with poor internet connectivity. To make sure our application loads as efficiently as possible in these difficult conditions, we can use the PRPL pattern.
让我们的应用程序实现全球访问可能是一项挑战!我们必须确保应用在低端设备和网络连接不佳的地区仍能保持良好性能。为了让应用在复杂条件下尽可能高效地加载,我们可以使用PRPL模式。

When to Use

  • Use this when building applications that need to perform well on low-end devices and slow networks
  • This is helpful for optimizing the critical rendering path of web applications

Instructions

  • Push critical resources efficiently using HTTP/2 server push or preload hints
  • Render the initial route as soon as possible for fast first paint
  • Pre-cache frequently visited routes using service workers for offline support
  • Lazily load routes and assets that aren't immediately needed
  • Use an app shell architecture as the main entry point
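The steps above can be sketched as a single app-shell entry point. Everything here is illustrative: the file names (/static/app-shell.js, /static/critical.css, /sw.js) are assumptions, not part of the pattern itself.

```javascript
// Minimal app-shell entry point sketch for the PRPL pattern.
// All file names below are placeholder assumptions.
const appShellHtml = `<!doctype html>
<html>
<head>
  <!-- Push / preload: hint the critical resources for the initial route -->
  <link rel="preload" href="/static/app-shell.js" as="script">
  <link rel="preload" href="/static/critical.css" as="style">
  <link rel="stylesheet" href="/static/critical.css">
</head>
<body>
  <!-- Render: minimal markup so the initial route paints as soon as possible -->
  <div id="app">Loading…</div>
  <script src="/static/app-shell.js" defer></script>
  <script>
    // Pre-cache: register a service worker only after the initial load,
    // so it never competes with the critical rendering path.
    if ('serviceWorker' in navigator) {
      window.addEventListener('load', () =>
        navigator.serviceWorker.register('/sw.js'));
    }
  </script>
</body>
</html>`;
```

The lazy-loading step is handled by the app shell's script itself, which dynamically imports route bundles on navigation.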

Details

The PRPL pattern focuses on four main performance considerations:
  • Push critical resources efficiently, which minimizes the number of round trips to the server and reduces the loading time.
  • Render the initial route as soon as possible to improve the user experience
  • Pre-cache assets in the background for frequently visited routes to minimize the number of requests to the server and enable a better offline experience
  • Lazily load routes or assets that aren't requested as frequently
When we want to visit a website, we first have to make a request to the server in order to get those resources. The file that the entrypoint points to gets returned from the server, which is usually our application's initial HTML file! The browser's HTML parser starts to parse this data as soon as it starts receiving it from the server. If the parser discovers that more resources are needed, such as stylesheets or scripts, another HTTP request is sent to the server in order to get those resources!
Having to repeatedly request the resources isn't optimal, as we're trying to minimize the number of round trips between the client and the server!
For a long time, we used HTTP/1.1 in order to communicate between the client and the server. HTTP/2 introduced some significant changes compared to HTTP/1.1, which make it easier for us to optimize the message exchange between the client and the server.
Whereas HTTP/1.1 used a newline-delimited plaintext protocol for requests and responses, HTTP/2 splits requests and responses up into smaller pieces called frames. An HTTP request that contains headers and a body field gets split into at least two frames: a headers frame and a data frame!
With HTTP/1.1, browsers typically allowed a maximum of 6 TCP connections between the client and the server. Before a new request can be sent over the same TCP connection, the previous request has to be resolved! If the previous request takes a long time to resolve, it blocks the other requests from being sent. This common issue is called head-of-line blocking, and it can increase the loading time of certain resources!
HTTP/2 makes use of bidirectional streams: a single TCP connection can contain multiple bidirectional streams, each of which can carry multiple request and response frames between the client and the server!
HTTP/2 solves head of line blocking by allowing multiple requests to get sent on the same TCP connection before the previous request resolves!
HTTP/2 also introduced a more optimized way of fetching data, called server push. Instead of having to explicitly ask for resources each time by sending an HTTP request, the server can send the additional resources automatically, by "pushing" these resources.
After the client has received the pushed resources, they get stored in the browser cache. When the resources are discovered while parsing the entry file, the browser can quickly get them from the cache instead of having to make an HTTP request to the server!
Although pushing resources reduces the amount of time to receive additional resources, server push is not HTTP cache aware! The pushed resources won't be available to us the next time we visit the website, and will have to be requested again. In order to solve this, the PRPL pattern uses service workers after the initial load to cache those resources in order to make sure the client isn't making unnecessary requests.
As the authors of a site, we usually know which resources are critical to fetch early on, while browsers do their best to guess this. We can help the browser by adding a preload resource hint to the critical resources!
By telling the browser that you'd like to preload a certain resource, you're telling the browser that you would like to fetch it sooner than the browser would otherwise discover it! Preloading is a great way to optimize the time it takes to load resources that are critical for the current route.
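In markup, this hint is a link tag with rel="preload" in the document head; the same hint can also be injected from script. A minimal sketch, where the resource path is a placeholder:

```javascript
// Sketch: adding a preload hint from script. Equivalent to placing
// <link rel="preload" href="..." as="..."> in the document head.
function preloadResource(href, as) {
  const link = document.createElement('link');
  link.rel = 'preload';
  link.href = href; // e.g. '/static/critical.css' (placeholder path)
  link.as = as;     // 'script', 'style', 'font', etc.: lets the browser prioritize
  document.head.appendChild(link);
  return link;
}

// Usage, as early as possible for resources critical to the current route:
// preloadResource('/static/critical.css', 'style');
```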
Although preloading resources is a great way to reduce the number of round trips and optimize loading time, preloading too many files can be harmful. The browser's cache is limited, and you may be unnecessarily using bandwidth by requesting resources that weren't actually needed by the client.
The PRPL pattern focuses on optimizing the initial load. No other resources get loaded before the initial route has loaded and rendered completely!
We can achieve this by code-splitting our application into small, performant bundles. Those bundles should make it possible for users to load only the resources they need, when they need them, while also maximizing cacheability!
Caching larger bundles can be an issue. It can happen that multiple bundles share the same resources. A browser has a hard time identifying which parts of the bundle are shared between multiple routes, and can therefore not cache these resources. Caching resources is important to reduce the number of roundtrips to the server, and to make our application offline-friendly!
When working with the PRPL pattern, we need to make sure that the bundles we're requesting contain the minimal amount of resources we need at that time and are cacheable by the browser.
The PRPL pattern often uses an app shell as its main entry point, which is a minimal file that contains most of the application's logic and is shared between routes! It also contains the application's router, which can dynamically request the necessary resources.
The PRPL pattern makes sure that no other resources get requested or rendered before the initial route is visible on the user's device. Once the initial route has been loaded successfully, a service worker can get installed in order to fetch the resources for the other frequently visited routes in the background!
Since this data is being fetched in the background, the user won't experience any delays. If a user wants to navigate to a frequently visited route that's been cached by the service worker, the service worker can quickly get the required resources from cache instead of having to send a request to the server.
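A pre-caching service worker might look like the following sketch, where the cache name and route list are assumptions. The registration guard at the bottom only exists so the file can also be loaded outside a worker scope; inside a real service worker, self is the ServiceWorkerGlobalScope.

```javascript
// sw.js sketch: pre-caching frequently visited routes after the initial load.
const PRECACHE = 'prpl-precache-v1';          // assumed cache name
const FREQUENT_ROUTES = ['/', '/inbox', '/settings']; // assumed routes

function onInstall(event) {
  // Fetch the frequent routes in the background and store them in the cache.
  event.waitUntil(
    caches.open(PRECACHE).then((cache) => cache.addAll(FREQUENT_ROUTES))
  );
}

function onFetch(event) {
  // Serve from the cache first; fall back to the network on a miss.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
}

if (typeof self !== 'undefined' && 'addEventListener' in self) {
  self.addEventListener('install', onInstall);
  self.addEventListener('fetch', onFetch);
}
```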
Resources for routes that aren't as frequently visited can be dynamically imported.
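The app shell's router can do exactly that with dynamic import(); the route-to-module mapping below is hypothetical, and each module is assumed to export a render function.

```javascript
// Router sketch: rarely visited routes are only fetched when the user
// actually navigates to them. Module paths are hypothetical.
const lazyRoutes = {
  '/settings': () => import('./routes/settings.js'),
  '/archive': () => import('./routes/archive.js'),
};

async function navigate(path) {
  const load = lazyRoutes[path];
  if (!load) {
    document.getElementById('app').textContent = 'Not found';
    return;
  }
  // Each route resolves to one small, cacheable bundle.
  const route = await load();
  route.render(document.getElementById('app'));
}

// e.g. a link click handler would call navigate(new URL(link.href).pathname)
```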
