Many web applications, like blogs, e-commerce sites, and even dashboards, experience high read density, defined here as reads per unit time. A read can take the form of additional or alternative content within the same page/URL, or navigation to a new page/URL. Yet something as simple as reading data can be quite nuanced and difficult to get right over high-latency networks, and worse still when there are long chains of data dependencies.
For example, a chain of questions needs to be resolved when a user lands on a page. Who is the current user? What's their preferred language? What content in that language should appear? What other content is related? Unless this is one large multi-table join, these very likely need to be resolved sequentially, because each answer depends on the previous one.
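The chain above can be sketched as a sequence of awaits, where nothing later can start until the earlier result arrives. The function names and data here are hypothetical, purely to illustrate the dependency structure:

```typescript
// Hypothetical queries: each one needs the previous result before it can run.
type User = { id: string; language: string };

async function getCurrentUser(): Promise<User> {
  return { id: "u1", language: "en" };            // eg, from the session store
}
async function getContent(language: string): Promise<string[]> {
  return [`article in ${language}`];              // eg, a localised content query
}
async function getRelated(content: string[]): Promise<string[]> {
  return content.map((c) => `related to ${c}`);   // eg, a recommendations query
}

// Resolved client-side, each step costs a full network round trip.
async function resolvePage() {
  const user = await getCurrentUser();
  const content = await getContent(user.language);
  const related = await getRelated(content);
  return { user, content, related };
}
```

Each `await` is a link in the chain: three sequential steps means three round trips when resolved from the client.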
On the one hand, if the resolution happens client-side, each link in the chain adds another network round trip. On the other hand, if it happens server-side, the client may receive no data at all until all of the data is available. Both result in poor user experience.
To give the best user experience, data needs to be streamed: early data, or in the case of parallel requests, fast data, should be delivered as soon as it is available, even whilst the rest of the stream is being generated.
Streaming is not new. Images can be streamed with formats like progressive JPEG, which provides increasingly higher resolutions as the file is streamed. Audio and video files, which are consumed in time-order, obviously stream in time-order too. Importantly, browsers are capable of streaming HTML, rendering partial documents despite missing closing tags.
Typically, the first few hundred bytes of an HTML document are statically known, and can contain the locations of related assets like CSS and JS. Returning these bytes immediately gives the browser the opportunity to download those assets early (eg, using `<script async>`) whilst the server is still awaiting data for the remainder of the page.
The stream can be paused at a fragment that is still awaiting data, and then resumed when the data is resolved. The process repeats until the last fragment is resolved. Throughout, the user gradually sees more and more of the document.
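This pause-and-resume behaviour can be sketched with an async generator: the static shell is yielded immediately, and the stream then pauses at each fragment until its data resolves. This is a minimal illustration of the pattern, not any particular framework's implementation:

```typescript
// Sketch: stream the static shell first, then pause at each pending
// fragment and resume as soon as its data resolves.
async function* streamPage(fragments: Array<Promise<string>>) {
  // The statically-known head goes out immediately, so the browser can
  // start downloading CSS/JS while the server resolves the rest.
  yield "<html><head><link rel=stylesheet href=/app.css></head><body>";
  for (const fragment of fragments) {
    yield await fragment; // the stream pauses here until data is ready
  }
  yield "</body></html>";
}
```

A real server would pipe these chunks into the HTTP response as they are yielded; the browser renders each partial document along the way.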
With streaming, the initial response arrives many times sooner, and the total response time is only slightly longer than the time taken to resolve the data.
Even on subsequent data loads, where the initial HTML + JS load cost no longer applies, client-side rendering (CSR) timelines will never be as fast as streaming HTML. This points towards a streaming architecture for optimal performance in applications with high read density.
Most data stores are fundamentally stream-friendly. For optimal end-to-end performance, stream-friendly backend formats like ND-JSON or ProtoBuf should be preferred over blocking/batching formats like plain JSON or GraphQL, which must be fully received before they can be parsed. Streaming is good for backend performance too: it keeps memory consumption low, and allows back-pressure to be applied.
Consider the following diagrams of a component tree. With full-tree hydration, every component needs to be (re)rendered on the client, even if it never changes. This requires not only the code to render each component, but also its corresponding backing data. The entire tree is rendered once in the browser (repeating the server's work), and upon subsequent changes, either a Virtual DOM reconciles the changed DOM nodes (as in React), or a runtime reactive graph determines the required changes (as in Solid.js).
Different frameworks approach this problem differently. React Server Components require developers to manually mark which components are server or client (though React will warn if the annotation is incorrect), and come with the cost of a larger base vendor bundle. Marko 5 automatically determines the client sub-tree at build-time, while its successor Marko 6 and Qwik take a fine-grained approach to bundling sub-components.
When used with streaming HTML, we can't wait for the entire HTML document with `<script defer>`. Fragments need to be interactive as soon as they stream in. The browser already handles this correctly for native elements like `<input>` tags, but non-native custom components like carousels or modals require custom code. For example, Marko inserts the framework runtime as a `<script async>` in the HTML head, and inlines the fragment-specific code in the HTML as the fragments stream in. Qwik, meanwhile, inlines the framework runtime and downloads the fragment-specific code on-demand, achieving constant time-to-interactive regardless of application complexity.
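The general shape of this pattern, abstracted away from any specific framework's output, is that each streamed fragment carries a small inline script that wires it up on arrival. A hypothetical sketch (the `__init` hook and function names are illustrative, not Marko's or Qwik's actual API):

```typescript
// Each fragment is emitted together with an inline script that makes it
// interactive immediately, without waiting for the document to finish.
function fragmentWithInit(id: string, html: string): string {
  return (
    `<div id="${id}">${html}</div>` +
    // Hypothetical hook: a runtime loaded via <script async> in the head
    // would define window.__init before (or shortly after) this runs.
    `<script>window.__init && window.__init("${id}")</script>`
  );
}
```

As each fragment streams in, the browser executes its inline script right away, so interactivity tracks the stream rather than the full document load.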
Browsers know how to POST/GET forms. They know how to collect data from accessible input fields and submit the results. Attributes like `fetchpriority` allow developers to customise resource loading priority, and hidden content can be deferred with `<img loading=lazy>`.
In earlier drafts of this article, I segmented apps as mostly-read (ie, content consumption) vs mostly-write (eg, content creation). But I soon realised that even with the latter, you can still have one class of interaction that is read-light, and another that is read-heavy; to get references, point to existing content, etc.
That's when I concluded that write density is a different problem entirely, independent of read density. Where reading has a lot to do with streaming partial content to the user as fast as possible whilst fetching more data, writing has more to do with optimistic UI to unblock the next user interaction as fast as possible. Perhaps more in a different article…
It is unfortunate that SPAs have dominated developer (and non-developer) mindshare over the last decade or so, such that their use has spread into areas well outside their niche. But such is hindsight. The SPA revolution lowered the barrier to entry, and spawned new ways of thinking, some of which made their way into server-rendered frameworks.
When are Single Page Applications the optimal choice? I can think of only two scenarios. Firstly, when there is a low read density, which essentially means an offline-oriented app, able to operate for long periods with no backing support from a server runtime. Secondly, when you're trading off the performance budget for the server runtime budget, as static websites are cheap to host.