Yet the browser is far from a dumb runtime; it is a rich one, optimised to handle many scenarios and edge cases. Let's look at some of its features that are often reinvented when building a SPA.
Web browsers are very efficient at handling HTTP routes. Given a starting route, usually
index.html, they can navigate from one route to another through hyperlinks. Hyperlinks can be parameterised with query parameters that are sent to the server, such as
/products?page=2, or with fragment links that reference IDs within the page and are not sent to the server, such as
/products#reviews.
But the browser discards the JS runtime, the CSS Object Model, and DOM event listeners when navigating from page to page, and may recreate much of the same on the next page. So a SPA that retains all of these must be better! The History API, or its successor the App History API, provides the means to reinvent browser navigation in JS (which must then be duplicated on the server). And so, instead of finding ways to write less JS to reduce the cost of changing pages, we write even more to fix the problem of having too much in the first place.
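A minimal sketch of what a SPA router ends up reimplementing looks something like this; the render() call stands in for a hypothetical page-rendering helper, not a real API:

```javascript
// Only intercept path-style, same-origin links; leave external URLs
// and protocol-relative links to the browser.
function isInternalLink(href) {
  return typeof href === 'string' && href.startsWith('/') && !href.startsWith('//');
}

if (typeof window !== 'undefined') {
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a[href]');
    if (!link || !isInternalLink(link.getAttribute('href'))) return;
    event.preventDefault();                        // suppress native navigation...
    history.pushState({}, '', link.getAttribute('href'));
    render(location.pathname);                     // ...and redo it in JS (hypothetical helper)
  });

  window.addEventListener('popstate', () => {
    // Back/Forward no longer work for free; restore the view ourselves.
    render(location.pathname);
  });
}
```

Every branch here duplicates something the browser already does natively, and a matching route table still has to exist on the server for deep links to work.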
As a side-effect of reinventing routing, we are also forced to reinvent caching. More likely, though, we don't cache at all, and instead refetch the data on every (re)visit to a route that uses it. As a result, parts of the UI may fall out of sync, because the data for the different parts is fetched independently. GraphQL solves some of these problems by allowing the data for the entire page to be fetched atomically. But this adds even more JS.
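The atomic fetch amounts to a single query covering every section of the page, so the pieces come from one consistent snapshot; the schema and the /graphql endpoint below are assumptions for illustration:

```javascript
// One request returns data for every part of the page, so the
// header's cart count cannot disagree with the cart itself.
// The schema here is hypothetical.
const PAGE_QUERY = `{
  currentUser { name cartCount }
  products(page: 2) { id name price }
}`;

async function loadPage() {
  const response = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: PAGE_QUERY }),
  });
  return (await response.json()).data;
}
```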
On slow networks (or slow servers), browsers have native progress indicators, for both the main HTML, as well as resources referenced therein. Using the
Content-Length header, browsers can estimate when a resource will finish downloading. When there are concurrent requests in flight, they know how to prioritise critical resources at the network level to improve load times. They can even load resources lazily where appropriate.
With SPAs, we may have multiple data fetches in flight. Putting aside the problem of synchronisation, this means that each needs to be represented by its own spinner. More JS. Yet the browser treats these requests no differently from each other; they all have the same XHR network priority.
HTTP is a streaming protocol, and the web is built on top of it. Browsers are built to exploit this capability. They can render HTML fragments whilst the document is still being generated or downloaded, providing users with usable content without having to wait for the closing tags. Progressive JPEG is another example of the browser giving users early feedback from just part of the full file. And of course, there are video streams, where playback starts immediately even whilst the rest of the video is downloading.
With SPAs, we'd have to reinvent streaming. The JS is large, so we add lazy loading. More tooling, and more code. Data fetching needs to move to a streamable format; we can't use JSON arrays, because the JSON parser cannot deal with incomplete data, so we resort to ND-JSON if we want to stream data. More complexity.
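The ND-JSON client ends up looking something like this: one JSON document per line, so each complete line can be parsed as soon as it arrives. The /api/products.ndjson endpoint and the render callback are assumptions:

```javascript
// Split buffered text into complete ND-JSON records, keeping any
// trailing partial line for the next chunk.
function splitNdjson(buffered, chunk) {
  const lines = (buffered + chunk).split('\n');
  const rest = lines.pop();            // last line may be incomplete
  return { items: lines.filter(Boolean).map((line) => JSON.parse(line)), rest };
}

async function streamProducts(render) {
  const response = await fetch('/api/products.ndjson'); // hypothetical endpoint
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let rest = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    const result = splitNdjson(rest, decoder.decode(value, { stream: true }));
    rest = result.rest;
    result.items.forEach(render);      // show each item as it arrives
  }
}
```

All of this is hand-rolled plumbing for something the browser's HTML parser does natively for a streamed server-rendered page.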
Web pages are inherently asynchronous. The main HTML is downloaded first (though this may stream), then associated assets like JS, CSS, and images are downloaded asynchronously. These may in turn lazily load even more associated assets. If a new version of the site is deployed, this chain of dependencies is altered. When navigating to a different route using native browser functionality, the new chain of dependencies is automatically applied.
At least one SPA framework, Turbo, has native support for reloading when its dependencies change, using the
data-turbo-track attribute. But in general, we'd need to invent a bespoke update mechanism, perhaps polling for version numbers or similar, to know when to refresh the chain of dependencies. How many times have you seen sites that display "Refresh your browser to view the latest version"? That's complexity added just to compensate.
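That bespoke mechanism tends to look something like the sketch below; the /version endpoint, the baked-in CURRENT_VERSION constant, and the showBanner() helper are all assumptions:

```javascript
// Poll the server for the deployed version and nag the user when it
// changes; an MPA gets the equivalent for free on the next navigation.
const CURRENT_VERSION = '1.4.2';      // assumed to be baked in at build time

function isStale(current, latest) {
  return latest !== '' && latest !== current;
}

async function checkForUpdate() {
  const latest = (await (await fetch('/version')).text()).trim();
  if (isStale(CURRENT_VERSION, latest)) {
    // We can only ask the user to do what the browser would have done anyway.
    showBanner('Refresh your browser to view the latest version'); // hypothetical UI helper
  }
}

// e.g. setInterval(checkForUpdate, 60_000);
```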
Browser cookies add stateful capabilities to an otherwise stateless HTTP protocol. Access can be restricted with the
HttpOnly attribute, which keeps cookies out of reach of scripts while still sending them to the server.
With SPAs, state often exists in the browser, in memory, SessionStorage, or LocalStorage, and only within that one browser. With this also comes the complexity of dealing with schema migrations if/when the state format changes from one version to another. Technically, it is possible to maintain state on the server by sending and receiving cookies via Ajax requests, but most SPAs seem to shy away from this. Turbo, with its emphasis on server rendering, is perhaps the exception here.
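On the server, cookie-backed sessions need little more than a response header. A sketch of a handler for Node's http.createServer; the session value and attribute choices are illustrative:

```javascript
// Build a Set-Cookie value: HttpOnly keeps the cookie out of reach of
// scripts, Secure restricts it to HTTPS, and SameSite limits
// cross-site sending.
function sessionCookie(sessionId) {
  return `session=${sessionId}; HttpOnly; Secure; SameSite=Lax; Path=/`;
}

// A plain request handler, ready to pass to http.createServer
// (not started here).
function handler(req, res) {
  res.setHeader('Set-Cookie', sessionCookie('abc123'));
  res.end('<p>State lives on the server, keyed by the cookie.</p>');
}
```

No client-side schema migrations are needed: redeploying the server updates the state format for everyone at once.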
With server-side rendering comes another non-obvious benefit: common work can be shared. Techniques like Russian Doll caching and CDN caching allow common HTML fragments to be shared across multiple users and requests. Centralising computation in this way spares slower devices from performing essentially duplicate computation.
Contrast this with a SPA design whereby every user visiting a page has to perform the same calculation to render the same page. Arguably, this increases the opportunity to customise the page according to user locale and other preferences. But the redundant computation is often noticeable on slower devices.
By now, it should be clear that Single Page Applications can add significant costs. A lot of native browser capabilities have to be reinvented, either by the application developer, or the framework. We're shipping code that duplicates (sometimes incorrectly) native browser capabilities. And the extra code weight also translates to extra bugs.
Before SPAs, web applications were built as Multi-Page Applications. This architecture is characterised by a small JS payload, such that the cost of discarding the JS runtime between pages is low. Use of event delegation keeps the cost of rebuilding the DOM low. Routing does not need to be duplicated across browser and server. History is native, as are loading indicators. HTML streaming can eliminate most lazy loading logic, and common elements are shared across multiple users on the server and don't need to be re-computed. User sessions are stored server-side, eliminating client-side state management.
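The event delegation mentioned above is nearly a one-liner in practice; here the lookup is factored into a small pure function, and the data-action attribute is an assumption:

```javascript
// Resolve which delegated action, if any, a click should trigger.
// Works with any object exposing a DOM-like closest() method.
function findAction(target) {
  const el = target.closest('[data-action]');
  return el ? el.getAttribute('data-action') : null;
}

if (typeof document !== 'undefined') {
  // A single listener on the document keeps working no matter how
  // often the page body is swapped out underneath it, so no listeners
  // need to be re-attached after each navigation.
  document.addEventListener('click', (event) => {
    if (findAction(event.target) === 'add-to-cart') {
      // handle the action...
    }
  });
}
```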
But if MPAs were so good, how did we end up using SPAs for over a decade? Possibly, this is down to tooling. It is easier to reason about, and to test, a complex application written entirely in one language than it is to rely on functionality offered by browsers beyond our control. As tooling improves, we can now statically determine exactly the minimum amount of JS needed - not just in terms of transpilation targets, but also in terms of the exact logic needed in the browser. Further, as modern browsers have become more mature, consistent, and reliable, we can start leaning on them more and reduce the JS weight.
Increasingly, the trend is to return to server-rendered applications. Rails 7 will ship with Hotwire enabled by default. From the list presented above, the only thing it doesn't do is HTML streaming. That's where Marko really shines. When combined with a server like Express or Koa, it is capable of much of the above, except Russian Doll caching, but even that can be built easily thanks to its asynchronous rendering. Undoubtedly, more frameworks will pop up.
If you're building a new application, perhaps reconsider whether a SPA is the right architecture for you.