Data loading patterns

Over the last ten years we’ve made big changes in our data loading patterns. And now we are coming full circle… sort of.

Motivation

In this article, we are going to discuss a few approaches to data loading bound to route changes. That means, at least for today, we are not going to concern ourselves with data loading for specific user interactions. For example, when a user clicks a “like” button, we do not need to change our route. Instead, we would usually send a request that updates the like status for that particular item, and then update the UI (either optimistically or by showing a loader).

But when a user wants to see more details about the person who posted the item, we would usually want to redirect them to a new route. The new route would be dedicated to displaying information about users; fetching the specified user’s data when someone lands on that route, and rendering it, would be the main task of the route’s component. There are a couple of philosophies about loading data on route change, and we will be discussing those today.

Traditional patterns

Multipage applications (MPAs)

Let’s explore how we used to load data in the olden days. Back in the day, when the web was ruled by the likes of PHP, Ruby on Rails, Perl and even ASP.NET, websites were built as multipage apps (MPAs).

The basic idea of a multipage app architecture is that every route change requires a full page reload. Most of the time the user would click a link, rendered as an <a> element with the new route in its href, and the browser would make a new HTTP GET request to that route. The server would respond with a freshly generated HTML page.

This approach meant that every UI update was blocked by the data loading pipe. Every time a user makes a request, the server has to fetch the data, and only then can it render the HTML and ship it to the client. If you had a bottleneck somewhere in the data pipe, any update to the UI would have to wait until it was resolved, or until a timeout happened. The user would see a spinning favicon and basically have to wait until the whole page refetched.
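The blocking pipeline above can be sketched in a few lines of TypeScript. The names here (fetchUser, renderUserPage) are hypothetical, purely for illustration; the point is that in an MPA the HTML response cannot even start until the data fetch resolves.

```typescript
interface User {
  id: string;
  name: string;
}

// Stand-in for a database query; in a real MPA this is the step
// that blocks the whole response.
async function fetchUser(id: string): Promise<User> {
  return { id, name: "Ada" };
}

// The server can only render and ship the HTML after the data
// resolves, so a slow fetch delays every pixel on the page.
async function renderUserPage(id: string): Promise<string> {
  const user = await fetchUser(id); // the entire UI is blocked here
  return `<html><body><h1>${user.name}</h1></body></html>`;
}
```

A slow `fetchUser` here delays not just the user’s name but the `<html>` tag itself, which is exactly the bottleneck described above.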

Of course, there were ways of making the site somewhat interactive. In the really olden days, people used to write Java applets and Flash apps. Those were highly interactive but were quickly abandoned. I wouldn’t be able to say much about them, as those methods were a bit before my time. I barely remember the last time I used a Flash app; by the time I was writing web apps, that approach was long gone.

Pros

There are two huge pros when it comes to the traditional SSR-ed MPA approach, in my opinion.

  1. Instant data

Once the client is done loading the HTML, that’s it. After the first render, the user is presented with all of the data. No need for spinners and additional fetches.

That said, once Ajax became a thing, people certainly did use it to defer fetching less critical data.

  2. SEO

You could argue that this is a pro of SSR in general, but SSR kind of implies blocking until the data is ready. Basically, it’s easier for web crawlers to index your website if all of the data is there on route load.

Cons

The biggest con of traditional SSR-ed data loading is the blocking. Your web app does not feel like an app. Each route feels like transitioning to a new website.

Web app era

With the development of modern JS client-side frameworks like Angular and React (and more recent frameworks like SolidJS or Svelte), more and more of the workload was dumped on the client. This was huge in making websites feel more like apps, rather than loosely connected HTML documents.

Singlepage applications (SPAs)

With the push for the client to do more of the work, routing was bound to come to the client. In order to do that, frameworks (or routing libraries) intercept the default behavior of the anchor <a> HTML element. In a standard HTML document, an anchor contains an href; when the element is clicked, the browser fetches and renders the content at that route. But now we don’t want to offload the route change to the server. We want to change the route and update the DOM ourselves. We already have all the JavaScript we need to render new elements. We might be missing some assets (maybe a lazy-loaded component), but we can fetch those later.

The only thing we really are missing is the critical data the user wants to see. And we can get that data via the browser’s fetch function.

The skeletons await

Most of the client side routers don’t really care about your data loading patterns. They are really there only to render the correct component and pass the route and query parameters.
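The core job described above, matching the current path against a route pattern and extracting the parameters, can be sketched as a small pure function. `matchRoute` is a hypothetical helper for illustration, not any particular router library’s API.

```typescript
type Params = Record<string, string>;

// Match a pattern like "/users/:id" against a concrete path like
// "/users/42". Returns the extracted params on a match, or null.
function matchRoute(pattern: string, path: string): Params | null {
  const patternParts = pattern.split("/").filter(Boolean);
  const pathParts = path.split("/").filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;

  const params: Params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      // Dynamic segment: capture it as a route parameter.
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      // Static segment mismatch: this route does not apply.
      return null;
    }
  }
  return params;
}
```

A client-side router essentially runs something like this against its route table on every navigation, then renders the component registered for the winning pattern with the extracted params.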

It was really up to the developers, and most developers chose the loading state path.

Render the skeleton of the application, initiate the data fetch, show a loading state and await. Once the data is loaded, resolve the loading state and render the data. This felt really fast and snappy. Route changes were instant. No more favicons spinning and browser loading progress bars. You click and boom, the DOM updates.
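The render-skeleton-then-fetch flow can be sketched framework-agnostically. `loadAndRender` and the `render` callback are hypothetical names for illustration; in React the same shape usually lives in a `useEffect` plus a piece of loading state.

```typescript
type ViewState<T> =
  | { status: "loading" }
  | { status: "loaded"; data: T }
  | { status: "error"; error: unknown };

// Route change handler: show the skeleton immediately, then swap
// in the data (or an error) once the fetch settles.
async function loadAndRender<T>(
  fetchData: () => Promise<T>,
  render: (state: ViewState<T>) => void,
): Promise<void> {
  render({ status: "loading" }); // skeleton renders instantly
  try {
    render({ status: "loaded", data: await fetchData() });
  } catch (error) {
    render({ status: "error", error });
  }
}
```

The route change feels instant because the first `render` call needs no network round trip; only the data itself arrives later.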

The downside of purely client SPA skeletons

The biggest downside of completely client side rendered (CSR-ed) SPAs using skeletons, in my opinion, is the huge waterfall on initial load.

INITIAL LOAD
  GET /index.html
  - empty page until the HTML loads

RELATED ASSETS LOAD
  GET /index.js
  GET /index.css
  - HTML is rendered (but only after the CSS loads)
  - waiting on JS to load and execute

JS TAKES OVER
  GET /user-info.json
  - JS renders a loading state
  - init data fetch

The user is forced to wait for this waterfall to resolve before their data is rendered. This is not the case for every following route change, because only the last step is repeated; the HTML, CSS and JS remain the same, and we don’t fetch them again.

Remixing the pendulum back

Remix, and the React Router team, have an interesting philosophy about route changes and data loading, which I personally really like.

Their philosophy, in a nutshell, is death to the skeletons (and the spinners 🙂). The way Remix routing works is: wait for the data to load, then render the route change.

Remix is a React metaframework made by the people behind React Router, and it is built around React Router itself. The biggest difference is that Remix uses file-based routing.

Basically, every file in the /routes directory creates a route. That means everything inside one of those files should be bound to that single route. If you need to reuse a component, you should not export it from the route file; rather, you should extract it into a file in the /components directory (or whatever directory name you like).

Your default export is the component the router will render when the route loads. Besides the default component, there are a couple of specially named functions and components you can export that affect routing and data loading.

One of those is the loader() function.

import { json } from "@remix-run/node"; // or cloudflare/deno
import { useLoaderData } from "@remix-run/react";

export const loader = async () => {
  return json([
    { id: "1", name: "Pants" },
    { id: "2", name: "Jacket" },
  ]);
};

export default function Products() {
  const products = useLoaderData<typeof loader>();
  return (
    <div>
      <h1>Products</h1>
      {products.map((product) => (
        <div key={product.id}>{product.name}</div>
      ))}
    </div>
  );
}

This example is taken from the official Remix guide on data loading.

The loader() function is not the only “special” export you can use, but it is the only one we are interested in today. The loader() is bound to the route, which means that every time your route changes, the corresponding loader() will be called. Remix supports nested routing, which means that there can be multiple loader() functions to call on a given route.

Usually fetches are bound to the UI hierarchy. That means that a nested route component is able to execute only when the parent is done rendering, and that results in a sort of waterfall. But the people at Remix are smarter than that. Remix will initiate all of the required fetches at the same time. But nested routing is not really our focus for today. If you want to know more about that, Ryan Florence wrote a great blog post about that.
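The difference between the two strategies can be sketched in a few lines. `runLoaders` is a hypothetical helper for illustration; Remix’s real scheduling is more involved, but the parallelism idea is the same.

```typescript
type Loader = () => Promise<unknown>;

// Component-bound fetching: each nested loader waits for its
// parent, so total wait is the SUM of all loader times.
async function runLoadersSequentially(loaders: Loader[]): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const load of loaders) {
    results.push(await load()); // child waits on parent
  }
  return results;
}

// Route-bound fetching: all loaders for the matched nested routes
// fire at once, so total wait is only the SLOWEST loader.
function runLoaders(loaders: Loader[]): Promise<unknown[]> {
  return Promise.all(loaders.map((load) => load()));
}
```

With three loaders taking 100 ms each, the sequential version resolves in roughly 300 ms while the parallel one resolves in roughly 100 ms, which is the whole point of decoupling fetches from the UI hierarchy.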

Remix is powered by progressive enhancement, which means Remix is a great mix of the old-school “block until data” SSR and the modern SPA architecture. Remix is not the only framework that does this; in fact, most modern metaframeworks (e.g. Next.js) do. That is simply the current industry standard.