Web Performance Calendar

The speed geek's favorite time of year
2017 Edition
ABOUT THE AUTHOR

Dean Hume

Dean Hume (@DeanoHume) is a software developer and author based in London, U.K. He is passionate about web performance, and he regularly writes on his blog at deanhume.com.

If you visit sites such as Facebook, Pinterest or Medium regularly, you may have noticed that the first time you load a page, you'll see low quality or even blurry images. As the page continues to load, these low quality images are replaced with the full quality versions. To see an example of this in action, let's take a look at the image below.

The image above is a screenshot of a loading page on Medium. The screenshot on the left shows the page as it first loads with the low quality images, and then the screenshot on the right shows the page once it has finished loading with the full quality images in place.

This image loading technique is known as LQIP (Low Quality Image Placeholders) and was first proposed by Guy Podjarny a few years ago. The idea behind this technique is that on a slow connection you can present the user with a fully usable web page as quickly as possible, giving them a much better experience; even on a faster network connection, users still get a usable page sooner. From a web performance point of view, this means that a usable version of your web page will load much faster and (depending on a number of other factors) you should have an improved time to first meaningful paint.

In fact, in this year's Performance Calendar, Tobias Baldauf has written a great in-depth article about LQIP loading techniques and a tool he has created called SQIP.

SQIP is a tool that creates a low quality version of an image as an SVG, which can be used as a placeholder until the full quality version is loaded once the connection allows. I've recently started experimenting with SQIP, and it can be quite fun creating low quality versions of images!

A while ago, I wrote about an image lazy loading technique using Intersection Observer. The idea behind lazy loading images is that you wait until a user scrolls further down the page and the image comes into view before making the network request for it. If your web page contains multiple images, but you only load each image as they are scrolled into view, you’ll end up saving bandwidth as well as ensuring that your web page loads quicker.

This got me thinking; I wonder if I could combine Intersection Observer and low quality placeholder images that were created using Tobias’ SQIP tool. Once I started experimenting further, it was easier than I thought. Using a lazy loading technique would mean that the user only loads what they see in their viewport, which combined with a low quality image would mean a double web performance whammy!

In this article, I am going to talk you through the steps I went through and how you can get started with this technique for yourself.

Getting Started

Before we go any further, we need to install SQIP. There are a few prerequisites that need to be installed before we can use this tool out of the box. First up, you need to install Go. I found the installation a little tricky at first, but came across this brilliant article which pointed me in the right direction. Once Go is installed, you need to install a tool called Primitive. This can easily be done from your terminal with the following command:

go get -u github.com/fogleman/primitive

Finally, we can install SQIP with the following command:

npm install -g sqip

We are now ready to start using SQIP to create low quality placeholder images. I've chosen a random picture of a dog as my test image (who doesn't like dogs?). In order to process the image, run the following command in your terminal:

sqip -o dog.svg dog.jpg

The above command will fire up the SQIP tool, process the dog.jpg image and spit out a low quality placeholder file called dog.svg. The newly processed image now looks a little like the following.

The image on the left shows the low quality SVG version, and the image on the right is the full quality version. The great thing about this tool is the low quality version of this image comes in at just 800 bytes – amazing!
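If you have a whole directory of images to process, you can wrap the same command in a small shell helper. This is my own sketch rather than anything from the SQIP documentation; it echoes the commands it would run (one per JPEG, writing an SVG with the same base name) so you can inspect them before piping the output to `sh`:

```shell
# Hypothetical helper: build a sqip command for every JPEG in a directory.
# Echoes the commands rather than running them, so you can review first.
make_sqip_commands() {
  for f in "$1"/*.jpg; do
    # ${f%.jpg}.svg strips the .jpg extension and appends .svg
    echo "sqip -o \"${f%.jpg}.svg\" \"$f\""
  done
}
```

Running `make_sqip_commands ./images | sh` would then generate a placeholder for each image.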

Lazy Loading with Intersection Observer

Now that we have both versions of our image, we can start lazy loading them. If you've not heard of Intersection Observer before, it is built into most modern browsers and lets you know when an observed element enters or exits the browser's viewport. This makes it ideal because it is able to deliver data asynchronously and won't impact the main thread, making it an efficient means of giving you feedback.

If you've ever used a traditional image lazy loader, you'll be aware that almost all of these libraries tap into the scroll event or use a periodic timer that checks the bounding box of an element before determining if it is in view. The problem with this approach is that it forces the browser to re-layout the entire page and in certain conditions will introduce considerable jank to your website. We can do much better using Intersection Observer.

In this article, I am going to touch on the basics of this lazy loading technique, but if you’d like a more in-depth guide, I recommend reading this article for more information.

Okay, let’s get started. Imagine a basic HTML page with three images similar to the one above. On the web page, you’ll have image elements that are similar to the code below:

<img class="js-lazy-image" src="dog.svg" data-src="dog.jpg">

In the code above you may notice that there are two image sources in the image tag. Firstly, we are loading the dog.svg image on page load, which is our low quality image. Next, we are using a data attribute called data-src that points to the full quality image source. We will use this to replace the low quality image with the full quality version as soon as it comes into the viewport. You may also notice that the image element has a class called js-lazy-image – this is used in the JavaScript code in order to determine which elements we want to lazy load.
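Putting the markup together, a minimal test page might look like the following. The extra image filenames and the lazyload.js script name are my own assumptions for illustration:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Lazy loading with SQIP placeholders</title>
</head>
<body>
  <!-- Low quality SVG placeholders load immediately; the full quality
       versions in data-src are only fetched once each image scrolls into view -->
  <img class="js-lazy-image" src="dog.svg" data-src="dog.jpg">
  <img class="js-lazy-image" src="cat.svg" data-src="cat.jpg">
  <img class="js-lazy-image" src="fox.svg" data-src="fox.jpg">

  <script src="lazyload.js"></script>
</body>
</html>
```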

I’ve created a JavaScript file called lazyload.js – it contains the following code:

// Get all of the images that are marked up to lazy load
const images = document.querySelectorAll('.js-lazy-image');
const config = {
  // If the image gets within 50px in the Y axis, start the download.
  rootMargin: '50px 0px',
  threshold: 0.01
};

// The observer for the images on the page
let observer = new IntersectionObserver(onIntersection, config);

// Start observing each of the images
images.forEach(image => {
  observer.observe(image);
});

The example above may look like a lot of code, but let's break it down step by step. Firstly, I am selecting all of the images on the page that have the class js-lazy-image. Next, I'm creating a new IntersectionObserver and using it to observe each of those images.

Using the default options for IntersectionObserver, your callback will be called both when the element comes partially into view and when it completely leaves the viewport. In this case, I am passing through a few extra configuration options to the IntersectionObserver. Using a rootMargin allows you to specify the margins for the root, effectively allowing you to either grow or shrink the area used for intersections. We want to ensure that if the image gets within 50px in the Y axis, we will start the download.

Now that we’ve created an Intersection Observer and are observing the images on the page, we can tap into the intersection event which will be fired when the element comes into view.

function onIntersection(entries) {
  // Loop through the entries
  entries.forEach(entry => {
    // Are we in viewport?
    if (entry.intersectionRatio > 0) {

      // Stop watching and load the image
      observer.unobserve(entry.target);
      preloadImage(entry.target);
    }
  });
}

In the code above, whenever an element that we are observing enters the user's viewport, the onIntersection function will be triggered. At this point we loop through the entries and check each one's intersectionRatio; if it is greater than zero, we know that the image is in the user's viewport and we can load it. Once the image is being loaded, we don't need to observe it any more, and calling unobserve() removes it from the Intersection Observer's list of targets. That's it! As soon as a user scrolls and the image comes into view, the appropriate image will be loaded.
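The preloadImage function referenced above isn't shown in the snippet; the name comes from the code, but the body below is my own minimal sketch. It simply promotes the full quality URL held in data-src to the element's src, which prompts the browser to fetch it:

```javascript
// Hypothetical implementation of preloadImage: swap the low quality
// placeholder in `src` for the full quality image held in `data-src`.
// Assigning to `src` triggers the browser to download the full image.
function preloadImage(image) {
  const src = image.dataset.src;
  if (!src) {
    // Nothing to lazy load for this element.
    return;
  }
  image.src = src;
}
```

In a production version you might first download the image via a new Image() object and only swap src once its load event fires, to avoid showing a partially downloaded image.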

Show me the money!

If you’d like to test this code in action, I have created a demo page which can be found at deanhume.github.io/lazy-observer-load. To give you a very high level visual of what the entire web page might look like, let’s imagine the page below.

You'll notice that because the middle image is in the user's viewport, it is lazy loaded and the low quality image is replaced with the full quality image. Everything below the viewport (red line) is still blurry. These images will only get replaced if the user scrolls to them, saving the user bandwidth and ensuring that the page loads faster.

If you are testing this demo on a fast connection, you might not even notice the images being swapped out. I found the best way to test was to enable network throttling in Chrome Developer Tools and disable the cache.

Summary

Using Low Quality Image Placeholders (LQIP) ensures that your users are presented with a usable page much faster. When combined with a lazy loading technique using Intersection Observer, your users save on bandwidth too. It has been fun to experiment with LQIP!

At the time of writing this article, Intersection Observer is supported in modern versions of Edge, Firefox, Chrome, and Opera, which is great news. Using basic feature detection would allow you to provide a fallback for older browsers that don't support this feature.
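A sketch of that feature detection might look like the following. The function name and the two loader callbacks are my own hypothetical helpers; the `win` parameter stands in for the browser's `window` object so the check is easy to test in isolation:

```javascript
// Feature-detect Intersection Observer before wiring up lazy loading.
// `lazy` would contain the observer setup from earlier in the article;
// `eager` would simply load every full quality image up front.
function initLazyLoading(win, images, { lazy, eager }) {
  if ('IntersectionObserver' in win) {
    // Supported: observe each image and load it when it scrolls into view.
    lazy(images);
    return 'lazy';
  }
  // Not supported: fall back to loading everything immediately.
  eager(images);
  return 'eager';
}
```

In the browser you would call it as `initLazyLoading(window, images, { lazy: observeAll, eager: loadAll })`, where the two callbacks wrap the code shown earlier.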

If you'd prefer not to write your own version of this lazy loading code, you can simply use the code directly from the GitHub repository.

For further reading, I thoroughly recommend the following articles:

A big thank you to both Tobias Baldauf and Robin Osborne for their help in reviewing this article.