
5 Dev Tips to Improve Your Largest Contentful Paint (LCP)

A tutorial for developers looking to improve loading performance and get their LCP to within 2.5 seconds, in accordance with Core Web Vitals.
Apr 15th, 2023 4:00am

Largest Contentful Paint (LCP) measures loading performance. It’s one of the three Core Web Vitals from Google, but one that “developers are having trouble optimizing for.” It measures, in seconds, how long the largest image, video or text block in the viewport of the webpage takes to render.

A “good” LCP is anything under 2.5 seconds. But “only 52.7% of sites meet the ‘good’ LCP threshold.” So why is it so tricky to achieve? And how do we get a good LCP on an existing site?

Demo

I’ve built a demo website with all the classic non-performance features:

  • Google Tag Manager for analytics
  • Multiple different web fonts
  • A third-party library like Bootstrap
  • Moment.js for the copyright year in the footer
  • A sprinkling of jQuery for hover animations
  • An unoptimized autoplay video in the hero
  • Unoptimized PNG images
  • Ad and cookie banner injection for a good old layout shift

I’ve tried to build a site that is as “real world” as I can: not simply a carousel (although I do have one of those too), but an average website, checked on EcoPing, a service that helps you track key metrics like website performance and website carbon emissions. Only 26% of the internet runs on renewables, meaning that large websites typically produce emissions. The average 2.3 MB website produces around 0.6 grams of CO2e per page view.

You might be thinking, “Surely people know to optimize their images by now,” but unfortunately the size of images on websites has grown by 12.7% in the last five years, and it’s an ongoing problem across sites of all sectors. It’s also one contributor to why overall page transfer size is on the rise.

That said, our demo site is a whopping 64 MB, with a mobile LCP of 6 seconds. In Core Web Vitals land, this is labeled “poor,” so let’s see how we can improve it.

Goals:

  • Improve mobile LCP score to be “good.”
  • Render all the page HTML in the document.
  • Defer as much non-critical CSS and JavaScript as possible.
  • Limit the number of requests that fire for below-the-fold content on load.
  • Reduce blocking requests.

Start by Measuring Our LCP

Highlighted metrics from a PageSpeed Insights report.

We can only start to improve a site once we start to measure it. Then we know whether the refactoring we do is beneficial or making the problem worse.

PageSpeed Insights is a great tool that helps us make our websites faster. It gives us a page report with all our Core Web Vitals metrics in one place. Here’s a look at our demo’s current metrics, which I like to write down in a spreadsheet so I can measure the change over time as I make improvements.

  • First Contentful Paint: 3.1 s
  • Largest Contentful Paint: 3.2 s
  • Total Blocking Time: 0 ms
  • Cumulative Layout Shift: 0.201
  • Speed Index: 14.9 s

Each report has fantastic features like graphs and treemaps of our bundle sizes, all of which help us visualize performance problems. It also gives tips on where to improve and, potentially, by how much. These are what we will use to improve our LCP.

List of PageSpeed Insights audits and suggested improvements, from high to low priority.

Step 1: 🔺Properly Size Images

Our site is very image heavy, just like many around the web. We’ve got 13 .png images with an average transfer size of 4.3 MB, making each image nearly twice the entire transfer size of the average web page.

I’ve swapped the first two opportunities on the list around, since we need to resize the images before we serve them in different formats.

For image resizing and optimizing, use a free online tool; many content creators and uploaders don’t have the resources to do this programmatically in the CLI or in other tooling. It’s a bit cumbersome, but here’s how:

  1. Using BulkResize, we can take our large 2,000px-plus .png images from Unsplash down to 750px (the maximum width an image occupies in my desktop layout). Doing this reduces the total file size from 61.6 MB to 1.5 MB (a 97% decrease). Putting the results through TinyJPG for good luck brings this down by a further 334 KB (22%) to 1.2 MB.
  2. We can repeat step one with our 750px .jpg files to get a 480px version for small devices, giving a final file size of 618 KB for the smaller set.

Awesome, now we have some super small .jpg images. It’s taken our site images from 58.8 MB to 1.2 MB.

Step 2: 🔺Serve Images in Next-Gen Formats

PageSpeed Insights table of potential savings from serving next-gen image formats.

Serving images in next-gen formats means converting older image formats like these .pngs to a web-optimized and overall more performant format, like .webp or .avif. For this site, we’re going to do both. In total, we will have three formats: .jpg (the fallback image), .webp (performant) and .avif (even more performant).

To do this we’re going to:

  • Convert both 480px and 750px .jpg images to .webp by using a free online tool, which will drop the total file size down by 200 KB.
  • Convert both 480px and 750px .jpg images to .avif by using a free online tool that will drop the total file size down by 100 KB.

To serve these images, we’re going to upgrade the humble img HTML element to use the responsive picture element.
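As a sketch, assuming hypothetical file names for the 480px and 750px versions we created above, the markup could look like this:

    <picture>
      <!-- Most performant format first: the browser uses the first source type it supports -->
      <source type="image/avif" srcset="hero-480.avif 480w, hero-750.avif 750w" sizes="(max-width: 480px) 480px, 750px">
      <source type="image/webp" srcset="hero-480.webp 480w, hero-750.webp 750w" sizes="(max-width: 480px) 480px, 750px">
      <!-- .jpg fallback for browsers that support neither; width/height reserve space and help CLS -->
      <img src="hero-750.jpg" width="750" height="500" alt="Hero image">
    </picture>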


Once this has been deployed, we can see our newly served images:

Loaded site images showing their new avif format.

When our page loads, it currently requests and renders all of these images, even though they are below the fold. This isn’t ideal. Fixing it won’t help the LCP much in this case, but it will improve overall performance and the Speed Index.

A graph of website resources changing over time, showing the decrease after image optimization.

To get around this, I’m adding a little 3 KB package called lazysizes, which will handle lazy loading of all my images, even inside the picture element. It’s not ideal to add another script, but for such a small package, I think the juice is worth the squeeze. Plus, we’ll clean up the JavaScript in the next step.
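With lazysizes, the pattern is to load the script, swap src/srcset for data-src/data-srcset, and add the lazyload class. A minimal sketch, again with hypothetical file names:

    <script src="lazysizes.min.js" async></script>

    <picture>
      <source type="image/avif" data-srcset="gallery-480.avif 480w, gallery-750.avif 750w">
      <source type="image/webp" data-srcset="gallery-480.webp 480w, gallery-750.webp 750w">
      <!-- lazysizes promotes data-src to src as the image approaches the viewport -->
      <img data-src="gallery-750.jpg" class="lazyload" width="750" height="500" alt="Gallery image">
    </picture>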

A list of images that are requested and rendered when the page is loaded.

Step 3: 🔺Eliminate Render-Blocking Resources

A timing waterfall of a website’s resources and the first items to render.

Scripts are a very common reason for a long LCP. Most of the time they are added to the head of the document, so they load, and block rendering, before our content is shown.

Most of our website’s resources are needed below the fold of the page, so we don’t need to load so much up front. We can add the defer attribute to all of our own scripts, which tells the browser to execute them only after the document has been parsed.

For third-party scripts (anything independent of our own code) we’re going to use async instead; for this site, that means our gtag script.
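As a sketch, our script tags end up looking something like this (main.js and the gtag ID are placeholders):

    <!-- Our own code: fetched in parallel, executed in order after the document is parsed -->
    <script src="main.js" defer></script>

    <!-- Independent third party: fetched in parallel, executed as soon as it arrives -->
    <script src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX" async></script>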

Step 4: 🔺Reduce Unused JavaScript

A table of website JavaScript and CSS resources, ordered by most unused bytes.

The next two steps are probably the most important for improving our LCP. We’ve seen some great improvements so far, taking it down by 1.4 seconds, but hopefully we can get it down further.

To understand how much JavaScript is unused, we can use the coverage report in dev tools and the treemap within PageSpeed Insights. These give us a breakdown of all the used and unused bytes within our website. For our site, over 50% of the resources aren’t used, so do we need them at all? Probably not. Let’s have a look at what we have, and why we have it:

  • GTag is 87.3 KB and 36% is unused.
    • Purpose: Site Analytics
    • Alternative: Plausible is a fantastic replacement, coming in at under 1 KB; it can be deferred, and it’s cookie-free, so no consent popup is needed, which also improves CLS (see the snippet after this list).
  • jQuery.slim is 25 KB and 64% is unused.
    • Purpose: Initially added to toggle some classes, add hover effects to images, and enqueue our ad and cookie banners.
    • Alternative: Even the slim version isn’t needed, as we can do all of this with vanilla JavaScript. Plus, showing the ads immediately reduces CLS.
  • Bootstrap is 25.2 KB and 83.5% is unused.
    • Purpose: It was added to animate a carousel.
    • Alternative: Use a dependency-free alternative like Glider.js.
  • Moment is 17.5 KB and 69% is unused.
    • Purpose: It was added to render the copyright year.
    • Alternative: Use vanilla JavaScript: new Date().getFullYear() (see the snippet after this list).
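For reference, here are quick sketches of the two swaps flagged above. The Plausible tag follows the snippet from its docs (the data-domain value is a placeholder), and the copyright year needs only one line of vanilla JavaScript:

    <!-- Plausible: under 1 KB, deferred and cookie-free -->
    <script defer data-domain="example.com" src="https://plausible.io/js/script.js"></script>

    <!-- Copyright year without Moment.js -->
    <footer>
      © <span id="year"></span> My Demo Site
      <script>
        // Write the current four-digit year into the footer
        document.getElementById('year').textContent = new Date().getFullYear();
      </script>
    </footer>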

If we still wanted to use these libraries, the best approach would be a bundler like webpack, so that the unused parts are tree-shaken out.

Step 5: 🟧 Minify JavaScript and Reduce Unused CSS

CSS

To keep my hypothetical client happy, I needed a site online quickly, so I added Bootstrap, a wonderful utility library, to the site.

It’s full of fantastic utility classes, but for this site it leads to quite a lot of bloat, as we can see in a favorite tool of mine, CSS Stats. Looking at the dev tools coverage report, 93% of the CSS is unused. Not good.

I copied as much CSS as I needed from the inspector (the containers and a columned grid) and popped it all into its own CSS file. I could have inlined it within the document, and it would have loaded faster, but it’s a bit messy and I like the idea of it being cached for returning users.

Writing my own styles instead of using a framework took my CSS from 50 KB down to 2.5 KB.
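As a rough sketch of the kind of replacement I mean (class names and values are illustrative, not Bootstrap’s exact ones):

    <style>
      /* A centered, padded wrapper, much like Bootstrap's .container */
      .container {
        max-width: 1140px;
        margin: 0 auto;
        padding: 0 15px;
      }

      /* A responsive column grid without the rest of the framework */
      .row {
        display: grid;
        grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
        gap: 30px;
      }
    </style>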

Removing the unused CSS took 0.4 seconds off the LCP, and also improved FCP by 1 second and the Speed Index by half a second.

Fonts

The median number of font requests is now four on mobile (up 33% in the last five years) and five on desktop. In line with this, I’d added my four favorite Google Web Fonts: Oxygen (10.4 KB), PT Serif, Playfair Display (35 KB), and Rufina.

I’m actually only using two of these: Oxygen for the body text and Playfair Display for headings. But the CSS for these fonts blocks rendering and delays the LCP, so I’m going to get rid of them entirely.

Designers may hate me for this, but I’ve swapped the two web fonts for system fonts, ones already on the user’s computer: a saving of 45 KB in total that, importantly for us, stops the font CSS from blocking rendering. To my developer’s eye, it’s worth it; you can barely tell the difference, and nothing a bit of line height and letter spacing can’t fix (see the sketch after the table below).

Web Fonts → System Fonts
Heading: Playfair Display → Georgia, serif
Paragraph: Oxygen → Helvetica, sans-serif
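In CSS, the swap is a one-line change per selector, plus the line-height and letter-spacing tweaks mentioned above (the values here are illustrative):

    <style>
      h1, h2, h3 {
        font-family: Georgia, serif; /* replaces Playfair Display */
        letter-spacing: 0.02em;
      }

      body {
        font-family: Helvetica, Arial, sans-serif; /* replaces Oxygen */
        line-height: 1.6;
      }
    </style>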

Housekeeping

There are a few more things we can do to improve this site…

Video

For the video, we currently have a 23 MB, 9-second, 1080p .mp4 hero. Reducing its length, quality and format will be a big saving.

For this, I used iMovie to trim the video and export it at 720p, which already brings it down to 4.7 MB. From there, I converted the .mp4 to WebM, a sister format to WebP that is already well supported by browsers.

This reduced it even further, to 3.7 MB. Perfect! That’s still more than an entire average website, but it’s not 23 MB, so I’m going to call it a win. I did try embedding the video in an iframe from Vimeo, but that shot the LCP up to 5.3 seconds, doubled the Speed Index and increased blocking time, so I removed it again.
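Served with the .mp4 kept as a fallback, the hero markup could look like this (file names, dimensions and the poster image are hypothetical):

    <!-- The browser plays the first source it supports: .webm first, .mp4 as fallback -->
    <video autoplay muted loop playsinline width="1280" height="720" poster="hero-poster.jpg">
      <source src="hero-720.webm" type="video/webm">
      <source src="hero-720.mp4" type="video/mp4">
    </video>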

Top tip: Remove autoplay video altogether in favor of a static image or an SVG animation, using something like Lottie.

Conclusion

Before

After

It’s been a fun process documenting how I improved this website’s LCP score.

Since every site is built differently and has its own needs and requirements, sadly there is no single fix for everything. Improving LCP is an amalgamation of many smaller tweaks, and it takes time. Great tools like PageSpeed Insights give us the visibility to measure our improvements over time and confirm we’re moving in the right direction.

A graph of our website resources over time, from 69 MB to under 3 MB.

Tools I used:

  • PageSpeed Insights
  • EcoPing
  • BulkResize
  • TinyJPG
  • lazysizes
  • CSS Stats
  • iMovie
