Measuring web performance: lab vs. production

Carlos Morales • 2022-06-29

Measuring web performance is tricky: the performance of a site can vary dramatically based on a user’s device, the network conditions, and how the user interacts with the page. This complexity starts with deciding what data to collect and how to collect and send it. This document first explains how to collect the data and then offers a high-level solution.

Lab vs field measurements

Performance metrics are generally measured in one of two ways:

- In the lab: synthetic measurements taken in a controlled environment, typically during development or in CI/CD.
- In the field: measurements collected from real users as they browse the site, often called Real User Monitoring (RUM).

Let’s further analyze these two options and how they could be implemented.

In the lab

The baseline is to collect synthetic data with a tool that simulates a page load and quantifies the time it takes to load. This process runs in a development or CI/CD environment.
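For example, a CI step could run Lighthouse with JSON output and fail the build when the performance score drops below a budget. A minimal TypeScript sketch of that scoring step, assuming the standard Lighthouse report fields (`categories.performance.score`, `audits[...].numericValue`) and an inlined sample report for illustration:

```typescript
// Minimal shape of the fields we read from a Lighthouse JSON report.
interface LighthouseReport {
  categories: { performance: { score: number } }; // score in the range 0..1
  audits: Record<string, { numericValue?: number }>;
}

// Fail a CI run when the synthetic performance score drops below a budget.
function checkBudget(report: LighthouseReport, minScore: number): boolean {
  return report.categories.performance.score * 100 >= minScore;
}

// Sample report, inlined for illustration; a real run produces this JSON.
const sample: LighthouseReport = {
  categories: { performance: { score: 0.93 } },
  audits: { 'largest-contentful-paint': { numericValue: 1800 } },
};

console.log(checkBudget(sample, 90)); // true: score 93 meets the budget
```

A real pipeline would read the report from disk after running the Lighthouse CLI, but the budget check itself stays this simple.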



Options for implementation

In the field

The goal is to measure how the page performs for real users. Some JavaScript must be injected into the web client; this code must collect the data and send it to a backend without affecting the overall performance of the page.
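A sketch of such an agent, assuming the open-source web-vitals library on the client (its v3-style onLCP/onCLS callbacks) and a hypothetical /rum endpoint on the backend. The serialization step is a pure function, so it can run and be tested outside the browser:

```typescript
// One collected sample; `id` lets the backend deduplicate repeated reports.
interface VitalSample {
  name: string;   // e.g. 'LCP', 'CLS'
  value: number;  // milliseconds for timings, unitless for CLS
  id: string;     // unique per metric instance
  page: string;   // which page the sample came from
}

// Pure serialization step: build the beacon payload.
function toPayload(sample: VitalSample): string {
  return JSON.stringify(sample);
}

// In the browser, the agent wires this to the web-vitals callbacks and
// reports via sendBeacon so it never blocks the page (sketch, not run here):
//
//   import { onLCP, onCLS } from 'web-vitals';
//   const report = (name: string) => (m: { value: number; id: string }) =>
//     navigator.sendBeacon('/rum', toPayload({
//       name, value: m.value, id: m.id, page: location.pathname,
//     }));
//   onLCP(report('LCP'));
//   onCLS(report('CLS'));

console.log(toPayload({ name: 'LCP', value: 1800, id: 'v3-1', page: '/home' }));
```

navigator.sendBeacon is the right transport here because it queues the data asynchronously and survives page unloads, which is exactly the "do not affect performance" constraint.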



Options for implementation

The problem of measuring only in the lab

Because measuring web performance in the field is complex, teams often focus only on synthetic measurements. Sadly, this does not show the real picture.

Based on the research document Lighthouse scores as predictors of page-level CrUX data, although there is a positive correlation between lab performance measurements (Lighthouse) and real user experience, more than 40% of all pages that scored >90 on Lighthouse did not meet one or more of the recommended Core Web Vitals thresholds. The research concludes: “for some pages, there are issues troubling performance in the field that Lighthouse isn’t (yet?) able to capture … remember to check both lab and field data if you’re looking to accurately assess your page’s performance”.
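The Core Web Vitals thresholds mentioned above are published by Google: at the 75th percentile, LCP should be at most 2.5 s, FID at most 100 ms, and CLS at most 0.1, with a “poor” band starting at 4 s, 300 ms, and 0.25 respectively. A small sketch of rating field samples against them:

```typescript
type Rating = 'good' | 'needs-improvement' | 'poor';

// Google's published Core Web Vitals thresholds: [good boundary, poor boundary].
const thresholds: Record<string, [number, number]> = {
  LCP: [2500, 4000], // milliseconds
  FID: [100, 300],   // milliseconds
  CLS: [0.1, 0.25],  // unitless layout-shift score
};

function rate(metric: string, value: number): Rating {
  const [good, poor] = thresholds[metric];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}

console.log(rate('LCP', 1800)); // 'good'
console.log(rate('CLS', 0.3));  // 'poor'
```

This is the kind of check that a Lighthouse score of 90+ cannot guarantee on its own, which is exactly the gap the research above describes.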

In conclusion, collecting data in the field should be mandatory for assessing and monitoring user-perceived performance.

Build a solution for collecting web performance from real users

In my current company we have experience with AppDynamics and Dynatrace. Both have modules that would allow us to monitor web performance. Sadly, these solutions have met some resistance in the last few years, either because of license costs or complexity.

My proposal was to reuse the available Kubernetes infrastructure (Prometheus and Grafana) and fill the gaps by building our own solution: a client agent built in TypeScript that collects the performance metrics, and a backend service that exposes this data to Prometheus.
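On the backend side, the service only needs to aggregate incoming samples and render them in Prometheus’ text exposition format when scraped. A minimal in-memory sketch, assuming hypothetical metric names such as web_vitals_lcp_ms:

```typescript
// In-memory store of the latest value per series (metric name + page label).
const store = new Map<string, number>();

function record(metric: string, page: string, value: number): void {
  store.set(`${metric}{page="${page}"}`, value);
}

// Render the store in Prometheus' text exposition format; a real service
// would serve this on /metrics and use histograms rather than last-value
// gauges so percentiles can be computed in Grafana.
function renderMetrics(): string {
  return [...store.entries()]
    .map(([series, value]) => `${series} ${value}`)
    .join('\n');
}

record('web_vitals_lcp_ms', '/home', 1800);
record('web_vitals_cls', '/home', 0.05);
console.log(renderMetrics());
```

Keeping only labeled series and letting Prometheus scrape them means the backend stays stateless apart from this small aggregation window.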

In this solution, one of the trickiest problems is defining which metrics to collect. My proposal was to use Web Vitals.

Web Vitals is an initiative by Google to “provide unified guidance for quality signals that are essential to delivering a great user experience on the web”. It offers: