
What Is Network Latency and How to Reduce It to Improve Website Performance

John Demian

So you finally launched your service worldwide, great! The next thing you’ll see is thousands and thousands of people flooding into your amazing website from all corners of the world expecting to have the same experience regardless of their location.

Here is where things get tricky. Building an infrastructure that supports the expansion of your service across the globe without sacrificing user experience is going to be really tough, because distance introduces latency.

So in this post, we are going to see what latency is, how it affects your website performance and how you can measure and reduce it to ensure a flawless experience for all your users regardless of their location around the world.


What Is Latency?

Latency is the time it takes for a request to travel from the sender to the receiver and for the receiver to process that request. In other words, it’s the time it takes for the request sent from the browser to be processed and returned by the server.
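To make that concrete, here's a minimal sketch of timing a single request from the browser, assuming a modern browser with the Fetch API and performance.now(); the URL is just a placeholder. Note that, like the definition above, it includes the time the server spends processing the request.

```typescript
// Minimal sketch: time a single request/response cycle from the browser.
// Assumes fetch() and the High Resolution Time API are available.
async function measureRequestLatency(url: string): Promise<number> {
  const start = performance.now();
  // HEAD keeps the payload small; "no-store" avoids measuring the cache.
  await fetch(url, { method: "HEAD", cache: "no-store" });
  return performance.now() - start; // elapsed time in milliseconds
}

// Example usage with a placeholder URL
measureRequestLatency("https://example.com/")
  .then((ms) => console.log(`Request completed in ${ms.toFixed(1)} ms`));
```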

Consider a simple e-commerce store that caters to users worldwide. High latency will make browsing categories and products very frustrating, and in a space as competitive as online retail, a few extra seconds can amount to a fortune in lost sales.


How Network Latency Affects Website Performance

There are no two ways about it: your app will behave differently depending on your users’ location, and it has to do with network latency.

The experience of your customers living in a different part of the world will be vastly different from what you see in your own tests. It’s crucial to gather data from every location your users are in, so you can take the necessary steps to fix any issues.

What Is a Good Latency?

While there’s no single number that defines good latency, because every case is unique in its own way, as a guideline your website should aim to load completely in under 3 seconds. That means images, scripts, dynamic elements, everything needs to load in under 3 seconds.

There’s also going to be some delay introduced by the client itself, as the browser needs time to process requests and render the response. So while 3 seconds might sound like plenty to load a few KB of data, the truth is that you should be very mindful of the size and complexity of your code and requests, as they take time to load and render.
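As a quick way to check where you stand against that 3-second guideline, here's a minimal sketch using the browser's Navigation Timing API (assuming a modern browser); the 3000 ms budget below is simply the guideline from above, not a hard rule.

```typescript
// Minimal sketch: compare total page load time against a 3-second budget
// using the Navigation Timing API (assumes a modern browser).
window.addEventListener("load", () => {
  // loadEventEnd is only populated after all load handlers finish,
  // so defer reading the entry to the next task.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    if (!nav) return;

    const totalLoadMs = nav.loadEventEnd - nav.startTime; // navigation start -> load event end
    const budgetMs = 3000; // the 3-second guideline

    console.log(`Page loaded in ${totalLoadMs.toFixed(0)} ms`);
    if (totalLoadMs > budgetMs) {
      console.warn(`Over budget by ${(totalLoadMs - budgetMs).toFixed(0)} ms`);
    }
  }, 0);
});
```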

Why Should You Measure Network Latency?

Providing a consistently good service across the world is not just good for your image, it’s good for your business. You’ll soon come to realize that a bad user experience affects your bottom line directly.

[Image: measure network latency. Source: fastcompany.com]

To use the e-commerce example again, here is a little quote from Yoast that puts things into perspective: 79% of online shoppers say they won’t go back to a website if they’ve had trouble with load speed.

And let’s not forget that fast-loading sites have higher conversion rates and lower bounce rates. So it’s not a matter of whether you should or shouldn’t invest in optimizing your load speed across multiple regions, but rather whether you can afford not to.


What Causes Network Latency?

There are thousands of little variables that make up your network latency, but they mainly fall under one of these four categories:

  • Transmission mediums – Your data travels across large distances in different forms, either as electrical signals over copper cabling or as light waves over fiber optics. Every time it switches from one medium to another, a few extra milliseconds are added to the overall transmission time.
  • Payload – The more data is transferred, the slower the communication between client and server (see the sketch after this list).
  • Routers – Routers take time to analyze the header information of a packet and, in some cases, add information to it. Each hop a packet takes from router to router increases latency. Depending on the device, there may also be MAC address and routing table lookups.
  • Storage delays – Delays occur when a packet is stored or accessed by intermediate devices such as switches and bridges.
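To see how much payload contributes on your own pages, here's a minimal sketch that lists the heaviest resources via the browser's Resource Timing API (assuming a modern browser; cross-origin resources without a Timing-Allow-Origin header report a transfer size of 0).

```typescript
// Minimal sketch: list the ten heaviest resources on the current page via
// the Resource Timing API, since larger payloads take longer to transfer.
const resources = performance.getEntriesByType(
  "resource"
) as PerformanceResourceTiming[];

resources
  .filter((r) => r.transferSize > 0) // entries without timing access report 0
  .sort((a, b) => b.transferSize - a.transferSize)
  .slice(0, 10)
  .forEach((r) => {
    const durationMs = r.responseEnd - r.startTime;
    console.log(
      `${r.name}: ${(r.transferSize / 1024).toFixed(1)} KB in ${durationMs.toFixed(0)} ms`
    );
  });
```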

How to Measure Network Latency?

Network latency is always measured in milliseconds (ms) and is usually exposed through two metrics – Time to First Byte and Round Trip Time. You can use either of them to test your network. However, whichever you choose, make sure you measure consistently so your results stay comparable over time. It’s important to monitor any changes and address the culprits if you want to ensure a smooth user experience and keep your customer satisfaction score high.

Time to First Byte

Time to First Byte (TTFB) is the time it takes for the first byte of each file to reach the user’s browser after a server connection has been established.

The TTFB itself is affected by three main factors:

  • The time it takes for your request to propagate through the network to the web server
  • The time it takes for the web server to process the request and generate the response
  • The time it takes for the response to propagate back to your browser
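Those three components map fairly directly onto the browser's Navigation Timing entries. Here's a minimal sketch of reading TTFB and a rough breakdown of it, assuming a modern browser; exact TTFB definitions vary slightly between tools.

```typescript
// Minimal sketch: read TTFB and its main components from the Navigation
// Timing API. Definitions vary slightly between tools.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

if (nav) {
  const dnsMs = nav.domainLookupEnd - nav.domainLookupStart; // DNS resolution
  const connectMs = nav.connectEnd - nav.connectStart;       // TCP (and TLS) connection
  const waitMs = nav.responseStart - nav.requestStart;       // request sent -> first byte received
  const ttfbMs = nav.responseStart - nav.startTime;          // navigation start -> first byte

  console.table({ dnsMs, connectMs, waitMs, ttfbMs });
}
```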

Learn more about TTFB and how you can reduce it from our post about the most important website performance metrics you should monitor to improve page load speed. You can easily test TTFB with a website speed testing tool, which will also give you detailed information about many other metrics such as render-blocking code, page sizes, and more.

Round Trip Time

Round Trip Time (RTT), also called Round Trip Delay (RTD), is the time it takes for a browser to send a request and receive a response from a server. RTT is perhaps the most popular metric used to measure network latency, and it is measured in milliseconds (ms).

RTT is influenced by a few key components of your network:

  • Distance – The greater the distance between server and client, the longer it takes for the signal to make the trip back
  • Transmission medium – The medium used to route a signal
  • The number of network hops – Intermediate routers or servers take time to process a signal, increasing RTT. The more hops a signal has to travel through, the higher the RTT.
  • Traffic levels – RTT typically increases when a network is congested with high levels of traffic. Conversely, low traffic times can result in decreased RTT.
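Because RTT fluctuates with congestion and routing, a single measurement can be misleading. Here's a minimal sketch that averages a handful of timed requests to get a steadier estimate; the /ping endpoint is hypothetical, and since the measurement runs over HTTP it also includes a little server processing time.

```typescript
// Minimal sketch: estimate RTT by averaging several timed HEAD requests.
// The endpoint is hypothetical; a lightweight URL keeps server time minimal.
async function estimateRtt(url: string, samples = 5): Promise<number> {
  const timings: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await fetch(url, { method: "HEAD", cache: "no-store" });
    timings.push(performance.now() - start);
  }
  // Averaging smooths out spikes caused by transient congestion.
  return timings.reduce((sum, t) => sum + t, 0) / timings.length;
}

estimateRtt("https://example.com/ping").then((ms) =>
  console.log(`Estimated RTT: ~${ms.toFixed(1)} ms over 5 samples`)
);
```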

How to Reduce Latency: Using RUM Tools to Fix Network Latency

So you’re dealing with a user base that’s spread across multiple continents, you understand the importance of keeping performance in good standing regardless of user location, and you even know what to look for. All that’s left is the “how” part.

In the past few years, the popularity of RUM (real user monitoring) tools has grown exponentially as a byproduct of the increasing number of companies taking their products and services global. An ever-growing need to monitor and understand user behavior led many monitoring companies to move from just monitoring server resources to looking at how users actually experience the website.

This is where Sematext Experience comes into play. It’s a RUM solution that provides insightful data about how your web app is performing in all the different corners of the world by measuring data in real time, directly from your users.

The tool offers a real-time overview of your users’ interactions with the website, allowing you to pinpoint exactly where your system is underperforming.


With our advanced dashboard dedicated to Page Loads, you’ll see all the details on what is causing latency, with a breakdown by category, from DNS and server processing time to how long rendering takes. From this, you can work out where your weak spots are and which areas you need to prioritize.


Besides providing key information on how the website performs across different locations, Sematext Experience also shows you how the website loads on different devices and at different connection speeds.


Final thoughts

Understanding your users’ experience is always going to be crucial, as it influences whether or not they return to your site, which in turn directly impacts your bottom line.

Latency is always going to influence the performance of your website but with the proper tooling, you can mitigate its impact on user experience by addressing the main issues causing the latency in the first place.

If you’re interested in improving overall website health, you should keep an eye on more than latency. Learn about the top 10 metrics affecting site performance, as well as the most important UX metrics you should monitor to improve customer satisfaction. But tracking so many metrics can be overwhelming without proper tooling. Use RUM solutions and other types of website monitoring tools to support your monitoring strategy and ensure full visibility into your users’ digital experience.