There is a fundamental misunderstanding about how data is transferred on the Internet. Technical and non-technical people alike often confuse or conflate two separate concepts: bandwidth and latency. Understanding these two concepts is critical to understanding the core ideas behind web performance and the best optimization techniques.
If you think of the Internet as a series of tubes, latency is the length of the tube between two points, and bandwidth is how wide the tube is. Indeed, it's named bandwidth because it describes the width of the communications band. The wider the tube, the more data you can send. Easy enough, right? It is important to remember, though, that no matter how much data you are sending, you still have to move it the distance from point A to point B. The time that takes is the latency.
To better illustrate the concept, consider an example. The physical distance from Boston, Massachusetts to Stanford University is 4,320 kilometers. In a perfect scenario it would take 21.6 ms to transport data between the two points: data cannot travel faster than the speed of light, and in a fiber optic cable it moves at only about two-thirds of that speed, roughly 200,000 km/s. The round-trip time to Boston and back is 43.2 milliseconds. These are fundamental laws of nature; you will never get data to travel faster than that. There will always be a delay of at least 43.2 ms when Boston communicates with Stanford.
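The arithmetic behind those numbers can be sketched in a few lines. This assumes the commonly cited figure of roughly 200,000 km/s for signal propagation in fiber (about two-thirds of the speed of light in a vacuum); the function name is just for illustration.

```python
# Signal speed in fiber: ~200,000 km/s, i.e. 200 km per millisecond.
FIBER_SPEED_KM_PER_MS = 200

def one_way_latency_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay over fiber, in milliseconds."""
    return distance_km / FIBER_SPEED_KM_PER_MS

boston_to_stanford_km = 4320
print(one_way_latency_ms(boston_to_stanford_km))      # → 21.6 (one way)
print(2 * one_way_latency_ms(boston_to_stanford_km))  # → 43.2 (round trip)
```

No amount of engineering lowers this floor; it is set by distance and the speed of light.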
In reality, the latency is more than 43.2 ms. This is because no single continuous piece of fiber optic cable connects Boston and Stanford. Instead, the path goes through several segments, and along the way dozens of pieces of networking equipment add to the delay. No matter how big and powerful a router is, it does not operate at the speed of light! In practice, the Boston to Stanford round trip is typically on the order of 75-85 milliseconds.
Remember, we have yet to talk about bandwidth, or connection speeds. You need to wrap your head around the fact that sending even a single bit of data incurs a latency delay while it travels the physical distance to its destination. Having a 25 Mbps connection does not somehow allow data to travel that distance any faster. A higher bandwidth connection simply allows you to send or receive more data at once; the data still needs to travel to and from your computer.
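A minimal sketch makes the point concrete. The round-trip time and resource size below are assumptions chosen for illustration, not figures from the article: total fetch time is roughly the round-trip latency plus the transfer time, and raising bandwidth shrinks only the transfer term.

```python
RTT_MS = 80    # assumed round-trip latency in milliseconds
SIZE_KB = 100  # assumed size of the resource in kilobytes

def fetch_time_ms(bandwidth_mbps: float) -> float:
    """Approximate time to fetch the resource: latency + transfer time."""
    # bandwidth in megabits/s equals kilobits per millisecond,
    # so transfer time (ms) = size in kilobits / bandwidth.
    transfer_ms = SIZE_KB * 8 / bandwidth_mbps
    return RTT_MS + transfer_ms

for mbps in (25, 50, 100):
    print(mbps, fetch_time_ms(mbps))  # → 112.0, 96.0, 88.0
```

Quadrupling the bandwidth from 25 to 100 Mbps cuts the total from 112 ms to 88 ms, only about a 21% improvement, because the 80 ms of latency is untouched.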
In the late 1990s and early 2000s, the impact of latency was less visible because personal Internet connections were quite slow compared to today. Latency was masked because the delay in sending a request and waiting for a response was much smaller than the total time it took to download the response. That is no longer the case. It is not uncommon for a browser requesting a small image to wait 100-150 milliseconds before spending 5 milliseconds downloading the image contents. Latency then accounts for 90-95% of the total time to request and download the resource. This is tremendously inefficient!
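Those percentages follow directly from the article's example numbers (100 ms of waiting versus 5 ms of downloading):

```python
wait_ms = 100      # time spent waiting on latency
transfer_ms = 5    # time spent actually downloading bytes

latency_share = wait_ms / (wait_ms + transfer_ms)
print(f"{latency_share:.0%}")  # → 95%
```

With the 150 ms figure the share rises to roughly 97%, so the 90-95% range in the text is, if anything, conservative.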
So why should we care about latency? Because it is the basis for many of your performance optimizations! Whether a customer signs up for a 25 Mb, 50 Mb, or 100 Mb bandwidth package, the Internet provider's network can still be congested with high latency, which will always bog down the connection. Next time your Internet is slow… ask about the latency!
Article reference: Bandwidth, Latency, and the “Size of your Pipe” by Billy Hoffman