I am struggling to draw a clear line between latency, bandwidth and throughput.
Can someone explain me in simple terms and with easy examples?
When a SYN packet is sent using TCP, the sender waits for a SYN+ACK response; the time between sending and receiving is the latency. It's a function of a single variable: time.
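To make that concrete, here's a small Python sketch that times a TCP handshake. `socket.create_connection` returns once the three-way handshake (SYN, SYN+ACK, ACK) completes, so the elapsed time approximates one round trip; the host and port below are just placeholders.

```python
import socket
import time

def measure_connect_latency(host, port=443, timeout=5.0):
    """Approximate one round-trip time by timing a TCP handshake.

    connect() returns once SYN -> SYN+ACK -> ACK completes, so the
    elapsed wall-clock time is roughly one RTT to the server.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start
    return elapsed
```

Note this measures the handshake only; it says nothing about how fast data will flow afterwards, which is exactly the latency/bandwidth distinction.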
If we're doing this on a 100 Mbit/s connection, that figure is our theoretical bandwidth, i.e. how many bits per second we can send.
If I compress a 1000 Mbit file down to 100 Mbit and send it over the 100 Mbit/s line, then my effective throughput could be considered 1 Gbit/s. Theoretical throughput and theoretical bandwidth are the same on this network, so why am I saying the throughput is 1 Gbit/s?
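A quick sketch of that arithmetic, assuming the idealized numbers from the example (compression and protocol overhead otherwise ignored):

```python
def effective_throughput_mbps(original_mbit, compressed_mbit, link_mbps):
    """Application-level throughput when data is compressed before sending.

    Time on the wire is set by the compressed size and the link rate,
    but the application delivered the *original* amount of data.
    """
    transfer_time_s = compressed_mbit / link_mbps  # seconds spent on the wire
    return original_mbit / transfer_time_s         # Mbit/s as seen end to end

# 1000 Mbit compressed to 100 Mbit over a 100 Mbit/s link:
# one second on the wire, 1000 Mbit delivered -> 1000 Mbit/s effective.
print(effective_throughput_mbps(1000, 100, 100))
```

The link itself never exceeded 100 Mbit/s; only the application-level measurement changed.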
When talking about throughput, I hear it most in relation to an application. The 1 Gbit/s throughput example I gave assumed compression at some layer in the stack, and we measured throughput there: the throughput of the actual network did not change, but the application throughput did. Sometimes throughput refers to actual throughput, e.g. a 100 Mbit/s connection is the theoretical bandwidth and also the theoretical throughput in bits per second, but that is highly unlikely to be what you'll actually get.
Throughput is also used for whole systems, e.g. the number of dogs washed per day or the number of bottles filled per hour. You don't often use bandwidth in this way.
Note that bandwidth in particular has other common meanings. I've assumed networking because this is Stack Overflow, but on a maths or amateur-radio forum I might be talking about something else entirely.
https://en.wikipedia.org/wiki/Bandwidth
https://en.wikipedia.org/wiki/Latency
This is worth reading on throughput.
Here is my bit, in plain language that I can understand:
When you go to buy a water pipe, there are two completely independent parameters to look at: the diameter of the pipe and its length. The diameter determines the throughput of the pipe, and the length determines the latency, i.e., the time it takes a water droplet to travel from one end of the pipe to the other. The key point is that length and diameter are independent; thus, so are the latency and throughput of a communication channel.
More formally, throughput is defined as the amount of water entering or leaving the pipe every second, and latency is the average time required for a droplet to travel from one end of the pipe to the other.
Let’s do some math:
For simplicity, assume our pipe has a 4 in × 4 in square cross-section and is 12 in long. Now assume each water droplet is a 0.1 in × 0.1 in × 0.1 in cube; thus one cross-section of the pipe fits 40 × 40 = 1600 water droplets. Finally, assume the droplets travel at a rate of 1 inch/second.
Throughput: each slice of droplets moves into the pipe in 0.1 seconds, so 10 slices enter per second, i.e., 16,000 droplets enter the pipe per second. Note that this is independent of the length of the pipe.
Latency: at one inch/second, it takes 12 seconds for a droplet to get from one end of the pipe to the other, regardless of the pipe's diameter. Hence the latency is 12 seconds.
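The arithmetic above can be checked with a few lines of Python, using the same made-up pipe dimensions:

```python
# Pipe: 4in x 4in cross-section, 12in long; droplets are 0.1in cubes
# moving at 1 inch/second (the numbers from the example above).
side_in, length_in = 4.0, 12.0
droplet_in = 0.1
speed_in_per_s = 1.0

# Droplets fitting in one cross-sectional slice of the pipe: 40 * 40.
per_slice = round(side_in / droplet_in) ** 2

# A fresh slice enters every droplet_in / speed seconds -> 10 slices/s.
slices_per_second = speed_in_per_s / droplet_in
throughput = per_slice * slices_per_second  # droplets per second

# Latency depends only on the pipe's length, never its diameter.
latency_s = length_in / speed_in_per_s
```

Doubling `side_in` quadruples `throughput` but leaves `latency_s` untouched, which is the whole point of the analogy.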
I would like to supplement the answers already written with another distinction between latency and throughput, relevant to the concept of pipelining. I'll use an example from daily life: getting clothes ready. To prepare them, we have to (i) wash them, (ii) dry them, and (iii) iron them. Each of these tasks takes some amount of time, let's say A, B and C respectively. Every batch of clothes needs a total of A+B+C time until it is ready; this is the latency of the whole process. However, since (i), (ii) and (iii) are separate sub-processes, you may start washing the 3rd batch of clothes while the 2nd one is drying and the 1st batch is being ironed (a pipeline). Then every batch after the 1st is ready after max(A,B,C) time, and throughput, measured in batches of clothes per unit time, equals 1/max(A,B,C).
In short, this answer highlights that knowing only the latency of a system does not necessarily tell us its throughput. These are truly different metrics, not just another way to express the same information.
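The laundry pipeline boils down to two formulas, sketched here with illustrative stage times (the function name and numbers are mine, not from the answer):

```python
def laundry_metrics(wash_s, dry_s, iron_s):
    """Latency and steady-state throughput of a wash -> dry -> iron pipeline."""
    latency = wash_s + dry_s + iron_s        # one batch, start to finish
    bottleneck = max(wash_s, dry_s, iron_s)  # the slowest stage gates the pipeline
    throughput = 1.0 / bottleneck            # batches per second, once the pipe is full
    return latency, throughput
```

For example, with washing at 30 s, drying at 60 s and ironing at 20 s, each batch takes 110 s end to end, yet a finished batch emerges every 60 s: speeding up the washer would improve latency but not throughput, because drying is the bottleneck.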
Latency: the elapsed time of an event.
e.g. Walking from point A to B takes one minute; the latency is one minute.
Throughput: the number of events that can be executed per unit of time.
e.g. Bandwidth is a measure of throughput.
We can increase bandwidth to improve throughput, but it won't improve latency.
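One way to see why more bandwidth doesn't fix latency is the usual first-order model of transfer time: a fixed propagation delay plus a size-dependent serialization term. A rough sketch (a simplification that ignores protocol effects like TCP slow start):

```python
def transfer_time_s(size_mbit, bandwidth_mbps, latency_s):
    """One-way delivery time: fixed propagation delay + serialization time."""
    return latency_s + size_mbit / bandwidth_mbps

# Doubling bandwidth halves the size-dependent term, but the fixed
# latency term is unchanged, so small transfers barely speed up.
slow = transfer_time_s(100, 100, 0.05)  # 100 Mbit over 100 Mbit/s, 50 ms latency
fast = transfer_time_s(100, 200, 0.05)  # same transfer at double the bandwidth
```

For a tiny transfer (say 1 Mbit) the latency term dominates and doubling the bandwidth changes almost nothing, which is why "fatter pipe" upgrades don't make web pages with many small requests feel proportionally faster.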
Take the RPC case: there are two components to the latency of message communication in a distributed system. The first is the hardware overhead and the second is the software overhead.
The hardware overhead depends on how the network is interfaced with the computer; this is managed mostly by the network controller.
I wrote a blog about it :) https://medium.com/@nbosco/latency-vs-throughput-d7a4459b5cdb
Bandwidth is a measure of data per second, which is equal to the temporal speed of the data multiplied by the number of spatial multiplexing channels; so, in the water-pipe analogy, it is essentially flow velocity × cross-sectional area. In digital signal processing, the temporal speed of the data is constrained by the frequency bandwidth of the channel and the SNR.
Latency is the physical length of the channel (in terms of the number of bits it can hold in flight) divided by the bandwidth. Latency increases when transmitter and receiver get further apart spatially, but bandwidth does not change, because layer 1 at the transmitter can still send at the same speed. It also increases when an intermediate node or the receiving node buffers, processes or delays the data while still offering the same bandwidth: it might take a while for the first packets of a download to come in, but when they do, it will hopefully be at full bandwidth. Of course, that assumes the transmitter's protocol stack doesn't need to wait around for control packets from the receiver, like TCP ACKs or layer 2 ACKs.
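The "bits in flight" idea is usually called the bandwidth-delay product, and it is easy to compute; the example numbers below are mine:

```python
def bits_in_flight(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: how many bits the 'pipe' holds at once.

    This is roughly the amount of unacknowledged data a sender must
    keep outstanding (e.g. a TCP window) to keep the link full while
    waiting for ACKs.
    """
    return bandwidth_bps * rtt_s

# A 100 Mbit/s link with a 20 ms round trip holds 2 Mbit in flight.
capacity = bits_in_flight(100_000_000, 0.02)
```

This is why a long-latency, high-bandwidth path (say, a satellite link) needs large send windows: with a small window, the sender idles between ACKs and the measured throughput falls far below the bandwidth.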