David Chapman
Network Throughput vs Bandwidth: The Difference
When working with networks, particularly for capacity planning or troubleshooting, understanding key terms is important. In this post, we will cover network throughput, bandwidth, latency, and testing. These are all important concepts to know, and they can help you maintain an efficient and properly tuned network.
What is Latency?
Latency is probably the most understated factor in network performance. Latency is the amount of time it takes for a packet to get from point A to point B. In the networking world, latency is often measured as Round Trip Time (RTT): the time it takes for data to get from point A to point B and then back to A. This measurement is used because many applications that send data wait for the receiving end to verify and acknowledge the data before sending more.
Latency is typically measured in milliseconds due to how fast information travels over modern connections. For example, it is not uncommon for RTT to be sub 100ms between two endpoints in the same country. Local LAN connections may see sub 1ms while satellite connections may see latency nearing or above 1 second.
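To get a feel for these numbers yourself, RTT can be approximated in code. The sketch below is illustrative only: it times a TCP handshake, which costs roughly one round trip, rather than a true ICMP ping, and the function name and defaults are my own choices, not anything from a standard tool.

```python
import socket
import time

def tcp_rtt_ms(host, port=443, samples=3):
    """Estimate round-trip time by timing TCP handshakes.

    This only approximates a ping: the TCP three-way handshake
    costs roughly one RTT, so connect() time is a fair proxy.
    """
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection() returns once the handshake completes
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    # Take the minimum to filter out scheduling jitter
    return min(times)
```

Against a server in the same country you would typically expect this to report well under 100 ms; against localhost, well under 1 ms.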
A business example of where latency is important is real-time voice and video applications. They tend to be the primary use cases that suffer when latency is not up to par. Have you ever been on a video or VoIP call where you kept talking over the other person or talking at the same time? Very likely latency was to blame.
Latency is not just a factor in real-time applications but also in something as simple as downloading files. Remember, much of the internet sends data in queues or windows and then waits for an acknowledgement before sending more. For this reason, prompt responsiveness is key. Very poor latency can severely degrade an otherwise adequate amount of throughput for a transfer.
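This window-and-acknowledge pattern can be quantified: a sender that transmits one window of data and then waits for an acknowledgement can never exceed the window size divided by the RTT, no matter how big the pipe is. A quick sketch, where the 64 KiB window and 80 ms RTT are illustrative numbers rather than properties of any particular link:

```python
def window_limited_throughput_mbps(window_bytes, rtt_ms):
    """Max throughput when a sender must wait one RTT per window of data."""
    bits_per_window = window_bytes * 8
    rtt_seconds = rtt_ms / 1000
    return bits_per_window / rtt_seconds / 1_000_000

# A classic 64 KiB window over an 80 ms path:
# 65536 bytes * 8 bits / 0.08 s ≈ 6.55 Mbps,
# far below gigabit no matter how much bandwidth is provisioned.
```

This is why the same gigabit circuit can feel fast to a nearby server and slow to one across an ocean.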
What is Network Throughput?
Throughput is the actual speed of data transfer that is perceived by the end user or between two points. When you go to download a file, the rate at which it is downloading is the throughput you are seeing. It takes into account things like any packet errors, retries, line problems, and bottlenecks. This throughput can and will be different between different endpoints.
For example, you may get different download speeds from Google Drive versus Microsoft OneDrive because those systems are geographically in different locations and your path to one may be more efficient than another. It can also depend on which upstream links are more congested or have available bandwidth.
What is Bandwidth?
Bandwidth is more of a theoretical limit than an actual one. There are two main drivers of bandwidth. The primary driver is cost. An internet service provider (ISP) or carrier will charge more for faster connections than for slower ones. The limit is typically enforced either by a traffic policy, by forcing the circuit to negotiate at a set speed, or by a combination of the two.
Price is not the only factor, though. In some cases, particularly with subpar cabling or dated equipment, a line may not be able to negotiate the full contracted bandwidth. For a digital subscriber line (DSL), this is fairly common if you are particularly distant from the central office (CO) or have line degradation; the line will negotiate a slower speed so that it can reliably send and receive data.
The typical analogy is a water pipe. Only so much water can be pushed through the pipe per second. We can speed up the rate of flow on the water but ultimately the pipe has a max rating. If that pipe has some buildup inside it though, it may not be able to reach the full engineered speed of the pipe.
How is Network Throughput Measured?
Throughput can be measured in either bits per second or bytes per second. Often, since throughput is what the end user actually gets, bytes are easier to understand because files are measured in bytes. It is not uncommon to see speeds of many megabytes per second (MB/s).
On the other hand, if you're doing throughput testing, it is more common to see bits per second, as that more closely resembles bandwidth and makes the numbers easier to compare. The idea is to tune the network so that a throughput test achieves the full bandwidth, or gets as close as possible.
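The bits-versus-bytes conversion trips a lot of people up, so here is the arithmetic spelled out as a trivial helper, just to make the factor of eight explicit:

```python
def mbps_to_mbytes_per_s(mbps):
    """Convert megabits per second (link speed) to megabytes per second
    (the rate a download dialog typically shows)."""
    return mbps / 8

# A 100 Mbps connection tops out around 12.5 MB/s in a download window,
# before protocol overhead shaves a few percent more off.
```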
Why Network Throughput Testing is Important
If you take provisioned bandwidth at face value, you may think you have a gigabit connection but only see 200 Mbps to a remote endpoint. Throughput testing can help detect that early on. In the enterprise world, you may have remote/online backups or a VPN tunnel to a remote location. It is easy to assume you have a dedicated gigabit internet connection, only to find out that a fraction of that speed is actually usable between those endpoints.
With VPN tunnels you may be able to achieve the full line speed but only by using multiple streams. You may find during your throughput testing that each stream or download has a max that is a subset of your max throughput. In many cases this can help you tune how you do those transfers to maximize throughput.
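The effect of a per-stream cap can be modeled with a toy calculation. This is purely illustrative, with made-up numbers; in practice the per-stream limit is something you discover through throughput testing, not something you compute.

```python
def aggregate_throughput_mbps(streams, per_stream_cap_mbps, line_rate_mbps):
    """Model aggregate throughput when each stream is individually capped.

    The aggregate grows with the stream count until the line rate
    itself becomes the bottleneck.
    """
    return min(streams * per_stream_cap_mbps, line_rate_mbps)

# One 200 Mbps-capped stream on a gigabit line leaves 80% of it idle;
# five such streams in parallel can saturate the link.
```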
What Affects Network Throughput?
Nearly every network anomaly can affect network throughput. At a high level, latency has a huge impact, particularly on bidirectional or stateful traffic. This is because a queue of packets is typically sent to the other end; once the receiver gets the queue, it replies that it has received the packets. This is by design, so that the sender knows whether it needs to resend data. For real-time applications that use UDP, which is connectionless and does not acknowledge delivery, this is less of an issue, as the data is simply streamed in real time.
Network congestion can also play a huge part. Many ISPs oversubscribe their internet circuits. That means, for example, that if they have 10 subscribers with 100 Mbps connections, they may size the upstream circuit at 500 Mbps. This works because rarely do all subscribers use 100% of their bandwidth at once. When the upstream circuit starts to get saturated, packet errors and retries occur, and often the aggregate of those 10 subscribers will see much less than 500 Mbps.
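The oversubscription arithmetic above can be expressed directly. The numbers mirror the example in the text; real ISP ratios vary widely.

```python
def oversubscription_ratio(subscribers, per_sub_mbps, upstream_mbps):
    """Ratio of bandwidth sold to subscribers vs. actual upstream capacity."""
    return (subscribers * per_sub_mbps) / upstream_mbps

# 10 subscribers at 100 Mbps on a 500 Mbps uplink:
# 1000 / 500 = 2.0, i.e. a 2:1 oversubscription ratio.
```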
How is Bandwidth Measured?
Bandwidth is typically measured in bits per second, but with line speeds so much faster today, we attach prefixes like kilo-, mega-, giga-, and even tera- to bits. For example, many internet connections today are measured in megabits per second (Mbps) and some in gigabits per second (Gbps).
Throughput vs Bandwidth: What's the difference?
The easiest way to think of the difference is that bandwidth is the theoretical maximum speed of a connection. It is what the connection has been provisioned for and what the equipment negotiated to. Under ideal circumstances, without any line errors or bottlenecks to the destination, this is the maximum achievable speed.
Throughput, on the other hand, is where, as they say, "the rubber meets the road". It is the actual speed under real-world conditions between two endpoints.
How to Optimize Network Throughput
Throughput optimization builds upon bandwidth optimization. Starting with the lowest level of optimization, if you use a wired connection, ensure you have a good Ethernet cable. If it is more than a few years old, consider replacing it. Over years of bending, the wires and connectors can wear, and the contact they make becomes less effective.
On the other hand, if you have a wireless connection, ensure proper wireless access point (WAP) placement to minimize interference and provide adequate coverage. There are many free smartphone apps that can help you perform a site survey to determine the best placement for the WAP.
Updating firmware and drivers for network interface cards (NICs) can go a long way. Often there are bug fixes that can also improve performance. Along the same lines, router and switch firmware updates can help resolve performance issues as well, if you have any control over updating those devices.
As mentioned above, you may find that single streams have limited throughput but that multiple streams can surpass that limit. Sometimes uplinks are multiplexed; depending on the technology, this may be called EtherChannel, LACP, LAG, or a mux, to name a few. Not all of these protocols are great at splitting up traffic, so you may need multiple streams for them to balance the load appropriately.