Ethernet/IP/TCP bitrate vs. packet size vs. segment size vs. efficiency vs. speed

June 26th, 2015 by bostjan

The shiny label says the network interface supports 100 Mbit/s. OK, so I should get 100/8 = 12.5 MB/s or 11.9 MiB/s out of it, right? Right? Well, not exactly. As it turns out, running stacked protocols has its penalty. Let us explore what lies beneath.

Let’s start with the basics. Searching the interwebs turns up surprisingly few images that incorporate details from the whole relevant (bottom) part of the OSI stack (L1 through L4). I will try to describe the whole stack, and instead of an image I will use a really bad ASCII art diagram.

The full signaling sequence on the wire consists of:

  1. interpacket gap,
  2. Ethernet packet, which contains Ethernet frame, which contains
  3. IP packet, which in turn contains
  4. TCP segment.

Here is a really bad diagram that contains everything (except VLAN 802.1Q tags, which are omitted for simplicity; also, only minimum header lengths are considered):
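
  interpacket gap                              12 bytes
  preamble                                      7 bytes
  start frame delimiter (SFD)                   1 byte
  Ethernet frame (max 1518 bytes):
      destination MAC                           6 bytes
      source MAC                                6 bytes
      EtherType                                 2 bytes
      IP packet = Ethernet payload (max 1500 bytes):
          IP header (min)                      20 bytes
          TCP segment:
              TCP header (min)                 20 bytes
              TCP payload (max)              1460 bytes
      frame check sequence (FCS)                4 bytes
  ----------------------------------------------------
  total on the wire per full-size packet     1538 bytes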

So, if the network is 100 Mbit/s, to which part does this figure apply? Or, a better question: what is the maximum usable TCP bandwidth available to me? Give it to me straight!


100 Mbit/s applies to Layer 1 (signaling)!

Therefore, if we do the calculation with minimal headers and maximum TCP payload, we get 1538 bytes transferred over the wire for every 1460 bytes of actual TCP data. That gives 1460/1538 = 94.93% efficiency, or 94.93 Mbit/s of usable TCP bandwidth, or 11.87 MB/s, or 11.32 MiB/s, which is the maximum you can get out of an SCP transfer, for example.
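
That arithmetic as a quick Python sketch (the constants are the minimum-header byte counts from the diagram above):

# Per-packet overhead on the wire, in bytes (minimum header sizes)
IPG      = 12        # interpacket gap
PREAMBLE = 8         # preamble (7) + start frame delimiter (1)
ETH_HDR  = 14        # dst MAC (6) + src MAC (6) + EtherType (2)
FCS      = 4         # frame check sequence
IP_HDR   = 20        # minimal IPv4 header
TCP_HDR  = 20        # minimal TCP header

MTU     = 1500                                   # maximum Ethernet payload
payload = MTU - IP_HDR - TCP_HDR                 # 1460 bytes of TCP data
on_wire = IPG + PREAMBLE + ETH_HDR + MTU + FCS   # 1538 bytes on the wire

eff = payload / on_wire
print(f"{payload}/{on_wire} = {eff:.2%}")        # 1460/1538 = 94.93%
print(f"{100 * eff:.2f} Mbit/s = {100 * eff / 8:.2f} MB/s "
      f"= {100e6 * eff / 8 / 2**20:.2f} MiB/s")  # 94.93 / 11.87 / 11.32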

Additional note for telecommunications enthusiasts:
100 Mbit/s / 1 Gbit/s does not directly translate into 100 MHz / 1 GHz on the wire. Fast Ethernet (100BASE-TX) uses 4B5B line encoding, and Gigabit Ethernet over fiber (1000BASE-X) uses 8b/10b (1000BASE-T over copper uses a different scheme, PAM-5 on four pairs); these encodings exist mainly for clock recovery and L1 transmission error detection. The symbol rate on the wire is therefore 25% higher: 125 MBd / 1.25 GBd.
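
In other words (a trivial sketch of the encoding overhead):

print(f"100BASE-TX: {100 * 5 / 4:.0f} MBd")    # 4B5B: 5 symbol bits per 4 data bits
print(f"1000BASE-X: {1000 * 10 / 8:.0f} MBd")  # 8b/10b: 10 symbol bits per 8 data bits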



Iperf uses a 32-byte TCP header instead of the minimal 20-byte one (12 extra bytes of TCP options, typically timestamps), and is thus using the corresponding maximum payload size of 1448 bytes. This results in a maximum usable efficiency of 1448/1538 = 94.148%, which is almost exactly what iperf shows on the server side: 94.145 Mbit/s.
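
The same calculation with iperf's 32-byte TCP header (a one-off sketch, reusing the numbers from above):

TCP_HDR_IPERF = 32                                # 20-byte header + 12 bytes of options
payload = 1500 - 20 - TCP_HDR_IPERF               # 1448 bytes
print(f"{payload}/1538 = {payload / 1538:.3%}")   # 1448/1538 = 94.148%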

However, on the client side iperf displays a slightly higher result: 94.422 Mbit/s (a 0.3% difference). Currently I cannot explain this difference.

I’ve repeated the test with a 1 Gbit/s network connection, and the efficiency results match to within 0.05% (the network was not totally quiet; some VRRP and DNS traffic was interfering).

Conclusion: iperf measures actual, usable bandwidth in bits/s.


Iptraf displays the current situation on a particular interface (or on all of them). While iperf was running, these were the observed values:
– Client side   : 97.53 Mbps (flow) / 98.38 Mbps (interface)
– Server side 1: 95.84 Mbps (flow) / 96.27 Mbps (interface) – default, GRO enabled
– Server side 2: 97.52 Mbps (flow) / 98.37 Mbps (interface) – GRO disabled

The 97.53 Mbps displayed by iptraf directly correlates with the Ethernet MTU: 1500/1538 is exactly 97.53%.

The 98.38 Mbps almost exactly corresponds to the ratio 1500/1526 = 98.30% (omitting the 12-byte interpacket gap: 1538 − 12 = 1526). This is speculation, though.
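
Both ratios, checked (same constants as above):

print(f"1500/1538 = {1500 / 1538:.2%}")   # 97.53% -- iptraf's flow rate
print(f"1500/1526 = {1500 / 1526:.2%}")   # 98.30% -- close to iptraf's interface rate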

Generic receive offload

On the server side the report was initially skewed because certain packets are assembled together. I did a lot of research into this, only to find out that it is a kernel feature and that there is a simple command to disable it. Once disabled, client and server show identical results (matching within 0.1%). The feature is called “generic receive offload” (GRO), and the command that disables it is:

ethtool -K ethX gro off

Just for quick reference: on my system with GRO enabled (the default, kernel 3.19.2), around 5% of packets pass through to userspace unmodified, 83% are combined two by two, and the rest (12%) are combined three by three. So we have accounted for the difference between the results shown on the client and on the server when GRO is enabled.
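
Here is a rough sketch of that accounting. It assumes the 5/83/12 percentages describe the original on-wire packets, and that each merged segment saves one set of IP + TCP headers (20 + 32 = 52 bytes with iperf's options):

# Assumed: fractions of original on-wire packets coalesced in groups of N
groups = {1: 0.05, 2: 0.83, 3: 0.12}

HDRS = 20 + 32     # IPv4 header + iperf's 32-byte TCP header, saved per merged segment
MTU  = 1500

# A group of N segments is delivered as one packet of N*1448 + 52 bytes,
# so iptraf counts 1448 + 52/N bytes per original 1500-byte packet.
counted = sum(frac * (MTU - HDRS + HDRS / n) for n, frac in groups.items())
print(f"{counted / MTU:.4f}")                  # ~0.9828
print(f"{97.53 * counted / MTU:.2f} Mbit/s")   # ~95.86, close to the observed 95.84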


As correctly figured out by a fellow blogger :), the iftop authors decided to use binary prefixes instead of SI prefixes when showing current interface traffic. Here are the results of iftop while iperf was running for an extended period of time:
– Client side   : 93.0 Mb
– Server side 1: 91.5 Mb (GRO enabled, default)
– Server side 2: 93.0 Mb (GRO disabled)

If you take the number 93, multiply it by 1024^2, and divide by 10^6, you get 97.52, which almost exactly corresponds with what iptraf is showing and thus with MTU/1538.
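
Or, as a one-liner:

print(f"{93.0 * 2**20 / 1e6:.2f} Mbit/s")   # iftop's 93.0 Mibit/s = 97.52 Mbit/s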


Additional notes

Does iperf display megabits/s or mebibits/s?
If you switch iperf to display plain bits per second, you can see that it is displaying Mbit/s, not Mibit/s. Mibit/s (1024^2 bits/s) would be nonstandard usage, as binary prefixes are normally not used in telecommunications.

Jumbo frames…
Yeah, I know they exist, but they are not the focus of this article.

