Why are microseconds not enough?

Since the first release of packETH in 2003, the time resolution inside the sending thread (and everything else concerning timing inside packETH) has always been in microseconds. At that time nanoseconds were (probably) not an option and, given how packETH was designed, microseconds were more than enough. Inside the Gen-b window, where the user specifies the packet rate, the only option was to enter the desired gap between packets in microseconds. Most hardware could handle this resolution and the actual output was quite close to the desired one.

Later on people asked me whether a more user-friendly option could be added: entering the desired traffic in kbit/s or Mbit/s. Since Gen-b sends same-size packets at a constant rate, it is trivial to convert Mbit/s into the required gap between packets. However, one problem arises with this option, and some people pointed it out soon after the first implementation. The “problem” was that they could not achieve the exact bandwidth they entered, even at pretty low speeds. Sometimes the actual output rate was lower and sometimes higher than the chosen one. At first glance the user might think there is a timing issue inside packETH, but it turns out that our expectations are simply too high, because details are lost in the conversion (much like in many conversations 🙂 ). Put simply, microsecond granularity is not enough if we want the desired bandwidth to always be close to the actual one. And here I’m not talking about maximum speeds at the CPU limit, but about being as close as possible at bandwidths the hardware can easily handle. Why is this so?

The equations are simple:

Bandwidth = Packet length * Packets per second

Gap between packets = 1 / Packets per second

Gap between packets = Packet length / Bandwidth
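
To make the conversion concrete, here is a minimal C sketch of it. This is only an illustration, not packETH’s code; the helper name gap_seconds is made up for this example.

```c
#include <stdio.h>

/* Gap between packets (in seconds) needed to reach a desired bandwidth
 * for a fixed packet length in bytes (including L2 headers).
 * gap = packet length / bandwidth, with the length converted to bits. */
static double gap_seconds(int packet_len_bytes, double bandwidth_mbit_s)
{
    double bits_per_packet = packet_len_bytes * 8.0;
    double bits_per_second = bandwidth_mbit_s * 1000000.0;
    return bits_per_packet / bits_per_second;
}

int main(void)
{
    printf("100 bytes @ 100 Mbit/s -> %.3f us\n", gap_seconds(100, 100.0) * 1e6);
    printf("100 bytes @  95 Mbit/s -> %.3f us\n", gap_seconds(100, 95.0) * 1e6);
    return 0;
}
```

These are exactly the two cases worked through below.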

Let’s say the user has built a packet with a size of 100 bytes (including L2 headers) and has set the desired bandwidth to 100 Mbit/s. To get this output rate, the gap between two packets should be:

Gap between packets = 100 bytes / 100 Mbit/s = 800 bits / 100 Mbit/s = 0.000008 s = 0.008 ms = 8 us

So if we send one packet with a size of 100 bytes every 8 us, we send 125000 packets every second, which gives a total output of 100 Mbit/s of traffic. Perfect. So the sending routine sends a packet on the wire, constantly checks in a loop whether 8 microseconds have passed, and once they have, it sends another packet.
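
To illustrate the idea, below is a minimal, self-contained sketch of such a busy-wait loop with microsecond resolution. It is not packETH’s actual sending routine; send_packet() is just a dummy stand-in for the real send call (e.g. sendto() on a raw socket).

```c
#include <stdio.h>
#include <sys/time.h>

/* Dummy stand-in for the real send call. */
static long packets_sent = 0;
static void send_packet(void) { packets_sent++; }

/* Microseconds elapsed between two gettimeofday() samples. */
static long elapsed_us(const struct timeval *start, const struct timeval *now)
{
    return (now->tv_sec - start->tv_sec) * 1000000L +
           (now->tv_usec - start->tv_usec);
}

/* Send a packet, spin until gap_us microseconds have passed, repeat. */
static void send_loop(long gap_us, long count)
{
    struct timeval last, now;

    for (long i = 0; i < count; i++) {
        send_packet();
        gettimeofday(&last, NULL);
        do {
            gettimeofday(&now, NULL);
        } while (elapsed_us(&last, &now) < gap_us);
    }
}

int main(void)
{
    send_loop(8, 125000);   /* 8 us gap -> roughly 125000 packets per second */
    printf("sent %ld packets\n", packets_sent);
    return 0;
}
```

Note that the spinning itself is not the problem here; the limitation discussed below comes from the fact that gap_us can only be a whole number of microseconds.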

But what if the user sets the bandwidth to 95 Mbit/s?

Gap between packets = 100 bytes / 95 Mbit/s = 800 bits / 95 Mbit/s = 0.000008421 s = 0.008421 ms = 8.421 us

In this case we need to send a packet every 8.421 microseconds. But since we only have microsecond resolution, we can only send a packet either every 8 us or every 9 us. In the first case we get 100 Mbit/s of throughput and in the second one we get 88.9 Mbit/s. It’s clear that we need finer than microsecond resolution if we want to get close to 95 Mbit/s.

Below is a graph that shows what bandwidth we get for three different packet sizes (64, 500 and 1500 bytes) if we send these packets at a constant rate, with the gap between packets varying from 2 to 10 microseconds.

[Graph: achievable bandwidth for 64, 500 and 1500 byte packets at inter-packet gaps of 2 to 10 microseconds]

The plotted values are the only possible output rates we can get with microsecond resolution. If we want to get any value in between, we have to be more accurate, and that is why nanoseconds need to be used.
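
The possible rates in the graph follow directly from the arithmetic: the packet length in bits divided by a whole number of microseconds gives the rate in Mbit/s. A small C snippet (again just an illustration, not packETH code) prints all of them:

```c
#include <stdio.h>

int main(void)
{
    const int sizes[] = { 64, 500, 1500 };   /* packet sizes from the graph, in bytes */

    for (int s = 0; s < 3; s++) {
        for (int gap_us = 2; gap_us <= 10; gap_us++) {
            /* bits per microsecond is the same as Mbit/s */
            double mbit_s = (sizes[s] * 8.0) / gap_us;
            printf("%4d bytes, gap %2d us -> %7.1f Mbit/s\n",
                   sizes[s], gap_us, mbit_s);
        }
    }
    return 0;
}
```

Anything between two neighbouring values, such as the 95 Mbit/s from the example above, simply cannot be produced with whole-microsecond gaps.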

Nanosecond resolution was enabled in packETH 1.8.1. You must use a kernel version that supports it, which shouldn’t be a problem with recent distributions.
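
For completeness, here is a minimal sketch of nanosecond-resolution busy-waiting with clock_gettime() and CLOCK_MONOTONIC. It only illustrates the mechanism that nanosecond timing relies on; it is not packETH’s exact implementation.

```c
#include <stdio.h>
#include <time.h>

/* Nanoseconds elapsed between two timespec samples. */
static long long elapsed_ns(const struct timespec *start, const struct timespec *now)
{
    return (now->tv_sec - start->tv_sec) * 1000000000LL +
           (now->tv_nsec - start->tv_nsec);
}

/* Busy-wait for gap_ns nanoseconds using the monotonic clock. */
static void wait_gap_ns(long long gap_ns)
{
    struct timespec start, now;

    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while (elapsed_ns(&start, &now) < gap_ns);
}

int main(void)
{
    /* 100-byte packets at 95 Mbit/s -> a gap of 8421 ns between packets */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 1000; i++)
        wait_gap_ns(8421);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("1000 gaps of 8421 ns took %lld ns\n", elapsed_ns(&t0, &t1));
    return 0;
}
```

With this, the 8.421 us gap from the 95 Mbit/s example can be expressed exactly as 8421 ns, so the achievable rate lands much closer to the target.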
