Analyzing Link Performance with Iperf & Jperf

Iperf is a tool to measure the bandwidth and the quality of a network link.
Jperf is a graphical (Java) frontend that can be used with Iperf.

The quality of a link can be tested by analyzing the following aspects:
– Latency (Response Time or RTT): ICMP Tests (a simple ping, see the example after this list).
– Jitter (Latency Variation): Iperf UDP Tests.
– Datagram Loss: Iperf UDP Tests.
– Bandwidth: Iperf TCP & UDP Tests.
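
A latency check does not require Iperf at all; a basic ICMP test can be run with ping. The example below assumes the Linux/macOS syntax, where -c limits the number of echo requests:

ping -c 10 <Server_IP>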

The difference between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) is that TCP “checks” that packets are correctly delivered to the receiver.
With UDP, on the other hand, packets are sent without any such checks, with the advantage of being “quicker”.

Iperf takes advantage of these different characteristics of TCP and UDP to provide statistics about network links.

Iperf can be installed very easily on the most common OS platforms. In the basic configuration (two hosts at each end of the link to test), one host must be set as the client and the other one as the server.

SYNOPSIS:
iperf -s [ options ]
iperf -c server [ options ]
iperf -u -s [ options ]
iperf -u -c server [ options ]
 
GENERAL OPTIONS:
-f [kmKM] format to report: Kbits, Mbits, KBytes, MBytes
-h print a help synopsis
-i n pause n seconds between periodic bandwidth reports
-l n[KM] set the length of the read/write buffer to n (default 8 KB)
-m print TCP maximum segment size (MTU - TCP/IP header)
-o <filename> output the report or error message to this specified file
-p n set the server port to listen on/connect to (default 5001)
-u use UDP rather than TCP
-w n[KM] TCP window size (socket buffer size)
-B <host> bind to <host>, an interface or multicast address
-C for compatibility with older versions; does not send extra messages
-M n set TCP maximum segment size (MTU - 40 bytes)
-N set TCP no delay, disabling Nagle's Algorithm
-v print version information and quit
-V set the domain to IPv6
-x [CDMSV] exclude C(connection) D(data) M(multicast) S(settings) V(server) reports
-y C|c if set to C or c report results as CSV (comma separated values)
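
As an illustration of how these general options combine on the client side (a sketch only, assuming a server is already running at <Server_IP>; results.csv is just an example filename):

iperf -c <Server_IP> -f m -i 1
iperf -c <Server_IP> -y C -o results.csv

The first command reports the bandwidth in Mbits/sec with an intermediate report every second; the second writes the report as CSV to a file.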
 
SERVER SPECIFIC OPTIONS:
-s run in server mode
-U run in single threaded UDP mode
-D run the server as a daemon
 
CLIENT SPECIFIC OPTIONS:
-b n[KM] set target bandwidth to n bits/sec (default 1 Mbit/sec).  This setting requires UDP (-u).
-c <host> run in client mode, connecting to <host>
-d do a bidirectional test simultaneously
-n n[KM] number of bytes to transmit (instead of -t)
-r do a bidirectional test individually
-t n time in seconds to transmit for (default 10 secs)
-F <name> input the data to be transmitted from a file
-I input the data to be transmitted from stdin
-L n port to receive bidirectional tests back on
-P n number of parallel client threads to run
-T n time-to-live, for multicast (default 1)
-Z <algo> set TCP congestion control algorithm (Linux only)

To get the list of congestion control algorithms supported by the Linux kernel, it is possible to use:

sysctl net.ipv4.tcp_available_congestion_control
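
For example, assuming cubic appears in the list returned by this command, a client test can pin the congestion control algorithm like this:

iperf -c <Server_IP> -Z cubic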

-----

Default Operations:
By default, Iperf “client” connects to Iperf “server” on TCP port 5001 and the bandwidth displayed by Iperf is the bandwidth from the client to the server.

Client side:

iperf -c <Server_IP>

Server side:

iperf -s
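
The default port can be changed with the -p argument, which must then match on both sides (5002 below is just an example value):

iperf -s -p 5002
iperf -c <Server_IP> -p 5002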

-----

Bi-directional bandwidth measurement:
The Iperf server connects back to the client, allowing bi-directional bandwidth measurement.
By default, only the bandwidth from the client to the server is measured. The -r argument used below runs the two directions one after the other; to measure both directions simultaneously, use the -d argument instead (see the note after the example).

Client side:

iperf -c <Server_IP> -r

Server side:

iperf -s
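
To run both transfers at the same time rather than one after the other, replace -r with -d on the client (the server command stays the same):

iperf -c <Server_IP> -d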

-----

TCP Window size:
The TCP window size is the amount of data that can be buffered during a connection without an acknowledgment from the receiver.

Client side:

iperf -c <Server_IP> -w 3000

Server side:

iperf -s -w 5000
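
The value is interpreted in bytes by default; the K and M suffixes listed in the options above can also be used, for example to request a 256 KByte window on the client:

iperf -c <Server_IP> -w 256K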

-----

UDP tests:
The UDP tests (run with the -u argument) will give invaluable information about the jitter and the packet loss.
Iperf uses TCP by default. To keep a good link quality, the packet loss should not exceed 1 %.
A high packet loss rate will generate a lot of TCP segment retransmissions which will affect the bandwidth.
The jitter is basically the latency variation and does not depend on the latency.
The jitter value is particularly important on network links supporting media streams.
The -b argument allows the desired bandwidth to be set (without the -b argument, the default value for a UDP test is 1 Mbit/sec).

Client side:

iperf -c <Server_IP> -u -b 20m

Server side:

iperf -s -u
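
For a longer measurement with periodic reports, the -t and -i arguments described above can be added on the client, for example a 30-second test reporting every second:

iperf -c <Server_IP> -u -b 20m -t 30 -i 1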

-----

Maximum Segment Size display:
The Maximum Segment Size (MSS) is the largest amount of data, in bytes, that a computer can support in a single, unfragmented TCP segment.
It can be calculated as follows:
MSS = MTU – TCP & IP headers
The TCP & IP headers are equal to 40 bytes.
The MTU or Maximum Transmission Unit is the greatest amount of data that can be transferred in a frame.
Generally, a higher MTU (and MSS) brings higher bandwidth efficiency.
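For example, on a standard Ethernet link with an MTU of 1500 bytes, the MSS is 1500 - 40 = 1460 bytes.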

Client side:

iperf -c <Server_IP> -m

Server side:

iperf -s

Note: By adding the -M argument on the client side, it is possible to change the MSS.

iperf -c <Server_IP> -M 1300 -m

-----

Parallel tests:
The -P argument allows running several client tests in parallel.

Client side:

iperf -c <Server_IP> -P 2

Server side:

iperf -s
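
Parallel streams can be combined with the other client options, for example a 30-second test with two parallel streams and an intermediate report every second (values are only examples):

iperf -c <Server_IP> -P 2 -t 30 -i 1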
