Tuning Network Performance

Running Benchmarks

Best practice

Change one setting at a time, and run benchmarking tools (such as iperf or netperf) after each change to measure its impact.
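For example, a simple client/server throughput test with iperf might look like the following; the host name and options are illustrative:

Example iperf run
# On the machine under test, start iperf in server mode:
iperf -s

# On the test client, run a 30-second TCP test with 4 parallel streams
# (swarm-gw.example.com is a placeholder for the server under test):
iperf -c swarm-gw.example.com -t 30 -P 4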

Before starting any benchmarks, temporarily disable the irqbalance and cpuspeed services on the Linux-based test client(s) to maximize network throughput and produce the best results:

Benchmark prep
service irqbalance stop
service cpuspeed stop
chkconfig irqbalance off
chkconfig cpuspeed off
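On systemd-based distributions such as RHEL 7, the service and chkconfig commands are replaced by systemctl. A minimal sketch for irqbalance follows; note that cpuspeed does not exist on RHEL 7, where CPU frequency scaling is handled by other services (such as cpupower or tuned), so the corresponding unit name may differ:

Benchmark prep (systemd)
systemctl stop irqbalance
systemctl disable irqbalance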

Change sysctl Settings

Run a benchmark before and after each change so the performance impact can be confirmed before proceeding.

Next, optimize the core memory settings in the Linux kernel.

RHEL 7

In RHEL 7, system tunables are set in the /etc/sysctl.d/ directory, and the same tunable may be specified in more than one configuration file in this directory. The files are applied in order, so verify the intended value is not being overridden by a later file. See https://access.redhat.com/solutions/800023.
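To confirm which value is actually in effect after all configuration files are processed, query the running kernel and search the configuration locations directly; for example:

Verify effective values
# Show the value currently in effect
sysctl net.core.rmem_default

# Find every file that sets it
grep -r rmem_default /etc/sysctl.conf /etc/sysctl.d/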

Gateway components and clients

Below is a modified /etc/sysctl.conf that can be applied as sysctl changes; Swarm-specific recommendations are grouped at the end. Follow the operating system's recommendations and instructions for modifying sysctl settings.

Modified /etc/sysctl.conf
# -- tuning --
# Increase system file descriptor limit
fs.file-max = 65535

# Increase system IP port range to allow more concurrent connections
net.ipv4.ip_local_port_range = 1024 65000

# -- 10GbE tuning from the Intel ixgb driver README --
# Turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

# Memory allocation min/pressure/max:
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.optmem_max = 524287

# Caringo-specific recommended values:
net.ipv4.tcp_mem = 134217728 134217728 134217728
net.core.rmem_default = 134217728
net.core.wmem_default = 134217728
net.core.netdev_max_backlog = 300000
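To apply the changes without a reboot, reload the settings with sysctl:

Apply sysctl changes
# Load settings from /etc/sysctl.conf
sysctl -p

# On RHEL 7, load all files from the standard sysctl locations instead
sysctl --system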

These kernel updates are hardware/machine specific, so they are not saved to the cluster persistent settings.

Important

Swarm fails to start if an I/O error occurs while reading a sysctl setting.

Swarm storage nodes

Per-chassis network tuning settings can be set in the node.cfg file and applied dynamically using SNMP (see the example after the table below). The sysctl-like and other buffer settings listed here can be changed over SNMP, but their values are not stored in the persisted Settings object because they are specific to a given chassis.

Storage Setting          | SNMP Name        | Recommended                    | Type | Description
-------------------------|------------------|--------------------------------|------|--------------------------------------------------------------------
sysctl.deviceWeight      | deviceWeight     | 256                            | int  | Value of /proc/sys/net/core/dev_weight.
sysctl.tcpMem            | tcpMem           | 134217728 134217728 134217728  | str  | Value of /proc/sys/net/ipv4/tcp_mem, in the form 'min default max'.
sysctl.coreRMemDefault   | rMemDefault      | 134217728                      | int  | Value of /proc/sys/net/core/rmem_default.
sysctl.coreWMemDefault   | wMemDefault      | 134217728                      | int  | Value of /proc/sys/net/core/wmem_default.
sysctl.netdevMaxBacklog  | netdevMaxBacklog | 300000                         | int  | Value of /proc/sys/net/core/netdev_max_backlog.
cip.readBufferSize       | (none)           | 33554432                       | int  | (Node-specific) The size of the multicast UDP socket read buffer, in bytes.
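As a sketch of a dynamic change, a per-chassis value can be updated over SNMP with snmpset. The community string, node address, and MIB module name below are placeholders; take the actual object names from the Swarm MIB (they correspond to the SNMP Name column above):

Example SNMP update (placeholders)
# Set netdevMaxBacklog (type int) on one node
snmpset -v2c -c PLACEHOLDER-COMMUNITY 192.0.2.10 PLACEHOLDER-MIB::netdevMaxBacklog.0 i 300000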

Set Buffer Size

Run a benchmark before and after each change so the performance impact can be confirmed before proceeding.

Swarm supports a limited number of tunable network settings, such as buffer sizes, listed below; a sample node.cfg fragment follows the table.

Setting               | Value  | Type | Description
----------------------|--------|------|-------------------------------------------------------------------
network.wmemMax       | 262144 | int  | Maximum value of wmem; must be at least 16384.
network.rmemMax       | 262144 | int  | Maximum value of rmem; must be at least 87380.
network.rxQueueLength | 0      | int  | Value of ethtool -G ethX rx; 0 means unset, leaving the kernel default.
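For example, a node.cfg fragment that sets these values explicitly might look like the following; whether dotted names or INI-style sections are used depends on the Swarm version, so check the node.cfg reference for the release in use:

Example node.cfg fragment
network.wmemMax = 262144
network.rmemMax = 262144
network.rxQueueLength = 0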

Technical Notes for Swarm Tuning

By default, Linux networking is configured to optimize reliability, not performance, so it may require adjustment for high-speed networking (beyond 1 Gb Ethernet): the kernel's send/receive buffers, TCP memory allocations, and packet backlog are generally too small. Tuning can significantly improve performance even on gigabit Ethernet.

Intel includes a Linux README outlining tuning recommendations for each of its GbE controllers; refer to the documentation supplied by the controller's manufacturer.

See Performance tuning: Intel 10-gigabit NIC, which incorporates the tuning recommendations cited in the Intel documentation.
