Document Identifier: TechNote 2015003
Document Date: July 24, 2015
Software Package: SWARM, CSN
Version: Swarm 7 or later; CSN 3.0 and later
Abstract
This technical note provides details for performing advanced Swarm configuration on a CSN (Cluster Services Node). These advanced configuration techniques involve changing the underlying files that the CSN software uses when managing the Swarm storage cluster.
The target audience for this paper is system administrators who are comfortable managing RHEL®/CentOS® systems and who are familiar with the Swarm configuration settings described in the Swarm Guide.
Storage Node Identification and Configuration
Finding the MAC Address
The CSN tracks storage nodes using the MAC address for the primary network interface. The primary interface is the one that was used for PXE booting the node. For the purposes of this discussion, chassis and node are used synonymously except where noted. Keep in mind that one physical server chassis can be assigned multiple IP addresses when Swarm multi-server mode is in use.
The ip-assignments utility provided in the CSN software reports the mapping between the hexadecimal MAC address and all of the IP addresses allocated to a chassis. The following example shows the MAC address in the first column and the IP address in the second column.
# /opt/caringo/csn/bin/ip-assignments
5254003bf075 10.100.201.49
525400545168 10.100.201.50
525400130fca 10.100.201.51
To create a unique list of all of the storage chassis, you can run the following command:
# /opt/caringo/csn/bin/ip-assignments --macs | sort | uniq
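To see why the sort and uniq steps matter, here is a small illustrative stand-in for the command above, using the MAC addresses from this note (in multi-server mode a chassis MAC may appear once per assigned IP, so duplicates must be collapsed):

```shell
# Sample MAC list with a duplicate, standing in for ip-assignments --macs output.
# sort groups identical MACs together; uniq then drops the repeats.
printf '%s\n' 5254003bf075 525400545168 5254003bf075 525400130fca \
    | sort | uniq
```

The result is one line per physical chassis rather than one line per IP address.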
Global Configuration File
The CSN uses a global configuration file to store configuration values that are applied to all nodes in the storage cluster. This file uses the configuration format documented in the Swarm Guide and is located at:
/var/opt/caringo/netboot/content/cluster.cfg
All configuration values in this file are applied first in Swarm’s layered configuration mechanism, so they can be overridden by node-specific configuration items.
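The layering can be sketched with two small files. This is an illustration only: the directory is shortened to /tmp (the real files live under /var/opt/caringo/netboot/content/nodeconfigs and /var/opt/caringo/netboot/content/cluster.cfg), and the values are examples, not recommendations:

```shell
# Global file: applied first to every node in the cluster.
cat > /tmp/cluster.cfg <<'EOF'
network.mtu = 1500
EOF

# Node-specific file (sn<mac>.cfg): applied after the global file,
# so its value for network.mtu wins for this one node.
cat > /tmp/sn5254003bf075.cfg <<'EOF'
network.mtu = 9000
EOF
```

The node with MAC 5254003bf075 would boot with an MTU of 9000 while all other nodes keep the global value of 1500.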
Overriding the number of processes per chassis
On CSN version 3.1.2 and later, the following process can be used to override the default process count that Swarm requests through the netboot configuration.
Info: This section does not apply to Swarm 10.x. Swarm 10 and later run a single process per chassis, so changing the process count is not supported.
If /var/opt/caringo/netboot/content/nodeconfigs/pc<mac>.cfg exists, the value in that file overrides the process count request for the matching node. This allows setting the process count on an individual per-node basis.
To create it:
# echo 4 > /var/opt/caringo/netboot/content/nodeconfigs/pc<mac>.cfg
If /var/opt/caringo/netboot/content/nodeconfigs/proc_count.cfg exists, the value in that file overrides the process count for ALL storage nodes.
To create it:
# echo 4 > /var/opt/caringo/netboot/content/nodeconfigs/proc_count.cfg
If both the per-node pc<mac>.cfg file and the proc_count.cfg file exist, pc<mac>.cfg takes precedence and is used for requests from the matching <mac>.
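The precedence just described can be sketched as a small lookup. The directory is shortened to /tmp for illustration (the real path is /var/opt/caringo/netboot/content/nodeconfigs), and the helper function proc_count_for is a hypothetical name for this sketch, not a CSN utility:

```shell
nodeconfigs=/tmp/nodeconfigs
mkdir -p "$nodeconfigs"
echo 4 > "$nodeconfigs/proc_count.cfg"        # global override for ALL nodes
echo 2 > "$nodeconfigs/pc5254003bf075.cfg"    # per-node override for this MAC

# Resolve the process count for a MAC: pc<mac>.cfg wins if present,
# otherwise fall back to the cluster-wide proc_count.cfg.
proc_count_for() {
    mac=$1
    if [ -f "$nodeconfigs/pc$mac.cfg" ]; then
        cat "$nodeconfigs/pc$mac.cfg"
    elif [ -f "$nodeconfigs/proc_count.cfg" ]; then
        cat "$nodeconfigs/proc_count.cfg"
    fi
}

proc_count_for 5254003bf075   # prints 2: per-node file takes precedence
proc_count_for 525400130fca   # prints 4: falls back to proc_count.cfg
```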
Node-specific Configuration Files
Although node-specific configuration files are not created by the CSN, the PXE booting infrastructure within the CSN will make use of any files that the administrator creates. Since the node-specific configuration files are applied after the global configuration file, they can override any settings made by the global file. They can also add new settings.
The node-specific configuration files must be located in the following directory.
/var/opt/caringo/netboot/content/nodeconfigs
The file names are a concatenation of: "sn" + MAC Address + ".cfg"
Using the previous list of node MAC addresses as an example, these would be the names for the three node-specific configuration files:
/var/opt/caringo/netboot/content/nodeconfigs/sn525400130fca.cfg
/var/opt/caringo/netboot/content/nodeconfigs/sn5254003bf075.cfg
/var/opt/caringo/netboot/content/nodeconfigs/sn525400545168.cfg
Create a node-specific configuration file only for nodes that need their own configuration parameters; there is no need to create blank files for the other nodes. It is, however, highly advisable to create a copy of the static node configuration file for every MAC address that could be used to PXE boot a chassis.
Example: On a dual-NIC node where either NIC could be used to PXE boot, each NIC's MAC address should have a static node configuration file with identical content.
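The dual-NIC case can be sketched as follows. The directory is shortened to /tmp for illustration (the real path is /var/opt/caringo/netboot/content/nodeconfigs), the first MAC is taken from the earlier example, and the second MAC (aabbccddeeff) is a hypothetical second NIC on the same chassis:

```shell
nodeconfigs=/tmp/nodeconfigs   # stand-in for /var/opt/caringo/netboot/content/nodeconfigs
mkdir -p "$nodeconfigs"

# Static config for the NIC that usually PXE boots this chassis.
echo 'network.mtu = 9000' > "$nodeconfigs/sn5254003bf075.cfg"

# The second NIC could also PXE boot, so it gets an identical copy
# (aabbccddeeff is a hypothetical MAC for this sketch).
cp "$nodeconfigs/sn5254003bf075.cfg" "$nodeconfigs/snaabbccddeeff.cfg"
```

Whichever NIC ends up PXE booting, the node then receives the same node-specific configuration.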
Network Bonding Mode
The bonding mode is a global configuration for all storage nodes managed by the CSN and is set within a file used by the PXE booting infrastructure. The file location is:
/etc/caringo/netboot/netboot.cfg
The kernelOptions parameter is passed to the embedded Linux® kernel's command line at boot time. The castor_net sub-parameter within the value field controls the network bonding mode. The Swarm Guide provides full details for using the castor_net field.
Example 1: Using the IEEE 802.3ad LACP Mode
kernelOptions = castor_net=802.3ad:
Example 2: Using a Static LAG Mode
kernelOptions = castor_net=balance-rr:
Some bonding modes require switch support and configuration before they can be used. Please see the Swarm Guide and the Linux kernel's "bonding.txt" documentation file for details about the different bonding modes.
Info: Bonding mode 6 (balance-alb) is supported when using a single peer switch. If multiple switch chassis are required for switch-layer redundancy, configure Swarm to use either bonding mode 1 (active-backup) or bonding mode 4 (802.3ad). Bonding mode 1 (active-backup) is advised for multi-chassis link aggregation, whereas bonding mode 4 (802.3ad) requires additional configuration on the switch side. Because failover and traffic balancing with link aggregation across multiple switch chassis are proprietary to switch vendors, review the switch's capabilities to determine the appropriate bonding mode for multi-chassis link aggregation (i.e., bonding mode 1 or 4).
Jumbo Frames
When supported by the switch hardware and the network interfaces in the server, Swarm can benefit from the use of jumbo frames—those greater than Ethernet's default 1500-byte maximum transmission unit (MTU). Jumbo frames are configured with the Swarm network.mtu parameter, which can be set in either the global or a node-specific configuration file.
This is an example of setting a 9000-byte MTU in a Swarm configuration file.
network.mtu = 9000
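To enable jumbo frames on a single node only, the same line can be placed in that node's sn<mac>.cfg file instead of the global cluster.cfg. A minimal sketch, with the directory shortened to /tmp for illustration (the real path is /var/opt/caringo/netboot/content/nodeconfigs) and the MAC taken from the earlier example:

```shell
nodeconfigs=/tmp/nodeconfigs   # stand-in for /var/opt/caringo/netboot/content/nodeconfigs
mkdir -p "$nodeconfigs"

# Node-specific setting: only the node with this MAC boots with a 9000-byte MTU.
echo 'network.mtu = 9000' > "$nodeconfigs/sn525400130fca.cfg"
```

Remember that the switch ports and NICs serving this node must also be configured for jumbo frames, or the setting will cause connectivity problems.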