This technical note provides details for performing advanced Swarm configuration on a CSN (Cluster Services Node). These advanced configuration techniques involve changing the underlying files that the CSN software uses when managing the Swarm storage cluster.
The target audience for this paper is system administrators who are comfortable managing RHEL®/CentOS® systems and who are familiar with the Swarm configuration settings described in the Swarm Guide.
Storage Node Identification and Configuration
Finding the MAC Address
The CSN tracks storage nodes using the MAC address for the primary network interface. The primary interface is the one that was used for PXE booting the node. For the purposes of this discussion, chassis and node are used synonymously except where noted. Keep in mind that one physical server chassis can be assigned multiple IP addresses when Swarm multi-server mode is in use.
The ip-assignments utility provided in the CSN software reports the mapping between the hexadecimal MAC address and all the IP addresses that are allocated to a chassis. The following example shows the MAC address in the first column and the IP address in the second column.
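The output of ip-assignments is environment-specific, so a hypothetical illustration of the mapping follows; the MAC and IP values are invented, and the exact column formatting may differ from what the utility prints. Note the chassis that holds two IP addresses, as happens in multi-server mode:

```
00:25:b3:a8:f5:0c    192.168.50.11
00:25:b3:a8:f5:0c    192.168.50.12
00:25:b3:a8:f5:1e    192.168.50.13
```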
The CSN uses a global configuration file to store configuration values that are applied to all nodes within the storage cluster. This file uses the configuration format documented in the Swarm Guide, and it is located at:
All configuration values in this file are applied first in Swarm’s layered configuration mechanism.
Therefore, it is possible to override them with node-specific configuration items.
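As a sketch of this layering, a value set in the global file can be overridden by a node-specific file that is applied later; log.level is used here purely as an illustrative parameter name:

```
# Global configuration file (applied first, to every node):
log.level = 30

# Node-specific file for one chassis (applied later, so its value wins):
log.level = 10
```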
Overriding the Number of Processes per Chassis
In CSN version 3.1.2 and later, the following process can be used to override the default process count that Swarm requests through the netboot configuration.
This section does not apply to Swarm 10.x — there is a single process per chassis in Swarm 10 and above, so changing the process count is not supported.
If /var/opt/caringo/netboot/content/nodeconfigs/pc<mac>.cfg exists, the CSN uses the value in that file to override the process-count request. This allows the process count to be set on an individual, per-node basis.
If both the per-node pc<mac>.cfg file and the proc_count.cfg file exist, pc<mac>.cfg takes precedence and is used for a request from the matching <mac>.
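The lookup order can be sketched in a few lines of Python; process_count_source is a hypothetical helper written only to make the precedence explicit, not part of the CSN software:

```python
import os

# Directory named in this note for per-node netboot config files.
NODECONFIG_DIR = "/var/opt/caringo/netboot/content/nodeconfigs"

def process_count_source(mac, nodeconfig_dir=NODECONFIG_DIR):
    """Return the config file whose value would satisfy a netboot
    process-count request from the given MAC (hex digits, as embedded
    in the pc<mac>.cfg file name), or None if neither file exists."""
    per_node = os.path.join(nodeconfig_dir, "pc%s.cfg" % mac)
    cluster_wide = os.path.join(nodeconfig_dir, "proc_count.cfg")
    if os.path.exists(per_node):        # per-node file takes precedence
        return per_node
    if os.path.exists(cluster_wide):    # otherwise the cluster-wide override
        return cluster_wide
    return None                         # fall back to Swarm's default request
```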
Node-specific Configuration Files
Although node-specific configuration files are not created by the CSN, the PXE booting infrastructure within the CSN will make use of any files that the administrator creates. Since the node-specific configuration files are applied after the global configuration file, they can override any settings made by the global file. They can also add new settings.
The node-specific configuration files must be located in the following directory.
The file names are a concatenation of: "sn" + MAC Address + ".cfg"
Using the previous list of node MAC addresses as an example, these would be the names for the three node-specific configuration files:
Create a particular node-specific configuration file only when you need to provide configuration parameters for that node; you do not have to create blank files for the other nodes. It is highly advisable, however, to create the static node configuration file for every MAC address that could be used to PXE boot a node.
Example: On a dual-NIC node where either NIC could be used to PXE boot, each NIC's MAC address should have its own static node configuration file, and the two files should have identical content.
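The file-name concatenation can be sketched as follows; node_config_name is a hypothetical helper, and its assumption that the MAC appears as lowercase hex digits with separators removed should be checked against the file names your CSN actually generates:

```python
def node_config_name(mac):
    """Build a node-specific config file name: "sn" + MAC address + ".cfg".

    Assumes the MAC is embedded as lowercase hex digits with separators
    removed; adjust to match the form your CSN uses."""
    return "sn%s.cfg" % mac.lower().replace(":", "")

# One file per PXE-capable MAC; on a dual-NIC chassis, both files
# should carry identical content.
print(node_config_name("00:25:B3:A8:F5:0C"))  # sn0025b3a8f50c.cfg
```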
Network Bonding Mode
The bonding mode is a global configuration for all storage nodes managed by the CSN and is set within a file used by the PXE booting infrastructure. The file location is:
The kernelOptions parameter is passed to the embedded Linux® kernel's command line at boot time.
The castor_net sub-parameter within the value field controls the network bonding mode. The Swarm Guide provides full details for using the castor_net field.
Example 1: Using the IEEE 802.3ad LACP Mode
kernelOptions = castor_net=802.3ad:
Example 2: Using a Static LAG Mode
kernelOptions = castor_net=balance-rr:
Some bonding modes require switch support and configuration before they can be used. Please see the Swarm Guide and the Linux kernel's "bonding.txt" documentation file for details about the different bonding modes.
Jumbo Frames
When supported by the switch hardware and the network interfaces in the server, Swarm can benefit from the use of jumbo frames, frames larger than Ethernet's default 1500-byte maximum transmission unit (MTU). Jumbo frames are configured using the Swarm network.mtu parameter. You may choose to use the global or node-specific configuration files for setting this parameter.
This is an example of using a 9000 byte MTU in the Swarm configuration file.
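Using the network.mtu parameter and the 9000-byte value named above, the fragment would be:

```
network.mtu = 9000
```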