SNMP Commands
Storage cluster nodes are controlled through the SNMP action commands. The following OIDs allow disabling nodes, and volumes within nodes, in a storage cluster:
castorShutdownAction: Disable nodes and volumes within nodes for servicing.
castorRetireAction: Disable nodes and volumes within nodes for retirement.
Shutdown Action for Nodes
Required
If you permanently remove a storage node's IP address from a storage cluster, you must also remove the reference to that storage node from the hosts parameter in /etc/caringo/cloudgateway/gateway.cfg on every Content Gateway. After the storage node's IP address is removed, restart the Content Gateway service to apply the change:
systemctl restart cloudgateway
To gracefully shut down a Swarm node, write the string shutdown to the castorShutdownAction OID. Writing the string reboot to this OID causes the node to reboot instead.
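As a sketch, writing to this OID can be done with the net-snmp command-line tools, assuming the CARINGO-CASTOR-MIB is installed on the querying host; the read/write community string (RWPASSWORD) and node IP address below are placeholders:

```shell
# Gracefully shut down the node at 192.168.1.101 (placeholder address).
snmpset -v2c -c RWPASSWORD -m +CARINGO-CASTOR-MIB 192.168.1.101 \
    castorShutdownAction s "shutdown"

# Writing "reboot" instead causes the node to reboot.
snmpset -v2c -c RWPASSWORD -m +CARINGO-CASTOR-MIB 192.168.1.101 \
    castorShutdownAction s "reboot"
```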
When a node receives a shutdown or reboot action, it initiates a graceful stop by unmounting all volumes and removing itself from the cluster. For a shutdown, the node is then powered off if the hardware supports it. For a reboot, the node restarts, re-reads the node and cluster configuration files, and starts Swarm again.
A graceful shutdown is required for a quick reboot. After an ungraceful shutdown, the node must perform consistency checks on all volumes before rejoining the cluster.
Tip
Before shutting down or rebooting a node, check the node status page or the SNMP castorErrTable OID for critical error messages. Any logged critical messages are cleared upon reboot.
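One way to check the error table remotely is with the net-snmp tools, again assuming the CARINGO-CASTOR-MIB is installed on the querying host; the community string and node IP are placeholders:

```shell
# List any logged error messages for the node before rebooting it.
snmpwalk -v2c -c PASSWORD -m +CARINGO-CASTOR-MIB 192.168.1.101 castorErrTable
```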
Note
Wait at least 10 seconds between node reboots when rebooting more than one node at a time (but not the whole cluster). This pause ensures each node can communicate its rebooting state to the rest of the cluster, so other nodes do not initiate recovery for the rebooting node.
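A minimal sketch of rebooting several nodes in sequence with the recommended pause, using the same hedged snmpset invocation as above (community string and node IPs are placeholders):

```shell
# Reboot three nodes one at a time, pausing 10 seconds between each so
# every node can announce its rebooting state to the rest of the cluster.
for node in 192.168.1.101 192.168.1.102 192.168.1.103; do
    snmpset -v2c -c RWPASSWORD -m +CARINGO-CASTOR-MIB "$node" \
        castorShutdownAction s "reboot"
    sleep 10
done
```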
Retire Action for Nodes and Volumes
Required
If you permanently remove a storage node's IP address from a storage cluster, you must also remove the reference to that storage node from the hosts parameter in /etc/caringo/cloudgateway/gateway.cfg on every Content Gateway. After the storage node's IP address is removed, restart the Content Gateway service to apply the change:
systemctl restart cloudgateway
The Retire action is used to permanently remove a node or a volume within a node from the cluster. This action is intended for retiring legacy hardware or preemptively pushing content away from a volume with a history of I/O errors. Retired volumes and nodes are visible in the Swarm Admin Console until the cluster is rebooted.
See Retiring Volumes.
Note
The Retire action may take an extended amount of time to complete and requires at least three health processor cycles.
Single Volumes
When a volume is retired, all of its stored objects are moved to other nodes in the storage cluster. As soon as retirement is initiated, the volume becomes read-only and no additional objects can be stored on it. After all objects are moved to other locations in the cluster, the volume is idled and serves no further read or write requests.
Each volume has a unique name within its node: the device string from the vols line in the configuration file. To retire a volume, write its name as a string to the castorRetireAction OID. The volume retirement process begins immediately upon receipt, and the action cannot be aborted after it starts.
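A sketch of retiring a single volume via SNMP, assuming the CARINGO-CASTOR-MIB is installed on the querying host; the community string, node IP, and device name are placeholders:

```shell
# Retire the volume whose device string (from the node's vols line) is
# /dev/sda. This starts immediately and cannot be aborted.
snmpset -v2c -c RWPASSWORD -m +CARINGO-CASTOR-MIB 192.168.1.101 \
    castorRetireAction s "/dev/sda"
```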
To manually retire a volume:
Open the Swarm UI (or legacy Admin Console).
Click the targeted chassis/node (IP address).
For the targeted disk/volume, select Retire.
Entire Node
Retiring a node retires all volumes on the node at the same time. After all of its volumes are retired and the node's data has been copied elsewhere in the cluster, the node is permanently out of service and no longer responds to requests.
To retire a node and all of its volumes, write the string all to the castorRetireAction OID. The node retirement process begins immediately upon receipt, and the action cannot be aborted after it starts.
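The same hedged snmpset invocation applies here, with all as the value (community string and node IP are placeholders):

```shell
# Retire the node and every volume on it. This starts immediately and
# cannot be aborted.
snmpset -v2c -c RWPASSWORD -m +CARINGO-CASTOR-MIB 192.168.1.101 \
    castorRetireAction s "all"
```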
Warning
Verify the cluster has enough free space and nodes to store the objects from the retiring volume. For subclusters, this applies to the subcluster where the retiring volume resides. If the cluster or subcluster does not have enough nodes and space to store at least two replicas of every object, the retirement cannot complete until additional nodes are added. The Retire action does not require the configured default replica count (policy.replicas default) to be maintained in order to complete; if there are not enough nodes to maintain the minimum number of replicas, messages are logged indicating that sufficient replicas cannot be created.