Important
This document is intended to provide upgrading steps for SCS 1.7.1 for CentOS only. If you are planning to migrate from CentOS 7 to Rocky Linux 8, please refer to the doc here.
Directly upgrading from SCS 1.2 to SCS 1.3 is not supported, but this limitation is removed starting with SCS 1.4. Swarm supports upgrading from SCS 1.1, 1.2, or 1.3 to SCS 1.4 and booting from Storage bundle v15.0 with multicast disabled.
Upgrade Notes
Any version before SCS 1.4 cannot boot versions of Swarm storage that include the optional multicast feature (version 15.0 and later). This is due to changes required in SCS to support this feature.
Complete the SCS upgrade first if upgrading both SCS and Swarm Storage simultaneously. Then add the new Swarm Storage component to the SCS repo:
scsctl repo component add ...
During this process, scsctl prompts whether or not to enable multicast for the storage cluster by asking for a value for cip.multicastEnabled. Select True to enable the use of multicast (this matches the behavior of prior versions of Swarm), or False to disable it. If you are unsure which to choose, contact DataCore Support.
Refer to the following steps to upgrade SCS for CentOS 7:
Disable SELinux.
Check whether SELinux is enabled or disabled; in a default installation it is set to Enforcing. If it is already disabled, skip to step 2.
getenforce
Disable SELinux by editing the /etc/selinux/config file: comment out the line SELINUX=enforcing (or SELINUX=permissive) and add the line SELINUX=disabled. Save the file, then reboot the server.
vi /etc/selinux/config
...
#SELINUX=enforcing
SELINUX=disabled
...
reboot
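The same edit can also be made non-interactively. The following is a sketch using sed against a demo copy of the file; selinux-config-demo and its sample content are stand-ins for the real /etc/selinux/config, so the commands can be tried safely.

```shell
# Stand-in for /etc/selinux/config so the commands can be tried safely.
cfg=selinux-config-demo
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$cfg"

# Comment out any active enforcing/permissive line ("&" is the matched text)...
sed -i -e 's/^SELINUX=enforcing$/#&/' -e 's/^SELINUX=permissive$/#&/' "$cfg"
# ...and append SELINUX=disabled if it is not already present.
grep -q '^SELINUX=disabled$' "$cfg" || echo 'SELINUX=disabled' >> "$cfg"

cat "$cfg"
# On a real server, run the same sed/grep against /etc/selinux/config,
# verify the result, then reboot for the change to take effect.
```

After verifying the file contents, reboot the server as in the step above.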
If static IP addresses were previously assigned using instance-level node.cfg overrides, these must be removed prior to upgrading. Both removing the overrides and defining static IPs are fully explained in Configuring Swarm for Static IPs with Swarm Cluster Services (SCS).
Install the new RPM.
rpm -Uvh swarm-scs-[version].x86_64.rpm
Run the following command.
scsctl diagnostics upgrade_check
Note
Refer to the following step if an error occurs during the upgrade_check command:
systemctl restart swarm-platform && sleep 90
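The check-and-retry logic above can be sketched as a small shell wrapper. run_with_retry is a hypothetical helper, and the fake_check/fake_recover functions below stand in for scsctl diagnostics upgrade_check and the systemctl restart recovery step, which exist only on a real SCS server.

```shell
# Hypothetical helper: run a command; if it fails, run the recovery
# step once and retry the command.
run_with_retry() {
    recover=$1; shift
    if ! "$@"; then
        eval "$recover"
        "$@"
    fi
}

# Demo with stand-ins. On a real SCS server this would be:
#   run_with_retry 'systemctl restart swarm-platform && sleep 90' \
#       scsctl diagnostics upgrade_check
rm -f /tmp/upgrade_check_ok
fake_check()   { [ -f /tmp/upgrade_check_ok ]; }   # fails until "recovered"
fake_recover() { touch /tmp/upgrade_check_ok; }
run_with_retry fake_recover fake_check && echo "upgrade_check passed"
```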
Run the diagnostics check.
scsctl diagnostics config scan_missing
The command shows output similar to the following:
Running step [11/11]: Show the next steps after this wizard
********************************************************************************
Congratulations, your Swarm Cluster Services server has been upgraded!
Please reboot, then run the following to ensure your system is fully configured:
Reboot the system.
Re-initialize DHCP.
scsctl init dhcp --dhcp-reserve-lower [integer]
Choose the SCS version to activate (typically the latest version).
scsctl platform software activate
Info
The following error may appear in the output while upgrading to SCS 1.7 or later; it is safe to ignore.
Running step [4/11]: Re-enable systemd management of SCS Linux services
Disabling service "swarm-platform.service" for pod "swarm-platform"...
Failed to disable services: b'Failed to execute operation: No such file or directory\n'
Deleting service file for pod "swarm-platform": /etc/systemd/system/swarm-platform.service [exists: False, symlink: True]
Retry: Disabling service "swarm-platform.service" for pod "swarm-platform"...
Additional steps are required when upgrading directly from SCS 1.4 to 1.7.1 under the following conditions:
The current instance is using static IP assignments for storage nodes, using the node.cfg template method.
The storage cluster supported by the current instance is running a version of CAStor that does not support direct IP assignment, i.e., one that does not include the "network.useStaticAddresses" setting.
When neither condition applies, upgrading SCS and the storage software proceeds as described above.
When these conditions are true, the upgrade process requires additional steps:
After SCS is upgraded to 1.7.1, the storage software must be upgraded to the most recent version or any version that supports direct IP assignment and includes the “network.useStaticAddresses” setting.
After the new storage software version is activated, the storage cluster must be rebooted before any direct IP assignment is performed.
After these additional steps, the direct assignment of static IPs and the unsetting of the node.cfg templates can proceed as usual. The cluster is rebooted again once that process completes.
When restarting an entire cluster at once, verify that nodes which were already part of the cluster are restarted first. Once they are fully online, it is safe to boot any new nodes being added to the cluster. Booting new nodes too early can lead to IP address conflicts, and the new nodes will fail to come online.
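The restart ordering above can be sketched as a small script. The node lists and the boot_node/node_online helpers are hypothetical stand-ins for whatever tooling (IPMI, console access, etc.) actually manages node power and status in your environment.

```shell
# Hypothetical node lists and helpers -- replace with real tooling.
existing_nodes="node1 node2 node3"   # nodes already in the cluster
new_nodes="node4"                    # nodes being added

boot_node()   { echo "booting $1"; }  # stand-in: power on a node
node_online() { true; }               # stand-in: poll real node status here

# 1. Restart the nodes that were already cluster members.
for n in $existing_nodes; do boot_node "$n"; done

# 2. Wait until every existing node is fully online.
for n in $existing_nodes; do
    until node_online "$n"; do sleep 10; done
done

# 3. Only then boot the new nodes, avoiding IP address conflicts.
for n in $new_nodes; do boot_node "$n"; done
```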