Important: This document provides the upgrade steps for SCS 1.7.1 on CentOS only. If you are planning to migrate from CentOS 7 to Rocky Linux 8, refer to Migrating Swarm Components from CentOS 7 to Rocky Linux 8.
Directly upgrading from SCS 1.2 to SCS 1.3 is not supported, but this limitation is removed starting with SCS 1.4. Swarm supports upgrading from SCS 1.1, 1.2, or 1.3 to SCS 1.4 and booting from Storage bundle v15.0 with multicast disabled.
Upgrade Notes
Any version before SCS 1.4 cannot boot versions of Swarm storage that include the optional multicast feature (version 15.0 and later). This is due to changes required in SCS to properly support this feature.
If upgrading both SCS and Swarm Storage simultaneously, complete the SCS upgrade first. Then add the new Swarm Storage component to the SCS repo (using scsctl repo component add ...). During this process, scsctl prompts whether or not to enable multicast for the storage cluster by asking to set a value for cip.multicastEnabled. Select True to enable the use of multicast (this matches the behavior of prior versions of Swarm), or False to disable it. If you are unsure which to choose, contact DataCore Support.
Installing a new Swarm Storage component version does not automatically activate that version for PXE booting; the new version must be explicitly activated. Run the below command and choose the new version to activate:
Code Block
scsctl storage software activate
Refer to the following steps to upgrade SCS for CentOS 7:
Disable SELinux.
Check whether SELinux is enabled or disabled. In a default CentOS 7 installation, it is Enforcing. If it is already disabled, skip to step 2.
Code Block
getenforce
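For reference, getenforce prints a single word indicating the current mode; the output below is illustrative:
Code Block
$ getenforce
Enforcing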
Disable SELinux by editing the /etc/selinux/config file, commenting out the line SELINUX=enforcing or SELINUX=permissive, and adding the line SELINUX=disabled. Then reboot the server after saving the file.
Code Block
vi /etc/selinux/config
...
#SELINUX=enforcing
SELINUX=disabled
...
reboot
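As a non-interactive alternative, the same edit can be made with standard sed (a sketch; verify the resulting file before rebooting):
Code Block
# Replace any existing SELINUX= line with SELINUX=disabled, keeping a .bak backup
sed -i.bak 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
reboot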
If static IP addresses were previously assigned using instance-level node.cfg overrides, these must be removed prior to upgrading. Both removing the overrides and defining static IPs are explained in Configuring Swarm for Static IPs with Swarm Cluster Services (SCS).
Install the new RPM.
Code Block
rpm -Uvh swarm-scs-[version].x86_64.rpm
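For example, with a hypothetical 1.7.1 package filename (substitute the actual filename of the RPM you downloaded), followed by an optional check that the new package is installed:
Code Block
# Upgrade in place: -U upgrades, -v is verbose, -h prints hash-mark progress
rpm -Uvh swarm-scs-1.7.1-1.x86_64.rpm
# Confirm the installed version
rpm -qa | grep swarm-scs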
Run the following command.
Code Block
scsctl diagnostics upgrade_check
Note
Refer to the following step if an error occurs during the upgrade_check command:
Code Block
systemctl restart swarm-platform && sleep 90
Run the diagnostics check.
Code Block
scsctl diagnostics config scan_missing
The command shows output similar to the following:
Code Block
Running step [11/11]: Show the next steps after this wizard
********************************************************************************
Congratulations, your Swarm Cluster Services server has been upgraded!
Please reboot, then run the following to ensure your system is fully configured:
Reboot the system.
Re-initialize DHCP.
...
Code Block
scsctl init dhcp --dhcp-reserve-lower [integer]
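For example, to reserve the lower 50 addresses of the range (the value 50 is illustrative; use the reservation count appropriate for your network):
Code Block
scsctl init dhcp --dhcp-reserve-lower 50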
...
Troubleshooting Notes
Refer to the following steps if an error occurs during the upgrade_check command:
...
Activate the latest SCS version.
Code Block
scsctl platform software activate
Choose the version that matches the recently installed RPM.
...
Proceed with re-initializing DHCP as listed above.
Info
A customer might see the below error in the output while upgrading to SCS 1.7 or above; this is safe to ignore.
...
When these conditions are true, the upgrade process requires a couple of additional steps:
After SCS is upgraded to 1.7.1, the storage software must be upgraded to the most recent version or to any version that supports direct IP assignment and includes the “network.useStaticAddresses” setting.
After the new storage software version is activated, the storage cluster must be rebooted before any direct IP assignment is performed.
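The ordering above can be summarized as a sketch using the commands already shown in this document (the component-add arguments are elided here, as in the main procedure):
Code Block
# 1. Add the new storage bundle to the SCS repo (arguments elided, as above)
scsctl repo component add ...
# 2. Activate the new storage version, choosing it when prompted
scsctl storage software activate
# 3. Reboot the storage cluster before performing any direct IP assignment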
...
When restarting an entire cluster at once, ensure that nodes that were already part of the cluster are restarted first. Once they are fully online, it is safe to boot any new nodes being added to the cluster. Booting new nodes too early can lead to IP address conflicts, causing the new nodes to fail to come online.
...