
Prerequisites

Before proceeding with the recovery, make sure you have the following in place:

  • Verify that the correct version of SCS software is activated (activate it if needed).

  • The SCS 1.5.1 RPM and the related files required for SCS installation, as described here.

  • A copy of your most recent SCS backup. A full backup is ideal: https://perifery.atlassian.net/wiki/spaces/public/pages/1223491667/SCS+Administration#Full-Backup

  • A copy of the Swarm storage bundle that matches the version of the Swarm storage software running on your storage cluster. This is not necessary if a full backup is used.

  • The administrator user name and administrator password of your storage cluster.

  • The DHCP initialization parameters used for the storage cluster (reserved IP address pool ranges).

VERY IMPORTANT

Make sure the newly configured SCS has the same private-side SCS network address; parameters stored in the Persistent Settings Stream point to this address. If you are restoring to a new SCS while the old SCS is still online, you can shut down the private-side network interface on the old SCS to avoid duplicate IP addresses.
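If you need to take the old SCS's private-side interface offline, one option is to bring it down with ip link. This is a minimal sketch that assumes the private-side interface is named eth1; substitute the actual interface name on the old SCS.

# ip link set eth1 down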

Steps for Recovery

Step 1

Install SCS by following the instructions in https://perifery.atlassian.net/wiki/spaces/public/pages/3042345098/Online+SCS+Installation.

Step 2

Run the SCS initialization wizard by following the instructions in Run the Swarm Cluster Services (SCS) Initialization Wizard.

Step 3

Restore the SCS backup.

# scsctl backup restore [backup file name]
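For example, if the full backup gathered in the Prerequisites is saved as scs_full_backup.tgz (a hypothetical file name used only for illustration):

# scsctl backup restore scs_full_backup.tgz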

Step 4

Check the version of the Swarm storage component. SCS displays a list of storage components, and the active component is marked in the list.

# scsctl storage software list
15.2.1 (15.2.1)
15.3.0 (15.3.0) (active)

Case 1

If the Swarm storage version currently running on the storage cluster is shown as the active software version, proceed to Step 5.

Case 2

If the Swarm storage version currently running on the storage cluster is listed but is not the active software version, make it the active version in SCS.

# scsctl storage software activate [software version]

Note

The software version provided as a parameter to the command above must match the listed version exactly, character by character, including the parentheses and the string inside them.
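For example, to activate the 15.3.0 component from the listing in Step 4 (quote the argument so the shell passes the space and parentheses through unchanged, and verify the exact string against your own listing):

# scsctl storage software activate "15.3.0 (15.3.0)"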

Case 3

If no components are listed, or if the software version running on the storage cluster does not appear in the list, add the Swarm storage component that matches the version currently running on the storage cluster.

# scsctl repo component add -f [storage component bundle file name]

During this process, you are asked whether to use encryption-at-rest for disk volumes and which name to use for the storage cluster.

Important

Use the same encryption settings and cluster name that are in use in the current storage cluster.
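As an illustration, assuming the Swarm 15.3.0 storage bundle gathered in the Prerequisites is saved as swarm-storage-15.3.0-bundle.tgz (a hypothetical file name), and answering the prompts with the cluster's existing encryption setting and cluster name:

# scsctl repo component add -f swarm-storage-15.3.0-bundle.tgz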

Step 5

Check the administrator user name and administrator password.

  • Check the administrator user name configured in SCS.

# scsctl platform config show -d admin.userName

  • Check the administrator password configured in SCS.

# scsctl platform config show -d admin.password

  • Proceed to Step 6 if both the administrator user name and the administrator password configured in SCS match the name and password configured in the current storage cluster.

  • Change the user name and/or password configured in SCS if either value does not match the administrator user name and password configured in the current storage cluster.

# scsctl auth login --user [current admin name configured in SCS]:[current password configured in SCS]
logged in
# scsctl platform config set -d admin.userName=[admin user name in the running cluster] --allow-notification-failures
updated and pushed to instances
# scsctl platform config set -d admin.password=[admin password in the running cluster] --allow-notification-failures
updated and pushed to instances
# scsctl auth login --user [admin name]:[admin password]
logged in

Required

To keep SCS and cluster settings consistent, always use the --allow-notification-failures flag when changing administrator-related settings.

Step 6

Initialize DHCP.

Subnet Layout:

   |                 |                                 |                 |
   |     reserve     |                                 |     reserve     |
   | <--  lower  --> | <---  storage pool range  ----> | <--  upper  --> |
   |      range      |                                 |      range      |
   |                 |                                 |                 |
 subnet              | <------ DHCP and Static ------> |             broadcast
 address                                                              address

Note that at least one of the reserved ranges must be set (lower or upper).
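As an illustration, a DHCP initialization that reserves addresses at both ends of the storage pool range might look like the following. The scsctl init dhcp subcommand and the --dhcp-reserve-lower/--dhcp-reserve-upper flags are shown as used in recent SCS releases; verify the exact syntax against the DHCP initialization documentation for your SCS version, and use the same reserved pool ranges noted in the Prerequisites.

# scsctl init dhcp --dhcp-reserve-lower 25 --dhcp-reserve-upper 25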

Some Considerations Relating to Storage Node IP Addresses

After restoring a current backup of an SCS 1.5.1 instance, storage nodes should retain the same IP addresses when they are rebooted, as long as the SCS backup file was created after all storage nodes were booted from the SCS instance (and those nodes were up and running). This includes the case where static IP addresses were manually set for the nodes in the storage cluster.

The IP addresses of storage nodes added to the storage cluster after the backup file was created will not have been recorded in the backup, and those nodes may acquire a different IP address when they are rebooted, although the addresses will still come from the same range of the address pool as long as DHCP was initialized with the same parameters.
