SCS Administration
- 1 Getting Help
- 2 Listing Components
- 3 Listing Groups
- 4 Instance Management
- 5 Defining the Storage Cluster
- 6 Assigning a Storage Node to a Subcluster
- 7 Updating a Cluster Setting
- 8 Updating a Storage Node Setting
- 9 Resetting a Setting
- 9.1 Instance Level
- 9.2 Group Level
- 10 Updating Network Settings
- 10.1 DNS Servers
- 10.2 DNS Domain
- 10.3 NTP Servers
- 10.4 Swarm (Internal) Network MTU
- 10.5 Swarm (Internal) Network Gateway
- 11 Updating Network Bonding in Swarm Storage
- 12 Updating Trusted Root Certificates
- 13 Updating Client-Facing IP Address
- 14 Administrative Credentials
- 15 Upgrading Swarm Storage
- 16 Removing an Installed Version of Swarm Storage
- 17 Backing Up SCS
- 17.1 Full Backup
- 17.2 Lightweight Backup
- 17.3 Backup Restore
SCS’s CLI (command-line interface) is installed by default on the SCS server and supports common administrative tasks.
In any of the examples, wherever an instance ID is used, an instance name may be used instead (if one has previously been defined for that instance).
Important: In the examples below, user-provided values are wrapped in curly braces ({}). Replace these with the values required for the command.
Getting Help
Every command within the CLI offers help. Some examples:
scsctl help
scsctl init dhcp help
scsctl repo component add help
Listing Components
List the components registered with SCS:
scsctl repo component list
The result list displays active components (including the version that has been marked as active) as well as inactive components (for which no version has been marked as active).
Listing Groups
List the groups for a component:
scsctl {component} group list
Instance Management
Listing Instances
List the instances within the default group of a component (most common usage):
scsctl {component} instance list -d
List the instances within a specific group of a component:
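Using the same `-g {group name}` flag referenced elsewhere in this guide:

```shell
scsctl {component} instance list -g {group name}
```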
List the nodes in the Swarm Storage cluster (-d refers to the default group rather than referring to it by name):
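For the Swarm Storage component, this takes the form:

```shell
scsctl storage instance list -d
```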
This listing also includes instances that SCS knows about but that are currently offline. If any of these instances will remain offline (such as decommissioned hardware), consider removing them from SCS.
Assuming an Old Instance Identity
Whenever the identity of an instance changes (typically when a storage node has a change to its networking cards), SCS recognizes it as an entirely new instance, even if the change is not substantial. The former identity still exists in SCS but will never be used again, including any instance-specific setting or template overrides. However, SCS can be instructed to associate the former identity with the new instance ID, which also clears out the old identity.
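A sketch of the command form; the subcommand name and argument order are assumptions, so confirm the exact syntax with `scsctl storage instance help` before running:

```shell
# Assumption: "assume" subcommand name and argument order;
# verify via "scsctl storage instance help".
scsctl storage instance assume -d {old instance id} {new instance id}
```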
Removing an Instance
If SCS ever needs to “forget” an instance, use the following command to fully remove it from SCS. The example below uses the default group, but you can use the -g {group name}
form of the command as needed.
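A sketch of the default-group form; the subcommand name is an assumption, so confirm it with `scsctl storage instance help`:

```shell
# Assumption: "remove" subcommand name; verify via "scsctl storage instance help".
scsctl storage instance remove -d {instance id}
```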
Defining the Storage Cluster
The storage component within SCS allows only a single group/cluster to be defined for a site. The name of that cluster is governed by the name assigned to its group within SCS.
Create the Cluster
To create a group for Swarm Storage:
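A sketch of the command, assuming the `group add` subcommand form (confirm with `scsctl storage group help`):

```shell
# The group name becomes the Swarm Storage cluster name.
scsctl storage group add {cluster name}
```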
Assigning a Storage Node to a Subcluster
Each node forms a de-facto subcluster if no explicit subcluster assignments are made in the Swarm Storage configuration. The Swarm Storage component (storage) provides the node.subcluster setting as a free-form name that can be assigned to one or more nodes.
The storage process groups nodes based on their assigned names, which are then used to manage object replica distribution and protection. Nodes that are grouped using subclusters can be configured in any way necessary to achieve the desired replica/fail-over strategy.
Update the subcluster for a storage node:
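A sketch of the command, assuming the `config set` syntax with `-i` selecting the node instance (confirm with `scsctl storage config help`):

```shell
# Assumption: "config set" syntax; "-i" targets a single node instance.
scsctl storage config set -d -i {node instance} "node.subcluster={subcluster name}"
```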
Updating a Cluster Setting
Update a cluster setting for Swarm Storage:
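A sketch of the general form, assuming the `config set` subcommand syntax (confirm with `scsctl storage config help`):

```shell
scsctl storage config set -d "{setting name}={value}"
```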
Some specific examples:
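For illustration, using two standard Swarm Storage settings (policy.versioning and policy.replicas); the `config set` syntax is an assumption to confirm with `scsctl storage config help`:

```shell
scsctl storage config set -d "policy.versioning=allowed"
scsctl storage config set -d "policy.replicas=min:2 max:3 default:2"
```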
Updating a Storage Node Setting
Update a setting for an individual Swarm Storage node:
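A sketch of the general form; the `-i` instance flag with `config set` is an assumption to confirm with `scsctl storage config help`:

```shell
scsctl storage config set -d -i {node instance} "{setting name}={value}"
```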
Some specific examples:
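For illustration, using log.level (a standard Swarm Storage setting) to change logging verbosity on a single node; the command syntax is an assumption to confirm with `scsctl storage config help`:

```shell
scsctl storage config set -d -i {node instance} "log.level=10"
```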
Resetting a Setting
Removing a setting override means that the value for the setting is inherited from a higher scope. Removing an instance-level override means that the value for the setting is obtained from either the group (if a group-level override exists) or the component level. Removing a group-level override does not affect any existing instance-level overrides within that group.
Instance Level
Reset an instance-level override. Either the default-group or specific-group form of the command may be used:
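A sketch of both forms; the `config unset` subcommand name is an assumption, so confirm it with `scsctl {component} config help`:

```shell
# Default-group form (assumption: "config unset" removes an override):
scsctl {component} config unset -d -i {instance} "{setting name}"

# Specific-group form:
scsctl {component} config unset -g {group name} -i {instance} "{setting name}"
```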
Group Level
Reset a group-level override. Either the default-group or specific-group form of the command may be used:
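A sketch of both forms; the `config unset` subcommand name is an assumption, so confirm it with `scsctl {component} config help`:

```shell
# Default-group form (assumption: "config unset" removes an override):
scsctl {component} config unset -d "{setting name}"

# Specific-group form:
scsctl {component} config unset -g {group name} "{setting name}"
```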
Updating Network Settings
Shared network settings, such as DNS information and NTP time sources, can be updated as required.
DNS Servers
Update the list of DNS servers (specified as a comma- or space-delimited list):
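A sketch of the command; the component and setting name (platform / network.dnsServers) are assumptions to verify for your SCS release:

```shell
scsctl platform config set -d "network.dnsServers={server1},{server2}"
```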
This also requires updating the DHCP server so the setting can be made available to booting Storage nodes. See https://perifery.atlassian.net/wiki/spaces/public/pages/2917138525.
It is recommended to check the bash history on the SCS to view the prior command and its settings. Run the below command on the SCS to view the last DHCP setting command used.
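For example, to find the prior init dhcp invocation in the shell history:

```shell
history | grep "scsctl init dhcp"
```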
DNS Domain
Update the DNS domain (specified as a comma- or space-delimited list):
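A sketch of the command; the component and setting name (platform / network.dnsDomain) are assumptions to verify for your SCS release:

```shell
scsctl platform config set -d "network.dnsDomain={domain}"
```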
This also requires updating the DHCP server so the setting can be made available to booting Storage nodes. See https://perifery.atlassian.net/wiki/spaces/public/pages/2917138525.
It is recommended to check the bash history on the SCS to view the prior command and its settings. Run the below command on the SCS to view the last DHCP setting command used.
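For example, to find the prior init dhcp invocation in the shell history:

```shell
history | grep "scsctl init dhcp"
```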
NTP Servers
Update the list of NTP servers (specified as a comma- or space-delimited list):
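A sketch of the command; the component and setting name (platform / network.ntpServers) are assumptions to verify for your SCS release:

```shell
scsctl platform config set -d "network.ntpServers={server1},{server2}"
```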
This also requires updating the DHCP server so the setting can be made available to booting Storage nodes. See https://perifery.atlassian.net/wiki/spaces/public/pages/2917138525.
It is recommended to check the bash history on the SCS to view the prior command and its settings. Run the below command on the SCS to view the last DHCP setting command used.
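For example, to find the prior init dhcp invocation in the shell history:

```shell
history | grep "scsctl init dhcp"
```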
Swarm (Internal) Network MTU
Network MTU for the entire Swarm storage cluster is governed by the MTU set on the internal network interface of SCS. This value is put into the DHCP configuration during the init dhcp process and served to all storage nodes on boot.
Update the MTU on the internal network interface.
List the interface details on the SCS to ensure that the change is correct.
Re-initialize DHCP to apply the changes to any future booting storage nodes:
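A sketch of the sequence using standard Linux iproute2 commands; the interface name and MTU value are placeholders, and init dhcp should be re-run with the same arguments as its original invocation:

```shell
# Set the new MTU on the SCS internal interface (runtime change; also persist
# it in the interface's network configuration so it survives a reboot).
ip link set dev {internal interface} mtu {mtu value}

# Confirm the interface reports the expected MTU.
ip addr show {internal interface}

# Re-initialize DHCP using the same arguments as the original run
# (check the bash history for the prior command).
scsctl init dhcp {original arguments}
```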
Swarm (Internal) Network Gateway
The network gateway for the entire Swarm storage cluster is governed by a setting available in SCS as of version 1.5. The setting is provided by the network_boot component and is called network.gateway. By default, this setting points to the IP address of the internal network interface of SCS, but it may be overridden by normal means using the CLI. This value is put into the DHCP configuration during the init dhcp process and served to all storage nodes on boot.
Update the network.gateway setting, then re-initialize DHCP to apply the change to any future booting storage nodes. See https://perifery.atlassian.net/wiki/spaces/public/pages/2917138525.
It is recommended to check the bash history on the SCS to view the prior command and its settings. Run the below command on the SCS to view the last DHCP setting command used.
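A sketch of the update; the component and setting name come from the description above, while the `config set` syntax is an assumption to confirm with `scsctl network_boot config help`:

```shell
# Assumption: "config set" syntax; setting is named in this section.
scsctl network_boot config set -d "network.gateway={gateway IP}"

# Find the prior "init dhcp" command and its arguments before re-running it:
history | grep "scsctl init dhcp"
```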
Updating Network Bonding in Swarm Storage
Swarm Storage supports customizing network bonding for NICs and bonding mode. Additionally, a sysctl file may be specified for storage nodes. Refer to the following sections for bonding NICs and/or mode. In either case, the setting(s) must be applied to the PXE boot system before the new values are available to booting storage nodes.
Relevant bonding information can be found at https://perifery.atlassian.net/wiki/spaces/public/pages/2443808659.
Bonding NICs
Update the bonding NICs setting in SCS:
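A sketch of the command; the component and setting name (network_boot / network.bondingNics) are assumptions, so verify them against the bonding documentation linked above:

```shell
# Assumption: component and setting name; verify for your SCS release.
scsctl network_boot config set -d "network.bondingNics={nic list}"
```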
The list of NICs should look like eth0,eth1, with whatever values are appropriate.
Confirm the new setting value:
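A sketch of the check; the `config get` subcommand name is an assumption, so confirm the read command with `scsctl network_boot config help`:

```shell
scsctl network_boot config get -d "network.bondingNics"
```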
Bonding Mode
Update the bonding mode setting in SCS:
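A sketch of the command; the component and setting name (network_boot / network.bondingMode) are assumptions, so verify them against the bonding documentation linked above:

```shell
# Assumption: component and setting name; verify for your SCS release.
scsctl network_boot config set -d "network.bondingMode={bonding mode}"
```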
Confirm the new setting value:
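A sketch of the check; the `config get` subcommand name is an assumption, so confirm the read command with `scsctl network_boot config help`:

```shell
scsctl network_boot config get -d "network.bondingMode"
```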
Apply the Setting to the PXE Boot System
Restart the SCS services to apply this setting:
Once all services have fully come back online (may take 2-3 minutes), storage nodes will receive the new bonding mode the next time they boot up.
Support for kernel.sysctlFileUrl
When a blob/static_file named SYSCTL is present for a node, a URL pointing to it is injected into node.cfg as kernel.sysctlFileUrl. Alternatively, a different URL may be provided for kernel.sysctlFileUrl in the storage component; in that case, this blob must NOT be present.
Updating Trusted Root Certificates
When communicating with remote servers that use TLS, custom trusted root (CA) certificates may be specified. These certificates must be PEM-formatted, with each newline replaced by a literal \n.
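A minimal shell sketch of the conversion, using a stand-in certificate file (ca.pem and its contents are placeholders for your actual CA certificate):

```shell
# Write a stand-in PEM file; substitute your actual CA certificate.
cat > ca.pem <<'EOF'
-----BEGIN CERTIFICATE-----
MIIB...example...
-----END CERTIFICATE-----
EOF

# Replace each real newline with a literal "\n" sequence.
CERT_STRING=$(awk '{printf "%s\\n", $0}' ca.pem)
echo "$CERT_STRING"
```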
Once the certificate string is properly formatted (denoted as CERT_STRING in the example below), apply it to SCS:
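A sketch of the apply step; the component and setting name for trusted roots are assumptions to confirm for your SCS release:

```shell
# Assumption: component and setting name; verify before applying.
scsctl platform config set -d "network.trustedRootCerts={CERT_STRING}"
```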
Updating Client-Facing IP Address
Best practice for SCS is to use a static IP address for the interface that will be receiving client requests. If that IP address changes, SCS may have issues starting up under certain circumstances. To resolve this, run the following commands on the SCS server:
scsctl init config_update --external-interface {interface name} (to obtain a list of interfaces, use ip addr show)
scsctl init wizard --build-platform-pod
scsctl init config_update --finalize
Administrative Credentials
The SCS server maintains an administrator user, “admin”, that has full rights within the Swarm site. This user also serves as the administrative user within the Swarm Storage management API. Credentials may be updated at any time, and updates are pushed to the Storage cluster to guarantee the two use the same credentials.
Setting the Administrative Password
Update the administrative password:
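A sketch of the command, assuming the password is stored as the admin.password setting of the platform component (verify for your SCS release):

```shell
# Assumption: component and setting name for the administrative password.
scsctl platform config set -d "admin.password={new password}"
```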
Updating CLI Credentials
The CLI requires knowing the administrative credentials to perform operations against the SCS server. To set these credentials:
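A sketch of the command, assuming the `auth login` subcommand (confirm with `scsctl help`):

```shell
scsctl auth login --user admin
```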
The CLI then securely prompts for the administrative password and proceeds with authentication.
Upgrading Swarm Storage
To upgrade the Swarm Storage software of a running cluster, obtain the component bundle for the latest version from DataCore Downloads. Transfer the bundle to the SCS server and run the following commands to register it with SCS:
Use the command below to get the list of Storage software versions that have been registered with SCS.
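A sketch of the listing command; the `software list` subcommand form is an assumption to confirm with `scsctl storage help`:

```shell
scsctl storage software list
```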
If the desired version is already present in the list, skip ahead to activating that version when ready to boot Swarm Storage nodes to it.
Unpack the downloaded Swarm Storage software bundle:
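For example (the bundle file name is a placeholder; the archive format is assumed to be a compressed tar file):

```shell
tar -xzf {bundle file name}.tgz
```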
Navigate to the Storage directory within that bundle and run the following command to register the Swarm storage software with SCS (please insert the correct version string where needed).
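A sketch of the registration step; the archive file name is a placeholder, and the `-f` flag usage should be confirmed with `scsctl repo component add help`:

```shell
cd {unpacked bundle directory}/Storage
# Assumption: "-f" takes the component archive file.
scsctl repo component add -f {storage component archive}.tgz
```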
Verify the desired version is present in the list of available versions.
Activate this desired version to complete the upgrade.
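A sketch of the activation step; run without arguments to choose from a menu (the `software activate` subcommand form is an assumption to confirm with `scsctl storage help`):

```shell
scsctl storage software activate
```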
Choose the desired version in the menu. The activated version will be used the next time that storage nodes reboot.
Removing an Installed Version of Swarm Storage
The following version texts are examples only. Verify the list of installed versions, and note which version is currently marked as active.
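A sketch of the listing command; the `software list` subcommand form is an assumption to confirm with `scsctl storage help`:

```shell
scsctl storage software list
```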
To mark another version as active:
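A sketch of the activation step; passing the version string directly is an assumption, so confirm the syntax with `scsctl storage software activate help`:

```shell
scsctl storage software activate "{version string}"
```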
Verify activation, if desired, by listing the registered versions again.
Remove the desired version, using the entire version string:
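A sketch of the removal step; the subcommand name is an assumption, so confirm it with `scsctl repo component help` before running:

```shell
# Assumption: subcommand name for removing a registered version.
scsctl repo component delete-version storage "{version string}"
```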
Verify removal, if desired, by listing the registered versions again.
Backing Up SCS
SCS allows a full backup of all components, configurations, settings overrides, and binaries for support and maintenance purposes. The CLI must be logged in since this backup includes values for settings marked as “secure”.
This backup allows for SCS settings to be restored in the event that the SCS needs to be rebuilt. A full backup allows for this kind of restoration; a lightweight backup is useful for support purposes only.
Full Backup
Obtain a full backup of all data:
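A sketch of the command; the `backup create` subcommand and its arguments are assumptions, so confirm the exact syntax with `scsctl backup help`:

```shell
# Assumption: subcommand and output argument; the CLI must be logged in first.
scsctl backup create {backup file path}
```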
Lightweight Backup
Obtain a “lightweight” backup that excludes repo data (binaries, etc.):
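A sketch of the command; the flag that excludes repo data is an assumption, so confirm the exact flag name with `scsctl backup help`:

```shell
# Assumption: flag name for excluding repo data (binaries, etc.).
scsctl backup create --no-repo {backup file path}
```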
Backup Restore
Perform backup restore of all data:
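A sketch of the command; the `backup restore` subcommand name is an assumption, so confirm it with `scsctl backup help`:

```shell
scsctl backup restore {backup file path}
```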
© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.