SCS’s CLI (command-line interface) is installed by default on the SCS server and supports common administrative tasks.
Every command within the CLI offers help. Some examples:
scsctl help
scsctl init dhcp help
scsctl repo component add help
List the components registered with SCS:
scsctl repo component list
The result list displays active components (including the version that has been marked as active) as well as inactive components (in which no version has been marked as active).
List the groups for a component:
scsctl {component} group list
List the instances within a given group of a component:
scsctl {component} instance list --group "{group name}"
List the nodes in the Swarm Storage cluster (-d refers to the default group rather than referring to it by name):
scsctl storage instance list -d
Any time an instance identity changes (typically when a storage node has a change to its network cards), it appears to SCS as an entirely new instance, even if nothing substantial has changed. The former identity still exists in SCS but is never used again (including any instance-specific setting or template overrides). SCS can be told to associate the former identity with the new instance ID, clearing out the old identity in the process.
Caution: At the time of writing, the CLI contains a bug when issuing this command to the API. Instead, use the curl command provided below to access the API directly.
scsctl {component} instance rename "{new instance ID}" "{former instance ID}" --force
curl -X PATCH --data-binary '{"name": "{former instance ID}"}' "http://{SCS IP address}:8095/platform/components/{component name}/groups/{group name}/instances/{new instance ID}/?force=yes"
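For illustration, a filled-in version of the curl workaround might look like the following. The IP address, group name, and instance IDs below are hypothetical placeholders, not values from a real site:

```shell
# Hypothetical values: substitute the real SCS IP, group name, and instance IDs.
# "a1b2c3d4e5f6" stands in for the former instance ID being merged into
# the new instance ID "f6e5d4c3b2a1".
curl -X PATCH \
  --data-binary '{"name": "a1b2c3d4e5f6"}' \
  "http://192.168.1.10:8095/platform/components/storage/groups/mycluster/instances/f6e5d4c3b2a1/?force=yes"
```

Note that the former instance ID goes in the request body, while the new instance ID appears in the URL path.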
The storage component within SCS only allows a single group/cluster to be defined for that site. The name of that cluster is governed by the name assigned to its group within SCS.
To create a group for Swarm Storage:
scsctl storage group add "{cluster name}"
Each node forms a de-facto subcluster if no explicit subcluster assignments are made in the Swarm Storage configuration. The Swarm Storage component (storage) provides the node.subcluster setting as a free-form name that may be assigned to one or more nodes.
The storage process looks at all names assigned to the different nodes and forms them into groups, which can then be used to determine how object replica distribution and protection are handled. The nodes may be grouped using subclusters in any way needed to achieve the desired replica/fail-over paradigm.
Update the subcluster for a storage node:
scsctl storage config set -d --instance "{instance name/ID}" "node.subcluster={subcluster name}"
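As a sketch, splitting nodes across two rack-based subclusters might look like this. The instance names (node-01, node-02) and subcluster names (rack1, rack2) are hypothetical:

```shell
# Hypothetical example: place two nodes into separate rack-based subclusters
# so replicas are distributed across racks.
scsctl storage config set -d --instance "node-01" "node.subcluster=rack1"
scsctl storage config set -d --instance "node-02" "node.subcluster=rack2"
```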
Update a cluster setting for Swarm Storage:
scsctl storage config set -d "{setting name}={setting value}"
Some specific examples:
scsctl storage config set -d "policy.versioning=allowed"
scsctl storage config set -d "policy.eCEncoding=4:2"
Update an instance-specific setting for a Swarm Storage node:
scsctl storage config set -d --instance "{instance name/ID}" "{setting name}={setting value}"
Some specific examples:
scsctl storage config set -d --instance "{instance name/ID}" "ec.protectionLevel=node"
scsctl storage config set -d --instance "{instance name/ID}" "feeds.maxMem=500000"
Removing a setting override means that the value for the setting is inherited from a higher scope. Removing an instance-level override means that the value for the setting is obtained from either the group (if a group-level override has been set) or component level. Removing a group-level override has no influence on any existing instance-level overrides that may exist within that group.
Reset an instance-level override:
scsctl {component} config unset --group "{group name}" --instance "{instance name/ID}" "{setting name}"
Reset a group-level override:
scsctl {component} config unset --group "{group name}" "{setting name}"
Shared network settings, such as DNS information and NTP time sources, may be updated as the need arises.
Update the list of DNS servers (specified as a comma- or space-delimited list):
scsctl network_boot config set -d "network.dnsServers={new DNS servers}"
This also requires that the DHCP server be updated so the setting can be made available to booting Storage nodes.
scsctl init dhcp {reserved ranges}
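For example, pointing the cluster at two hypothetical DNS servers and then refreshing DHCP might look like the following. The IP addresses are placeholders, and {reserved ranges} stands for the same reserved-range argument used during the original init dhcp run:

```shell
# Hypothetical DNS servers; substitute the site's actual resolvers.
scsctl network_boot config set -d "network.dnsServers=10.0.0.53,10.0.1.53"

# Re-run DHCP initialization so booting storage nodes pick up the change.
scsctl init dhcp {reserved ranges}
```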
Update the list of NTP servers (specified as a comma- or space-delimited list):
scsctl platform config set -d "network.ntpServers={new NTP servers}"
This also requires that the DHCP server be updated so the setting can be made available to booting Storage nodes.
scsctl init dhcp {reserved ranges}
Network MTU for the entire Swarm storage cluster is governed by the MTU set on the internal network interface of SCS. This value is put into the DHCP configuration during the init dhcp process and served to all storage nodes on boot.
Caution: It is important that this is done after the …
Update the MTU on the internal network interface.
List the interface details on the SCS to ensure that the change is correct.
Re-initialize DHCP to apply the changes to any future booting storage nodes:
scsctl init dhcp {reserved ranges}
The network gateway for the entire Swarm storage cluster is governed by a setting available in SCS as of version 1.5. The setting is provided by the network_boot component and is called network.gateway. By default, this setting points to the IP address of the internal network interface of SCS, but it may be overridden by normal means using the CLI. This value is put into the DHCP configuration during the init dhcp process and served to all storage nodes on boot.
Caution: It is important that this is done AFTER the …
Update the network.gateway setting:
scsctl network_boot config set -d "network.gateway={gateway_ip_address}"
Re-initialize DHCP to apply the changes to any future booting storage nodes:
scsctl init dhcp {reserved ranges}
The list of supported network bonding modes can be found at Network Devices and Priority.
Required: Both of the steps below are required. If the setting is not properly applied to the PXE boot system, storage nodes will not receive the updated bonding mode during the boot process.
Update the bonding mode setting in SCS:
scsctl network_boot config set -d "kernel.bondingMode={new bonding mode}"
Confirm the new setting value:
scsctl network_boot config show -d "kernel.bondingMode"
Restart the SCS services to apply this setting:
systemctl restart swarm-platform
Once all services have fully come back online (this may take 2-3 minutes), storage nodes will receive the new bonding mode the next time they boot.
When a blob/static_file named SYSCTL is present for a node, a URL is injected into node.cfg for kernel.sysctlFileUrl.
scsctl storage static_file set -d -f {path to local file on disk} SYSCTL
Alternatively, a different URL may be provided for kernel.sysctlFileUrl in the storage component. If a different URL is used, the SYSCTL blob must NOT be present.
scsctl storage config set -d "kernel.sysctlFileUrl={url to file}"
Use only one of these approaches; either option is fine based on customer environment requirements.
When communicating with remote servers that use TLS, custom trusted root (CA) certificates may be specified. These certificates must be PEM-formatted, with all newlines replaced with a literal \n. For example:
line1
line2
…would become:
line1\nline2
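The newline replacement can be done with a small shell one-liner rather than by hand. This is a sketch; the filename ca-cert.pem and its two-line content are hypothetical stand-ins for a real PEM certificate:

```shell
# Create a sample two-line file standing in for a real PEM certificate (hypothetical content).
printf 'line1\nline2\n' > ca-cert.pem

# Replace each real newline with a literal \n sequence, yielding the CERT_STRING value.
# awk's output record separator (ORS) is set to the two characters backslash + n.
CERT_STRING=$(awk 'BEGIN{ORS="\\n"} 1' ca-cert.pem)

printf '%s\n' "$CERT_STRING"   # prints: line1\nline2\n
```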
Once the certificate string is properly formatted (denoted as CERT_STRING in the example below), apply it to SCS:
scsctl platform config set -d 'organization.certificates=CERT_STRING'
Best practice for SCS is to use a static IP address for the interface that will be receiving client requests. If that IP address changes, SCS may have issues starting up under certain circumstances. To resolve this, run the following commands on the SCS server:
scsctl init config_update --external-interface {interface name}
(to obtain a list of interfaces, use ip addr show)
scsctl init wizard --build-platform-pod
scsctl init config_update --finalize
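The recovery steps above can be sketched as a single sequence. The interface name eth0 is a hypothetical placeholder; use ip addr show to find the real external interface name:

```shell
# List interfaces first to identify the external interface (name varies by system).
ip addr show

# Re-run SCS initialization against the (hypothetical) interface eth0.
scsctl init config_update --external-interface eth0
scsctl init wizard --build-platform-pod
scsctl init config_update --finalize
```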
The SCS server maintains an administrator user, “admin”, that has full rights within the Swarm site. This user also serves as the administrative user within the Swarm Storage management API. Credentials may be updated at any time, and updates are pushed to the Storage cluster to guarantee the two use the same credentials.
Required: If administrative credentials have already been set within SCS, logging in to the CLI is required to perform these operations. The CLI credentials must be updated again once either the user name or password has changed.
Update the administrative password:
scsctl platform config set -d "admin.password={new password}"
The CLI requires knowing the administrative credentials to perform operations against the SCS server. To set these credentials:
scsctl auth login --user "{administrative user name}"
The CLI then securely prompts for the administrative password and proceeds with authentication.
Warning: SCS may need to be upgraded before Swarm Storage. Verify that the Swarm Storage version and SCS version are compatible before upgrading Swarm Storage. See Upgrading to the Latest SCS Version for more details.
To upgrade the Swarm Storage software of a running cluster, obtain the component bundle for the desired version from DataCore Downloads. Transfer the bundle to the SCS server and run the following commands to register it with SCS.
Replace new-version with the version being installed:
unzip Swarm-new-version.zip
Navigate to the Storage directory and run the following:
cd Swarm-new-version/Storage/
scsctl repo component add -f swarm-scs-storage-new-version.tgz
Verify that the new version is present in the list of available versions:
scsctl storage software list
old-version (old-version) (active)
new-version (new-version)
Important: If this is the first time Swarm Storage software has been registered with SCS, the new version is automatically marked as active and the following step may be skipped. Otherwise, proceed with activation.
If the new version appears in the list, it has been successfully registered. It is not yet used for booting nodes, however; the current active version is still used. Mark the new version as active to complete the upgrade:
scsctl storage software activate "new-version (new-version)"
activated
Verify activation if desired:
scsctl storage software list
old-version (old-version)
new-version (new-version) (active)
Caution: Activating a version means that any nodes that reboot use the binaries for the new version. Do not complete this step until ready to proceed with the upgrade.
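Taken together, the registration and activation steps can be sketched as one sequence, with 15.0.0 standing in as a hypothetical new version string:

```shell
# Hypothetical version 15.0.0; substitute the actual bundle version.
unzip Swarm-15.0.0.zip
cd Swarm-15.0.0/Storage/
scsctl repo component add -f swarm-scs-storage-15.0.0.tgz

# Confirm the version is listed, then mark it active only when ready
# for rebooting nodes to pick up the new binaries.
scsctl storage software list
scsctl storage software activate "15.0.0 (15.0.0)"
```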
Info: This removes all software, published setting defaults, and configuration files associated with a specific version of Swarm storage software. The software binaries, settings, and configuration files for other installed versions (and other installed components) remain unaffected by this action.
The following version texts are examples only. Verify the list of installed versions, and note which version is currently marked as active.
scsctl storage software list
14.0.1 (14.0.1) (active)
14.1.2 (14.1.2)
Important: If the version to be removed is currently the active version, it is strongly recommended that another version (if available) be marked active prior to removing the desired version.
To mark another version as active:
scsctl storage software activate "14.1.2 (14.1.2)"
activated
Verify activation if desired:
scsctl storage software list
14.0.1 (14.0.1)
14.1.2 (14.1.2) (active)
Remove the desired version, using the entire version string:
Important: If the version being removed is the only installed version, or if another version cannot be activated for any reason, then the …
scsctl repo component delete storage "14.0.1 (14.0.1)"
removed
Verify removal if desired:
scsctl storage software list
14.1.2 (14.1.2) (active)
SCS allows a full backup of all components, configurations, settings overrides, and binaries for support and maintenance purposes. Because this backup includes values for settings marked as “secure”, the CLI must be logged in.
This backup allows for SCS settings to be restored in the event that the SCS needs to be rebuilt. A full backup allows for this kind of restoration; a lightweight backup is useful for support purposes only.
Obtain a full backup of all data:
scsctl backup create --output "{path to output backup file}"
Obtain a “lightweight” backup that excludes repo data (binaries, etc.):
scsctl backup create --no-repo --output "{path to output backup file}"
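A common pattern is to timestamp backup files so that periodic runs do not overwrite one another. The directory and filename below are assumptions, not values mandated by SCS:

```shell
# Hypothetical backup location; adjust to site conventions.
BACKUP_DIR=/var/backups/scs
mkdir -p "$BACKUP_DIR"

# Full backup named with the current date, e.g. scs-full-20240115.backup
scsctl backup create --output "$BACKUP_DIR/scs-full-$(date +%Y%m%d).backup"
```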
Restore all data from a backup:
scsctl backup restore "{path to output backup file}"