
SCS’s CLI (command-line interface) is installed by default on the SCS server and supports common administrative tasks.

In any of the examples below, an instance name may be used wherever an instance ID is called for, provided a name has previously been defined for that instance.

Important: In the examples below, user-provided values are wrapped in curly braces ({}). Replace these placeholders with the values required for the command.

Getting Help

Every command within the CLI offers help. Some examples:

  • scsctl help

  • scsctl init dhcp help

  • scsctl repo component add help

Listing Components

List the components registered with SCS:

scsctl repo component list

The result list displays active components (including the version that has been marked as active) as well as inactive components (in which no version has been marked as active).

Listing Groups

List the groups for a component:

scsctl {component} group list

Instance Management

Listing Instances

List the instances within the default group of a component (most common usage):

scsctl {component} instance list -d

List the instances within a specific group of a component:

scsctl {component} instance list --group "{group name}"

List the nodes in the Swarm Storage cluster (-d is used to refer to the default group rather than referring to it by name):

scsctl storage instance list -d

This listing also includes instances known to SCS that are currently offline. If any of these instances will remain offline permanently (such as decommissioned hardware), consider removing them from SCS.

Assuming an Old Instance Identity

Whenever the identity of an instance changes (typically when a storage node's network cards are replaced), SCS treats it as an entirely new instance, even if the change is not substantial. The former identity still exists in SCS, but is never used again, including any instance-specific setting or template overrides. However, SCS can be instructed to associate the former identity with the new instance ID, which also clears out the old identity.

Caution

As of this writing, there is a bug in the CLI when issuing this command to the API. Instead, use the curl command provided below to access the API directly.

The CLI form of the command (currently affected by the bug noted above):

scsctl {component} instance rename "{new instance ID}" "{former instance ID}" --force

The equivalent direct API call:

curl -X PATCH --data-binary '{"name": "{former instance ID}"}' "http://{SCS IP address}:8095/platform/components/{component name}/groups/{group name}/instances/{new instance ID}/?force=yes"

Removing an Instance

If SCS ever needs to “forget” an instance, use the following command to fully remove it from SCS. The example below uses the default group, but you can use the -g {group name} form of the command as needed.

Caution

Removing an instance deletes all associated custom configuration (settings), configuration file templates, and static files. Use this command with caution!

scsctl {component} instance remove -d {instance ID}

Defining the Storage Cluster

The storage component within SCS only allows a single group/cluster to be defined for that site. The name of that cluster is governed by the name assigned to its group within SCS.

Create the Cluster

To create a group for Swarm Storage:

scsctl storage group add "{cluster name}"

Assigning a Storage Node to a Subcluster

Each node forms a de facto subcluster if no explicit subcluster assignments are made in the Swarm Storage configuration. The Swarm Storage component (storage) provides the node.subcluster setting as a free-form name that can be assigned to one or more nodes.

The storage process groups nodes based on their assigned names, which are then used to manage object replica distribution and protection. Nodes that are grouped using subclusters can be configured in any way necessary to achieve the desired replica/fail-over strategy.

Update the subcluster for a storage node:

scsctl storage config set -d --instance "{instance ID}" "node.subcluster={subcluster name}"
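For example, to split nodes across two rack-based subclusters (the subcluster names below are hypothetical examples):

scsctl storage config set -d --instance "{node A instance ID}" "node.subcluster=rack1"
scsctl storage config set -d --instance "{node B instance ID}" "node.subcluster=rack2"

Nodes sharing the same subcluster name are then treated as a group for replica distribution.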

Updating a Cluster Setting

Update a cluster setting for Swarm Storage:

scsctl storage config set -d "{setting name}={setting value}"

Some specific examples:

scsctl storage config set -d "policy.versioning=allowed"
scsctl storage config set -d "policy.eCEncoding=4:2"

Updating a Storage Node Setting

Update a setting for a specific Swarm Storage node:

scsctl storage config set -d --instance "{instance ID}" "{setting name}={setting value}"

Some specific examples:

scsctl storage config set -d --instance "{instance ID}" "ec.protectionLevel=node"
scsctl storage config set -d --instance "{instance ID}" "feeds.maxMem=500000"

Resetting a Setting

Removing a setting override means that the value for the setting is inherited from a higher scope. Removing an instance-level override means that the value for the setting is obtained from either the group (if a group-level override exists) or the component level. Removing a group-level override does not affect any existing instance-level overrides within that group.

Instance Level

Reset an instance-level override. Either the default-group or specific-group form of the command may be used:

scsctl {component} config unset -d --instance "{instance ID}" "{setting name}"

or

scsctl {component} config unset --group "{group name}" --instance "{instance ID}" "{setting name}"

Group Level

Reset a group-level override. Either the default-group or specific-group form of the command may be used:

scsctl {component} config unset -d "{setting name}"

or

scsctl {component} config unset --group "{group name}" "{setting name}"
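The inheritance described above can be sketched with a concrete setting (feeds.maxMem, with hypothetical values):

scsctl storage config set -d "feeds.maxMem=500000"
scsctl storage config set -d --instance "{instance ID}" "feeds.maxMem=250000"
scsctl storage config unset -d --instance "{instance ID}" "feeds.maxMem"

After the unset, the instance again inherits the value 500000 from its group-level override.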

Updating Network Settings

Shared network settings, such as DNS information and NTP time sources, can be updated as needed.

DNS Servers

Update the list of DNS servers (specified as a comma- or space-delimited list):

scsctl network_boot config set -d "network.dnsServers={new DNS servers}"

This also requires that the DHCP server be updated so the setting can be made available to booting Storage nodes.

scsctl init dhcp {reserved ranges}

NTP Servers

Update the list of NTP servers (specified as a comma- or space-delimited list):

scsctl platform config set -d "network.ntpServers={new NTP servers}"

This also requires that the DHCP server be updated so the setting can be made available to booting Storage nodes.

scsctl init dhcp {reserved ranges}

Swarm (Internal) Network MTU

Network MTU for the entire Swarm storage cluster is governed by the MTU set on the internal network interface of SCS. This value is put into DHCP configuration during the init dhcp process, and served to all storage nodes on boot.

Caution

It is important that this is done after the init wizard has been run. The wizard may modify the internal network interface definition and overwrite any MTU updates. If the wizard is run again, then MTU updates will need to be re-applied.

  1. Update the MTU on the internal network interface.

  2. List the interface details on the SCS to ensure that the change is correct.

  3. Re-initialize DHCP to apply the changes to any future booting storage nodes:

    scsctl init dhcp {reserved ranges}

Swarm (Internal) Network Gateway

The network gateway for the entire Swarm storage cluster is governed by a setting available in SCS as of version 1.5. The setting is provided by the network_boot component, and is called network.gateway. By default, this setting points to the IP address of the internal network interface of SCS, but may be overridden by normal means using the CLI. This value is put into DHCP configuration during the init dhcp process and served to all storage nodes on boot.

Caution

Verify that this is done AFTER the init wizard has been run. The wizard may modify the internal network interface definition and overwrite the internal interface IP address. If the wizard is run again, then any custom gateway definition may need to be re-applied.

  1. Update the network.gateway setting.

    scsctl network_boot config set -d "network.gateway={gateway IP address}"
  2. Re-initialize DHCP to apply changes to any future booting storage nodes. See Configure DHCP.

    scsctl init dhcp {reserved ranges}

    It is recommended to check the bash history on the SCS to view the prior command and its settings. Run the following command on the SCS to view the last DHCP setting command used.

    history | grep "scsctl init dhcp" | tail -1

Updating Network Bonding in Swarm Storage

Swarm Storage supports customizing network bonding for NICs and bonding mode. Additionally, a sysctl file may be specified for storage nodes. Refer to the following sections for bonding NICs and/or mode. In either case, the setting(s) must be applied to the PXE boot system before the new values are available to booting storage nodes.

Relevant bonding information can be found at Network Devices and Priority.

Required

The “Apply the Setting to the PXE Boot System” step is required. If the setting is not properly applied to the PXE boot system, then storage nodes will not receive the updated bonding mode during the boot process.

Bonding NICs

Update the bonding NICs setting in SCS:

scsctl network_boot config set -d "kernel.bondingNics={comma-delimited list of NICs}"

The list of NICs should look like: eth0,eth1, with whatever values are appropriate.

Confirm the new setting value:

scsctl network_boot config show -d "kernel.bondingNics"

Bonding Mode

Update the bonding mode setting in SCS:

scsctl network_boot config set -d "kernel.bondingMode={new bonding mode}"

Confirm the new setting value:

scsctl network_boot config show -d "kernel.bondingMode"

Apply the Setting to the PXE Boot System

Restart the SCS services to apply this setting:

systemctl restart swarm-platform

Once all services have fully come back online (may take 2-3 minutes), storage nodes will receive the new bonding mode the next time they boot up.

Support for kernel.sysctlFileUrl

When a blob/static_file named SYSCTL is present for a node, a URL will be injected into node.cfg for kernel.sysctlFileUrl.

scsctl storage static_file set -d -f {path to local file on disk} SYSCTL

Alternatively, a different URL may be provided for kernel.sysctlFileUrl directly in the storage component. If this approach is used, the SYSCTL blob must NOT be present.

scsctl storage config set -d "kernel.sysctlFileUrl={url to file}"

Use only one of these two approaches; either option is fine depending on customer environment requirements.
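For reference, the file referenced by either approach is a standard Linux sysctl configuration file. A minimal hypothetical example:

# Example sysctl settings for storage nodes (keys and values are placeholders)
net.ipv4.tcp_keepalive_time = 600
vm.swappiness = 1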

Updating Trusted Root Certificates

When communicating with remote servers that use TLS, custom trusted root (CA) certificates may be specified. These certificates must be PEM-formatted, with all newlines replaced with a literal \n. For example:

line1
line2

…would become:

line1\nline2

Once the certificate string is properly formatted (denoted as CERT_STRING in the example below), apply it to SCS:

scsctl platform config set -d 'organization.certificates=CERT_STRING'
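The newline conversion described above can be scripted. The following sketch generates the escaped string from a certificate file (cert.pem and its two-line contents are hypothetical stand-ins for a real PEM certificate):

```shell
# Create a hypothetical two-line file standing in for a PEM certificate
printf 'line1\nline2\n' > cert.pem

# Join the lines with a literal backslash-n separator, as SCS expects
CERT_STRING=$(awk 'NF {printf "%s%s", sep, $0; sep="\\n"}' cert.pem)
printf '%s\n' "$CERT_STRING"   # line1\nline2
```

The resulting value can then be passed to the scsctl platform config set command shown above.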

Updating Client-Facing IP Address

Best practice for SCS is to use a static IP address for the interface that will be receiving client requests. If that IP address changes, SCS may have issues starting up under certain circumstances. To resolve this, run the following commands on the SCS server:

scsctl init config_update --external-interface {interface name}

To obtain a list of available interfaces, use ip addr show.

scsctl init wizard --build-platform-pod

scsctl init config_update --finalize

Administrative Credentials

The SCS server maintains an administrator user, “admin”, that has full rights within the Swarm site. This user also serves as the administrative user within the Swarm Storage management API. Credentials may be updated at any time, and updates are pushed to the Storage cluster to guarantee the two use the same credentials.

Required

If administrative credentials have already been set within SCS, you must be logged in to the CLI to perform these operations. The CLI credentials must be updated whenever either the user name or password changes.

Setting the Administrative Password

Update the administrative password:

scsctl platform config set -d "admin.password={new password}"

Updating CLI Credentials

The CLI requires knowing the administrative credentials to perform operations against the SCS server. To set these credentials:

scsctl auth login --user "{administrative user name}"

The CLI then securely prompts for the administrative password and proceeds with authentication.

Upgrading Swarm Storage

Warning

SCS may need to be upgraded before Swarm Storage. Verify the Swarm Storage version and SCS version are compatible before upgrading Swarm Storage. See Upgrading to the Latest SCS Version for CentOS 7 for more details.

Obtain the component bundle for the latest version from DataCore Downloads to upgrade the Swarm Storage software of a running cluster. Transfer the bundle to the SCS server and run the following commands to register it with SCS:

  1. Use the command below to get the list of Storage software versions that have been registered with SCS.

    scsctl storage software list

If the desired version is already present in the list, continue with step 5 when ready to boot Swarm Storage nodes to the desired version.

  2. Unpack the downloaded Swarm Storage software bundle.

  3. Navigate to the Storage directory within that bundle and run the following command to register the Swarm Storage software with SCS (insert the correct version string where needed).

    cd Swarm-{desired version}/Storage/
    scsctl repo component add -f swarm-scs-storage-{desired version}.tgz
  4. Verify the desired version is present in the list of available versions.

    scsctl storage software list

The software has been successfully registered if the latest version appears in the list. It will not be used for booting nodes unless it is also marked as (active) in the list.

Caution

Activating a version means that any nodes that reboot will use the binaries for the activated version. Do not proceed with the next step until ready for storage nodes to boot with the changed version.

  5. Activate the desired version to complete the upgrade.

    scsctl storage software activate

    Choose the desired version in the menu. The activated version will be used the next time that storage nodes reboot.
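When the desired version is not yet registered, steps 2 through 5 above can be summarized as the following sequence (the version string is a placeholder):

cd Swarm-{desired version}/Storage/
scsctl repo component add -f swarm-scs-storage-{desired version}.tgz
scsctl storage software list
scsctl storage software activate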

Removing an Installed Version of Swarm Storage

Info

This removes all software, published setting defaults, and configuration files associated with a specific version of Swarm storage software. The software binaries, settings, and configuration files for other installed versions (and other installed components) remain unaffected by this action.

The following version texts are examples only. Verify the list of installed versions, and note which version is currently marked as active.

scsctl storage software list
14.0.1 (14.0.1) (active)
14.1.2 (14.1.2)

Important

If the version to be removed is currently the active version, it is strongly recommended that another version (if available) be marked active prior to removing the desired version.

To mark another version as active:

scsctl storage software activate "14.1.2 (14.1.2)"
activated

Verify activation if desired:

scsctl storage software list
14.0.1 (14.0.1)
14.1.2 (14.1.2) (active)

Remove the desired version, using the entire version string:

Important

If the version being removed is the only installed version, or if another version cannot be activated for any reason, then the --force flag will need to be added to the command in order to remove the version.

scsctl repo component delete storage "14.0.1 (14.0.1)"
removed

Verify removal if desired:

scsctl storage software list
14.1.2 (14.1.2) (active)

Backing Up SCS

SCS allows a full backup of all components, configurations, settings overrides, and binaries for support and maintenance purposes. The CLI must be logged in since this backup includes values for settings marked as “secure”.

This backup allows for SCS settings to be restored in the event that the SCS needs to be rebuilt. A full backup allows for this kind of restoration; a lightweight backup is useful for support purposes only.

Full Backup

Obtain a full backup of all data:

scsctl backup create --output "{path to output backup file}"

Lightweight Backup

Obtain a “lightweight” backup that excludes repo data (binaries, etc.):

scsctl backup create --no-repo --output "{path to output backup file}"

Backup Restore

Perform a restore of all data from a backup:

scsctl backup restore "{path to backup file}"
