Platform's CLI (command-line interface) is installed by default on the Platform server and supports common administrative tasks.


Rebooting a Cluster

There are two CLI options for rebooting a cluster: full versus rolling.

  • Full reboot notifies every chassis in the cluster to reboot itself at the same time. The entire cluster is temporarily offline as the chassis reboot.

    Full reboot
    platform restart storagecluster --full
  • Rolling reboot is a long-running process that keeps the cluster operational by rebooting one chassis at a time, until the entire cluster is rebooted. A rolling reboot includes several options, such as limiting the reboot to one or more chassis:

    Rolling reboot
    platform restart storagecluster --rolling
    	[--chassis <comma-separated system IDs>]
    	[--skipConnectionTest]
    	[--skipUptimeTest]
    	[--continueWithOfflineChassis]
    	[--stopOnNodeError]

...

Requirements

Before a rolling reboot can begin, these conditions must be met:

  1. All chassis targeted for rebooting must be running and reachable. If chassis are offline, set a flag to have them ignored:

    • To skip the connection check altogether, add the flag --skipConnectionTest

    • To have the reboot process ignore currently offline chassis, add the flag --continueWithOfflineChassis

  2. All chassis must have an uptime greater than 30 minutes. To skip this requirement, add the flag --skipUptimeTest
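The interplay of these checks and their skip flags can be pictured in Python. This is a hypothetical helper, not part of the CLI; the parameter names are illustrative:

```python
THIRTY_MINUTES = 30 * 60  # uptime threshold, in seconds

def preflight_ok(reachable, uptime_seconds,
                 skip_connection_test=False,
                 skip_uptime_test=False,
                 continue_with_offline_chassis=False):
    """Sketch of the documented preflight rules for one chassis."""
    # An unreachable chassis blocks the reboot unless a skip flag is set.
    if not reachable and not (skip_connection_test or continue_with_offline_chassis):
        return False
    # Uptime must exceed 30 minutes unless --skipUptimeTest is given.
    if uptime_seconds <= THIRTY_MINUTES and not skip_uptime_test:
        return False
    return True
```

For example, an offline chassis passes only when --skipConnectionTest or --continueWithOfflineChassis is supplied.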

Managing Rolling Reboots

You have 10 seconds to cancel a rolling reboot before it begins. Once a rolling reboot has started, it stops and reports an error if any of the following occur:

  1. A chassis is offline when it is selected for reboot. To have the reboot process ignore currently offline chassis, add the flag --continueWithOfflineChassis.

  2. The reboot process continues if the volumes come up but a node goes into an error state. To have the reboot process stop, add the flag --stopOnNodeError.

  3. The chassis boots with a number of volumes that does not match the number present before the chassis was rebooted. A volume is considered up if it has a state of ok, retiring, retired, or unavailable.

  4. The chassis does not come back online after 3 hours have passed.

If a rolling reboot has stopped due to an error, you can resume the reboot using the resume command below after the error is resolved.
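The volume check in the third condition counts only volumes in certain states; a sketch of that rule in Python (a hypothetical helper, with the "up" states taken from the list above):

```python
# States the rolling reboot counts as "up" for a volume (per the list above).
UP_STATES = {"ok", "retiring", "retired", "unavailable"}

def volume_counts_match(states_before, states_after):
    """Return True if the chassis came back with the same number of up
    volumes it had before the reboot (illustrative, not the CLI's code)."""
    def up_count(states):
        return sum(1 for s in states if s in UP_STATES)
    return up_count(states_after) == up_count(states_before)
```

A chassis that returns with one of its volumes in an unlisted state would fail this check and stop the rolling reboot.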

Status check — To retrieve the status of a rolling reboot task, use the following commands:

Rolling reboots remaining
platform status rollingreboot

Rolling reboots completed
platform status rollingreboot --completed

Global states — A rolling reboot task can have the following global states:

  • in-progress: The rolling reboot is currently running.

  • paused: The rolling reboot is paused (using the pause command).

  • completed: The rolling reboot finished successfully.

  • cancelled: The rolling reboot was cancelled per a user request.

  • error: The reboot is stopped due to an error of some kind.

Chassis states — The status listing also shows the status for each chassis that is processed by the rolling reboot task. Each chassis can have one of the following states:

  • pending: The rolling reboot task has not processed the chassis.

  • in-progress: The rolling reboot task is in the process of rebooting the chassis.

  • completed: The chassis was successfully rebooted.

  • removed: The chassis was removed from the list of chassis to process after the rolling reboot was started (using the delete rolling reboot command).

  • error: The chassis encountered an error of some kind.

  • abandoned: The chassis was being processed when a user cancelled the rolling reboot.

  • dropped: The rolling reboot was in the process of waiting for the chassis to reboot when a user request was made to move to the next chassis (using the --skip flag).

  • offline: The chassis was already offline when the reboot task attempted to reboot the chassis.

Cancel reboot — To cancel (not pause) an active rolling reboot, issue the delete command, which stops the reboot process at the earliest moment; a cancelled reboot cannot be restarted later.

...

Exclude from reboot — To exclude one or more chassis that have not yet been rebooted from a currently running rolling reboot:

platform delete rollingreboot --chassis <comma-separated system IDs>

Pause reboot — To pause the current rolling reboot process so that it can be restarted later:

...

No-wait reboot — Normally, the rolling reboot process waits up to 3 hours for a rebooted chassis to come back online before proceeding to the next. To force the process to stop waiting and move to the next chassis, use the --skip flag:

...

  1. Create a node.cfg file and add any node-specific Swarm settings to apply, or leave it blank to accept all current settings.

  2. Power on the chassis for the first time.

  3. Wait until the chassis enlists and powers off.

  4. Deploy the new server:

    platform deploy storage -n 1 -v <#.#.#-version-to-deploy>

Use the following process to deploy an individual chassis by system ID:

  1. Create a node.cfg file and add any node-specific Swarm settings to apply, or leave it blank to accept all current settings.

  2. Get a list of chassis that are available for deployment by using the following command:

    platform list nodes --state New
  3. Choose a System ID to deploy a single chassis using a command like the following:

    platform deploy storage -y 4y3h7p -v 9.2.1

Service Proxy

If the Service Proxy is running on the Platform Server when you add or remove chassis, restart the service so that it picks up the new chassis list:

platform restart proxy

Reconfiguring the Cluster

You can modify the cluster-wide Swarm configuration at any time using the CLI and a configuration file. The reconfiguration process is additive: all existing settings that are not referenced in the file are preserved. That is, if you define only two settings, Platform overwrites or adds only those two settings.
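The additive behavior can be pictured as a dictionary merge. This is a sketch of the semantics, not Platform's actual implementation, and the setting names are illustrative:

```python
def apply_reconfiguration(current_settings, uploaded_settings):
    """Settings named in the uploaded file overwrite or extend the
    current configuration; every other setting is preserved as-is."""
    merged = dict(current_settings)   # start from all existing settings
    merged.update(uploaded_settings)  # overwrite/add only what the file names
    return merged

# Defining only two settings touches only those two:
before = {"cluster.name": "prod", "log.level": 30, "network.gateway": "10.0.0.1"}
changes = {"log.level": 10, "log.host": "10.0.0.9"}
after = apply_reconfiguration(before, changes)
```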

  1. Create a supplemental .cfg file (such as changes.cfg) and specify any new or changed Swarm settings to apply.

  2. To upload the configuration changes, use the following CLI command:

    platform upload config -c {Path to .cfg}

The CLI parses the uploaded configuration file for changes to make to Platform.

If Swarm was running during the upload, Platform Server attempts to communicate the new configuration to Swarm. Any setting that cannot be communicated to Swarm requires a reboot of the Swarm cluster in order to take effect. For each setting contained in the file, the CLI indicates whether the setting was communicated to the Storage cluster and whether a reboot is required. The Swarm UI also indicates which settings require rebooting.

Example: Increase Swarm processes

Note (Swarm 10): Swarm Storage 10 has a single-process architecture, so the configuration setting chassis.processes is no longer used and cannot be increased.

...

To set all chassis throughout the cluster to a higher number of processes, create a configuration file and upload it to Platform Server.

  1. Create a text file, such as update.cfg, containing only the setting to be changed.

    chassis.processes = 6
  2. To upload the configuration changes, use the following CLI command:

    platform upload config -c {Path to update.cfg}

    Note: Include the -m <mac-address> parameter if you want to target the update to specific chassis.

...

  1. Add the configuration change directly:

    platform add config --name "chassis.processes" --value 6

Reconfiguring a Chassis

You can modify the node-specific settings for a single chassis by the same process, but you need to specify the MAC address of any valid NIC on that chassis.

  1. Create a .cfg file (such as changes.cfg) and specify any new or changed node-specific settings to apply.

  2. To upload the configuration changes, use the following CLI command:

    platform upload config -c {Path to .cfg} -m {mac address}

The CLI parses the uploaded configuration file for changes to make to that chassis.

Releasing a Chassis

There may be times when you need to release a chassis from the Swarm cluster, either for temporary maintenance or for permanent removal.

Important: To guarantee a clean shutdown, power off the chassis through the UI or SNMP before running release commands.

Temporary release — Temporary release of a chassis assumes that the chassis is added back into the cluster at a later time. Releasing a chassis allows deallocating the cluster resources, such as IP addresses, or wiping and resetting the configuration.

Once the chassis is powered off, you can release the chassis from the Swarm cluster:

Temporary removal
platform release storagechassis -y <system-id>

Permanent removal — Permanent removal is for retiring a chassis altogether or changing the chassis' main identifying information, such as changing a NIC. Removing the chassis from management causes the chassis to start the provisioning life cycle as if it were a brand new chassis, if it is powered on again.

Remove the chassis from Platform Server management permanently once the chassis is powered off:

Permanent removal
platform release storagechassis -y <system-id> --remove

Resetting to Defaults

Issue the following commands to clear out all existing setting customizations from a given chassis or the entire cluster.

Note: These commands require a cluster reboot because the reset is not communicated to the Storage network dynamically.

Delete All Default Chassis Settings
platform delete allchassisconfig

Delete All Cluster Settings
platform delete allclusterconfig

Managing Subclusters

Assign chassis to subclusters after all the chassis are deployed and running.

Use the list command to see the current subcluster assignments:

List subclusters
platform subcluster list

To assign a chassis to a subcluster, use the assign command:

Add to subcluster
platform subcluster assign -y <system-id> --subcluster <subcluster-name>
Note: Reassignment is not immediate. Allow time for every node on the chassis to be migrated to the new subcluster.

Use the unassign command to remove a chassis from a subcluster:

Remove from subcluster
platform subcluster unassign -y <system-id>
Important: Reboot the chassis for the subcluster removal to take effect.

Changing the Default Gateway

By default, the Platform Server configures Swarm Storage to use the Platform Server as the default gateway.

To override this behavior, either add a network.gateway setting to the cluster configuration file or issue the following command:

platform add config --name "network.gateway" --value "<ip-of-gateway>"

Managing Administrators

With one exception, modifying the admin users for the Storage cluster requires the Storage cluster to be up and running before the operations can be done. The one exception is the "snmp" user, which can have the password set while the cluster is down or before the cluster is booted for the first time.

Important: Changing the password for the "snmp" user requires a full cluster reboot for the change to take effect.

Adding or Updating Users

Important: Modifying passwords for the admin user requires restarting the Service Proxy, if installed. It can also require updates to Gateway configuration.

...

Use the following CLI command to add a new admin user:

Add admin user
platform add adminuser
	[--askpassword]
	[--username <username>]
	[--password <user password>]
	[--update]

The --askpassword flag allows avoiding specifying a password on the command line by providing the password using stdin. When this flag is used, a prompt displays to enter a new/updated password for the user. Alternatively, the Linux pipe functionality can be used:

cat password.txt | platform add adminuser --askpassword --username admin --update
Important: If updating the password for an existing user, use the --update flag.

...

Use the following CLI command to delete an admin user from the cluster:

Delete admin user
platform delete adminuser --username <username>

Upgrading Swarm Storage

...


To upgrade Swarm Storage in a live cluster, use the CLI to upload the new version and then deploy it to the running nodes, either by restarting the entire cluster or each chassis in turn.

Note: The deploy storage --upgrade command is used for both upgrades and downgrades of Storage versions.

  1. Upload the new version of the Swarm Storage software to Platform server, verifying that the <version-name> matches the version of Swarm Storage being uploaded:

    platform upload storageimages -i <path-to-zip> -v <version-name>
    
    platform upload storageimages -i ./storage-9.6.0-x86_64.zip -v 9.6
    Note: The zip file above is contained within the Swarm-{version}-{date}.zip file. Inside this zip, a folder called Storage contains a file called storage-{version}-x86_64.zip. This is the zip file to use for the command above.

  2. Get a full listing of all nodes along with IPs, MAC addresses, and system IDs:

    platform list nodes --state Deployed
  3. Using the list of system IDs, deploy the upgrade on each of the nodes. To restart each node immediately after its upgrade, run the restart command as well:

    platform deploy storage --upgrade -v 9.2.1 -y <system-id>
    platform restart storagenode -y <system-id>
  4. If you did not restart each node individually, restart the cluster now, either full or rolling:

    platform restart storagecluster --full
    or
    platform restart storagecluster --rolling [<options>]
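For a large cluster, step 3 is easy to script. A sketch that only generates the per-node command lines for review before running them (the helper is hypothetical; the system IDs would come from the platform list nodes output):

```python
def upgrade_commands(version, system_ids, restart_each=True):
    """Build the per-node deploy (and optional restart) commands from
    step 3 so they can be reviewed before being executed."""
    cmds = []
    for sid in system_ids:
        cmds.append(f"platform deploy storage --upgrade -v {version} -y {sid}")
        if restart_each:
            cmds.append(f"platform restart storagenode -y {sid}")
    return cmds
```

Passing restart_each=False leaves the nodes un-restarted, matching the case where you finish with a full or rolling cluster restart in step 4.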

Managing Service Proxy

Status — Use this command to check the status of the Service Proxy:

platform status proxy

Upgrade — Use the CLI to upload the version and deploy it to upgrade the Service Proxy on the Platform server:

platform deploy proxy -b <path-to-zip> --upgrade
Note: After a Service Proxy upgrade, it takes several minutes for the UI to come back up.

Configuring DNS

The Storage nodes may need to resolve names for outside resources, such as Elasticsearch or Syslog. To do so, configure the DNS server on the Platform Server to communicate with outside domains.

Option 1: Forwarding

A Slave/Backup DNS zone is a read-only copy of the DNS records; it only receives updates from the Master zone of the DNS server.

If no DNS master/slave relationships are configured, you can perform simple forwarding by having the domain managed by the Platform server forward all lookups to outside domains:

  1. Edit /etc/bind/named.conf.options and add the following line after the "listen-on-v6" line:

    forwarders {172.30.0.202;};
  2. Run the following command to restart bind9 on the Platform Server:

    sudo systemctl restart bind9

Option 2: Configuring a Slave DNS Zone

If an external DNS zone is configured, you can have the Platform Server become a slave DNS of that zone; the reverse can be done to allow other systems to resolve names for servers managed by the Platform server.

This process assumes the external DNS server is configured to allow zone transfers to the Platform server. The DNS server on the Platform server is not configured to restrict zone transfers to other DNS slaves.

  1. Edit /etc/bind/named.conf.local and add the following line at this location:

    // slave other local zones
    include "/etc/bind/named.conf.slaves";
  2. Create a new file called /etc/bind/named.conf.slaves and add the settings in this format:

    // local slave zones
    zone "example.com" in {
        type slave;
        masters {172.30.0.100; };
        file "/var/cache/bind/slave/zone-example.com";
    };
  3. Run the following command to restart bind9 on the Platform Server:

    sudo systemctl restart bind9

Configuring Docker Bridge

To configure or modify the network information used by the default Docker (docker0) bridge, edit the file /etc/docker/daemon.json. Add networking properties as properties to the root JSON object in the file:

...

The bip property sets the IP address and subnet mask to use for the default docker0 bridge. See the Docker documentation for details on the other properties.
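For illustration, a minimal /etc/docker/daemon.json that overrides only the bridge address; the address range shown is a placeholder, so choose one that does not collide with your existing networks:

```json
{
  "bip": "172.28.0.1/16"
}
```

After editing the file, restart the Docker daemon for the change to take effect.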