...

  • Full reboot notifies every chassis in the cluster to reboot itself at the same time. The entire cluster is temporarily offline as the chassis reboot.

    Full reboot

    Code Block
    languagebash
    platform restart storagecluster --full
  • Rolling reboot is a long-running process that keeps the cluster operational by rebooting the cluster one chassis at a time, until the entire cluster has been rebooted. A rolling reboot includes several options, such as limiting the reboot to one or more chassis:

    Rolling reboot

    Code Block
    languagebash
    platform restart storagecluster --rolling 	
    	[--chassis <comma-separated system IDs>]
    	[--skipConnectionTest]
    	[--skipUptimeTest]
    	[--continueWithOfflineChassis]
    	[--stopOnNodeError]
Info

Requirements

Before a rolling reboot can begin, these conditions must be met:

  1. All chassis targeted for rebooting must be running and reachable. If chassis are offline, set a flag to have them ignored:

    • To skip the connection check altogether, add the flag --skipConnectionTest

    • To have the reboot process ignore currently offline chassis, add the flag --continueWithOfflineChassis

  2. All chassis must have an uptime greater than 30 minutes. To skip this requirement, add the flag --skipUptimeTest
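The flags above can be combined on one command line. As a sketch only (the system IDs shown are hypothetical), the following limits a rolling reboot to two chassis and ignores any that are currently offline:

Code Block
languagebash
platform restart storagecluster --rolling --chassis 4y3h7p,8a2k9q --continueWithOfflineChassis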

Managing Rolling Reboots

You have 10 seconds to cancel a rolling reboot before it begins. Once a rolling reboot has started, it stops and reports an error if any of the following occur:

...

If a rolling reboot has stopped due to an error, resume the reboot using the resume command below after the error is resolved.

Status check —  To retrieve the status of a rolling reboot task, use the following commands for reboots remaining and reboots completed:

...

  • in-progress: The rolling reboot is currently running.

  • paused: The rolling reboot is paused (using the pause command).

  • completed: The rolling reboot finished successfully.

  • cancelled: The rolling reboot was cancelled per a user request.

  • error: The reboot is stopped due to an error of some kind.

...

  1. Create a node.cfg file and add any node-specific Swarm settings to apply, or leave it blank to accept all current settings.

  2. Power on the chassis for the first time.

  3. Wait until the chassis enlists and powers off.

  4. Deploy the new server:

    Code Block
    languagebash
    platform deploy storage -n 1 -v <#.#.#-version-to-deploy>

Use the following process to deploy an individual chassis by system ID:

  1. Create a node.cfg file and add any node-specific Swarm settings to apply, or leave it blank to accept all current settings.

  2. Get a list of chassis that are available for deployment by using the following command:

    Code Block
    languagebash
    platform list nodes --state New
  3. Choose a System ID to deploy a single chassis using a command like the following:

    Code Block
    languagebash
    platform deploy storage -y 4y3h7p -v 9.2.1

Service Proxy

Restart the service so it picks up the new chassis list if the Service Proxy is running on the Platform Server when adding or removing chassis:

Code Block
languagebash
platform restart proxy

Reconfiguring the Cluster

Modify the cluster-wide Swarm configuration at any time using the CLI and a configuration file. The reconfiguration process is additive: all existing settings that are not referenced in the file are preserved. That is, Platform overwrites or adds only those two settings if only two settings are defined in the file.

  1. Create a supplemental .cfg file (such as changes.cfg) and specify any new or changed Swarm settings to apply.

  2. To upload the configuration changes, use the following CLI command:

    Code Block
    languagebash
    platform upload config -c {Path to .cfg}
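As an illustrative sketch, a minimal changes.cfg might contain only the two settings named elsewhere in this guide; all other cluster settings would be preserved (the values shown are assumptions, not recommendations):

Code Block
languagetext
network.gateway = 172.30.0.1
chassis.processes = 6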

The CLI parses the uploaded configuration file for changes to make to Platform.

Platform Server attempts to communicate the new configuration to Swarm if Swarm was running during the upload. Any settings that cannot be communicated to Swarm require a reboot of the Swarm cluster to take effect. The CLI indicates if the setting was communicated to the Storage cluster and if a reboot is required for each setting contained in the file. The Swarm UI also indicates which settings require rebooting.

...

  1. Add the configuration change directly:

    Code Block
    languagetext
    platform add config --name "chassis.processes" --value 6

Reconfiguring a Chassis

Modify the node-specific settings for a single chassis by the same process, but the MAC address of any valid NIC on that chassis needs to be specified.

  1. Create a .cfg file (such as changes.cfg) and specify any new or changed node-specific settings to apply.

  2. To upload the configuration changes, use the following CLI command:

    Code Block
    languagebash
    platform upload config -c {Path to .cfg} -m {mac address}
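For example, a sketch of this command with a hypothetical file path and MAC address (substitute the address of any valid NIC on the target chassis):

Code Block
languagebash
platform upload config -c ./changes.cfg -m 00:50:56:aa:bb:cc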

...

Releasing a Chassis

There may be times when a chassis needs to be released from the Swarm cluster, either for temporary maintenance or for permanent removal.

Info

Important

To guarantee a clean shut down, power off the chassis through the UI or SNMP before running release commands.

Temporary release — Temporary release of a chassis assumes that the chassis is added back into the cluster at a later time. Releasing a chassis allows deallocating the cluster resources, such as IP addresses, or wiping and resetting the configuration.

...

Permanent removal — Permanent removal is for retiring a chassis altogether or changing the chassis' main identifying information, such as changing a NIC. Removing the chassis from management causes the chassis to start the provisioning life cycle as if it is a brand new chassis, if it is powered on again.

Remove the chassis from Platform Server management permanently once the chassis is powered off:

Permanent removal
Code Block
languagebash
platform release storagechassis -y <system-id> --remove

...

Code Block
languagebash
platform delete allclusterconfig

Managing Subclusters

Assign chassis to subclusters after all the chassis are deployed and running.

Use the list command to see the current subcluster assignments:

List subclusters
Code Block
languagebash
platform subcluster list

...

Info

Note

Reassignment is not immediate. Allow time for every node on the chassis to be migrated to the new subcluster.

Use the unassign command to remove a chassis from a subcluster:

Remove from subcluster
Code Block
languagebash
platform subcluster unassign -y <system-id>

...

Changing the Default Gateway

The Platform Server configures Swarm Storage to use the Platform Server as the default gateway by default.

Either add a "network.gateway" to the cluster configuration file or issue the following command to override this behavior:

Code Block
languagebash
platform add config --name "network.gateway" --value "<ip-of-gateway>"

...

With one exception, modifying the admin users for the Storage cluster requires the Storage cluster to be up and running before the operations can be done. The one exception is the "snmp" user, which can have the password set while the cluster is down or before the cluster is booted for the first time.

...

Info

Important

Modifying passwords for the admin user requires restarting the Service Proxy, if installed. It can also require updates to Gateway configuration.

Use the following CLI command to add a new admin user:

Add admin user
Code Block
languagebash
platform add adminuser 	
	[--askpassword]
	[--username <username>]
	[--password <user password>]
	[--update]
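For example, to add a hypothetical user named opsadmin and be prompted interactively for the password rather than passing it on the command line:

Code Block
languagebash
platform add adminuser --username opsadmin --askpassword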

...

  1. Upload the new version of the Swarm Storage software to Platform server, verifying the <version-name> matches the version of Swarm Storage being uploaded:

    Code Block
    languagebash
    platform upload storageimages -i <path-to-zip> -v <version-name>
    
    platform upload storageimages -i ./storage-9.6.0-x86_64.zip -v 9.6


    Note: The zip file above is contained within the Swarm-{version}-{date}.zip file. Inside this zip, a folder called Storage contains a file called storage-{version}-x86_64.zip.

  2. Get a full listing of all of the nodes along with IPs, MAC addresses, and system IDs:

    Code Block
    languagebash
    platform list nodes --state Deployed 
  3. Using the list of system IDs, deploy the upgrade on each of the nodes. To restart the node immediately after upgrade, run the restart command as well:

    Code Block
    languagebash
    platform deploy storage --upgrade -v 9.2.1 -y <system-id>
    platform restart storagenode -y <system-id>
  4. Restart the cluster now, either full or rolling, if each node is not restarted individually:

    Code Block
    languagebash
    platform restart storagecluster --full
    or
    platform restart storagecluster --rolling [<options>]

Managing Service Proxy

Status — Use this command to check the status of the Service Proxy:

Code Block
languagebash
platform status proxy

Upgrade — Use the CLI to upload the version and deploy it to upgrade the Service Proxy on the Platform server:

Code Block
languagebash
platform deploy proxy -b <path-to-zip> --upgrade

...

A Slave/Backup DNS zone is a read-only copy of the DNS records; it receives updates from the Master zone of the DNS server.

Perform forwarding by having the domain managed by the Platform server forward all lookups to outside domains if no DNS master/slave relationships are configured:

  1. Edit /etc/bind/named.conf.options and add the following line after the "listen-on-v6" line:

    Code Block
    forwarders {172.30.0.202;};
  2. Run the following command to restart bind9 on the Platform Server:

    Code Block
    languagebash
    sudo systemctl restart bind9

Option 2: Configuring a Slave DNS Zone

Have the Platform Server become a slave DNS of that zone if an external DNS Zone is configured; the reverse can be done to allow other systems to resolve names for servers managed by the Platform server.

This process assumes that the external DNS server is configured to allow zone transfers to the Platform server. The DNS server on the Platform server is not configured to restrict zone transfers to other DNS slaves.

...

Configuring Docker Bridge

Edit the file /etc/docker/daemon.json to configure or modify the network information used by the default Docker (docker0) bridge. Add networking properties as properties to the root JSON object in the file:

...

The bip property sets the IP address and subnet mask to use for the default docker0 bridge. See the Docker documentation for details on the different properties.
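As a minimal sketch, a /etc/docker/daemon.json that moves the docker0 bridge might look like the following (the address range is an illustrative assumption, not a recommendation):

Code Block
languagetext
{
  "bip": "172.31.0.1/24"
}

Restart the Docker daemon after editing the file for the change to take effect.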