Deprecated
The Legacy Admin Console (port 90) is still available but has been replaced by the Swarm Storage UI. (v10.0)
This section describes how to manage and maintain your cluster using the legacy Admin Console.
The Cluster Status page appears when you log in to the legacy Admin Console, giving you a comprehensive view of the cluster as a whole and letting you perform cluster-wide actions, such as restarting and shutting down the cluster and modifying the cluster settings.
Authenticating Cluster-wide Actions
Shutting down and restarting the cluster are cluster-wide actions that require authentication. Authentication is also required when changing the cluster settings to:
- Manage domains
- Modify the logging host configuration
- Change the replication multicast value
- Suspend or resume volume recovery
- Set the power-saving mode
To set up the credentials that authorize these cluster-wide actions,
- Open the node and/or cluster configuration file with the appropriate user credentials.
- Modify the security.administrators parameter.
By default, the parameter predefines an admin user, which lets you authenticate with Basic authentication (the password is sent in clear text from the browser to the legacy Admin Console).
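For illustration, a minimal excerpt of what this parameter might look like in the configuration file. The exact file name and value syntax vary by Swarm version, so treat this as an assumption to verify against your own node or cluster configuration file:

```
# Hypothetical node/cluster configuration excerpt -- verify the exact
# syntax against your Swarm version before applying.
# Replaces the default admin password and adds a named administrator.
security.administrators = {'admin': 'NewStrongPassword', 'jdoe': 'AnotherStrongPassword'}
```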
For added security, take the following precautions:
- Change the admin password immediately.
- Implement real user names.
- Encrypt user passwords using Digest authentication.
See Encrypting Swarm passwords.
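As a sketch of what Digest encryption involves: HTTP Digest authentication stores an MD5 hash of `user:realm:password` (the HA1 value) rather than the clear-text password. The realm string below is a placeholder assumption; see Encrypting Swarm passwords for the realm and exact procedure your version requires.

```python
# Sketch: computing an HTTP Digest HA1 hash (MD5 of "user:realm:password").
# The realm value is a placeholder -- use the realm your Swarm cluster expects.
import hashlib

def digest_ha1(user: str, realm: str, password: str) -> str:
    return hashlib.md5(f"{user}:{realm}:{password}".encode("utf-8")).hexdigest()

print(digest_ha1("admin", "EXAMPLE-REALM", "NewStrongPassword"))
```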
Shutting Down or Restarting the Cluster
To shut down or restart all nodes in the cluster,
- Log into the legacy Admin Console with admin credentials.
- Click Shutdown Cluster or Restart Cluster in the console.
- When prompted, confirm the shutdown or restart.
Finding Nodes in the Cluster
Listing of nodes
The cluster node list provides a high-level view of the active nodes in the system. Click the maximize button next to each IP address in the Node IP column to view the storage volumes within each cluster node.
Status information is transmitted to the legacy Admin Console periodically, so it can take up to two minutes before the node data in the Cluster Status window is updated. Because of this propagation delay, the data shown for a node can differ depending on which node you are connected to. For best results, remain connected to the same node.
The volume labels next to each Swarm node are listed in arbitrary order. Although the order of the labels in the legacy Admin Console does not correspond to the physical drive slots in the node chassis, the volume names do match the physical drives in the machine chassis. If the cluster is configured to use subclusters, expand each subcluster name to display the corresponding volume information.
If you have a large cluster, you can search for nodes by IP address and by status.
Finding nodes by IP
For large clusters with multiple nodes, you can search for a node using the Node IP search field in the legacy Admin Console.
To locate the targeted node, enter the node IP address in the field and click Search.
Finding nodes by Status
To display nodes or volumes with a specific status, select a Status from the drop-down menu.
Statuses include:
- OK: The node is working and there are no errors.
- Alert, Warning: The node or volume has experienced one or more errors. Click the IP Address link to drill down to the node and view the related error.
- Initializing: The brief state after a node boots, while it reads cluster persisted settings and is not yet ready to accept requests.
- Maintenance: The node was shut down or rebooted by an administrator from either SNMP or the legacy Admin Console and should not be considered missing for recovery purposes. By default, a node can remain in the Maintenance state for 3 hours before it transitions to Offline and the cluster starts recovery of its content. Maintenance mode is not initiated when the node is power-cycled outside of Swarm (either physically on the hardware or via a remote shutdown mechanism such as iDRAC) or when there is a disk error; in both cases, recovery processes start for the node unless recovery is suspended (see the sketch after this list).
- Mounting: The node is mounting one or more volumes, including formatting the disk if it is new and reading all objects on the volume into the RAM index for faster access.
- Offline: The node or volume was previously present in the cluster but is no longer.
- Retiring: The node or volume is in the process of retiring, making sure all its objects are fully protected elsewhere in the cluster and then removing them locally.
- Retired: The node or volume has completed the retiring process and may be removed from the cluster.
- Idle: The nodes or volumes are in power-saving mode after a configurable period of inactivity. (See Configuring Power Management.)
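To make the Maintenance timeout concrete, here is a toy model of the decision described above. It is a hedged sketch, not Swarm's actual code, assuming a 3-hour default window and a flag marking whether the shutdown was administrator-initiated:

```python
# Toy model of the Maintenance-to-Offline decision described above.
# Illustrative only -- Swarm's real logic and settings differ.
from datetime import datetime, timedelta

MAINTENANCE_WINDOW = timedelta(hours=3)  # default window before recovery starts

def should_start_recovery(admin_initiated: bool, last_seen: datetime,
                          recovery_suspended: bool, now: datetime) -> bool:
    if recovery_suspended:
        return False
    if not admin_initiated:
        # Power cycle outside Swarm or a disk error: recover immediately.
        return True
    # Administrator shutdown/reboot: wait out the Maintenance window.
    return now - last_seen > MAINTENANCE_WINDOW
```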
Only matching results will appear on the console when you select a value in the drop-down menu. When you are finished looking at the searched node(s), select View All to redisplay all nodes in the cluster.
Percent Used Indicator
The % Used indicator provides a quick measure of cluster availability, computed from the licensed and total physical space. Used space is calculated against the lesser of the total physical space or the licensed space.
For example, in a cluster with 4 TB of physical space but only 2 TB of licensed space where 1.5 TB of space is used, the console would report 75% Space Used.
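Restated as a small Python sketch (the names are illustrative, not part of any Swarm API):

```python
# Percent used is measured against the lesser of physical and licensed space.
def percent_used(used_tb: float, physical_tb: float, licensed_tb: float) -> float:
    return 100.0 * used_tb / min(physical_tb, licensed_tb)

print(percent_used(1.5, 4.0, 2.0))  # 75.0, matching the example above
```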
The indicators include color highlighting, as described below.
Logical Threshold | Color * | Description | Default Threshold Value
---|---|---|---
OK | Green | Used space is below the console.spaceWarnLevel configurable threshold. | Below 75%
Warning | Yellow | Used space is at or above the console.spaceWarnLevel threshold but below the console.spaceErrorLevel threshold. | At or above 75% but below 90%
Error | Red | Used space is at or above the console.spaceErrorLevel configurable threshold. | At or above 90%
* You can modify these default colors using custom style sheets.
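The thresholds reduce to two comparisons; a sketch, assuming the 75% and 90% defaults shown above:

```python
# Classify used-space percentage against the warn/error thresholds above.
def space_status(pct_used: float, warn_level: float = 75.0,
                 error_level: float = 90.0) -> str:
    if pct_used >= error_level:
        return "Error"    # red
    if pct_used >= warn_level:
        return "Warning"  # yellow
    return "OK"           # green
```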
Displaying Subcluster Information
If your cluster contains subclusters, the Node List will be grouped first by subcluster name and then by node IP address. (If no subcluster name is specified in the node or cluster configuration file, the subcluster name is an IP address.) The first row of each subcluster includes a roll up of the status for the nodes in the subcluster.
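To illustrate the grouping and status roll-up, here is a hedged Python sketch; the field names and the severity ordering are assumptions for illustration, not the console's internal model:

```python
# Group nodes by subcluster and roll up the worst node status per group.
from collections import defaultdict

SEVERITY = {"OK": 0, "Warning": 1, "Alert": 2}

def group_by_subcluster(nodes: list[dict]) -> dict[str, dict]:
    groups: dict[str, dict] = defaultdict(lambda: {"nodes": [], "status": "OK"})
    for node in sorted(nodes, key=lambda n: (n["subcluster"], n["ip"])):
        group = groups[node["subcluster"]]
        group["nodes"].append(node)
        if SEVERITY[node["status"]] > SEVERITY[group["status"]]:
            group["status"] = node["status"]
    return dict(groups)
```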
Example of two subclusters expanded to show member nodes.