Warning |
---|
Deprecated: The Legacy Admin Console (port 90) is still available but has been replaced by the Swarm Storage UI. (v10.0) |
This section describes how to manage and maintain the cluster using the legacy Admin Console.
The Cluster Status page appears when you log in to the legacy Admin Console, providing a comprehensive view of the cluster as a whole and enabling cluster-wide actions, such as restarting or shutting down the cluster and modifying the cluster settings.
Authenticating Cluster-Wide Actions
Shutting down and restarting the cluster are cluster-wide actions that require authentication. Authentication is also required when changing the cluster settings to:
Manage domains
Modify the logging host configuration
Change the replication multicast value
Suspend or resume volume recovery
Set the power-saving mode
To commit cluster-wide actions:
Open the node and/or cluster configuration file with the appropriate user credentials.
Modify the security.administrators parameter.
An admin user is predefined in the parameter and allows Basic authentication by default. The password is sent in clear text from the browser to the legacy Admin Console.
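For illustration only, here is a minimal sketch of what that entry might look like in the node or cluster configuration file. The user names and passwords are placeholders, and the dictionary-style value shown is an assumption; confirm the exact syntax for your Swarm version in the settings reference.

```
# node.cfg (or cluster.cfg) - illustrative only; the value syntax may differ by version
security.administrators = {'admin': 'newAdminPassword', 'opsuser': 'anotherPassword'}
```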
Take the following precautions for added security:
Change the admin password immediately.
Implement real user names.
Encrypt user passwords using Digest authentication.
See Encrypting Swarm Passwords.
See Defining Swarm Admins, Swarm Users, and Swarm Passwords.
Note
After 60 seconds in most browsers, the user name and password are no longer valid. Use caution when entering the name and password in Safari and Chrome browsers, where information may not time out after 60 seconds.
Shutting Down or Restarting the Cluster
To shut down or restart all nodes in the cluster:
Log in to the legacy Admin Console with admin credentials.
Click Shutdown Cluster or Restart Cluster in the console.
When prompted, verify the procedure.
...
Note
Allow several minutes for the nodes to shut down or restart.
Finding Nodes in the Cluster
Listing of Nodes
The cluster node list provides a high-level view of the active nodes in the system. Click the maximize button next to each IP address in the Node IP column to view the storage volumes within each cluster node.
...
The status information is transmitted periodically to the legacy Admin Console, requiring up to two minutes before the node data in the Cluster Status window is updated. Because of the status propagation delay, the data for each node may vary in comparison. For best results, remain connected to the same node to avoid confusion.
The volume labels next to each Swarm node are listed in arbitrary order. While the legacy Admin Console labels do not correspond to physical drive slots in the node chassis, the volume names match the physical drives in the machine chassis. If the cluster is configured to use subclusters, expand each subcluster name to display the corresponding volume information.
Info |
---|
Prior Versions: Nodes running legacy software versions (up to version 3.0) in a mixed-version configuration may not display all data in the legacy Admin Console, such as object counts. |
If the cluster is large, you can search for nodes by IP address and by status.
Finding Nodes by IP
For large clusters with multiple nodes, search for a node using the Node IP search field in the legacy Admin Console.
To locate the targeted node, enter the node IP address in the field and click Search.
Finding Nodes by Status
To display nodes or volumes with a specific status, select a Status from the drop-down menu.
...
Note
The overall cluster status is a roll-up of the statuses from cluster nodes.
Statuses include:
OK: The node is working and there are no errors.
Alert, Warning: The node or volume has experienced one or more errors. Click the IP Address link to drill down to the node and view the related error.
Initializing: The short state after a node boots, when it is reading cluster persisted settings and is not quite ready to accept requests.
Maintenance: The node has been shut down or rebooted by an administrator from either SNMP or the legacy Admin Console and should not be considered missing for recovery purposes. By default, a node can be in a Maintenance state for 3 hours before it transitions to Offline and the cluster starts recovery of its content. Maintenance mode is not initiated when the power is manually cycled on the node outside of Swarm (either physically on the hardware or via a remote shutdown mechanism like iDRAC) or if there is a disk error; in both these instances, recovery processes are started for the node unless recovery is suspended.
Mounting: The node is mounting one or more volumes, including formatting the disk if it is new and reading all objects on the volume into the RAM index for faster access.
Offline: The node or volume was previously present in the cluster but is no longer present.
Retiring: The node or volume is in the process of retiring, verifying all objects are fully protected elsewhere in the cluster and then removing them locally.
Retired: The node or volume has completed the retiring process and may be removed from the cluster.
Idle: The nodes or volumes are in power-saving mode after a configurable period of inactivity. (See Configuring Power Management.)
Only matching results appear on the console when a value is selected in the drop-down menu. When finished looking at the searched node(s), select View All to redisplay all nodes in the cluster.
Percent Used Indicator
For monitoring purposes, the % Used indicator provides a helpful computation of cluster availability based on the licensed space and the total physical space. Space used is calculated against the lesser of the total physical space or the licensed space.
For example, in a cluster with 4 TB of physical space but only 2 TB of licensed space, where 1.5 TB of space is used, the console reports 75% Space Used.
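As an illustration of that calculation, here is a minimal sketch (a hypothetical helper, not part of Swarm) that reproduces the example above:

```python
def percent_used(used_tb: float, physical_tb: float, licensed_tb: float) -> float:
    """Compute % Used against the lesser of physical and licensed capacity."""
    capacity = min(physical_tb, licensed_tb)
    return 100.0 * used_tb / capacity

# Example from the text: 4 TB physical, 2 TB licensed, 1.5 TB used -> 75.0 (% used)
print(percent_used(used_tb=1.5, physical_tb=4.0, licensed_tb=2.0))
```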
The indicators include color highlighting, as described below.
Logical Threshold | Color* | Description | Default Threshold Value |
---|---|---|---|
OK | Green | Used space is less than the console.spaceWarnLevel configurable threshold. | Below 75% |
Warning | Yellow | Used space is less than the console.spaceErrorLevel and more than the spaceWarnLevel configurable thresholds. | Above 75% but below 90% |
Error | Red | Used space is greater than or equal to the console.spaceErrorLevel configurable threshold. | At or above 90% |
* These default colors can be modified using custom style sheets.
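As a rough sketch of how these thresholds map to a status, the following hypothetical helper assumes the thresholds are expressed as percent of space used (the actual console.spaceWarnLevel and console.spaceErrorLevel settings may be defined differently), so treat it as illustrative only:

```python
def space_status(percent_used: float, warn_level: float = 75.0, error_level: float = 90.0) -> str:
    """Map % used to the console's logical threshold (illustrative defaults only)."""
    if percent_used >= error_level:
        return "Error"    # red: at or above the error threshold
    if percent_used > warn_level:
        return "Warning"  # yellow: above the warning threshold, below the error threshold
    return "OK"           # green: at or below the warning threshold

print(space_status(75.0))  # -> "OK" (exact boundary handling is an assumption)
```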
Displaying Subcluster Information
If the cluster contains subclusters, the Node List is grouped first by subcluster name and then by node IP address. (If no subcluster name is specified in the node or cluster configuration file, the subcluster name is an IP address.) The first row of each subcluster includes a roll-up of the status for the nodes in the subcluster.
Example of two subclusters expanded to show member nodes:
...
The status information is transmitted periodically to the legacy Admin Console, requiring up to two minutes before the node data in the Cluster Status page is updated. Because of the status propagation delay, the data for each node may vary in comparison.
Info |
---|
Tip: For best results, remain connected to the same node to avoid confusion. |
...