
Symptom

Action

A volume device failed.

Allow the node to continue running in a degraded state (reduced storage capacity)
OR
Replace the volume at the earliest opportunity.

See Replacing Failed Drives.

A node failed.

If a node fails but its volume storage devices are functioning properly, repair the hardware and return the node to service within 14 days.

If a node is down for more than 14 days, all of its volumes are considered stale and cannot be used. To force a stale volume to be remounted after 14 days, modify the volume specification to add the :k (keep) policy option.

See Managing Volumes.
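
As a sketch, the :k (keep) option is appended to the device entry in the node's volume specification; the parameter name and device names below are illustrative, so confirm the exact syntax in Managing Volumes:

```
# node.cfg -- device names are examples; :k forces the volume
# to be kept (remounted) even though it is considered stale
vols = /dev/sda:k /dev/sdb:k
```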

In the UI, all remaining cluster nodes appear consistently or intermittently offline.

When viewing the legacy Admin Console from different nodes, the other nodes appear offline and unreachable.

If a new node cannot see the remaining nodes in the cluster, check the Swarm network configuration settings on each node (particularly the group parameter) to verify that all nodes are configured as part of the same cluster and connected to the same subnet.

If the network configuration appears to be correct, verify that IGMP snooping is enabled on the network switch. If it is enabled, an IGMP querier must also be running in the same network (broadcast domain). In multicast networks, the querier is normally enabled on the router leading to the storage cluster, which is usually the default gateway for the nodes.

See IGMP Snooping.
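
The configuration check above can be sketched as a quick comparison across the nodes' files. The parameter name (cluster.name; the legacy group parameter works the same way) and the idea of gathering the config files locally are assumptions for illustration:

```shell
# Print the distinct cluster-grouping settings found across a set of
# node config files gathered locally. A consistent cluster prints
# exactly one line.
check_cluster_settings() {
    grep -h '^cluster.name' "$@" | sort -u
}

# Example: check_cluster_settings node*.cfg
# To confirm an IGMP querier is active on the storage subnet, watch for
# periodic membership queries (requires root; interface is an example):
# tcpdump -i eth0 -c 5 igmp
```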

Read-only access to the UIs even though
the user is listed in security.administrators.

Cannot view the Swarm UI.

This occurs when an operator (a read-only user) is added to security.operators but the administrator user name and password are not added to security.operators as well. As a result, the Swarm UI cannot be accessed as an administrator.

To resolve this issue, add all administrator users to the security.operators parameter in the node or cluster configuration file.

See Defining Swarm Admins and Users.
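
As an illustration only, the fix amounts to listing every administrator in both parameters; the user names, passwords, and exact value syntax below are placeholders, so follow Defining Swarm Admins and Users for the precise format:

```
# node.cfg -- names and passwords are examples
security.administrators = {'admin': 'adminpass'}
security.operators = {'operator': 'operatorpass', 'admin': 'adminpass'}
```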

A node configured with multiple NIC ports does not connect to the network.

Verify the network cable is plugged into the correct NIC port. Depending on the bus order and the order in which the kernel drivers are loaded, the network ports may not match the external labeling.
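
A minimal way to map kernel interface names to physical ports on a Linux node is to list each interface with its MAC address and match those against the ports' labels; the ethtool step is an optional extra that requires root:

```shell
# List kernel interface names with their MAC addresses (Linux sysfs)
# so they can be matched against the NICs' external labels.
list_nics() {
    for nic in /sys/class/net/*; do
        printf '%s %s\n' "$(basename "$nic")" "$(cat "$nic/address")"
    done
}

# Example: list_nics
# To physically identify a port, blink its LED for 15 seconds
# (requires root; interface name is an example):
# ethtool -p eth0 15
```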

A node automatically reboots.

If the node is plugged into a reliable power outlet and the hardware is functioning properly, this issue may indicate a software problem.

The Swarm system includes a built-in failsafe that reboots a node if something goes wrong. Contact DataCore Support for guidance.

A node is unresponsive to network requests.

Perform the following steps, in order, until the node responds to network requests.

  • Verify the client network settings are correct.

  • Ping the node.

  • Open the legacy Admin Console on the node by entering the IP address in a browser window (http://{ip-address}:90).

  • Attach a keyboard to the failed node and press Ctrl-Alt-Delete to force a graceful shutdown.

  • Press the hardware reset button on the node or power cycle the node.
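
The first network checks above can be scripted before resorting to a reset or power cycle; the node address below is an example, and port 90 is the legacy Admin Console port mentioned above:

```shell
# Probe a Swarm node: first ICMP, then the legacy Admin Console on
# port 90. Usage: probe_node <ip-address> [port]
probe_node() {
    node="$1"
    port="${2:-90}"
    if ping -c 1 -W 2 "$node" >/dev/null 2>&1; then
        echo "ping: OK"
    else
        echo "ping: FAILED"
    fi
    if curl -sf -m 5 -o /dev/null "http://$node:$port/"; then
        echo "console: OK"
    else
        echo "console: FAILED"
    fi
}

# Example (address is illustrative): probe_node 192.168.1.50 90
```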

The cluster is using more data than expected.

Using Elasticsearch, enumerate the CAStor-Application metadata field to determine how much data each application is writing. Many Swarm applications set this metadata header, and having it indexed allows you to analyze which application created which content.
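
As a hedged sketch, this enumeration can be expressed as an Elasticsearch terms aggregation; the index name and the exact indexed field name for the CAStor-Application header are placeholders here, so check them against the cluster's search index mapping:

```
POST /{swarm-search-index}/_search
{
  "size": 0,
  "aggs": {
    "by_application": {
      "terms": { "field": "castor-application" }
    }
  }
}
```

Each bucket in the response counts the objects written by one application; a sum sub-aggregation on the indexed object-size field (name varies by schema) would report bytes rather than object counts.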

A node is not performing as expected.

In the castor.log, view the node statistics, which include periodic logging of CPU utilization for each process:

Code Block
languagebash
2015-11-05 16:13:22,898 NODE INFO: system utilization stats:
	pid_cpusys: 0.06,
	pid_cputot: 1.67,
	pid_cpuusr: 1.61,
	sys_contexts_rate: 5728.00,
	sys_cpubusy: 0.91,
	sys_cpubusy0: 0.37,
	sys_cpubusy1: 1.46,
	sys_cpuio: 0.02,
	sys_cpuirq: 0.01,
	sys_cpusys: 0.06,
	sys_cpuusr: 0.82
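
A small helper, assuming each stats record appears on a single log line, to pull the overall sys_cpubusy value out of such entries (the log path in the example is an assumption):

```shell
# Extract the overall CPU-busy figure (sys_cpubusy, not the per-core
# sys_cpubusy0/1 values) from castor.log lines read on stdin; prints
# one value per stats record.
cpu_busy() {
    grep -o 'sys_cpubusy: [0-9]*\.[0-9]*' | awk '{print $2}'
}

# Example (path is illustrative):
# cpu_busy < /var/log/castor/castor.log
```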

