...

Symptom

Action

A volume device failed.

Allow the node to continue running in a degraded state (with reduced storage capacity), or replace the volume at your earliest convenience.

See Replacing Failed Drives.

A node failed.

If a node fails but the volume storage devices are functioning properly, you can repair the hardware and return it to service within 14 days.

If a node is down for more than 14 days, all of its volumes are considered stale and cannot be used. After 14 days, you can force a volume to be remounted by modifying the volume specification and adding the :k (keep) policy option.

See Managing Volumes.
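
As a sketch, forcing a stale volume to remount means appending :k to that volume's entry in the node configuration. The parameter name and device path below are illustrative; check your node.cfg for the exact volume specification in use:

# node.cfg (excerpt) - device name is a placeholder
# The :k (keep) policy option tells Swarm to remount the volume
# even though it has been offline for more than 14 days.
disk.volumes = /dev/sda:k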

In the UI, all remaining cluster nodes are consistently or intermittently offline.

When viewing the legacy Admin Console from different nodes, the other nodes appear offline and unreachable.

If a new node cannot see the remaining nodes in the cluster, check the Swarm network configuration setting in each node (particularly the group parameter) to ensure that all nodes are configured as part of the same cluster and connected to the same subnet.

If the network configuration appears to be correct, verify that IGMP Snooping is enabled on your network switch. If enabled, an IGMP querier must be enabled in the same network (broadcast domain). In multicast networks, this is normally enabled on the router leading to the storage cluster, which is usually the default gateway for the nodes.

See IGMP Snooping.
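
As a minimal sketch of the comparison described above, these are the kinds of settings to verify on each node; the parameter names and values are placeholders, and the exact names depend on your Swarm version and configuration format:

# node.cfg (excerpt) - values are placeholders
group = mycluster            # must be identical on every node in the cluster
network.ipAddress = 192.168.1.51   # must be on the same subnet as the other nodes
network.netmask = 255.255.255.0
network.gateway = 192.168.1.1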

You have read-only access to the UIs even though
you are listed in security.administrators.

You cannot view the Swarm UI.

You added an operator (a read-only user) to security.operators but did not also add your administrator user name and password to that parameter. As a result, you cannot access the Swarm UI as an administrator.

To resolve this issue, add all of your administrator users to the security.operators parameter in the node or cluster configuration file.

See Defining Swarm Admins and Users.
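
A minimal sketch of the fix, assuming the dictionary-style user/password format for these parameters (names and passwords below are placeholders):

# node.cfg (excerpt)
security.administrators = {'admin': 'adminpass'}
# Administrators must be repeated here; otherwise they get read-only access:
security.operators = {'operator': 'operpass', 'admin': 'adminpass'}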

The network does not connect to a node configured with multiple NIC ports.

Ensure that the network cable is plugged into the correct NIC. Depending on the bus order and the order that the kernel drivers are loaded, the network ports may not match their external labeling.
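One way to map kernel interface names to physical ports is to compare MAC addresses, or to blink a port's LED. This is a generic Linux sketch, not a Swarm-specific procedure; the interface name eth0 is a placeholder:

```shell
# List interfaces with state and MAC address; match the MACs against
# the labels or documentation for each physical NIC.
ip -br link

# Optionally blink the LED on a port for 5 seconds to locate it physically
# (requires ethtool; uncomment and substitute the real interface name):
# ethtool --identify eth0 5
```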

A node automatically reboots.

If the node is plugged into a reliable power outlet and the hardware is functioning properly, this issue may indicate a software problem.

The Swarm system includes a built-in fail-safe that reboots the node if something goes wrong. Contact Support for guidance.

A node is unresponsive to network requests.

Perform the following steps until the node responds to network requests.

  • Ensure that your client network settings are correct.
  • Ping the node.
  • Open the legacy Admin Console on the node by entering its IP address in a browser window (http://{ip-address}:90).
  • Attach a keyboard to the failed node and press Ctrl-Alt-Delete to force a graceful shutdown.
  • Press the hardware reset button on the node or power cycle the node.
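
The first network checks above can be sketched as follows; the node IP address is a placeholder, and the legacy Admin Console is assumed to listen on port 90 as noted above:

```shell
# Hypothetical node address - substitute the real one.
NODE_IP=192.168.1.50

# Step: ping the node (short timeout so the check fails fast).
ping -c 3 -W 1 "$NODE_IP" || echo "node not answering ICMP"

# Step: probe the legacy Admin Console port from the command line
# before trying a browser; -m sets an overall timeout.
curl -sS -m 5 "http://$NODE_IP:90/" -o /dev/null \
  && echo "console reachable" \
  || echo "console unreachable"
```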

The cluster is using more data than expected.

Using Elasticsearch, enumerate the CAStor-Application field to determine how much data is being written by which application. Many Swarm applications use this metadata header, and having it indexed lets you analyze which application created which content.
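
The enumeration can be done with a terms aggregation. This is a sketch only: the index name, the indexed field name for the CAStor-Application header, and the size field name are assumptions that depend on your Swarm and Elasticsearch versions.

POST /swarm-index/_search
{
  "size": 0,
  "aggs": {
    "by_application": {
      "terms": { "field": "castor_application" },
      "aggs": {
        "bytes_written": { "sum": { "field": "content_length" } }
      }
    }
  }
}

The response buckets group object counts (and summed sizes) per application, which shows which application is writing the unexpected data.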

A node is not performing as expected.

In the castor.log, view the node statistics, which include periodic logging of CPU utilization for each process:

2015-11-05 16:13:22,898 NODE INFO: system utilization stats:
	pid_cpusys: 0.06,
	pid_cputot: 1.67,
	pid_cpuusr: 1.61,
	sys_contexts_rate: 5728.00,
	sys_cpubusy: 0.91,
	sys_cpubusy0: 0.37,
	sys_cpubusy1: 1.46,
	sys_cpuio: 0.02,
	sys_cpuirq: 0.01,
	sys_cpusys: 0.06,
	sys_cpuusr: 0.82


...