
This section describes how to set up the network services for your storage cluster.

Platform Server

If you use Platform Server, skip this section: your network services are set up.

Setting up NTP for time synchronization

The Network Time Protocol (NTP) server provides time synchronization between the cluster nodes, which is critical for many Swarm components. For best results, configure multiple NTP servers in close proximity to your cluster. For example, you can use the NTP Pool Project's continental zones, which are pools of NTP servers.

One or more trusted NTP servers, such as dedicated hardware solutions on your internal network or publicly available NTP servers, are required in your storage cluster. This requirement ensures that the internal clocks in all nodes are synchronized with each other.

If trusted NTP servers are available, add these servers to your cluster by listing their IP addresses or host names in the network.timeSource parameter in the node configuration files. The parameter value is a list of one or more NTP servers (either host names or IP addresses) separated by spaces. For example, to add a second NTP server IP address, use the following syntax:

network.timeSource = 10.20.40.21 10.20.50.31

To add NTP servers by host name, the node must be able to resolve host names using a DNS server. Use this syntax:

network.timeSource = ntp1.example.com ntp2.example.com

See Configuring an External Time Server.

NTP 3.0

NTP 3.0 included a design limitation that causes the time value to wrap in the year 2036. If the BIOS clock in a cluster node is set beyond this wrap point, NTP cannot correct the time. Before you boot Swarm in your cluster, ensure that the BIOS clocks in all nodes are set to a year prior to 2036. This issue was resolved in NTP 4.0.

If the configured NTP server(s) cannot be reached, the node will not boot. If the cluster nodes cannot access an external or internal NTP server, see Configuring a Node without NTP.

Setting up DHCP for IP address administration

The Dynamic Host Configuration Protocol (DHCP) server provides IP addresses to the cluster nodes and other devices that are enabled as DHCP clients. While Swarm nodes are not required to have static IP addresses to discover and communicate with each other, administrators might find it easier to manage and monitor a cluster where each node receives a predetermined IP address.

To configure this option using DHCP (see the sketch after these steps):

  1. Map the Ethernet media access control (MAC) address of each node to a static IP address.
  2. Configure your DHCP server to provide each node with the following settings, in addition to its IP address:
    • network mask
    • default gateway
    • DNS server
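
For example, the following Internet Systems Consortium (ISC) DHCP server (dhcpd.conf) sketch implements both steps; all addresses, the MAC value, and the host name are placeholders for your own values:

subnet 192.168.1.0 netmask 255.255.255.0 {
  option subnet-mask 255.255.255.0;        # network mask
  option routers 192.168.1.1;              # default gateway
  option domain-name-servers 192.168.1.53; # DNS server

  # Map each node's MAC address to a predetermined IP address
  host swarm-node1 {
    hardware ethernet 00:16:3e:2a:1b:3c;
    fixed-address 192.168.1.101;
  }
}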

Setting up DNS for name resolution

The Domain Name System (DNS) is used to resolve host names into IP addresses. While DNS is not required for Swarm nodes to communicate with each other, it is very useful for client applications reaching the cluster. If you use named objects, DNS is one method you can use to enable access to objects over the Internet.

Best practice

Although client applications can initiate first contact with any node in the storage cluster (even choosing to access the same node every time), best practice is to distribute the node of first contact evenly around the cluster.

For example, you can:

  • Define multiple DNS entries ("A" or "CNAME" records) that specify the IP address for the same Swarm first contact node.
  • Use multiple IP addresses for a DNS entry to create a DNS round-robin that provides client request balancing.

See your DNS software documentation for how to use "A" records and "CNAME" (alias) records.

Swarm itself requires a DNS server only to resolve host names in its configuration file; for example, a host name in the NTP list or for the log host (such as pool.ntp.org) needs name resolution, so the DNS server must be set in the Swarm configuration file. In contrast, client applications must resolve Swarm domain names to find the storage cluster. These distinct requirements are most likely addressed by different DNS servers.

The following example shows zone file entries for the Internet Systems Consortium (ISC) BIND DNS server that tie three node IP addresses to one name.

Swarm  0  IN  A  192.168.1.101
       0  IN  A  192.168.1.102
       0  IN  A  192.168.1.103

In this example, the Time To Live (TTL) value for each record in the round-robin group is very small (0 to 2 seconds). This configuration is necessary so that clients that cache the resolution results flush them quickly, which distributes the node of first contact and lets a client move quickly to another node if the first contact node is unavailable.
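
A "CNAME" (alias) record can also point an additional, application-facing name at the round-robin name above; the alias name (content) is a placeholder:

content  IN  CNAME  Swarm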

Best practice

Applications should implement robust mechanisms such as Zero Configuration Networking for distributing the node of first contact and skipping failed nodes, but an administrator can use DNS to assist with simpler applications.

Preparing for domains

To allow clients to access named objects over the Internet, ensure that incoming HTTP requests resolve to the correct domain. (A cluster can contain many domains, each of which can contain many buckets, each of which can contain many named objects.) Cluster and domain names should both be Internet Assigned Numbers Authority (IANA) compatible host names, such as cluster.example.com.

For example, a client application can create an object with a name such as:

cluster.example.com/marketing/photos/ads/object-naming.3gp

In this example, cluster.example.com is the domain name, marketing is the name of a bucket, and photos/ads/object-naming.3gp is the name of an object. Set up your network so the host name in the HTTP request maps correctly to the object's domain name; the cluster name itself is not required.
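
To illustrate, the request for the object above carries the domain name in its HTTP Host header; this is the host name that must map to the domain:

GET /marketing/photos/ads/object-naming.3gp HTTP/1.1
Host: cluster.example.com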

To enable clients to access a named object:

  1. Set up your hosts file to map domain names to IP address(es) of the first contact node.
    • For a Linux system, configure the /etc/hosts file.
    • For a Windows system, configure the %SystemRoot%\system32\drivers\etc\hosts file.
      Example of a configured hosts file: 

      192.168.1.111 cluster.example.com 
      192.168.1.112 vault.example.com
  2. Define multiple DNS entries ("A" or "CNAME" records) that identify the IP address(es) of the first contact node in the storage cluster. This process creates a DNS round-robin that provides client request load balancing.
    • For help setting up DNS for Swarm, see Setting up DNS for name resolution, above.
    • For information about setting up your DNS server, see your DNS software documentation.

Setting up a Syslog Server for Critical Alerts

You must set up a syslog server to capture critical operational alerts from the nodes in a storage cluster. The server captures messages sent by the Swarm nodes on UDP port 514.

See Configuring External Logging for how to configure an rsyslog server and for the log.host and log.level parameters used to send Swarm messages to a syslog server.
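
As an illustration, a minimal receiving-side rsyslog configuration enables the UDP input on port 514, and the Swarm side points log.host at that server. The server address is a placeholder, and the log.level value shown is illustrative; check the parameter documentation for valid levels:

# /etc/rsyslog.conf on the syslog server: accept UDP messages on port 514
module(load="imudp")
input(type="imudp" port="514")

# Swarm node configuration: send log messages to the syslog server
log.host = 192.168.1.50
log.level = 30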

Setting up SNMP for monitoring

Swarm provides monitoring information and administrative controls using the Simple Network Management Protocol (SNMP). Using an SNMP console, an administrator can monitor a storage cluster from a central location.

Disabling SNMP

If you need to disable SNMP cluster-wide, such as for security reasons or when running Swarm in containers, disable the Swarm Storage setting snmp.enabled. (v12.0)
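
For example, in the node configuration file (a sketch; the setting takes a boolean value, and the exact spelling accepted may vary by release):

snmp.enabled = False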

Swarm uses an SNMP management information base (MIB) definition file to map SNMP object identifiers (OIDs) to logical names. The MIB can be located in one of two locations, depending on your configuration:

  • If your cluster nodes boot from a Platform Server, the aggregate MIB for the entire cluster is located at /usr/share/snmp/mibs.
  • If your cluster nodes do not boot from a Platform Server, the MIB is located in the root directory of your Swarm software distribution.
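
As a quick check from an SNMP console, you can walk a node's standard system subtree with the net-snmp tools; the community string and node address below are placeholders for your cluster's actual values:

snmpwalk -v 2c -c public 192.168.1.101 system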

See Using SNMP with Swarm.

Setting up network load balancing

Although the Swarm Storage Cluster nodes interact with client applications using the HTTP communication protocol, the nodes operate differently from traditional web servers. As a result, placing storage nodes behind an HTTP load balancer is usually unnecessary. However, a properly configured load balancer can provide value-added services, such as SSL off-load and centralized certificate management.

During normal operations, a storage node routinely redirects a client to another node within the cluster. When this occurs, the client must initiate another HTTP request to the node it was redirected to. Any process that virtualizes the storage node IP addresses or attempts to control which nodes the client connects to will generate communication errors.

Setting up the network interfaces

Gigabit Ethernet or faster NICs provide the recommended 1000 Mbps data communications speed between your storage cluster nodes. Swarm automatically uses multiple NICs to provide a redundant network connection.

To implement this feature, connect the NICs to one or more interconnected switches in the same subnet.

See Switching Hardware.
