SwarmFS Deployment

Swarm Software Requirements

Following are the Swarm packages that work with and comprise SwarmFS. Download the latest bundle from the Downloads section on the DataCore Support Portal.

Best Practice for Upgrades

Upgrade all Swarm components to the versions included in the Swarm bundle unless older versions of Elasticsearch or Content Gateway must remain in place.

Packages for the components in the table below are named as follows:

  • Storage: caringo-storage-*.rpm

  • Storage UI: caringo-storage-webui-*.rpm

  • Elasticsearch: elasticsearch-*.rpm, elasticsearch-curator-*.rpm

  • Search: caringo-elasticsearch-search-*.rpm

  • Gateway: caringo-gateway-*.rpm

  • Content UI: caringo-gateway-webui-*.rpm

  • SwarmFS: caringo-nfs-*.rpm, caringo-nfs-libs-*.rpm
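
For example, on a server designated for SwarmFS, the NFS packages from the bundle might be installed with yum (a minimal sketch; the exact filenames depend on the bundle version downloaded, and full installation steps are in SwarmFS Server Installation):

yum install caringo-nfs-*.rpm    # matches both the caringo-nfs and caringo-nfs-libs packages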

Configuration requirements by component:

Storage

  • Enable Erasure Coding (EC) - see Configuring Cluster Policies

  • Enable the Overlay Index: index.overlayEnabled = true - see Configuring the Overlay Index

  • Enable Replicate on Write (ROW) - see Configuring ROW Replicate On Write

  • Set ec.segmentConsolidationFrequency=100 (Caution: do not allow the cluster to run near capacity.)

  • Set health.parallelWriteTimeout (v10.0+) to a non-zero value, such as 1209600 (2 weeks).
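
For illustration, the settings named above might appear as follows in the storage node configuration (a sketch only; EC and ROW themselves are enabled through the cluster policies referenced above, and names and values should be confirmed against the Swarm version in use):

index.overlayEnabled = true
ec.segmentConsolidationFrequency = 100
health.parallelWriteTimeout = 1209600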

Storage UI

Set the Swarm Search feed to a 1-second Batch Timeout.

Elasticsearch

In the Elasticsearch configuration file (config/elasticsearch.yml), make the changes needed for SwarmFS (see Configuring Elasticsearch):

  • Remove: filter: lowercase 
    (case-insensitive metadata searching in Swarm is incompatible with SwarmFS)

  • Remove: script.indexed: true (applies to ES 2.3.3)

  • Add: script.inline: true (applies to ES 2.3.3 and 5.6.12)
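
After these edits, the SwarmFS-related portion of config/elasticsearch.yml might contain only the added line (a sketch; surrounding settings depend on the Elasticsearch version in use):

# any "filter: lowercase" entry has been removed, as has "script.indexed: true" on ES 2.3.3
script.inline: true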

Search

Upgrade to the latest Search RPM when upgrading Storage.

Gateway

SwarmFS can be deployed onto the same Linux server as the Content Gateway.

Content UI

Recommended for viewing and managing objects in the Swarm cluster.

NFS

Do not install the SwarmFS server on the same host as Elasticsearch.

Implementing SwarmFS

Important

Complete SwarmFS Planning before proceeding.

For SwarmFS, do the following:

  1. Install one or more SwarmFS servers for NFS 4 on designated hardware. See SwarmFS Server Installation.

  2. Create the exports needed for the implementation. See SwarmFS Export Configuration.

Tip

The same bucket can be exported more than once, each with values (such as Read buffer size) optimized for a type of usage. Then point clients and applications to the share best matching the workload.

  3. For functional verification and troubleshooting, create a test domain and bucket, and then create an export for that bucket.

  4. Conduct basic testing of read, write, and delete using the NFS client mounts for each of the SwarmFS exports (a sample smoke test follows this list).

  5. Implement HTTPS in front of the service proxy port to help protect the credentials used to access the Ganesha config file and the file itself; see Replication Feeds over Untrusted Networks.
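
The basic read/write/delete test in step 4 might look like the following (an illustrative sketch; the server name, mount point, and testbucket export path are placeholders for the values defined in the export configuration):

mount -t nfs -o timeo=9000,vers=4.1 SwarmNFSserver:/ /mnt/SwarmNFS
echo "hello swarm" > /mnt/SwarmNFS/testbucket/hello.txt     # write
cat /mnt/SwarmNFS/testbucket/hello.txt                      # read back
rm /mnt/SwarmNFS/testbucket/hello.txt                       # delete
umount /mnt/SwarmNFS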

Mounting the Exports

Follow these guidelines when mounting the SwarmFS exports:

Linux

Mount the exports as normal, with these explicit options:

  • Timeout - Increase the timeout, timeo, to 9000.

  • Version - To verify mounting using the correct protocol, add the "-t nfs" and "vers=<nfsvers>" options.
    Best practice: Mount using NFS v4.1. Fall back to 4.0 if the client does not support 4.1.

NFS v4.1:
mount -t nfs -o timeo=9000,vers=4.1 SwarmNFSserver:/ /mnt/SwarmNFS

NFS v4.0:
mount -t nfs -o timeo=9000,vers=4 SwarmNFSserver:/ /mnt/SwarmNFS

Adjust the mount command as needed for the OS version. On Ubuntu 10.04, specify the version this way:

mount -t nfs4 -o timeo=9000 SwarmNFSserver:/ /mnt/SwarmNFS
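
To confirm which NFS protocol version a mount negotiated, check the client's mount table, for example (assuming the mount point used above):

nfsstat -m                    # lists each NFS mount with its options, including vers=
mount | grep /mnt/SwarmNFS    # the options field also shows the negotiated vers=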

macOS

Not supported.

Windows

Not supported; Windows has no NFS 4.x client.
