
...

Info

Important

You must create the storage cluster's default domain before configuring SwarmFS. This domain has the same name as the cluster.name setting's value. The domain can be created with the Content UI or an HTTP utility like curl (see Manually Creating and Renaming Domains).
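As a sketch, the default domain could be created with curl as follows. The node hostname, cluster name, and admin credentials below are placeholders, and the exact headers required vary by Swarm version; see Manually Creating and Renaming Domains for the authoritative procedure.

```shell
# Create the default domain, whose name must match the value of the
# cluster.name setting. swarmnode.example.com, cluster.example.com,
# and the admin credentials are placeholders -- substitute your own.
curl -i -X POST --anyauth --user admin:password --data-binary '' \
    "http://swarmnode.example.com/?domain=cluster.example.com"
```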

...

You can create separate groups (sets) of SwarmFS exports, configured in pools; this enables support for different clients and optimization for different roles. You can also set some configuration settings locally to override the global configuration.

...

Server Groups are created with the + Add button at top right.

...

Info

Best practice

Before creating a Server Group, verify that the default domain is specified and that the domain and bucket defined in the scope exist.


The resulting group is a container for exports that will share a common configuration:

...

Name

When you add a Server Group, you supply only a name, which serves as a description; the unique identifier is the number (such as /2, above) at the end of the Configuration URL.

The new group appears at or near the end of the listing, ready to be configured with exports.

Configuration URL

Each NFS Server Group has a unique Configuration URL, which you can click to view the current export definitions. These are the auto-generated and auto-maintained JSON settings being stored by Swarm for the group.

The configuration is empty until you add one or more exports.

Info

Note

An sptid parameter is the encrypted form of a Swarm node IP address, which Gateway uses for request routing. Remove the parameter when pasting the URL elsewhere, such as in Ganesha.
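For example, the parameter can be stripped from a copied URL with a quick substitution; the URL below is illustrative, and the pattern assumes sptid is the only (or last) query parameter.

```shell
# A hypothetical Configuration URL as copied from the Swarm UI
url='http://gateway.example.com/nfs/serverGroups/2?sptid=0A1B2C3D'
# Remove the sptid query parameter before pasting the URL elsewhere
echo "$url" | sed 's/[?&]sptid=[^&]*//'
```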


Info

Important

Although group configurations may be shared across NFS servers, each server must be configured with only one group.

Adding Exports

Listing service: Each export is specific to exactly one Swarm bucket, but clients viewing the mounted directory can view, create, and use virtual directories within it via the prefix feature of Swarm named objects (myvirtualdirectory/myobjectname.jpg).
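For instance, assuming the export is mounted at /mnt/swarmfs (the mount point and filenames here are illustrative), ordinary filesystem operations create and populate a virtual directory:

```shell
# Creating a directory and copying a file into it stores the object
# under the prefix myvirtualdirectory/ in the export's bucket.
mkdir /mnt/swarmfs/myvirtualdirectory
cp photo.jpg /mnt/swarmfs/myvirtualdirectory/myobjectname.jpg
```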

...

Name


Unique name for your export, to distinguish it from the others in Swarm UI listings.

Storage IP(s) or DNS name(s)


The IP address(es) or DNS-resolvable hostname(s) for one or more Swarm Storage nodes.

Search host(s)


(For backwards compatibility) Optional as of version 3.0. The IP addresses or DNS-resolvable hostnames for one or more Swarm Elasticsearch servers.

Note: Both Gateway and SwarmFS use the Primary (default) search feed. If a new feed is made Primary, these servers must be restarted.

Search index


(For backwards compatibility) Optional as of version 3.0. The unique alias name of the Primary (default) search feed. Locate this value as the Alias field in the primary search feed's definition. 

Export path


Case-sensitive. Unique pseudo filesystem path for the NFS export. 

Cannot be set to a single slash ("/"), which is reserved.

Scope

Domain
Bucket

Specifies where the data written via the export will be associated: which domain and bucket to use.

Important: Verify the existence of the domain and bucket specified here.

Info

Quick Setup

For the remaining setup sections, few changes are usually needed:

  • Cloud Security — Each export can have different security, to fit its usage.

  • Client Access — Keep the defaults unless you need to customize access control.

  • Permissions — Change nobody to x-owner-meta.

  • Logging — Keep the defaults unless directed by Support.

  • Advanced Settings — Keep the defaults unless directed by Support.

Cloud Security

In a Gateway (Cloud) environment, you can use pass-through authentication, in which SwarmFS authenticates to Gateway with the same login and password that the client provided to SwarmFS. You can also choose session tokens (with various expirations) or single-user authentication, by login credentials or token.

...

Transport protocol

TCP

Supported transport protocol (TCP/UDP | TCP | UDP)

Storage port

80

Required. Network port for traffic to Swarm Storage nodes

Search port

9200

Required. Network port for traffic to Swarm Search nodes

Security

sys 

Remote Procedure Call (RPC) security type (sys | krb5 | krb5i | krb5p)

Maximum storage connections

100

Maximum number of open connections to Swarm Storage. (v2.0)

Retries

5

(positive integer) How many times SwarmFS will retry unsuccessful requests to Swarm and Swarm Search before giving up.

Retries timeout

90

(seconds) How long SwarmFS will wait before timing out Swarm retries.

Request timeout

90

(seconds) How long SwarmFS will wait before timing out Swarm requests.

For best results, set this timeout to at least twice the value of the Storage setting scsp.keepAliveInterval.
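As an illustration of that rule of thumb, if scsp.keepAliveInterval were 45 seconds (check your Storage settings for the actual value), the Request timeout should be at least:

```shell
# Minimum recommended Request timeout: twice scsp.keepAliveInterval.
# 45 is an illustrative value, not a documented default.
keepalive=45
echo $(( keepalive * 2 ))
```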

Pool timeout

300

(seconds) How long discovered Swarm storage nodes are remembered.

Write timeout

90

(seconds) How long SwarmFS will wait for a write to Swarm to complete before retrying.

Read buffer size

128000000

(bytes) Defaults to 128 MB, for general workloads. The amount of data to read from Swarm each time. If the read buffer size is larger than the client request size, the difference is cached by SwarmFS, and the next client read request is served directly from cache, if possible. Set to 0 to disable read-ahead buffering.

Improving performance — Set each export's Read Buffer Size to match the workload that you expect on that share

  • Lower the read-ahead buffer size if most reads will be small and non-sequential.

  • Increase the read-ahead buffer size if most reads will be large and sequential.
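To see why buffer size matters for sequential workloads, consider this illustrative arithmetic (the 1 MB client request size is an assumption, not a documented value):

```shell
# With the default 128 MB read buffer and 1 MB sequential client reads,
# one read from Swarm can satisfy about 128 client requests from cache.
buffer=128000000
request=1000000
echo $(( buffer / request ))
```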

Parallel read buffer requests

4

(positive integer) Adjust to tune the performance of large object reads; the default of 4 reflects the optimal number of threads, per performance testing. (v2.3)

Maximum part size

64000000

(bytes) How large each part of erasure-coded (EC) objects may be. Increase (such as to 200 MB, or 200000000) to create smaller EC sets for large objects, thereby increasing throughput for high volumes of large files. (v2.3)

Collector sleep time

1000

(milliseconds) Increase to minimize object consolidation time by directing SwarmFS to collect more data before pushing it to Swarm, at the expense of both RAM and read performance, as SwarmFS slows clients when running out of cache. You might increase this value if your implementation is sensitive to how quickly the Swarm health processor will consolidate objects, which cannot be guaranteed. (v2.3)

Maximum buffer memory

2000000000

(bytes) Defaults to 2 GB. Maximum amount of memory that can be allocated for the export's buffer pool. Once exceeded, client requests are temporarily blocked until the total buffer allocation falls back below this number. (v2.0)

Buffer high watermark

1500000000

(bytes) Once the allocated export buffers reach this watermark, SwarmFS starts to free buffers in an attempt to stay below the Maximum buffer memory setting. During this time, client requests may be delayed. (v2.0)

File access time policy

"relatime"

Policy for when to update a file's access time stamp (atime). (v2.0)

  • “noatime”: Disables atime updates.

  • “relatime”: Updates atime only if it is earlier than last modified time, so that it updates only once after each write.

  • “strictatime”: Updates atime on every read and close.

Elasticsearch buffer refresh time

60

(seconds) How rapidly non-SwarmFS object updates are reflected in SwarmFS listings. Lower to reduce the wait for consistency, at the cost of increased load on Elasticsearch. (v2.3)


...