...
NFS server groups and exports can be created and managed via the Swarm Storage UI's NFS page.
Important: The storage cluster's default domain must be created before configuring SwarmFS. This domain has the same name as the cluster (the value of the cluster.name setting).
Create separate groups (sets) of SwarmFS servers that are configured in pools; this enables support for different clients and optimization for different roles. Some configuration settings can be set locally to override the global configuration settings.
Why have different server groups?
These are situations for which it may be ideal to keep groups separate:
...
Important: Restart NFS services after making any configuration changes. The NFS server does not support dynamic updates to the running configuration.
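Because the running configuration is not updated dynamically, a restart is needed after every change. Below is a minimal sketch of scripting that restart in Python, assuming a systemd-based SwarmFS host and assuming the service unit is named nfs-ganesha; the actual unit name may differ by installation, so verify it before use.

```python
import subprocess

# Hypothetical service unit name; confirm the actual unit on the SwarmFS host.
NFS_SERVICE = "nfs-ganesha"

def restart_nfs_service(unit: str = NFS_SERVICE) -> None:
    """Restart the NFS service so it re-reads its export configuration."""
    # Requires root (or equivalent sudo rights) on the SwarmFS server.
    subprocess.run(["systemctl", "restart", unit], check=True)
    # Confirm the service came back up after the restart.
    subprocess.run(["systemctl", "is-active", unit], check=True)

if __name__ == "__main__":
    restart_nfs_service()
```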
Adding Server Groups
Server Groups are created with the + Add button at top right.
...
Best Practice: Before creating a Server Group, verify the default domain is specified and the domain and bucket defined in the scope exist.
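One way to pre-check this is with plain HTTP HEAD requests against Swarm before creating the Server Group. The sketch below uses only the Python standard library; the node address, domain, and bucket names are placeholders, and the domain query argument supplies the SCSP domain context.

```python
import urllib.request
import urllib.error

NODE = "http://192.168.1.50"    # placeholder: a Swarm storage node (or Gateway) address
DOMAIN = "cluster.example.com"  # placeholder: the default (cluster) domain name
BUCKET = "nfsbucket"            # placeholder: the bucket named in the export's scope

def exists(url: str) -> bool:
    """Return True when a HEAD request succeeds (the object is found)."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return 200 <= resp.status < 300
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# An empty path plus ?domain= addresses the domain object itself;
# a bucket is addressed as a named object within that domain context.
print("domain exists:", exists(f"{NODE}/?domain={DOMAIN}"))
print("bucket exists:", exists(f"{NODE}/{BUCKET}?domain={DOMAIN}"))
```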
The resulting group is a container for exports sharing a common configuration:
...
Name | Supply a name (a description) when adding a Server Group; the unique identifier is the count. The new group appears at or near the end of the listing, ready to be configured with exports.
---|---
Configuration URL | Each NFS Server Group has a unique Configuration URL, which can be clicked to view the current export definitions. These are the auto-generated and auto-maintained JSON settings stored by Swarm for the group. The configuration is empty until one or more exports is added. Note: An ...
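To inspect what Swarm is storing for a group outside the UI, the Configuration URL can be fetched and pretty-printed directly. A minimal sketch, assuming the URL has been copied from the Swarm UI NFS page (the address below is only a placeholder):

```python
import json
import urllib.request

# Placeholder: paste the group's actual Configuration URL from the Swarm UI.
CONFIG_URL = "http://192.168.1.50/_admin/nfs/example-group.json"

with urllib.request.urlopen(CONFIG_URL, timeout=10) as resp:
    exports = json.load(resp)

# An empty result simply means no exports have been added to the group yet.
print(json.dumps(exports, indent=2))
```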
Important: Although group configurations may be shared across NFS servers, each server must be configured with only one group.
Adding Exports
Listing
...
Service
Each export is specific to one and only one Swarm bucket, but clients viewing the mounted directory are able to view, create, and use virtual directories within it via the prefix feature of Swarm named objects (myvirtualdirectory/myobjectname.jpg).
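From the client side, no special API is needed for the prefix feature: writing a file under a new folder inside the mount is enough. A minimal sketch, assuming the export is already mounted at the hypothetical path /mnt/swarmfs:

```python
from pathlib import Path

# Hypothetical mount point for the SwarmFS export (one export == one Swarm bucket).
mount = Path("/mnt/swarmfs")

# Creating a directory and writing a file beneath it stores a named object
# whose name carries the prefix, e.g. "myvirtualdirectory/myobjectname.jpg".
target = mount / "myvirtualdirectory" / "myobjectname.jpg"
target.parent.mkdir(parents=True, exist_ok=True)
target.write_bytes(b"example image payload")

# The virtual directory is now visible to any client listing the mount.
print(sorted(p.name for p in (mount / "myvirtualdirectory").iterdir()))
```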
...
Export Setting | Description
---|---
Name | Unique name for the export, to distinguish it from the others in Swarm UI listings.
Storage IP(s) or DNS Name(s) | The IP address(es) or DNS-resolvable hostname(s) for one or more Swarm Gateways.
Search Host(s) | (For backwards compatibility) Optional as of version 3.0. The IP addresses or DNS-resolvable hostnames for one or more Swarm Elasticsearch servers. Note: Both Gateway and SwarmFS use the Primary (default) search feed. If a new feed is made Primary, these servers must be restarted.
Search Index | (For backwards compatibility) Optional as of version 3.0. The unique alias name of the Primary (default) search feed. Locate this value as the Alias field in the primary search feed's definition.
Export Path | Case-sensitive. Unique pseudo filesystem path for the NFS export. Cannot be set to a single slash ("/").
Scope (Domain) | Specifies where the data written via the export is associated: which domain and bucket to use. Important: Verify the existence of the domain and bucket specified here.
Quick Setup: For the remaining setup sections, few changes are usually needed:
...
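Once an export is defined (and the NFS services restarted), a client can mount it using the SwarmFS server address and the Export Path configured above. A minimal sketch of that mount from Python; the server address, export path, and mount point are hypothetical placeholders, root privileges are required, and the equivalent mount command can of course be run directly instead.

```python
import subprocess
from pathlib import Path

SERVER = "192.168.1.60"         # placeholder: SwarmFS (NFS) server address
EXPORT_PATH = "/example-export" # placeholder: the case-sensitive Export Path defined above
MOUNT_POINT = "/mnt/swarmfs"    # placeholder: local mount point on the client

# Ensure the local mount point exists before mounting.
Path(MOUNT_POINT).mkdir(parents=True, exist_ok=True)

# NFSv4 mount of the SwarmFS pseudo filesystem path.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4", f"{SERVER}:{EXPORT_PATH}", MOUNT_POINT],
    check=True,
)
print(f"mounted {SERVER}:{EXPORT_PATH} at {MOUNT_POINT}")
```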
Cloud Security
In a Gateway (Cloud) environment, pass-through authentication can be used: SwarmFS authenticates to Gateway with the same login and password the client provides to SwarmFS. Session tokens (with various expiration times) and single-user authentication are also available, each by either login credentials or a token.
...
Tip: Each SwarmFS export created to use the Content Gateway can have an entirely different security method, as needed by the use case.
Security Option | Method | Required Fields
---|---|---
Session Token | Token Admin Credentials by Login | User, Password, Expiration
Session Token | Token Admin Credentials by Token | Token, Expiration
Single User | Authenticate by Login | User, Password
Single User | Authenticate by Token | Token
Pass-Through / None | N/A | N/A
Client Access
This optional section allows access control customization both globally (for this export) and for specific clients.
Access Type | Defaults to full read/write access. These other access restrictions are available: ...
---|---
Squash | Defaults to no squashing (allows all user IDs). ...
Squash User ID (uid) Mapping / Squash Group ID (gid) Mapping | User ID and Group ID can be set when the NFS server is authenticating users from different authentication sources and/or it is desired that all files have a consistent user/group. Typical situations: ...
Client(s) | As needed, customize the access for one or more specific clients. Note: These override the settings specified above, if any.
Permissions
Files and directories in a SwarmFS system support standard Unix-style read/write/execute permissions based on the user ID (uid) and group ID (gid) asserted by the mounting NFS client. The numeric forms of uid and gid have equivalent human-readable ASCII forms, as given by the Linux id command:
...
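The same numeric-to-name mapping can be checked programmatically on an NFS client. A minimal sketch using only the Python standard library; the file path is a hypothetical location inside the mounted export.

```python
import grp
import os
import pwd

# Hypothetical file inside the mounted SwarmFS export.
path = "/mnt/swarmfs/myvirtualdirectory/myobjectname.jpg"

info = os.stat(path)
user = pwd.getpwuid(info.st_uid).pw_name   # numeric uid -> ASCII user name
group = grp.getgrgid(info.st_gid).gr_name  # numeric gid -> ASCII group name

# Comparable to what `id` and `ls -l` report for this file's owner.
print(f"uid={info.st_uid}({user}) gid={info.st_gid}({group}) "
      f"mode={oct(info.st_mode & 0o777)}")
```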
Using x-owner-meta: The export's selected interface and access method determine whether ...
Logging
Enable additional logging as directed by DataCore Support, but keep this logging disabled for normal production usage. (Swarm UI 2.3)
...
Performance | Performance logging for SwarmFS, which reduces the noise in the ganesha log file. When enabled, logs PERF warnings and Elasticsearch query result dumps. |
---|---|
Elasticsearch | Performance logging for Elasticsearch, for use while troubleshooting issues such as partial listings. When enabled, sends the Elasticsearch query results to the debug log file. |
Advanced Settings
Important: Use these recommended defaults for Advanced Settings unless otherwise advised by DataCore Support.
Setting | Default | Description
---|---|---
Transport Protocol | TCP | Supported transport protocol (TCP or UDP).
Storage Port | 80 | Required. Network port for traffic to Swarm Storage nodes.
Search Port | 9200 | Required. Network port for traffic to Swarm Search nodes.
Security | sys | Remote Procedure Call (RPC) security type (sys, krb5, krb5i, krb5p).
Maximum Storage Connections | 100 | Maximum number of open connections to Swarm Storage. (v2.0)
Retries | 5 | (positive integer) How many times SwarmFS retries unsuccessful requests to Swarm and Swarm Search before giving up.
Retries Timeout | 90 | (seconds) How long SwarmFS waits before timing out Swarm retries.
Request Timeout | 90 | (seconds) How long SwarmFS waits before timing out Swarm requests. For best results, set this timeout to at least twice the value of the Storage setting scsp.keepAliveInterval (see the sanity-check sketch after this table).
Pool Timeout | 300 | (seconds) How long discovered Swarm storage nodes are remembered.
Write Timeout | 90 | (seconds) How long SwarmFS waits for a write to Swarm to complete before retrying.
Read Buffer Size | 128000000 | (bytes) Defaults to 128 MB, for general workloads. The amount of data to be read each time from Swarm. If the read buffer size is greater than the client request size, the difference is cached by SwarmFS and the next client read request is served directly from cache, if possible. Set to 0 to disable read-ahead buffering. Improving performance: set each export's Read Buffer Size to match the workload expected on that share.
Parallel Read Buffer Requests | 4 | (positive integer) Adjust to tune the performance of large object reads; the default of 4 reflects the optimal number of threads, per performance testing. (v2.3)
Maximum Part Size | 64000000 | (bytes) How large each part of erasure-coded (EC) objects may be. Increase (such as to 200 MB, or 200000000) to create smaller EC sets for large objects and so increase throughput for high volumes of large files. (v2.3)
Collector Sleep Time | 1000 | (milliseconds) Increase to minimize object consolidation time by directing SwarmFS to collect more data before pushing it to Swarm, at the expense of both RAM and read performance, as SwarmFS slows clients when running out of cache. Increase this value if the implementation is sensitive to how quickly the Swarm health processor consolidates objects, which cannot be guaranteed. (v2.3)
Maximum Buffer Memory | 2000000000 | (bytes) Defaults to 2 GB. Maximum limit that can be allocated for the export's buffer pool. Once exceeded, client requests are temporarily blocked until total buffer usage falls back below this number. (v2.0)
Buffer High Watermark | 1500000000 | (bytes) Once the allocated export buffers reach this watermark, SwarmFS starts to free buffers in an attempt to stay below Maximum Buffer Memory. During this time, client requests may be delayed. (v2.0)
File Access Time Policy | "relatime" | Policy for when to update a file's access time stamp (atime). (v2.0)
Elasticsearch Buffer Refresh Time | 60 | (seconds) How rapidly non-SwarmFS object updates are reflected in SwarmFS listings. Lower to reduce the wait for consistency, at the cost of increased load on Elasticsearch. (v2.3)
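Two of the relationships above lend themselves to a quick sanity check before restarting the NFS service: Request Timeout should be at least twice the Storage setting scsp.keepAliveInterval, and the Buffer High Watermark must sit below Maximum Buffer Memory. A small illustrative sketch; the buffer values are the defaults from the table, and the keep-alive value is a placeholder to be replaced with the cluster's actual setting.

```python
# Defaults taken from the Advanced Settings table above.
request_timeout = 90                   # seconds (Request Timeout)
scsp_keep_alive_interval = 45          # seconds; placeholder for the cluster's scsp.keepAliveInterval
maximum_buffer_memory = 2_000_000_000  # bytes (Maximum Buffer Memory)
buffer_high_watermark = 1_500_000_000  # bytes (Buffer High Watermark)

# Request Timeout should be at least 2x scsp.keepAliveInterval.
assert request_timeout >= 2 * scsp_keep_alive_interval, \
    "Raise Request Timeout to at least twice scsp.keepAliveInterval"

# The high watermark is where SwarmFS starts freeing buffers, so it must
# stay below the hard Maximum Buffer Memory limit.
assert buffer_high_watermark < maximum_buffer_memory, \
    "Buffer High Watermark must be below Maximum Buffer Memory"

print("Advanced Settings sanity checks passed")
```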
...