The NFS page in the Swarm Storage UI allows creating and managing NFS server groups and exports.
Info |
---|
Important: The storage cluster's default domain must be created before configuring SwarmFS. This domain has the same name as the |
Create separate groups (sets) of SwarmFS that are configured in pools; this enables support for different clients and optimization for different roles. Set some configuration settings locally to override global configuration settings.
Why have different server groups? These are situations for which it may be ideal to keep separate groups:
Include DEBUG level logging
Change the log file location
Add local resource restrictions
Change interface or IP address bindings
Reduce maximum threads or open/concurrent client connections
While every SwarmFS server retrieves the global configuration file stored within Swarm, each server group can optionally override the global settings with a separate configuration file.
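Conceptually, a group-level override takes precedence over the global configuration for any setting the group defines, while all other settings fall back to the global values. The following is an illustrative sketch only; the setting names are hypothetical examples, not actual SwarmFS configuration keys.

```python
# Conceptual sketch: group settings override global settings key by key.
# The setting names below are hypothetical, not SwarmFS configuration keys.

global_settings = {
    "log_level": "INFO",
    "log_file": "/var/log/swarmfs.log",
    "max_threads": 64,
}

group_overrides = {
    "log_level": "DEBUG",                      # e.g. a group kept separate for DEBUG logging
    "log_file": "/var/log/swarmfs-debug.log",  # e.g. a changed log file location
}

# Effective configuration for servers in this group: global values,
# with any group-defined keys taking precedence.
effective_settings = {**global_settings, **group_overrides}

print(effective_settings)
# {'log_level': 'DEBUG', 'log_file': '/var/log/swarmfs-debug.log', 'max_threads': 64}
```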
...
The resulting group is a container for exports sharing a common configuration:
...
Name | Supply only a name, which is a description, when adding a Server Group; the unique identifier is the count (such as / The new group appears at or near the end of the listing, ready to be configured with exports. | ||
---|---|---|---|
Configuration URL | Each NFS Server Group has a unique Configuration URL, which can be clicked to view the current export definitions (see the sketch after this table). These are the auto-generated and auto-maintained JSON settings stored by Swarm for the group. The configuration is empty until one or more exports are added.
|
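For a quick look outside the UI, the group's Configuration URL can be fetched with any HTTP client. The following is a minimal sketch; the URL shown is a placeholder for the actual Configuration URL copied from the group's listing.

```python
# Minimal sketch: fetch and pretty-print a Server Group's JSON export
# definitions from its Configuration URL. The URL below is a placeholder;
# use the actual Configuration URL shown for the group in the Swarm UI.
import json
import urllib.request

CONFIG_URL = "http://swarm.example.com/path/to/group/config"  # placeholder

with urllib.request.urlopen(CONFIG_URL) as response:
    config = json.load(response)

# An empty result is expected until one or more exports have been added.
print(json.dumps(config, indent=2))
```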
...
Listing service: Each export is specific to one and only one Swarm bucket, but clients viewing the mounted directory are able to view, create, and use virtual directories within it via the prefix feature of Swarm named objects (myvirtualdirectory/myobjectname.jpg).
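For example, writing an object whose name carries a prefix makes it appear inside a virtual directory of the mounted export. The sketch below assumes direct HTTP access to a Swarm Storage node; the host, domain, and bucket names are placeholders, and deployments that route writes through Content Gateway would use its endpoint and authentication instead.

```python
# Minimal sketch: a named object whose name contains a prefix
# ("myvirtualdirectory/") appears inside a virtual directory of the
# mounted export. Host, domain, and bucket below are placeholders.
import urllib.request
from pathlib import Path

NODE = "swarm-node.example.com"    # placeholder storage node
DOMAIN = "cluster.example.com"     # placeholder domain
BUCKET = "mybucket"                # placeholder bucket
NAME = "myvirtualdirectory/myobjectname.jpg"

request = urllib.request.Request(
    f"http://{NODE}/{BUCKET}/{NAME}?domain={DOMAIN}",
    data=Path("myobjectname.jpg").read_bytes(),
    method="PUT",
    headers={"Content-Type": "image/jpeg"},
)
with urllib.request.urlopen(request) as response:
    # NFS clients now see myvirtualdirectory/ as a directory in the mount.
    print(response.status)
```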
...
Name | Unique name for the export, to distinguish it from others in Swarm UI listings. | |
---|---|---|
Storage IP(s) or DNS name(s) | The IP address(es) or DNS-resolvable hostname(s) for one or more Swarm Storage nodes. | |
Search host(s) | (For backwards compatibility) Optional as of version 3.0. The IP addresses or DNS-resolvable hostnames for one or more Swarm Elasticsearch servers. Note: Both Gateway and SwarmFS use the Primary (default) search feed. If a new feed is made Primary, these servers must be restarted. | |
Search index | (For backwards compatibility) Optional as of version 3.0. The unique alias name of the Primary (default) search feed. Locate this value as the Alias field in the primary search feed's definition. | |
Export path | Case-sensitive. Unique pseudo filesystem path for the NFS export. Cannot be set to a single slash ("/"). | |
Scope | Domain | Specifies where the data written via the export is associated: which domain and bucket to use. Important: Verify the existence of the domain and bucket specified here (a verification sketch follows this table).
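One possible way to verify the Scope before creating the export is an HTTP HEAD request against the storage cluster. The sketch below is illustrative only: the host, domain, and bucket names are placeholders, it assumes direct HTTP access to a storage node, and deployments that route requests through Content Gateway would use its endpoint and credentials instead.

```python
# Minimal sketch: check that the bucket named in the export's Scope exists
# in the specified domain before creating the export. Host, domain, and
# bucket are placeholders; a 200 response indicates the bucket exists.
import urllib.error
import urllib.request

NODE = "swarm-node.example.com"    # placeholder storage node
DOMAIN = "cluster.example.com"     # placeholder domain
BUCKET = "mybucket"                # placeholder bucket

request = urllib.request.Request(
    f"http://{NODE}/{BUCKET}?domain={DOMAIN}", method="HEAD")
try:
    with urllib.request.urlopen(request) as response:
        print("bucket found:", response.status == 200)
except urllib.error.HTTPError as error:
    print("bucket or domain not found:", error.code)
```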
Info |
---|
Quick Setup: For the remaining setup sections, few changes are usually needed:
|
...
Info |
---|
Tip: Each SwarmFS export created to use the Content Gateway can have an entirely different security method, as needed by the use case. |
Session Token | Token Admin Credentials by Login / Token Admin Credentials by Token | User, Password, Expiration / Token, Expiration
---|---|---|
Single User | Authenticate by Login / Authenticate by Token | User, Password / Token
Pass-through / None | n/a |
...
This optional section allows access control customization both globally (for this export) and for specific clients.
Access type | Defaults to full read/write access. These other access restrictions are available:
|
---|---|
Squash | Defaults to no squashing (allows all user IDs).
|
Squash user id (uid) mapping | User ID and Group ID can be set when the NFS server is authenticating users from different authentication sources and/or it is desired that all files have a consistent user/group (see the mapping sketch after this table). Typical situations:
|
Client(s) | As needed, customize the access for one or more specific clients. Note: These override the settings specified above, if any. |
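Conceptually, squashing replaces the client-supplied IDs with the configured ones before they are applied to files. The following is a generic illustration of that mapping logic, not SwarmFS code; the mode names and anonymous IDs are standard NFS conventions used here as assumptions.

```python
# Conceptual illustration of squash behavior (not SwarmFS code):
# "root" squash remaps only uid/gid 0, "all" squash remaps every client,
# and "none" leaves the client-supplied IDs untouched.

def squash(uid: int, gid: int, mode: str = "none",
           anon_uid: int = 65534, anon_gid: int = 65534):
    if mode == "all":
        return anon_uid, anon_gid
    if mode == "root" and uid == 0:
        return anon_uid, anon_gid
    return uid, gid

print(squash(0, 0, mode="root"))        # (65534, 65534) - root remapped
print(squash(1001, 1001, mode="root"))  # (1001, 1001)   - unchanged
print(squash(1001, 1001, mode="all"))   # (65534, 65534) - consistent owner for all files
```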
...
When users attempt to access files and directories, SwarmFS checks the IDs to verify that they have permission to access the objects, and it uses these IDs as the owner and group owner for any new files and directories that they create.
For each export, the default User, Group, and ACL Mode can be customized for the export mount, directories, and files. These settings only apply to externally created objects and synthetic folders without POSIX permissions attached to the object as standardized metadata. User and Group values must be entered as ASCII text, not numeric IDs.
Info |
---|
Tip: The ACL mode must be entered as an octal, such as 664 or 0664. Use the Chmod Calculator at http://permissions-calculator.org to generate the octal code that corresponds to the read/write/execute permissions to apply. |
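As a worked example, the octal mode can also be derived directly from the read/write/execute bits; the short Python sketch below shows how 0664 maps to rw-rw-r--.

```python
# Worked example: the octal mode 0664 corresponds to rw-rw-r--
# (owner read+write, group read+write, others read-only).
import stat

mode = (stat.S_IRUSR | stat.S_IWUSR |   # owner: read + write = 6
        stat.S_IRGRP | stat.S_IWGRP |   # group: read + write = 6
        stat.S_IROTH)                   # other: read         = 4

print(oct(mode))                           # 0o664
print(stat.filemode(stat.S_IFREG | mode))  # -rw-rw-r--
```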
...
Info |
---|
Using x-owner-meta: The export's selected interface and access method determine whether to use |
...
Info |
---|
Important: Use these recommended defaults for all of the Advanced Settings unless otherwise advised by DataCore Support.
Transport protocol | TCP | Supported transport protocol (TCP | UDP)
---|---|---|
Storage port | 80 | Required. Network port for traffic to Swarm Storage nodes |
Search port | 9200 | Required. Network port for traffic to Swarm Search nodes |
Security | sys | Remote Procedure Call (RPC) security type (sys | krb5 | krb5i | krb5p) |
Maximum storage connections | 100 | Maximum number of open connections to Swarm Storage. (v2.0) |
Retries | 5 | (positive integer) How many times SwarmFS retries unsuccessful requests to Swarm and Swarm Search before giving up.
Retries timeout | 90 | (seconds) How long SwarmFS waits before timing out Swarm retries.
Request timeout | 90 | (seconds) How long SwarmFS waits before timing out Swarm requests. For best results, set this timeout to at least twice the value of the Storage setting scsp.keepAliveInterval (see the sanity-check sketch after this table).
Pool timeout | 300 | (seconds) How long discovered Swarm storage nodes are remembered. |
Write timeout | 90 | (seconds) How long SwarmFS waits for a write to Swarm to complete before retrying.
Read buffer size | 128000000 | (bytes) Defaults to 128 MB, for general workloads. The amount of data to be read each time from Swarm. If the read buffer size is larger than the client request size, the difference is cached by SwarmFS, and the next client read request is served directly from cache, if possible. Set to 0 to disable read-ahead buffering. Improving performance: Set each export's Read Buffer Size to match the workload expected on that share.
|
Parallel read buffer requests | 4 | (positive integer) Adjust to tune the performance of large object reads; the default of 4 reflects the optimal number of threads, per performance testing. (v2.3) |
Maximum part size | 64000000 | (bytes) How large each part of erasure-coded (EC) objects may be. Increase (such as to 200 MB, or 200000000) to create smaller EC sets for large objects and so increase throughput for high volumes of large files. (v2.3) |
Collector sleep time | 1000 | (milliseconds) Increase to minimize object consolidation time by directing SwarmFS to collect more data before pushing it to Swarm, at the expense of both RAM and read performance, as SwarmFS slows clients when running out of cache. Increase this value if the implementation is sensitive to how quickly the Swarm health processor consolidates objects, which cannot be guaranteed. (v2.3)
Maximum buffer memory | 2000000000 | (bytes) Defaults to 2 GB. Maximum limit that can be allocated for the export's buffer pool. Once exceeded, client requests are temporarily blocked until total buffers fall back below this number. (v2.0)
Buffer high watermark | 1500000000 | (bytes) Once the allocated export buffers reach this watermark, SwarmFS starts to free buffers in an attempt to stay below the Maximum buffer memory setting. During this time, client requests may be delayed. (v2.0)
File access time policy | "relatime" | Policy for when to update a file's access time stamp (atime). (v2.0)
|
Elasticsearch buffer refresh time | 60 | (seconds) How rapidly non-SwarmFS object updates are reflected in SwarmFS listings. Lower to reduce the wait for consistency, at the cost of increased load on Elasticsearch. (v2.3) |
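As a cross-check of the relationships stated in the table above, the following sketch encodes them as simple assertions. The variable names and values are illustrative placeholders, not a SwarmFS configuration format.

```python
# Illustrative sanity checks for the relationships described in the table
# above. Variable names and values are placeholders, not SwarmFS keys.

scsp_keep_alive_interval = 45      # Swarm Storage setting scsp.keepAliveInterval (seconds)
request_timeout = 90               # should be at least twice scsp.keepAliveInterval
read_buffer_size = 128_000_000     # bytes; size to the workload expected on the share
buffer_high_watermark = 1_500_000_000
maximum_buffer_memory = 2_000_000_000

assert request_timeout >= 2 * scsp_keep_alive_interval, \
    "Request timeout should be at least twice scsp.keepAliveInterval"
assert buffer_high_watermark < maximum_buffer_memory, \
    "SwarmFS frees buffers at the watermark to stay below the maximum"

# A value of 0 disables read-ahead buffering (e.g. for random-access workloads).
if read_buffer_size == 0:
    print("Read-ahead buffering disabled")
```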
...