SwarmFS 2 Releases

SwarmFS 2.4

With the 2.4 release, SwarmFS adds support for Swarm 11 and Elasticsearch 6.

  • Added support for Elasticsearch 6.8.6. (NFS-808)

  • Swarm NFS 2.4 supports Swarm Storage 11.0 and higher. (NFS-804)

Use the supported versions of Swarm components for the target version of Elasticsearch:

SwarmFS      Elasticsearch   Swarm Storage   Gateway
2.4          6.8.6           11.1            6.3
2.4          5.6.12          10.0 - 11.1     6.0 - 6.3
2.1          2.3.3           10.0 - 11.1     5.4

Upgrading

  1. Follow the upgrade guidance for the specific configuration required across components.

  2. When migrating Elasticsearch, complete the SwarmFS-specific steps of the migration procedure.

Known Issues

  • If, instead of updating, you perform a yum remove of SwarmFS and also remove its artifacts ("rm -rf /etc/ganesha"), the configuration file (/etc/ganesha/ganesha.conf) is not recreated on reinstall, causing the SwarmFS-config script to fail. Workaround: Save ganesha.conf and restore it to that directory. (NFS-778)

  • Orphaned "silly" files (of the form .nfsXXXX) may persist in directories, consuming space, if an application's file handling fails to clean up after unlinked files. Workaround: Add a cron job that periodically finds and removes such files. (NFS-764)

  • Do not use SwarmFS with a bucket that has versioning enabled. File writes can commit the object multiple times, resulting in an excessive number of versions. (NFS-753)

  • Externally-written custom headers may not appear in :metadata reads. Workaround: To trigger ES to pick up an external update, also set the X-Data-Modified-Time-Meta header to the current time (in seconds since epoch). (NFS-692)

  • Exports defined with different domains but the same bucket name do not operate as unique exports. (NFS-649)

  • An invalid bucket name entered for an export in the UI fails silently in SwarmFS (config reads, export generates, client mounts, 0-byte writes and directory operations appear to succeed) but fails on requests to Swarm Storage. (NFS-613)

  • The SwarmFS configuration script does not work with config URLs that use HTTPS and contain auth credentials for accessing Swarm through Gateway. (NFS-406)

  • On startup, SwarmFS may generate spurious but harmless WARN-level messages for configuration file parameters, such as: config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:17): Unknown parameter (Path). (NFS-289)

  • SwarmFS supports exclusive opens of a file (O_EXCL with O_CREAT) but does not support exclusive reopens (EXCLUSIVE4). (NFS-69)

  • To prevent problems resulting from SwarmFS disconnects or shutdowns, the Storage setting health.parallelWriteTimeout must be set to a non-zero value, such as 1209600 (2 weeks). (NFS-63)
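The cron-based cleanup suggested for the "silly" file issue (NFS-764) above might look like the following sketch; the mount point, retention window, and script location are assumptions, not part of the product:

```shell
#!/bin/sh
# Hypothetical cleanup for orphaned "silly" files (.nfsXXXX) on a SwarmFS
# export (NFS-764). Adjust the path and age threshold to your environment.
cleanup_silly_files() {
    dir="$1"    # mounted SwarmFS export to scan
    # delete .nfs* files that have not been modified for over 24 hours
    find "$dir" -type f -name '.nfs*' -mmin +1440 -print -delete
}

# Example crontab entry, assuming the export is mounted at /mnt/swarmfs:
#   0 2 * * * /usr/local/bin/cleanup_silly_files.sh /mnt/swarmfs
```

Review the matched files before enabling deletion in production; an age threshold that is too short can remove files an application is still using.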

SwarmFS 2.3

With the 2.3 release, SwarmFS includes several fixes. This release requires Gateway 6.0 with Elasticsearch 5.6, on Swarm 10. Remain on version 2.1 if still using Gateway 5.4 with Elasticsearch 2.3.3.

  • Credentials in the JSON file for exports are now handled via HTTPS, so they are encrypted during transmission. Note: credentials within the ganesha.conf file must be protected at the file-system level. (NFS-790)

  • SwarmFS has improved support for Windows clients by allowing empty directories to be created and immediately renamed, as happens with Windows File Explorer. (NFS-789)

  • SwarmFS now has a mechanism to prevent shares from mounting before content can be served. To enable this feature, add the new parameter, ExportAfterGrace = TRUE;, to the ganesha.conf file. (NFS-787)

  • Fixed: RHEL/CentOS 7.6 clients exhibited problems mounting SwarmFS 2.2. (NFS-781) 

  • Fixed: For export configurations, the defaultrootowner / defaultrootgroup and permission mode (in octal) were not being set correctly in the UI, and the link count was incorrect in the export directory inode. (NFS-783)
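For the NFS-787 change above, a minimal ganesha.conf fragment might look like this; only the parameter itself comes from the release note, and its top-level placement here is an assumption:

```
# /etc/ganesha/ganesha.conf fragment (sketch) -- prevent shares from
# mounting before content can be served (NFS-787). Placement shown here
# is an assumption.
ExportAfterGrace = TRUE;
```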

Known Issues

  • Do not use Swarm NFS 2.3 with Swarm 11.0. (NFS-804)

  • If, instead of updating, you perform a yum remove of SwarmFS and also remove its artifacts ("rm -rf /etc/ganesha"), the configuration file (/etc/ganesha/ganesha.conf) is not recreated on reinstall, causing the SwarmFS-config script to fail. Workaround: Save ganesha.conf and restore it to that directory. (NFS-778)

  • Orphaned "silly" files (of the form .nfsXXXX) may persist in directories, consuming space, if an application's file handling fails to clean up after unlinked files. Workaround: Add a cron job that periodically finds and removes such files. (NFS-764)

  • Do not use SwarmFS with a bucket that has versioning enabled. File writes can commit the object multiple times, resulting in an excessive number of versions. (NFS-753)

  • Externally-written custom headers may not appear in :metadata reads. Workaround: To trigger ES to pick up an external update, also set the X-Data-Modified-Time-Meta header to the current time (in seconds since epoch). (NFS-692)

  • Exports defined with different domains but the same bucket name do not operate as unique exports. (NFS-649)

  • An invalid bucket name entered for an export in the UI fails silently in SwarmFS (config reads, export generates, client mounts, 0-byte writes and directory operations appear to succeed) but fails on requests to Swarm Storage. (NFS-613)

  • The SwarmFS configuration script does not work with config URLs that use HTTPS and contain auth credentials for accessing Swarm through Gateway. (NFS-406)

  • On startup, SwarmFS may generate spurious but harmless WARN-level messages for configuration file parameters, such as: config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:17): Unknown parameter (Path). (NFS-289)

  • SwarmFS supports exclusive opens of a file (O_EXCL with O_CREAT) but does not support exclusive reopens (EXCLUSIVE4). (NFS-69)

  • To prevent problems resulting from SwarmFS disconnects or shutdowns, the Storage setting health.parallelWriteTimeout must be set to a non-zero value, such as 1209600 (2 weeks). (NFS-63)
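The NFS-692 workaround above can be sketched as a small shell helper that emits the extra header; the gateway URL and custom header in the usage example are placeholders, and using SCSP COPY as the metadata-update verb is an assumption:

```shell
#!/bin/sh
# Emit the X-Data-Modified-Time-Meta header set to the current epoch time so
# an externally written metadata change is re-indexed by Elasticsearch
# (NFS-692 workaround sketch).
touch_meta_header() {
    printf 'X-Data-Modified-Time-Meta: %s\n' "$(date +%s)"
}

# Example (placeholder URL and metadata header; COPY as the update verb is
# an assumption):
#   curl -X COPY "http://gateway.example.com/mybucket/myobject" \
#        -H "X-Custom-Meta: some-value" \
#        -H "$(touch_meta_header)"
```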

SwarmFS 2.2

With the 2.2 release, SwarmFS now fully supports and requires Gateway 6.0 with Elasticsearch 5.6, on Swarm 10.

Required

Remain on version 2.1 while using Gateway 5.4 with Elasticsearch 2.3.3.

Known Issues

  • RHEL/CentOS 7.6 clients exhibit problems mounting SwarmFS. Do not upgrade to this version until this issue is resolved. (NFS-781)

  • If, instead of updating, you perform a yum remove of SwarmFS and also remove its artifacts ("rm -rf /etc/ganesha"), the configuration file (/etc/ganesha/ganesha.conf) is not recreated on reinstall, causing the SwarmFS-config script to fail. Workaround: Save ganesha.conf and restore it to that directory. (NFS-778)

  • Orphaned "silly" files (of the form .nfsXXXX) may persist in directories, consuming space, if an application's file handling fails to clean up after unlinked files. Workaround: Add a cron job that periodically finds and removes such files. (NFS-764)

  • Do not use SwarmFS with a bucket that has versioning enabled. File writes can commit the object multiple times, resulting in an excessive number of versions. (NFS-753)

  • Externally-written custom headers may not appear in :metadata reads. Workaround: To trigger ES to pick up an external update, also set the X-Data-Modified-Time-Meta header to the current time (in seconds since epoch). (NFS-692)

  • Exports defined with different domains but the same bucket name do not operate as unique exports. (NFS-649)

  • An invalid bucket name entered for an export in the UI fails silently in SwarmFS (config reads, export generates, client mounts, 0-byte writes and directory operations appear to succeed) but fails on requests to Swarm Storage. (NFS-613)

  • On startup, SwarmFS may generate spurious but harmless WARN-level messages for configuration file parameters, such as: config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:17): Unknown parameter (Path). (NFS-289)

  • SwarmFS supports exclusive opens of a file (O_EXCL with O_CREAT) but does not support exclusive reopens (EXCLUSIVE4). (NFS-69)

  • To prevent problems resulting from SwarmFS disconnects or shutdowns, the Storage setting health.parallelWriteTimeout must be set to a non-zero value, such as 1209600 (2 weeks). (NFS-63)
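The NFS-63 requirement above amounts to a single Storage setting; a sketch of the corresponding node.cfg line follows (the file name and dotted-name form are assumptions about how the setting is applied in your cluster):

```
# node.cfg sketch (NFS-63): use a non-zero parallel write timeout so
# uncompleted parallel writes from a disconnected SwarmFS client are
# eventually cleaned up. 1209600 seconds = 2 weeks.
health.parallelWriteTimeout = 1209600
```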

SwarmFS 2.1

New Features and Changes

  • To generate performance data, SwarmFS now offers profile logging, a configuration option that is disabled by default and hidden from the UI. Enable this logging as directed by DataCore Support; once logs are generated, send them to DataCore Support, which has tools to analyze the read performance. (NFS-719)

  • SwarmFS has significantly improved the performance of sequential reads. (NFS-714)

  • Logging for audit purposes is improved. Open, delete, and rename operations generate NIV_EVENT-level messages in the standard SwarmFS log. (NFS-684)

  • When configuring SwarmFS exports, you can define the default Owner, Group, and ACL to apply to any objects and synthetic folders that were created externally without preset POSIX permissions attached via metadata. (NFS-610)

  • SwarmFS now has a global hard/soft memory limit to work in conjunction with each export's own configured limits, to make better use of NFS server resources. Multiple exports on a single server now share the globally allotted buffer memory, rather than each carving out a separate private buffer memory. (NFS-511)

  • SwarmFS supports the Linux cp command for copying metadata (cp file1:metadata file2:metadata) and data (cp file1:data file2:data), creating a new destination file with 0 bytes if needed. (NFS-469)
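The NFS-469 cp support above could be wrapped in a small helper like the following sketch; the helper name is ours, and on a real SwarmFS mount the :metadata and :data suffixes address an object's streams rather than literal file names:

```shell
#!/bin/sh
# Copy an object's metadata and data separately on a mounted SwarmFS export
# (NFS-469 sketch). The helper name is hypothetical.
copy_streams() {
    src="$1"; dst="$2"
    cp "${src}:metadata" "${dst}:metadata"   # copy only the custom metadata
    cp "${src}:data"     "${dst}:data"       # copy only the object data
}

# Example on a mounted export (placeholder path):
#   cd /mnt/swarmfs/myexport && copy_streams file1 file2
```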

Known Issues

  • Externally-written custom headers may not appear in :metadata reads. Workaround: To trigger ES to pick up an external update, also set the X-Data-Modified-Time-Meta header to the current time (in seconds since epoch). (NFS-692)

  • Exports defined with different domains but the same bucket name do not operate as unique exports. (NFS-649)

  • An invalid bucket name entered for an export in the UI fails silently in SwarmFS (config reads, export generates, client mounts, 0-byte writes and directory operations appear to succeed) but fails on requests to Swarm Storage. (NFS-613)

  • On startup, SwarmFS may generate spurious but harmless WARN-level messages for configuration file parameters, such as: config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:17): Unknown parameter (Path). (NFS-289)

  • SwarmFS supports exclusive opens of a file (O_EXCL with O_CREAT) but does not support exclusive reopens (EXCLUSIVE4). (NFS-69)

  • To prevent problems resulting from SwarmFS disconnects or shutdowns, the Storage setting health.parallelWriteTimeout must be set to a non-zero value, such as 1209600 (2 weeks). (NFS-63)

SwarmFS 2.0.2

  • Fixed: Issues existed with directories including spaces in names. (NFS-593)

SwarmFS 2.0.1

SwarmFS 2.0.1 must be used with a Swarm cluster running Storage 9.5+ and with Storage UI 1.2.4.

New Features and Changes

  • Performance is improved for how quickly external object updates appear in SwarmFS listings.

Known Issues

  • An invalid bucket name entered for an export in the UI fails silently in SwarmFS (config reads, export generates, client mounts, 0-byte writes and directory operations appear to succeed) but fails on requests to Swarm Storage. (NFS-613)

  • Cloud Security Authentication type Session Token is not available, although it appears as an option in the export definition.

  • Reading object metadata over NFS using a file's :metadata suffix is supported, but editing object metadata over NFS is not supported.

  • To prevent problems resulting from SwarmFS disconnections or shutdowns, the Storage setting health.parallelWriteTimeout must be set to a non-zero value, such as 1,209,600 (2 weeks). (NFS-63)

    • Note: changing this setting affects S3, which defaults to keeping uncompleted multipart uploads indefinitely.

  • To use SwarmFS with Storage 9.5.0, set scsp.keepAliveInterval = 45. For best results, set Request timeout for each export to 90, so it is at least twice the value of scsp.keepAliveInterval. (NFS-535, SWAR-7917)
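The NFS-535 guidance above pairs a Storage setting with a per-export timeout; a sketch follows (the node.cfg file name and dotted-name form are assumptions about how the setting is applied):

```
# node.cfg sketch for Storage 9.5.0 with SwarmFS (NFS-535, SWAR-7917)
scsp.keepAliveInterval = 45
# In each export's settings, set Request timeout to at least twice this
# value, e.g. 90 seconds.
```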

SwarmFS 2.0.0

SwarmFS 2.0.0 must be used with a Swarm cluster running Storage 9.5+ and with Storage UI 1.2.3.

New Features and Changes

  • Swarm Content Gateway is now supported. The SwarmFS export configuration in Storage UI now supports Content Gateway in addition to Direct to Swarm. The Cloud Security section of each export configuration allows setting up the authentication method that best fits the situation: Session Token (token admin credentials with expiration), Single User (user, password, and token), or Pass-through.

  • The defaults for NFS timeouts have been shortened to improve error handling. (UIS-775)

Known Issues

  • The default timeouts must be increased when creating an export in the UI: in the Advanced Settings, set the Retries Timeout, Request Timeout, and Write Timeout all to 90 seconds.

  • Cloud Security Authentication type Session Token is not available, although it appears as an option in the export definition.

  • Reading object metadata over NFS using a file's :metadata suffix is supported, but editing object metadata over NFS is not supported.

  • To prevent problems resulting from SwarmFS disconnections or shutdowns, the Storage setting health.parallelWriteTimeout must be set to a non-zero value, such as 1,209,600 (2 weeks). (NFS-63)

  • To use SwarmFS with Storage 9.5.0, set scsp.keepAliveInterval = 45. For best results, set Request timeout for each export to 90, so it is at least twice the value of scsp.keepAliveInterval. (NFS-535, SWAR-7917)

  • Issues exist with feeds defined to use a non-default admin password. (UIS-759)

  • Accessing unnamed objects is not supported.

© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.