...
| Benefit | Summary | Details |
|---|---|---|
| Productivity | Store, access, and manage files | Data portability: multi-protocol in/out through NFSv4, S3, HDFS, or HTTP; read data written through FileFly<br>Streams data directly to and from Swarm, with no local gateway staging or spooling<br>Brings rich custom object metadata to files through NFS<br>Mount domains, buckets, or views filtered by custom object metadata |
| Less Risk | Security and scale with no single point of failure | Limitless scale<br>Rapid scaling through physical servers, VMs, or appliances<br>No storage or protocol silos<br>No read latency from data staging<br>Multiple SwarmFS instances managed through a single pane of glass<br>Security settings in Swarm propagate through all protocols<br>Built-in active/active HA that requires no local disk and no clustering<br>Automatic client resume: if communication between the client and SwarmFS is interrupted, the client resumes where it left off |
| Lower TCO | Leverage Swarm scale-out storage | High availability and data protection are standard, automated features<br>Continuous protection with seamless movement between replication and erasure coding<br>Eliminates the need for backups<br>Leverages Swarm's automated, policy-based data management<br>Automatically replicates content to a remote site for distribution<br>Manages files from creation to expiration<br>WORM, Legal Hold, and Integrity Seals are standard |
...
Traditional gateways and connectors must stage objects as files on the gateway's local disk, so disk space on the gateway server becomes a limitation. SwarmFS does not cache or stage files/objects on local disk; instead, it streams data directly to and from Swarm. This eliminates the performance overhead of writing complete objects to a local staging disk, and it removes the risk of losing data if the gateway crashes before data is spooled off to the object store.
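The no-staging behavior described above can be sketched as chunked forwarding: data is read from the client in fixed-size pieces, and each piece is passed straight through to the object store, so nothing accumulates on a local disk. The sketch below is only an illustration; the chunk size and the in-memory streams are stand-ins, not SwarmFS internals or APIs.

```python
import io

CHUNK_SIZE = 64 * 1024  # hypothetical chunk size, for illustration only

def stream_chunks(src, sink, chunk_size=CHUNK_SIZE):
    """Forward data from src to sink chunk by chunk, never
    materializing the whole object in memory or on a staging disk."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        sink.write(chunk)  # in SwarmFS this write goes directly to Swarm
        total += len(chunk)
    return total

# Demo with in-memory streams standing in for the client and Swarm:
src = io.BytesIO(b"x" * 200_000)
sink = io.BytesIO()
assert stream_chunks(src, sink) == 200_000
```

Because each chunk is forwarded as soon as it is read, a crash mid-transfer leaves no orphaned spool files behind on the gateway host.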
...
| Info |
|---|
| Given the stateless nature of Swarm and SwarmFS, file locking exists only within a single SwarmFS server. |
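To illustrate the single-server scope of locking, a client-side write under a POSIX advisory lock might look like the sketch below. The lock is coordinated within the one SwarmFS server behind the mount, so a client mounted through a different SwarmFS instance would not observe it. The helper function and the temp-file demo are hypothetical, not SwarmFS APIs.

```python
import fcntl
import tempfile

def exclusive_write(f, data):
    """Write under an exclusive advisory lock.

    On SwarmFS, this lock is tracked only by the single SwarmFS server
    serving the NFS mount; clients mounted via a different SwarmFS
    instance would not see it (assumption based on the note above).
    """
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    try:
        f.write(data)
        f.flush()
    finally:
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)

# Demo with a local temp file standing in for a file on a SwarmFS mount:
with tempfile.NamedTemporaryFile() as f:
    exclusive_write(f, b"payload")
```

Applications that need cross-client locking should therefore route all locking clients through the same SwarmFS server.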
...