Large files in Swarm

Even in an empty cluster, there is a limit to the size of file you can store: the largest file can be at most 1/5 of the available disk space on the volume. Used space will reflect an extra 2x this single largest file size. The space reserved is 2x the largest object ever seen on a per-volume basis, whether or not that large stream has since been deleted. As of version 6.1.3, the space reserved is only 2x the size of the largest stream currently on the disk, which can mean a large savings in used space.

This space is reserved for defragmentation of the volume.
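
To make the arithmetic concrete, here is a minimal sketch in Python. The function names and the 10 TB example volume are illustrative only; the 1/5 maximum-file and 2x reservation factors come from the rules described above:

```python
# Sketch of the per-volume sizing rules described above.
# The 1/5 and 2x factors are taken from the text; names are illustrative.

def max_file_size(available_bytes: int) -> int:
    """Largest single file a volume can accept: 1/5 of available space."""
    return available_bytes // 5

def defrag_reservation(largest_stream_bytes: int) -> int:
    """Space reserved for volume defragmentation: 2x the largest stream."""
    return 2 * largest_stream_bytes

# Example: a volume with 10 TB of available space.
available = 10 * 1000**4                    # 10 TB in bytes
largest = max_file_size(available)          # 2 TB max single file
reserved = defrag_reservation(largest)      # 4 TB counted as used space

print(f"Max single file: {largest / 1000**4:.1f} TB")
print(f"Reserved (used): {reserved / 1000**4:.1f} TB")
```

In this example, a 10 TB volume can accept at most a 2 TB file, and a further 4 TB appears as used space to cover defragmentation of that file.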

The above information does not apply when erasure coding is configured. See the current Swarm documentation topic Working with Large Objects for more information.

Related content

Working with Large Objects
Used capacity vs Licensed capacity in Swarm
Multipart Write
Used space shows as 0 bytes for my Swarm VM
Supported Amazon S3 Features
How do I fix trapped space?
