The amount of RAM and the disk spool size that Elasticsearch needs depend on the block size chosen in Veeam. By default, Veeam uses block sizes between 64 KB and 1 MB.

With these block sizes, backing up 1.2 TB, for example, produces about 4.2 million objects on Swarm. This means the default configuration does not use Erasure Coding, since Erasure Coding applies only to objects larger than 1 MB.
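A quick arithmetic sketch can sanity-check these figures. The numbers below (1.2 TB, 4.2 million objects, the 1 MB Erasure Coding threshold) come from this page; the calculation itself is illustrative, not a Veeam or Swarm formula.

```python
# Average object size for a 1.2 TB backup stored as ~4.2 million
# Swarm objects (figures from this page).
TB = 10**12  # decimal terabyte, in bytes

backup_bytes = 1.2 * TB
object_count = 4.2e6

avg_object_bytes = backup_bytes / object_count
print(f"Average object size: {avg_object_bytes / 1024:.0f} KB")

# Erasure Coding applies only to objects larger than 1 MB, so at this
# average size the default configuration stores whole (replicated) objects.
ec_threshold_bytes = 1024 * 1024
print("Below EC threshold:", avg_object_bytes < ec_threshold_bytes)
```

The average works out to roughly 280 KB per object, well under the 1 MB Erasure Coding threshold.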

The block size can be increased at the expense of a lower data reduction ratio (deduplication and compression) on the Performance Tier. Larger blocks mean fewer objects, which reduces the RAM and disk spool size requirements for Elasticsearch.
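The effect of block size on object count can be sketched as follows. The block sizes and the 1.2 TB backup below are example values for illustration, not recommendations, and the one-object-per-block estimate is a rough upper bound that ignores deduplication and compression.

```python
# Illustrative: how the Swarm object count (and with it the
# Elasticsearch metadata load) shrinks as the Veeam block size grows.
TB = 10**12
backup_bytes = 1.2 * TB

for block_kb in (256, 1024, 4096):
    block_bytes = block_kb * 1024
    # Rough upper bound: one Swarm object per Veeam block.
    objects = backup_bytes / block_bytes
    print(f"{block_kb:>5} KB blocks -> ~{objects / 1e6:.2f} M objects")
```

Quadrupling the block size cuts the estimated object count, and therefore the Elasticsearch index footprint, by roughly a factor of four.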

Ultimately, the choice depends on the customer's use case: trading the storage deduplication/compression ratio against the capacity footprint of applying Erasure Coding to larger objects on the capacity extent.

Refer to the Swarm Documentation for more information on sizing Elasticsearch.
