The amount of RAM and the disk spool size that Elasticsearch needs depend on the block size chosen for Veeam. By default, Veeam uses block sizes of 64 KB to 1 MB.

With these block sizes, backing up 1.2 TB will, for example, produce roughly 4.2 million objects on Swarm. It also means that the default configuration will not use Erasure Coding, since Erasure Coding applies by default only to objects larger than 1 MB.
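As a rough arithmetic check on this example (a minimal sketch; the binary TB-to-bytes conversion is an assumption, and the figures come from the numbers above, not from measurements), dividing the protected data size by the object count gives the effective average object size, which sits well below the 1 MB Erasure Coding threshold:

```python
# Rough check of the example above: average object size implied by
# 1.2 TB of backups producing ~4.2 million Swarm objects.
# Assumes binary TB (1 TB = 1024**4 bytes); decimal TB gives a similar result.
protected_bytes = 1.2 * 1024**4
object_count = 4.2e6

avg_object_kb = protected_bytes / object_count / 1024
print(f"Average object size: ~{avg_object_kb:.0f} KB")   # ~307 KB

# Well below 1 MB, so these objects fall under the default Erasure Coding
# threshold and Erasure Coding is not used, matching the note above.
```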

The block size can be increased at the expense of a lower data reduction ratio (deduplication and compression) on the Performance Tier. Because fewer, larger objects are written, this reduces the RAM and disk spool size requirements for Elasticsearch.
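The effect can be illustrated with a small sketch (assumptions only: it ignores deduplication, and the block-size values are typical Veeam storage-optimization options rather than figures from this page). The object count that Elasticsearch has to index falls roughly in proportion to the block size:

```python
# Minimal sketch: how block size drives the Swarm object count that
# Elasticsearch must index. Ignores deduplication, so these are rough
# upper-bound estimates, not Veeam-measured values.
def estimated_objects(protected_bytes: int, block_bytes: int) -> int:
    return -(-protected_bytes // block_bytes)   # ceiling division

protected = 1_200_000_000_000               # ~1.2 TB of protected data (decimal TB assumed)
for block_kb in (512, 1024, 4096):          # typical Veeam block-size options
    count = estimated_objects(protected, block_kb * 1024)
    print(f"{block_kb:>5} KB blocks -> ~{count / 1e6:.1f} M objects")
    # 512 KB -> ~2.3 M, 1024 KB -> ~1.1 M, 4096 KB -> ~0.3 M
```

Larger blocks also push more objects above the 1 MB Erasure Coding threshold, which is the trade-off described below.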

Ultimately, the choice depends on the customer use case: trading the storage deduplication/compression ratio against the capacity footprint of using Erasure Coding on larger objects on the capacity extent.

Refer to the Swarm Documentation for more information on sizing Elasticsearch.
