...

Based on block sizes: for example, backing up 1.2 TB consumes about 4.2 million objects on Swarm, which works out to an average of roughly 286 KB per object. This means the default configuration does not use Erasure Coding, since EC applies only to objects larger than 1 MB.
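As a rough illustration of that arithmetic, the sketch below checks the average object size against the 1 MB EC threshold. The 1.2 TB and 4.2 million figures come from the example above; the variable names and decimal-unit assumption are illustrative only:

```python
# Sketch: estimate average object size and whether Swarm Erasure Coding applies.
# Figures (1.2 TB backup -> 4.2 million objects, 1 MB EC threshold) are from the
# example above; everything else is an illustrative assumption.

EC_THRESHOLD_BYTES = 1 * 1024 ** 2   # EC applies only above ~1 MB (per the text)

backup_bytes = 1.2 * 1000 ** 4       # 1.2 TB, decimal units assumed
object_count = 4.2e6                 # objects created on Swarm

avg_object_bytes = backup_bytes / object_count
print(f"Average object size: {avg_object_bytes / 1024:.0f} KiB")
print("Erasure Coding applies:", avg_object_bytes > EC_THRESHOLD_BYTES)
# -> roughly 280 KiB per object, well below the 1 MB threshold,
#    so whole-object replication is used instead of EC.
```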

The block size can be increased at the expense of a lower data reduction ratio (deduplication and compression) on the Performance Tier. Because larger blocks produce fewer objects, this also reduces the RAM and disk spooler size requirements for Elasticsearch.
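A minimal sketch of that trade-off, assuming the object count scales inversely with block size (real counts also depend on compression, deduplication, and metadata overhead, which are ignored here):

```python
# Sketch: how larger block sizes shrink the Swarm object count, and with it
# the Elasticsearch metadata load. Assumes object count is inversely
# proportional to block size; data-reduction effects are deliberately ignored.

backup_bytes = 1.2 * 1000 ** 4       # 1.2 TB, from the example above
baseline_objects = 4.2e6             # objects at the default block size

baseline_block_kb = backup_bytes / baseline_objects / 1024
for factor in (1, 2, 4, 8):
    block_kb = baseline_block_kb * factor
    objects = baseline_objects / factor
    print(f"block ~{block_kb:,.0f} KB -> ~{objects / 1e6:.2f} M objects")
# Doubling the block size halves the object count that
# Elasticsearch must index, at the cost of poorer data reduction.
```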

...