...
Based on block sizes, for example, backing up 1.2 TB consumes 4.2 million objects on Swarm. It also means that the default configuration does not use Erasure Coding, since EC applies only to objects greater than 1 MB by default.
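As a rough back-of-envelope check (a sketch using only the figures quoted above; the per-object size is derived from them rather than measured), the implied average object size sits well below the 1 MB Erasure Coding threshold:

```python
# Back-of-envelope check using the figures above (1.2 TB, 4.2 million objects).
# Assumption: objects correspond roughly one-to-one to backup blocks; the
# per-object size below is derived from the quoted numbers, not measured.

backup_size_bytes = 1.2 * 1024**4    # 1.2 TB backup
object_count = 4.2e6                 # objects consumed on Swarm
ec_threshold_bytes = 1 * 1024**2     # default Erasure Coding threshold (1 MB)

avg_object_bytes = backup_size_bytes / object_count
print(f"Average object size: {avg_object_bytes / 1024:.0f} KB")
print(f"Eligible for Erasure Coding: {avg_object_bytes > ec_threshold_bytes}")
# Roughly 300 KB per object, so the default configuration replicates whole
# objects instead of erasure-coding them.
```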
The block size can be increased at the expense of a lower data reduction ratio (deduplication and compression) on the Performance Tier. Larger blocks mean fewer objects, which reduces the RAM and disk spooler size requirements for Elasticsearch.
Ultimately, the choice depends on the customer use case: it trades the storage deduplication/compression ratio against the capacity footprint of using Erasure Coding on larger objects on the capacity extent, as illustrated in the sketch below.
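To make the trade-off concrete, the minimal sketch below sweeps a few hypothetical block sizes and shows how the resulting object count (the main driver of Elasticsearch RAM and disk spooler sizing) shrinks as blocks grow, and at which point objects cross the 1 MB Erasure Coding threshold. The 1.2 TB figure is reused from the example above; the block sizes and the one-object-per-block assumption are illustrative only and ignore data reduction.

```python
# Illustrative sweep: block size vs. object count and Erasure Coding eligibility.
# Assumptions: 1.2 TB of backup data, one object per block with no data
# reduction applied, and the default 1 MB Erasure Coding threshold.

backup_size_bytes = 1.2 * 1024**4
ec_threshold_bytes = 1 * 1024**2

for block_kb in (256, 512, 1024, 4096):
    block_bytes = block_kb * 1024
    objects_millions = backup_size_bytes / block_bytes / 1e6
    uses_ec = block_bytes > ec_threshold_bytes
    print(f"{block_kb:>5} KB blocks -> {objects_millions:5.1f} M objects, "
          f"Erasure Coding: {'yes' if uses_ec else 'no'}")
```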
Refer to the Swarm documentation topic "Scaling Elasticsearch" for more information on sizing Elasticsearch.
...