An Elasticsearch index is divided into a set of shards: primary shards and replica (backup) shards. The shard count is configured when the index is created. Elastic recommends keeping shards no larger than about 50GB, so they are faster to update and to shuffle between nodes when necessary. They also recommend that a node with a 32GB heap store no more than about 600 shards. Although there are typically hundreds of metrics- and csmeter- shards, they shouldn't have much effect on performance because these time-based indices are small.

Swarm 12 allows you to set search.numberOfShards (default is 5) to a larger value, such as 20, if you know the search feed index will be very large (e.g. you will be storing a billion objects or a large amount of metadata).
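As a rough way to reason about the setting, you can work backward from Elastic's ~50GB-per-shard guidance. The helper below is a hypothetical sketch (not part of Swarm or Elasticsearch) that picks the smallest shard count keeping each shard under a target size:

```python
def recommended_shard_count(expected_index_gb, max_shard_gb=50):
    """Return the smallest primary shard count that keeps each shard
    at or below max_shard_gb, assuming data spreads evenly across shards."""
    # Ceiling division without floating point; always at least one shard.
    return max(1, -(-expected_index_gb // max_shard_gb))

# For example, an index expected to grow to ~1TB suggests 20 primary shards:
print(recommended_shard_count(1000))  # -> 20
```

This is only an estimate; actual shard sizes depend on how much metadata each object carries and how evenly documents distribute across shards.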

Before Elasticsearch 6 you could not increase the number of shards after an index was created. Now you can use the _split API. This is faster than creating a new search feed with the correct number of shards and waiting for it to populate, but note that it requires downtime while these steps complete and the new split index becomes ready. It also requires that you have enough Elasticsearch disk space for a copy of the current index.
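The _split flow looks roughly like the following sketch. The index names here are hypothetical placeholders; substitute your actual search feed index name, and note that the target shard count must be a whole multiple of the source's shard count (e.g. 5 to 20).

```
# 1. Block writes on the source index (required before splitting):
PUT /my-source-index/_settings
{ "settings": { "index.blocks.write": true } }

# 2. Split into a new index with more primary shards:
POST /my-source-index/_split/my-split-index
{ "settings": { "index.number_of_shards": 20 } }

# 3. Wait for the new index to report green before switching over:
GET /_cluster/health/my-split-index?wait_for_status=green
```

Because the split copies segment data, the cluster needs free disk for the second copy until you delete the source index.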