Elasticsearch requires configuration and settings file changes to be made consistently across the Elasticsearch cluster.


...

  • Upgrading Elasticsearch in place (using the same index) if it detects that a supported version (6.8.6) is already installed and configured

  • Editing /etc/elasticsearch/elasticsearch.yml (except for changing the path.data variable to use a different data directory)

  • Editing /etc/elasticsearch/log4j2.properties

  • Editing /usr/lib/systemd/system/elasticsearch.service

  • Editing /etc/sysconfig/elasticsearch 

  • Creating the override file for Systemd: /etc/systemd/system/elasticsearch.service.d/override.conf

Bulk
Usage

This method is most efficient if you have a large number of nodes and/or have manual configurations to apply to the elasticsearch.yml (see next section).

  1. On the first Elasticsearch node, run the configuration script provided in /usr/share/caringo-elasticsearch-search/bin/. This script prompts you for the needed values as it goes:

    Code Block
    languagebash
    /usr/share/caringo-elasticsearch-search/bin/configure_elasticsearch_with_swarm_search.py 
  2. The script generates custom configuration files for each of the nodes in the Elasticsearch cluster. (v10.x)

    • The current node's file is /etc/elasticsearch/elasticsearch.yml.

    • The other nodes' files (if any) are /etc/elasticsearch/elasticsearch.yml.<node-name-or-ip>

  3. Follow the Customization details (below) to update the YAML files further, such as to change Elasticsearch's path.data (data directory).

    Info
    titleLogging
    • Update log files to match your data path or other customizations.
    • Update the rollingfile appender to delete rotated log archives, to prevent running out of space. 
  4. For the next and all remaining nodes, complete these steps:

    1. On the next Elasticsearch node, copy over the appropriate file as /tmp/elasticsearch.yml.esnode8.

    2. With the YAML file in place, run the configuration script with the -c argument, so it uses the existing file. 

      Code Block
      languagebash
      configure_elasticsearch_with_swarm_search.py -c \
         /tmp/elasticsearch.yml.esnode8
    3. Go to the next node, if any.

  5. Resume the installation to turn on the service: Installing Elasticsearch or Migrating from Older Elasticsearch
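The copy-and-configure loop in step 4 can be sketched as a small shell script. The node names, the use of scp/ssh, and the paths are assumptions for illustration; the commands are only echoed here so they can be reviewed before anything is run:

```shell
#!/bin/sh
# Sketch only: node names and transport (scp/ssh) are assumptions.
# Commands are echoed, not executed, so the loop can be reviewed first.
NODES="esnode2 esnode3 esnode4"   # hypothetical remaining ES nodes
for node in $NODES; do
  # The generator names each file elasticsearch.yml.<node-name-or-ip>
  src="/etc/elasticsearch/elasticsearch.yml.${node}"
  echo "scp ${src} ${node}:/tmp/elasticsearch.yml.${node}"
  echo "ssh ${node} configure_elasticsearch_with_swarm_search.py -c /tmp/elasticsearch.yml.${node}"
done
```

Replace the echoed commands with real copies once the list of nodes is confirmed.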

Non-Bulk
Usage

  1. On the first Elasticsearch node, run the configuration script provided in /usr/share/caringo-elasticsearch-search/bin/. This script prompts you for the needed values as it goes:

    Code Block
    languagebash
    configure_elasticsearch_with_swarm_search.py 
  2. The script generates a custom /etc/elasticsearch/elasticsearch.yml configuration file for the current node, as well as files for each of the other nodes, which you can ignore. (v10.x)

  3. Follow the Customization details (below) if you need to update the YAML file further, such as to change Elasticsearch's path.data (data directory). 

    Info
    titleLogging
    • Update log files to match your data path or other customizations.
    • Update the rollingfile appender to delete rotated log archives, to prevent running out of space. 
  4. Run the script the same way on each remaining ES node, answering the prompts consistently and reapplying any manual configurations.

  5. Resume the installation to turn on the service: Installing Elasticsearch or Migrating from Older Elasticsearch

...

The paths given are relative to the Elasticsearch installation directory, which is assumed to be the working directory.

Info

Caution

  • Errors in adding and completing these settings can prevent the Elasticsearch service from working properly.

  • If you customize Elasticsearch's path.data location from the default, you must adjust all references to it below to reflect the new location.

Elasticsearch Config File

Info

Version differences

The Elasticsearch configuration settings have changed with each major release. To track how they changed since Elasticsearch 2.3.3, see Elasticsearch Configuration Differences.

Edit the Elasticsearch config file: /etc/elasticsearch/elasticsearch.yml

action.auto_create_index: "+csmeter*,+*_nfsconnector,.watches,
.triggered_watches,.watcher-history-*"

Disables automatic index creation except for the listed index patterns: csmeter indices, Swarm NFS connector indices, and watcher indices. (v10.1)

cluster.name: <ES_cluster_name>

Give the Elasticsearch cluster a unique name, which is unrelated to the Swarm cluster name. Do not use periods in the name.

Info

Important

To prevent merging, it must differ from the cluster.name of the legacy ES cluster, if you have one operating.

node.name: <ES_node_name>

Optional. Elasticsearch supplies a node name if one is not set. Do not use periods in the name.

network.host: _site_

Assign a specific hostname or IP address, which requires clients to access the ES server using that address. If using a hostname, update /etc/hosts. Defaults to the special value _site_.
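As a sketch, binding to a specific hostname might look like this (the hostname and the matching /etc/hosts entry are hypothetical):

```yaml
# elasticsearch.yml -- hostname is a placeholder
network.host: es0.example.com
# requires a matching /etc/hosts entry on nodes and clients, e.g.:
#   10.0.1.10   es0.example.com es0
```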

cluster.initial_master_nodes

(ES 7+) For first-time bootstrapping of a production ES cluster. Set to an array or comma-delimited list of the hostnames of the master-eligible ES nodes whose votes should be counted in the very first election.

discovery.zen.
minimum_master_nodes: 3

(ES 6 only)  Set to (number of master-eligible nodes / 2, rounded down) + 1. Prevents split-brain scenarios by setting the minimum number of ES nodes online before deciding on electing a new master.
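As a quick arithmetic sketch of that formula (the count of 5 master-eligible nodes is an example only):

```shell
# Example: 5 master-eligible nodes -> minimum_master_nodes of 3.
MASTER_ELIGIBLE=5                            # assumed example count
MIN_MASTERS=$(( MASTER_ELIGIBLE / 2 + 1 ))   # integer division rounds down
echo "discovery.zen.minimum_master_nodes: ${MIN_MASTERS}"
# prints "discovery.zen.minimum_master_nodes: 3"
```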

discovery.seed_hosts

(ES 7+) Enables auto-clustering of ES nodes across hosts. Set to an array or comma-delimited list of the addresses of all the master-eligible nodes in the cluster. 
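As a sketch, the two ES 7+ discovery settings might appear together in elasticsearch.yml like this (node names are placeholders):

```yaml
# elasticsearch.yml (ES 7+) -- node names are placeholders
cluster.initial_master_nodes: ["es0", "es1", "es2"]
discovery.seed_hosts: ["es0", "es1", "es2"]
```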

discovery.zen.ping.unicast.hosts: ["es0", "es1"]

(ES 6 only) Set to the list of node names/IPs in the cluster, verifying all ES servers are included. Multicast is disabled by default.

gateway.expected_nodes: 4

Add and set to the number of nodes in the ES cluster. Recovery of local shards starts as soon as this number of nodes has joined the cluster. It falls back to the recover_after_nodes value after 5 minutes. This example is for a 4-node cluster.

gateway.recover_after_nodes: 2

Set to the minimum number of ES nodes that must be started before the cluster goes into operation, computed as follows:

  • If total nodes is 1 or 2, set to 1.

  • If total nodes is 3 or 4, set to 2.

  • If total nodes is 5 to 7, set to the number of nodes minus 2.

  • If total nodes is 8 or more, set to the number of nodes minus 3.
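The rules above can be sketched as a small helper; the 4-node total is an example only:

```shell
# Derive gateway.recover_after_nodes from the total ES node count,
# following the four rules above.
recover_after_nodes() {
  n=$1
  if   [ "$n" -le 2 ]; then echo 1
  elif [ "$n" -le 4 ]; then echo 2
  elif [ "$n" -le 7 ]; then echo $(( n - 2 ))
  else                      echo $(( n - 3 ))
  fi
}

TOTAL_NODES=4                                  # assumed example cluster size
echo "gateway.expected_nodes: ${TOTAL_NODES}"
echo "gateway.recover_after_nodes: $(recover_after_nodes "$TOTAL_NODES")"
```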

bootstrap.memory_lock: true

Set to lock the memory on startup to ensure Elasticsearch never swaps (swapping makes it perform poorly). Verify enough system memory resources are available for all processes running on the server.

To allow the elasticsearch user to disable swapping and to increase the number of open file descriptors, the RPM installer makes these edits to /etc/security/limits.d/10-caringo-elasticsearch.conf:

Code Block
languagebash
# Custom for Caringo Swarm
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft nproc 4096
elasticsearch hard nproc 4096
# allow user 'elasticsearch' memlock
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

path.data: <path_to_data_directory>

By default, path.data goes to /var/lib/elasticsearch with the needed ownership. If you want to move the Elasticsearch data directory, choose a separate, dedicated partition of ample size, and make the elasticsearch user the owner of that directory:

Code Block
languagebash
chown -R elasticsearch:elasticsearch <path_to_data_directory>

thread_pool.write.queue_size

The size of the queue used for bulk indexing.

This variable was called threadpool.bulk.queue_size in earlier Elasticsearch versions.
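A sketch of the setting in elasticsearch.yml; the value shown is illustrative only, not a recommendation:

```yaml
# elasticsearch.yml -- value is illustrative only
thread_pool.write.queue_size: 200
# Pre-6.x releases used the name:
# threadpool.bulk.queue_size: 200
```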

...

In its default location, logging has the needed ownership. If you want to move the log directory, choose a separate, dedicated partition of ample size, and make the elasticsearch user the owner of that directory:

...

This log records deprecated actions, to help you plan future migrations. Adjust the log size and log file count for the deprecation log:

...