
Elasticsearch requires configuration and settings file changes to be made consistently across the Elasticsearch cluster.

Scripted Configuration

Using the provided configuration script automates in-place Elasticsearch upgrades as well as the essential configuration that Elasticsearch requires for use with Swarm.

The script handles the following:

  • Upgrading Elasticsearch in place (using the same index) if it detects a supported version (6.8.6) is already installed and configured

  • Editing /etc/elasticsearch/elasticsearch.yml (except for changing the path.data variable to use a different data directory)

  • Editing /etc/elasticsearch/log4j2.properties

  • Editing /usr/lib/systemd/system/elasticsearch.service

  • Editing /etc/sysconfig/elasticsearch 

  • Creating the override file for Systemd: /etc/systemd/system/elasticsearch.service.d/override.conf

Bulk Usage

This method is most efficient for a large number of nodes and/or when there are manual configurations to apply to elasticsearch.yml (see next section).

  1. On the first Elasticsearch node, run the configuration script provided in /usr/share/caringo-elasticsearch-search/bin/. This script prompts for the needed values as it progresses:

    /usr/share/caringo-elasticsearch-search/bin/configure_elasticsearch_with_swarm_search.py 
  2. The script generates custom configuration files for each of the nodes in the Elasticsearch cluster. (v10.x)

    • The current node's file is /etc/elasticsearch/elasticsearch.yml

    • The other nodes' files (if any) are /etc/elasticsearch/elasticsearch.yml.<node-name-or-ip>

  3. Follow the Customization details (below) to update the YAML files further, such as to change Elasticsearch's path.data (data directory).

    Logging

    • Update log files to match your data path or other customizations.
    • Update the rollingfile appender to delete rotated log archives, to prevent running out of space. 
  4. For the next and all remaining nodes, complete these steps:

    1. On the next Elasticsearch node, copy over that node's generated file, for example as /tmp/elasticsearch.yml.esnode8

    2. With the YAML file in place, run the configuration script with the -c argument, so it uses the existing file. 

      configure_elasticsearch_with_swarm_search.py -c \
         /tmp/elasticsearch.yml.esnode8
    3. Move to the next node, if any.

  5. Resume the installation to turn on the service: Installing Elasticsearch or Migrating from Older Elasticsearch
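
Steps 4.1 and 4.2 can be sketched as commands run from the first node, assuming SSH access between hosts; the node name esnode8 is illustrative:

```shell
# Copy the generated per-node file to the next node, then apply it there.
# Substitute each node's actual generated file and hostname.
scp /etc/elasticsearch/elasticsearch.yml.esnode8 \
    esnode8:/tmp/elasticsearch.yml.esnode8
ssh esnode8 \
    /usr/share/caringo-elasticsearch-search/bin/configure_elasticsearch_with_swarm_search.py \
    -c /tmp/elasticsearch.yml.esnode8
```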

Non-Bulk Usage

  1. On the first Elasticsearch node, run the configuration script provided in /usr/share/caringo-elasticsearch-search/bin/. This script prompts for the needed values as it progresses:

    configure_elasticsearch_with_swarm_search.py 
  2. The script generates a custom /etc/elasticsearch/elasticsearch.yml configuration file for the current node, as well as ones for each of the other nodes, which can be ignored. (v10.x)

  3. Follow the Customization details below to update the YAML file further, such as to change Elasticsearch's path.data (data directory). 

    Logging

    • Update log files to match your data path or other customizations.
    • Update the rollingfile appender to delete rotated log archives, to prevent running out of space. 
  4. Run the script the same way on each remaining ES node, answering the prompts consistently and reapplying any manual configurations.

  5. Resume the installation to turn on the service: Installing Elasticsearch or Migrating from Older Elasticsearch

Customization

The paths given are relative to the Elasticsearch installation directory, which is assumed to be the working directory.

Caution

  • Errors in adding and completing these settings can prevent the Elasticsearch service from working properly.

  • If Elasticsearch's path.data location is customized from the default, adjust all references to it below to reflect the new location.

Elasticsearch Config File

Version differences

The Elasticsearch configuration settings have changed with each major release. To track how they changed since Elasticsearch 2.3.3, see Elasticsearch Configuration Differences.

Edit the Elasticsearch config file: /etc/elasticsearch/elasticsearch.yml

action.auto_create_index: "+csmeter*,+*_nfsconnector,.watches,
.triggered_watches,.watcher-history-*"

Disables automatic index creation, except for the listed indices, which are used by csmeter and Swarm NFS connectors. (v10.1)

cluster.name: <ES_cluster_name>

Provide the Elasticsearch cluster a unique name, which is unrelated to the Swarm cluster name. Do not use periods in the name.

Important

To prevent merging, it must differ from the cluster.name of the legacy ES cluster, if one is operating.

node.name: <ES_node_name>

Optional. Elasticsearch supplies a node name if one is not set. Do not use periods in the name.

network.host: _site_

Assign a specific hostname or IP address, which requires clients to access the ES server using that address. If using a hostname, update /etc/hosts. Defaults to the special value, _site_.

cluster.initial_master_nodes

(ES 7+) For first-time bootstrapping of a production ES cluster. Set to an array or comma-delimited list of the hostnames of the master-eligible ES nodes whose votes should be counted in the very first election.

discovery.zen.minimum_master_nodes: 3

(ES 6 only) Set to (number of master-eligible nodes / 2, rounded down) + 1. This prevents split-brain scenarios by requiring that minimum number of ES nodes be online before a new master can be elected.

discovery.seed_hosts

(ES 7+) Enables auto-clustering of ES nodes across hosts. Set to an array or comma-delimited list of the addresses of all master-eligible nodes in the cluster. 

discovery.zen.ping.unicast.hosts: ["es0", "es1"]

(ES 6 only) Set to the list of node names/IPs in the cluster, verifying all ES servers are included. Multicast is disabled by default.
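
For ES 7, the two discovery settings above might look like this in elasticsearch.yml for a three-node cluster (the hostnames es0, es1, and es2 are illustrative):

```yaml
# First-time bootstrap and discovery for a hypothetical 3-node ES 7 cluster.
cluster.initial_master_nodes: ["es0", "es1", "es2"]
discovery.seed_hosts: ["es0", "es1", "es2"]
```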

gateway.expected_nodes: 4

Add and set to the number of nodes in the ES cluster. Recovery of local shards starts as soon as this number of nodes have joined the cluster. It falls back to the recover_after_nodes value after 5 minutes. This example is for a 4-node cluster.

gateway.recover_after_nodes: 2

Set to the minimum number of ES nodes that must be started before the cluster goes into operation:

  • If total nodes is 1 or 2, set to 1.

  • If total nodes is 3 or 4, set to 2.

  • If total nodes is 5 to 7, set to the number – 2.

  • If total nodes 8 or more, set to the number – 3.
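
The four rules above can be expressed as a small helper (a sketch only; this function is not part of the RPM):

```shell
# Compute gateway.recover_after_nodes from the total ES node count,
# following the four rules listed above.
recover_after_nodes() {
  total=$1
  if   [ "$total" -le 2 ]; then echo 1
  elif [ "$total" -le 4 ]; then echo 2
  elif [ "$total" -le 7 ]; then echo $((total - 2))
  else                          echo $((total - 3))
  fi
}

recover_after_nodes 4   # prints 2, matching the 4-node example above
```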

bootstrap.memory_lock: true

Set to lock the memory on startup, ensuring Elasticsearch does not swap (swapping leads to poor performance). Verify that enough system memory is available for all processes running on the server.

To allow the elasticsearch user to disable swapping and to increase the number of open file descriptors, the RPM installer makes these edits to /etc/security/limits.d/10-caringo-elasticsearch.conf:

# Custom for Caringo Swarm
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft nproc 4096
elasticsearch hard nproc 4096
# allow user 'elasticsearch' memlock
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

path.data: <path_to_data_directory>

By default, path.data is /var/lib/elasticsearch, which has the needed ownership. To move the Elasticsearch data directory, choose a separate, dedicated partition of ample size and make the elasticsearch user the owner of that directory:

chown -R elasticsearch:elasticsearch <path_to_data_directory>

thread_pool.write.queue_size

The size of the queue used for bulk indexing.

This variable was called threadpool.bulk.queue_size in earlier Elasticsearch versions.
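
For example, to raise the queue above the Elasticsearch default of 200 (the value shown is illustrative, not a sizing recommendation):

```yaml
# ES 6/7 setting name; earlier versions used threadpool.bulk.queue_size.
thread_pool.write.queue_size: 1000
```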

Systemd (RHEL/CentOS 7)

Create a systemd override file for the Elasticsearch service to set the LimitMEMLOCK property to be unlimited.

  1. Create the override file:

     /etc/systemd/system/elasticsearch.service.d/override.conf
  2. Add this content: 

    [Service]
    LimitMEMLOCK=infinity
  3. Load the override file; otherwise, the setting does not take effect until the next reboot:

    sudo systemctl daemon-reload
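
To confirm the override is active after the reload (a quick check, assuming the service is installed):

```shell
# Should print "LimitMEMLOCK=infinity" once the override is loaded.
systemctl show elasticsearch | grep LimitMEMLOCK
```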

Environment Settings

Edit the environmental settings: /etc/sysconfig/elasticsearch

MAX_OPEN_FILES

Set to 65536

MAX_LOCKED_MEMORY

Set to unlimited (prevents swapping)
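
The resulting lines in /etc/sysconfig/elasticsearch:

```shell
MAX_OPEN_FILES=65536
MAX_LOCKED_MEMORY=unlimited
```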

JVM Options

Edit the JVM settings to manage memory and space usage: /etc/elasticsearch/jvm.options

-Xms

Set to half the available memory, but not more than 31 GB.

-Xmx

Set to half the available memory, but not more than 31 GB.
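
For example, on a server with 64 GB of RAM or more, both heap settings would be capped at 31 GB in /etc/elasticsearch/jvm.options:

```
-Xms31g
-Xmx31g
```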

GC logs (optional) — By default, Elasticsearch enables GC logs. These are configured in jvm.options and output to the same default location as the Elasticsearch logs. The default configuration rotates the logs every 64 MB and can consume up to 2 GB of disk space. Disable these logs until they are needed to troubleshoot memory leaks. To disable them, comment out these lines:

#8:-Xloggc:/var/log/elasticsearch/gc.log
#8:-XX:+UseGCLogFileRotation
#8:-XX:NumberOfGCLogFiles=32
#8:-XX:GCLogFileSize=64m
#9:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m

Log Setup

To customize the logging format and behavior, adjust its configuration file: /etc/elasticsearch/log4j2.properties

In its default location, the log directory has the needed ownership. To move the log directory, choose a separate, dedicated partition of ample size and make the elasticsearch user the owner of that directory:

chown -R elasticsearch:elasticsearch <path_to_log_directory>

Deprecation log

This log records deprecated actions, to inform planning for future migrations. Adjust the log size and log file count for the deprecation log:

Update to these values
appender.deprecation_rolling.policies.size.size = 2097152
appender.deprecation_rolling.strategy.max = 25

By default, deprecation logging is enabled at the WARN level, the level at which all deprecation log messages are emitted. To avoid having large warning logs, change the log level to ERROR:

Change level
logger.deprecation.level = error