Configuring Elasticsearch 2.3.3
Elasticsearch requires changes to its configuration and settings files, made consistently across the Elasticsearch cluster.
On each Elasticsearch node, run the provided configuration script (/usr/share/caringo-elasticsearch-search/bin/configure_elasticsearch_with_swarm_search.py), which automates the configuration changes described below. If none of the customizations below are needed, resume the installation to turn on the service: Installing Elasticsearch
Proceed as follows if any settings need to be customized, such as changing Elasticsearch's path.data (data directory): edit the configuration file directly and update the log files accordingly.
Customizing Elasticsearch
The paths provided are relative to the Elasticsearch installation directory, which is assumed to be the working directory.
Caution
Errors in adding and completing these settings can prevent the Elasticsearch service from working properly.
If the path.data location is customized from the default, adjust all references to it below to reflect the new location.
Elasticsearch Config File
Edit the Elasticsearch config file: /etc/elasticsearch/elasticsearch.yml
- Provide the cluster a unique name. Do not use periods in the name. Important: it must differ from the …
- Setting node.name is optional; Elasticsearch supplies a node name if one is not set. Do not use periods in the name.
- Assign a specific hostname or IP address, which requires clients to access the ES server using that address. Metrics requirement: if the Elasticsearch host is configured to a specific hostname or IP address, the Elasticsearch host for Metrics in /etc/caringo-elasticsearch/metrics/metrics.cfg must match it. The host in metrics.cfg can be any valid IP address or hostname for the Elasticsearch server if …
- Set to the list of node names/IPs in the cluster, including the ES servers. Multicast is disabled by default.
- Set to (number of master-eligible nodes / 2, rounded down) + 1. This prevents split-brain scenarios by setting the minimum number of ES nodes that must be online before a new master is elected.
- Add and set to the number of nodes in the ES cluster. Recovery of local shards starts as soon as this number of nodes has joined the cluster. It falls back to the …
- Set to the minimum number of ES nodes that must be started before going into operation status. This example is for a 4-node cluster.
- Add to support queries with very large result sets (it limits start/from and size in queries). Elasticsearch accepts values up to 2 billion, but values over 50,000 consume excessive resources on the ES server.
- For best performance, set how often the translog is fsynced to disk and committed, regardless of write operations.
- For best performance, change to …
- Set to lock the memory on startup to guarantee Elasticsearch does not swap (swapping leads to poor performance). Verify enough system memory is available for all processes running on the server. The RPM installer makes these edits to …

      # Custom for Caringo Elasticsearch and CloudGateway
      elasticsearch soft nofile 65535
      elasticsearch hard nofile 65535
      # allow user 'elasticsearch' mlockall
      elasticsearch soft memlock unlimited
      elasticsearch hard memlock unlimited

- Add to increase the indexing bulk queue size, to compensate for bursts of high indexing activity that can exceed Elasticsearch's indexing rate.
- (SwarmNFS users only) Add to support dynamic scripting.
- Add to support metrics in the Swarm Storage UI.
- By default, path.data goes to … If the data directory is customized, assign its ownership to the elasticsearch user: chown -R elasticsearch:elasticsearch <path-to-data-directory>
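Taken together, the edits above might look like the sketch below for a hypothetical 4-node cluster. The setting names are the stock Elasticsearch 2.x names and are assumptions here, since this page does not show them; all values are examples only.

```yaml
# elasticsearch.yml — illustrative sketch, not a literal configuration
cluster.name: swarm-search                 # unique; no periods in the name
node.name: es-node-1                       # optional; no periods in the name
network.host: 192.168.1.10                 # clients must use this address
discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.11", "192.168.1.12", "192.168.1.13"]
discovery.zen.minimum_master_nodes: 3      # (4 master-eligible nodes / 2, rounded down) + 1 = 3
gateway.expected_nodes: 4                  # number of nodes in the ES cluster
gateway.recover_after_nodes: 3             # minimum nodes started before operation status
index.max_result_window: 50000             # large result sets; above 50,000 is excessive
bootstrap.mlockall: true                   # lock memory on startup; avoid swapping
threadpool.bulk.queue_size: 1000           # absorb bursts of indexing activity
```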
Systemd (RHEL/CentOS 7)
Create a systemd override file for the Elasticsearch service to set the LimitMEMLOCK property to unlimited.
Create the override file:
/etc/systemd/system/elasticsearch.service.d/override.conf
Add this content:
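Given the LimitMEMLOCK requirement described above, the override file would contain:

```
[Service]
LimitMEMLOCK=infinity
```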
Load the override file (the setting does not take effect until the next reboot):
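Loading the override uses systemd's own reload command:

```
systemctl daemon-reload
```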
Environment Settings
Edit the environmental settings: /etc/sysconfig/elasticsearch
- Set to …
- Set to …
- Set to half the physical memory on the machine, but not more than 31 GB.
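As a sketch, assuming the stock ES 2.x variable names in /etc/sysconfig/elasticsearch (the names for the truncated rows above are assumptions, and the values are examples):

```
# /etc/sysconfig/elasticsearch — illustrative values only
MAX_OPEN_FILES=65535          # matches the nofile limits set by the RPM installer
MAX_LOCKED_MEMORY=unlimited   # needed when memory locking is enabled
ES_HEAP_SIZE=16g              # half of physical RAM on a 32 GB machine; never above 31g
```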
Logging
To customize the logging format and behavior, adjust its configuration file: /etc/elasticsearch/logging.yml
Logging has the needed ownership in the default location. To move the log directory, choose a separate, dedicated partition of ample size and assign ownership of the directory to the elasticsearch user.
Best practice: for better archiving and compression than the built-in log4j rotation, turn off log4j's rotation and use logrotate.
Edit logging.yml to limit the amount of space consumed by Elasticsearch log files in the event of an extremely high rate of error logging.
Locate the file: section and make these changes:
Before
After
Repeat for the deprecation and slowlog log files, as appropriate:
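As a sketch of the Before/After edit, assuming the stock ES 2.x logging.yml (which ships a dailyRollingFile appender), the change switches to a size-capped rolling appender; the maxFileSize and maxBackupIndex values here are illustrative, not prescribed by this page:

```yaml
# Before (stock ES 2.x)
file:
  type: dailyRollingFile
  file: ${path.logs}/${cluster.name}.log
  datePattern: "'.'yyyy-MM-dd"
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

# After (size-capped rotation; example values)
file:
  type: rollingFile
  file: ${path.logs}/${cluster.name}.log
  maxFileSize: 100MB
  maxBackupIndex: 10
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
```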
Add a script to manage the log rotation.
Sample contents of a logrotate.d script (default location: /etc/logrotate.d/elasticsearch):
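A minimal logrotate.d script might look like the following, assuming logs are written under the default /var/log/elasticsearch location (all directives and values are examples):

```
/var/log/elasticsearch/*.log {
    daily
    rotate 7
    compress
    delaycompress
    copytruncate
    missingok
    notifempty
}
```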
Configuration is complete. Resume the Elasticsearch installation: Installing Elasticsearch
© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.