Adding Nodes to an ES Cluster
VM Users When Cloning ES Servers
Before starting ES on the new (cloned) node, stop the service and delete all data under the configured data location for the cloned node, for example:
systemctl stop elasticsearch
rm -rf /var/lib/elasticsearch/*
rm -f /etc/elasticsearch/elasticsearch.yml
If the data is not cleared out before ES is started on the cloned node, ES generates an error stating a conflicting node store cannot be used.
The symptom of this condition is that the ES service shows as running per systemd and the network table (netstat) shows ES listening on ports 9200 and 9300, but any connection to port 9200 on the cloned ES node is refused.
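One way to observe this symptom is to compare the service and socket state with an actual HTTP request; the following is a minimal sketch run locally on the cloned node:
systemctl status elasticsearch        # reports the service as active (running)
netstat -tlnp | grep -E '9200|9300'   # shows ES listening on both ports
curl http://localhost:9200/           # the connection is nonetheless refused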
Important
Verify that the ES version of the existing cluster is the same as the version in the downloaded bundle you are planning to install. The ES version in the downloaded bundle is in the RPM name. To view the ES version on the existing cluster, run:
curl -s -XGET "http://{ES-NODE-IP}:9200/_cat/nodes?h=ip,name,master,version"
If the installed ES version is older than the version in the downloaded bundle, see Upgrading Elasticsearch for detailed information.
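As a minimal sketch for comparing the two versions side by side (the RPM file name is a placeholder for the one in your bundle):
curl -s "http://{ES-NODE-IP}:9200/_cat/nodes?h=ip,name,master,version"   # version running in the existing cluster
rpm -qp --queryformat '%{VERSION}\n' elasticsearch-VERSION.rpm           # version in the bundle RPM, queried without installing it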
Complete these steps to add a new node to a running Elasticsearch cluster:
Install the new ES server.
Verify the new server meets the prerequisites in Preparing the Search Cluster.
From the Swarm bundle download, get the latest Elasticsearch RPM and Swarm Search RPM, which installs plugins and support utilities.
elasticsearch-VERSION.rpm
caringo-elasticsearch-search-VERSION.noarch.rpm
Install the Caringo RPM public key included with the distribution bundle by running the following command:
rpm --import RPM-GPG-KEY
Install the RPMs. Be sure to use the same version used with the original deployment.
yum install elasticsearch-VERSION.rpm
yum install caringo-elasticsearch-search-VERSION.noarch.rpm
Configure the ES server (Configuring Elasticsearch) using the installation script:
/usr/share/caringo-elasticsearch-search/bin/configure_elasticsearch_with_swarm_search.py --no-bootstrap
Install as if this were the first of N ES servers, where N is the total number of ES servers that will exist in the ES cluster.
The script prompts for information about all other ES nodes and creates a configuration file for each. Save these configuration files; they are useful for future redeployment.
Start the ES service:
systemctl start elasticsearch
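Before moving on, it can help to confirm the new node's service is answering locally; an illustrative check run on the new node:
systemctl status elasticsearch
curl -s http://localhost:9200/   # returns basic node and version information once the service is up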
For each of the existing nodes, follow these steps:
SSH into the existing node and edit /etc/elasticsearch/elasticsearch.yml. Add a comma and the new ES server to the list for discovery.seed_hosts:
discovery.seed_hosts: ["es1.example.com","es2.example.com","es3.example.com","NEW-ES-NAME.example.com"]
The equivalent ES 6.8.6 setting was named discovery.zen.ping.unicast.hosts and required setting discovery.zen.minimum_master_nodes to (total number of nodes)/2 + 1; this is no longer necessary with ES 7.
If cluster.initial_master_nodes exists, comment it out.
Set gateway.expected_data_nodes to the new number of nodes in the ES cluster. It should match the value in the elasticsearch.yml of the new node.
Adjust gateway.recover_after_data_nodes as appropriate. This is the minimum number of ES nodes that must be running before the cluster goes into an operational status, and it should match the value in the elasticsearch.yml of the new node:
Set to 1 if the total number of nodes is 1 or 2.
Set to 2 if the total number of nodes is 3 or 4.
Set to the total minus 2 if the total number of nodes is 5 to 7.
Set to the total minus 3 if the total number of nodes is 8 or more.
A combined example of these settings is sketched below.
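As an illustration only, the relevant lines in an existing node's elasticsearch.yml for a four-node cluster might look like this (host names are placeholders; match the values to your own deployment):
discovery.seed_hosts: ["es1.example.com","es2.example.com","es3.example.com","NEW-ES-NAME.example.com"]
# cluster.initial_master_nodes: ["es1.example.com"]   # commented out if present
gateway.expected_data_nodes: 4
gateway.recover_after_data_nodes: 2   # total of 3 or 4 nodes, so 2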
Stop shard allocation in the cluster:
curl -XPUT -H 'Content-Type: application/json' http://{ES-NODE-IP}:9200/_cluster/settings -d ' { "persistent":{ "cluster.routing.allocation.enable": "none" } } '
Restart the Elasticsearch service:
systemctl restart elasticsearch
Monitor the status to verify the node has rejoined the cluster:
curl "http://{ES-NODE-IP}:9200/_cat/nodes?v"
Re-enable shard allocation in the cluster:
curl -XPUT -H 'Content-Type: application/json' http://{ES-NODE-IP}:9200/_cluster/settings -d ' { "persistent":{ "cluster.routing.allocation.enable": null } } '
Check the status to verify the cluster shows the correct number of nodes and a green status, then go to the next node.
curl "http://{ES-NODE-IP}:9200/_cluster/health?pretty"