For a description of Swarm Metrics, its components, templates, and the set of rolling indices it generates, see /wiki/spaces/DOCS/pages/2443813497.
Index defaults
For ES 2.3.3, indices are configured with three shards and one additional replica per index. If more shards or replicas are needed, contact DataCore Support for help with that configuration.
Once the current version of Elasticsearch is running, install Swarm Metrics on one of the Elasticsearch servers or another system running RHEL/CentOS 6.
Install the new ES curator package. (See Elasticsearch for details.)
Download and install the public signing key for Elasticsearch.
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Create a yum repository entry. In /etc/yum.repos.d, create the file "curator.repo" and include the section correct for this version of RHEL/CentOS:

[curator-4]
name=CentOS/RHEL 6 repository for Elasticsearch Curator 4.x packages
baseurl=http://packages.elastic.co/curator/4/centos/6
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Install the ES curator package.
yum install elasticsearch-curator
In the Swarm bundle, locate the Metrics RPM:
caringo-elasticsearch-metrics-<version>.noarch.rpm
Install the metrics package.
yum install caringo-elasticsearch-metrics-<version>.noarch.rpm
Upgrade error
If the elasticsearch-curator package shows an error during an upgrade, this is a known curator issue. The workaround is to reinstall the curator:
yum reinstall elasticsearch-curator
Update the configuration file for the metrics curator:
/etc/caringo-elasticsearch-metrics/metrics.cfg
clusters
Set to one or more Swarm storage clusters for which you want metrics collected. Use spaces or commas to separate multiple values.
storage1.example.com storage2.example.com
host
Set to an Elasticsearch server (bound name or IP address) to which you want to publish metrics.
es2.example.com
Metrics requirement — If you configured network.host in elasticsearch.yml to a specific hostname or IP address, then the host for Metrics must match exactly one of those IPs (if you have multiple IPs configured). However, if you configure network.host in elasticsearch.yml to be "_site_" (recommended), then the host in metrics.cfg can be any valid IP address or hostname for that Elasticsearch server. Only one IP address or hostname can be configured in this field.
logHost
Specifies the syslog host. Set to blank to disable syslogging.
logPort
Specifies the syslog port. Defaults to 514.
logFile
Specifies the log file. Set to blank to disable file logging.
<all else>
Accept or modify the remaining configuration values.
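As a minimal sketch of how the multi-value "clusters" field described above could be parsed, the following assumes simple key = value lines; the sample file contents are hypothetical, and the real metrics.cfg may contain additional keys:

```python
# Sketch: parse key = value lines from a metrics.cfg-style file and split
# the multi-value "clusters" field on spaces or commas, as documented above.
# The sample contents are hypothetical.
import re

SAMPLE_CFG = """\
clusters = storage1.example.com, storage2.example.com
host = es2.example.com
logPort = 514
"""

def parse_cfg(text):
    """Parse simple key = value lines into a dict, skipping blanks and comments."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip()
    return cfg

cfg = parse_cfg(SAMPLE_CFG)
# "clusters" accepts spaces or commas as separators.
clusters = [c for c in re.split(r"[,\s]+", cfg["clusters"]) if c]
print(clusters)  # ['storage1.example.com', 'storage2.example.com']
```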
Configure the metrics settings for the Swarm Storage cluster.
Using either the Swarm UI or SNMP, update these settings. (See Persisted Settings (SNMP) for how to update settings using SNMP.) Note: the dynamic (SNMP-enabled) values in the config file affect a new cluster on first deployment.

| Swarm Storage Setting | SNMP Name | Default | Description |
| --- | --- | --- | --- |
| metrics.target (metrics.targets) | metricsTargetHost | none (disabled) | Required. One or more Elasticsearch servers (a fully qualified domain name or IP address) where metrics-related statistics are captured. Use spaces or commas to separate multiple values. To disable statistics collection, leave the value blank. |
| metrics.port | metricsTargetPort | 9200 | The port on the Elasticsearch server where metrics-related statistics are captured. |
| metrics.period | metricsPeriod | 900 | In seconds, from 15 seconds to 1 day; defaults to 15 minutes. How frequently to capture metrics-related statistics. |
| metrics.diskUtilizationThreshold | (none) | 5 | In percent, from 0 to 100; defaults to 5 percent. Minimum percentage of Elasticsearch disk space available before metrics stop being indexed. Indexing resumes when space is greater than this minimum. (v9.1) |
| metrics.diskUtilizationCheckInterval | (none) | 600 | In seconds, from 15 seconds to 1 day; defaults to 10 minutes. How frequently to check disk utilization on the Elasticsearch cluster. (v9.1) |

To start collecting metrics, manually run the curator to prime the indexing setup, which defines the metrics schemas, creates empty indices with those schemas, and sets up aliases to the indices. (By default, the curator runs at midnight; however, for a new installation, it runs at the top of the next hour.) Running the curator guarantees the current day's indices exist and all aliases are up to date, so metrics can begin to be collected (priming does not generate any metrics data).
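The documented bounds for the dynamic settings above can be illustrated with a small validation helper. This is a hypothetical sketch for reference only; Swarm itself enforces these ranges:

```python
# Hypothetical helper mirroring the documented bounds for the dynamic
# metrics settings; not part of Swarm, which validates these server-side.
BOUNDS = {
    "metrics.period": (15, 86400, 900),                        # seconds: 15 s to 1 day; default 15 min
    "metrics.diskUtilizationThreshold": (0, 100, 5),           # percent; default 5
    "metrics.diskUtilizationCheckInterval": (15, 86400, 600),  # seconds: 15 s to 1 day; default 10 min
}

def validate(setting, value=None):
    """Return value if within the documented range, or the default if value is None."""
    lo, hi, default = BOUNDS[setting]
    if value is None:
        return default
    if not lo <= value <= hi:
        raise ValueError(f"{setting} must be between {lo} and {hi}, got {value}")
    return value

print(validate("metrics.period"))      # 900 (the default)
print(validate("metrics.period", 60))  # 60
```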
To run the curator manually, use this command:

/usr/share/caringo-elasticsearch-metrics/bin/metrics_curator -n
Tip
The Curator checks for an existing search index to validate the cluster name. If you have a new cluster or are not using a search feed, add -v (--valid) to skip the check. (v9.4)
After metrics are configured in the Swarm cluster, the first metrics data appears within the interval set by metrics.period, which defaults to 15 minutes. Look for "create_mapping [metrics]" log events in the Elasticsearch log (by default: /var/log/elasticsearch/[ES-cluster-name].log).
Caution
The Swarm UI and legacy Admin Console log CRITICAL messages if metrics are misconfigured or if the connection to the Elasticsearch cluster is lost.
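As a quick check, scanning the Elasticsearch log for the "create_mapping [metrics]" events mentioned above can be sketched as follows; the sample log lines are illustrative, not actual Elasticsearch output:

```python
# Illustrative sketch: scan a log for "create_mapping [metrics]" events,
# which indicate the first metrics data has been indexed. In practice,
# read /var/log/elasticsearch/[ES-cluster-name].log instead.
import io

sample_log = io.StringIO(
    "[2023-01-01 00:00:01] [INFO] starting node\n"
    "[2023-01-01 00:15:02] [INFO] [metrics-2023.01.01] create_mapping [metrics]\n"
)

def find_metrics_mappings(log_file):
    """Return the log lines recording metrics mapping creation."""
    return [line.rstrip("\n") for line in log_file
            if "create_mapping [metrics]" in line]

events = find_metrics_mappings(sample_log)
print(len(events))  # 1
```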
If needed, see /wiki/spaces/DOCS/pages/2443813535.
Return to Migrating from Older Elasticsearch, or resume the Elasticsearch installation: Installing Elasticsearch