The Swarm Telemetry VM allows quick deployment of a single-instance Prometheus/Grafana installation in combination with Swarm 15 and higher.
Environment Prerequisites
The following infrastructure services are needed in the customer environment:
Swarm 15 solution stack installed and configured, with nodeExporter enabled and nodeExporterFrequency set to 120 (do not set it too low; this value is in seconds). Enable the Swarm metrics settings:

```
metrics.nodeExporterFrequency = 120
metrics.enableNodeExporter = True
```
If you are deploying your own telemetry solution, you will need the following software versions:
node_exporter 1.6.0
prometheus 2.45.0
grafana server 9.3.2
elasticsearch_exporter 1.5.0
DHCP server, needed for first-time boot and configuration
(optional) DNS server (recommended if you don't want to see IP addresses in your dashboards; this can also be solved by configuring static entries in /etc/hosts on this VM)
Configuration
VMware Network configuration
Before we proceed with configuring Grafana and Prometheus, we need to make sure the VM can reach the Swarm storage nodes directly via port 9100.
By default the VM uses a single NIC configured with DHCP.
If you have deployed a dual-network SCS/Swarm configuration, you must first select the appropriate storage VLAN for the second virtual network card.
Boot the VM and configure the second virtual network card inside the OS.
Edit /etc/sysconfig/network-scripts/ifcfg-ens160 and modify/add the following to it:
```
ONBOOT=yes
NETMASK=255.255.255.0     # match the netmask of your storage VLAN
IPADDR=<storage VLAN IP>  # picked from the 3rd-party range to avoid conflicts
BOOTPROTO=none
GATEWAY=<SCS IP>          # usually this is the gateway in the Swarm VLAN
```
Enable it by typing:
```
ifdown ens160
ifup ens160
```
Verify that the new IP comes up correctly with "ip a".
Note: CentOS 7 sometimes renames interfaces. If this happens, rename the matching /etc/sysconfig/network-scripts/ifcfg-xxxx file to the new name shown by "ip a", and also update the NAME and DEVICE parameters inside that ifcfg file.
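The rename boils down to one file move and one substitution. The sketch below works in /tmp so it can be tried safely; on the VM the directory would be /etc/sysconfig/network-scripts, and the new interface name ens192 is only an example:

```shell
# Sketch: fix up an ifcfg file after CentOS renamed ens160 to ens192.
dir=/tmp/network-scripts-demo
mkdir -p "$dir"
printf 'NAME=ens160\nDEVICE=ens160\nONBOOT=yes\n' > "$dir/ifcfg-ens160"

# Rename the file, then update NAME= and DEVICE= inside it.
mv "$dir/ifcfg-ens160" "$dir/ifcfg-ens192"
sed -i 's/ens160/ens192/g' "$dir/ifcfg-ens192"
cat "$dir/ifcfg-ens192"
# prints:
# NAME=ens192
# DEVICE=ens192
# ONBOOT=yes
```

After pointing `dir` at the real network-scripts directory, run "ifup ens192" (or reboot) to apply.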
The second network device is currently hardcoded to 172.29.x.x; change it to fit your Swarm storage network.
Note: It is recommended to assign a static IP to the NIC facing the Swarm storage network.
Time Synchronization
Prometheus requires correct time synchronization to work and to present data to Grafana.
The following has already been done on the SwarmTelemetry VM, but is mentioned here in case you need to re-apply it.
```
timedatectl set-timezone UTC
```
Edit /etc/chrony.conf and, if it is missing, add "server 172.29.0.3 iburst" (set to your SCS IP).
```
systemctl stop chronyd
hwclock --systohc
systemctl start chronyd
```
Prometheus master configuration
We need to tell Prometheus which Swarm storage nodes we wish to collect metrics from.
Inside the /etc/prometheus/prometheus.yml file you will see a list of Swarm nodes. Modify the following section:
```
- job_name: 'swarm'
  scrape_interval: 30s
  static_configs:
    - targets: ['10.10.10.84:9100','10.10.10.85:9100','10.10.10.86:9100']
```
Make sure to change the targets to match your Swarm storage node IPs.
Note: You can also use DNS names. In the absence of a DNS server, you can first modify /etc/hosts with the desired names for each Swarm storage node and then use those names in the configuration file. This is highly recommended to avoid showing IP addresses on potentially public dashboards.
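For instance, with the example target IPs above and hypothetical node names (pick names that fit your site), the /etc/hosts entries on this VM would look like:

```
# /etc/hosts entries for the Swarm storage nodes (names are illustrative)
10.10.10.84 swarm-node1
10.10.10.85 swarm-node2
10.10.10.86 swarm-node3
```

The prometheus.yml targets list can then read ['swarm-node1:9100','swarm-node2:9100','swarm-node3:9100'], and the dashboards will show the names instead of the IPs.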
If you have Content Gateway in your deployment, you can add the gateways to prometheus.yml as follows:
Note: if you have multiple gateways, just add them all to the targets list.
```
- job_name: 'swarmcontentgateway'
  scrape_interval: 30s
  static_configs:
    - targets: ['10.10.10.20:9100','10.10.10.21:9100']
  relabel_configs:
    - source_labels: [__address__]
      regex: "([^:]+):\\d+"
      target_label: instance
```
The job_name is what you will see in the gateway dashboard, so make it human friendly.
Modify the swarmUI template in /etc/prometheus/alertmanager/template/basic-email.tmpl. This template is used for the HTML alert emails, which show a button linking to the chosen URL. Change the URL to match your environment:
```
{{ define "__swarmuiURL" }}https://172.30.10.222:91/_admin/storage/{{ end }}
```
Modify the gateway job name in /etc/prometheus/alertmanager/alertmanager.yml; it must match what you chose in prometheus.yml:
```
routes:
  - match:
      job: swarmcontentgateway
```
Modify the gateway job name in /etc/prometheus/alert.rules.yml
```
- alert: gateway_down
  expr: up{job="swarmcontentgateway"} == 0
```
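A complete rule entry in alert.rules.yml also carries a "for:" duration before the alert fires. The sketch below matches the 2-minute window described in the Alertmanager section; the severity label and annotation text are illustrative additions, not the shipped rule:

```
- alert: gateway_down
  expr: up{job="swarmcontentgateway"} == 0
  for: 2m
  labels:
    severity: critical
  annotations:
    summary: "Content Gateway {{ $labels.instance }} is down"
```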
To restart the service, type:
```
systemctl restart prometheus
```
To enable it across reboots, type:
```
systemctl enable prometheus
```
You can test that Prometheus is up by opening a browser and going to http://YourVMIP:9090/targets. This page shows which targets Prometheus is currently collecting metrics from and whether they are reachable.
You can also check from a terminal:
```
curl YOURVMIP:9090/api/v1/targets
```
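The response is JSON, and each target reports a health field. A quick way to pull out just those fields is to pipe the curl output through grep; the abridged sample response below stands in for a live server:

```shell
# Abridged sample of the JSON returned by /api/v1/targets; in practice
# this string would come from: curl -s YOURVMIP:9090/api/v1/targets
json='{"data":{"activeTargets":[{"labels":{"instance":"10.10.10.84"},"health":"up"},{"labels":{"instance":"10.10.10.85"},"health":"down"}]}}'

# Extract the per-target health values.
echo "$json" | grep -o '"health":"[a-z]*"'
# prints:
# "health":"up"
# "health":"down"
```

Any target showing "down" is one Prometheus cannot scrape, which usually means a firewall or node_exporter issue on that host.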
Gateway node exporter configuration
Starting with Swarm 15.3, the gateway dashboard requires the node_exporter service to run on the gateways.
The systemd service must be configured to listen on port 9095, because the default port 9100 is already used by the gateway metrics component.
Make sure to put the node_exporter Go binary in the /usr/local/bin directory.
Example systemd unit file for the node exporter:
```
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/bin/node_exporter --web.listen-address=:9095 \
  --collector.diskstats.ignored-devices=^(ram|loop|fd|(h|s|v|xv)d[a-z])\\d+$ \
  --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker)($|/) \
  --collector.filesystem.ignored-fs-types=^/(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)($|/) \
  --collector.meminfo_numa --collector.ntp --collector.processes --collector.tcpstat \
  --no-collector.nfs --no-collector.nfsd --no-collector.xfs --no-collector.zfs \
  --no-collector.infiniband --no-collector.vmstat --no-collector.textfile \
  --collector.conntrack --collector.qdisc --collector.netclass

[Install]
WantedBy=multi-user.target
```
Enable and start the service:
```
systemctl enable node_exporter
systemctl start node_exporter
```
Add a job definition for it in the Prometheus master configuration file, for example:
```
- job_name: 'gateway-node-exporter'
  scrape_interval: 30s
  static_configs:
    - targets: ['10.10.10.20:9095']
  relabel_configs:
    - source_labels: [__address__]
      regex: "([^:]+):\\d+"
      target_label: instance
```
SCS node exporter configuration
Starting with Swarm 15.3, SCS requires the node_exporter service in order to monitor partition capacity information, which is exposed at the end of the Swarm Node View dashboard.
Use the same systemd unit as provided for the gateway, but with the default listen port of 9100. SCS 1.5.1 has been modified to add a firewall rule for port 9100 on the Swarm storage network.
Make sure to put the node_exporter Go binary in the /usr/local/bin directory.
Enable and start the service:
```
systemctl enable node_exporter
systemctl start node_exporter
```
Add a job definition for it in the Prometheus master configuration file, for example:
```
- job_name: 'scs-node-exporter'
  scrape_interval: 30s
  static_configs:
    - targets: ['10.10.10.2:9100']
  relabel_configs:
    - source_labels: [__address__]
      regex: "([^:]+):\\d+"
      target_label: instance
```
Elasticsearch exporter configuration
The Swarm Search v7 dashboard requires a new elasticsearch_exporter service that runs locally on the Telemetry VM. You will need to modify the systemd unit to tell it the IP address of one of your Elasticsearch nodes.
Modify /usr/lib/systemd/system/elasticsearch_exporter.service if the Elasticsearch node IP is different.
The --uri flag needs to point at the IP address of one of your Elasticsearch nodes; the exporter will auto-discover the other nodes from the metrics.
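As a sketch of what the relevant part of the unit file looks like: in the upstream elasticsearch_exporter 1.5.0 the URI flag is spelled --es.uri and --es.all enables stats for all cluster nodes; the IP and port below are illustrative, so keep whatever paths and flags your shipped unit file already uses:

```
[Service]
ExecStart=/usr/local/bin/elasticsearch_exporter \
  --es.uri=http://10.10.10.30:9200 \
  --es.all
```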
The elasticsearch_exporter needs its own job and replaces the old method of scraping metrics from Elasticsearch nodes via plugins.
Add the following job to /etc/prometheus/prometheus.yml if it is missing:
```
- job_name: 'elasticsearch'
  scrape_interval: 30s
  static_configs:
    - targets: ['127.0.0.1:9114']
  relabel_configs:
    - source_labels: [__address__]
      regex: "([^:]+):\\d+"
      target_label: instance
```
Make sure the elasticsearch_exporter is running and configured to start on reboot:
```
systemctl enable elasticsearch_exporter
systemctl start elasticsearch_exporter
```
Prometheus retention time
By default Prometheus keeps metrics for 15 days; on this VM it has been configured to store 30 days. If you wish to change this, follow these instructions:
Edit the /root/prometheus.service file and choose your retention time for the collected metrics by modifying the --storage.tsdb.retention.time=30d flag. Note: 30 days is more than enough for POCs and demos.
The rule of thumb is 600 MB of disk space per Swarm node for 30 days of metrics. This VM comes with a 50 GB dedicated vmdk partition for Prometheus, which means it can handle up to 32 chassis for 30 days.
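As a quick sizing sketch based on that rule of thumb (the node count and retention below are illustrative, not recommendations):

```shell
# Estimate Prometheus disk usage: 600 MB per Swarm node per 30 days.
nodes=16
days=30
echo "$(( nodes * 600 * days / 30 )) MB"
# prints: 9600 MB
```

Scale `nodes` and `days` to your cluster and retention setting, and compare the result against the 50 GB partition before raising --storage.tsdb.retention.time.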
If you have modified the retention, commit the change:
```
cp /root/prometheus.service /usr/lib/systemd/system
systemctl daemon-reload
promtool check config /etc/prometheus/prometheus.yml
systemctl restart prometheus
```
Prometheus security
It may be desirable to restrict the Prometheus server to only allow queries from the local host, since grafana-server runs on the same VM. This can be done by editing the prometheus.service file and adding the flag --web.listen-address=127.0.0.1:9090.
Note: if you decide to bind only to localhost, you will not be able to access the Prometheus built-in UI on port 9090 remotely.
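The ExecStart line in prometheus.service then looks along these lines; the binary path and the other flags shown are assumptions for illustration, so keep whatever your existing unit file already uses and only add the listen-address flag:

```
[Service]
ExecStart=/usr/local/bin/prometheus \
  --web.listen-address=127.0.0.1:9090 \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.retention.time=30d
```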
Grafana configuration
The /etc/grafana/grafana.ini file should be modified to set the IP address the server listens on; by default it binds to all local IPs on port 80.
Review the admin_password parameter.
Note: the default Grafana admin password is "datacore".
Grafana has several authentication options including Google auth, OAuth, LDAP, and, by default, basic HTTP auth.
See https://docs.grafana.org/ for more details.
To start the service, type "service grafana-server start" or "systemctl start grafana-server".
To enable it across reboots, type "systemctl enable grafana-server".
Alertmanager configuration
We currently have four alerts defined in /etc/prometheus/alert.rules.yml:
service_down: triggered if any Swarm storage node is down for more than 30 minutes
gateway_down: triggered if the cloudgateway service is down for more than 2 minutes
elasticsearch_cluster_state: triggered if the cluster state changes to "red" for more than 5 minutes
swarm_volume_missing: triggered if the reported drive count decreases over a period of 10 minutes
/etc/prometheus/prometheus.yml now contains a section that points to the alertmanager service on port 9093 as well as which alert.rules.yml file to use.
The configuration for where to send alerts is defined in /etc/prometheus/alertmanager/alertmanager.yml
By default the route is disabled, as it requires manual input from your environment (SMTP server, user, password, etc.).
Here is an example of a working route to email alerts via Gmail:
```
- name: 'swarmtelemetry'
  email_configs:
    - to: swarmtelemetry@gmail.com
      from: swarmtelemetry@gmail.com
      smarthost: smtp.gmail.com:587
      auth_username: swarmtelemetry@gmail.com
      auth_identity: swarmtelemetry@gmail.com
      auth_password: YOURGMAILPASSWORD or APPPASSWORD
      send_resolved: true
```
Note: you need to configure this for both the swarmtelemetry and gatewaytelemetry routes; they are defined separately because they use their own custom email templates.
Note: Prometheus Alertmanager does not support SMTP NTLM authentication, so you cannot use it to send authenticated emails directly to Microsoft Exchange. Instead, configure the smarthost to connect to localhost:25 without authentication, where the default CentOS postfix server is running. Postfix will know how to send the email to your corporate relay (auto-discovered via DNS). You will need to add require_tls: false to the email definition config section in alertmanager.yml.
Example configuration for a local SMTP relay in your enterprise environment:
```
- name: 'emailchannel'
  email_configs:
    - to: admin@acme.com
      from: swarmtelemetry@acme.com
      smarthost: smtp.acme.com:25
      require_tls: false
      send_resolved: true
```
Once configuration is complete, restart the alertmanager:
```
systemctl restart alertmanager
```
To verify that alertmanager.yml has the correct syntax, run:
```
amtool check-config /etc/prometheus/alertmanager/alertmanager.yml
```
It will output the following:
```
Checking '/etc/prometheus/alertmanager/alertmanager.yml'  SUCCESS
Found:
- global config
- route
- 1 inhibit rules
- 2 receivers
- 1 templates
SUCCESS
```
To show a list of active alerts, run:

```
amtool alert
```
To show which alert route is enabled, run:

```
amtool config routes show
```

which prints the routing tree, for example:

```
Routing tree:
└── default-route  receiver: disabled
```
Example Email Alert:
...
The easiest way to trigger an alert for testing purposes is to shut down one gateway.
Note: If you are aware of an alert and know that the resolution will take several days or weeks, you can silence it via the Alertmanager GUI on port 9093.
...
General Advice around defining new alerts
Pages should be urgent, important, actionable, and real.
They should represent either ongoing or imminent problems with your service.
Err on the side of removing noisy alerts – over-monitoring is a harder problem to solve than under-monitoring.
You should almost always be able to classify the problem into one of: availability & basic functionality; latency; correctness (completeness, freshness and durability of data); and feature-specific problems.
Symptoms are a better way to capture more problems more comprehensively and robustly with less effort.
Include cause-based information in symptom-based pages or on dashboards, but avoid alerting directly on causes.
The further up your serving stack you go, the more distinct problems you catch in a single rule. But don't go so far you can't sufficiently distinguish what's going on.
If you want a quiet on-call rotation, it's imperative to have a system for dealing with things that need a timely response but are not imminently critical.
For a complete installation guide, refer to Swarm 16.1.0 VM Bundle Deployment for Rocky Linux 8.
No other methods of installation are supported or recommended by DataCore.