The Swarm Telemetry VM allows for quick deployment of a single-instance Prometheus/Grafana installation in combination with Swarm 15 and higher.

Table of Contents

Environment Pre-requisites

The following infrastructure services are needed in the customer environment: 

  1. Swarm 15 solution stack installed and configured, with nodeExporter enabled and nodeExporterFrequency set to 120 (do not set the interval too low; the value is in seconds).

  2. Enabled Swarm metrics:  

    1. metrics.nodeExporterFrequency = 120 

    2. metrics.enableNodeExporter = True    

  3. DHCP server, needed for first-time boot and configuration 

  4. (optional) DNS server (recommended if you do not want to see IP addresses in your dashboards; this can also be solved by configuring static entries in /etc/hosts on this VM)

Configuration

VMware Network configuration

Before we proceed with configuring Grafana and Prometheus, we need to make sure the VM can reach the Swarm storage nodes directly via port 9100. 

By default, the VM uses a single NIC configured with DHCP.

If you have deployed a dual-network SCS/Swarm configuration, first select the appropriate "storage VLAN" for the second virtual network card.

Boot the VM and configure the second virtual network card inside the OS.  

Edit /etc/sysconfig/network-scripts/ifcfg-ens160 and modify or add the following:

Panel
bgColor#DEEBFF

ONBOOT=yes

BOOTPROTO=none

IPADDR=<Storage VLAN IP>        (picked from a third-party range to avoid conflicts)

NETMASK=255.255.255.0           (match the netmask of your storage VLAN)

GATEWAY=<SCS IP>                (usually this is the gateway in the Swarm VLAN)

Enable it by typing:

Panel
bgColor#DEEBFF

ifdown ens160

ifup ens160

Verify the new IP is coming up correctly with "ip a".

Info

Please note that CentOS 7 sometimes renames interfaces. If this happens, rename the matching /etc/sysconfig/network-scripts/ifcfg-xxxx file to the new name shown by "ip a", and do not forget to also update the NAME and DEVICE parameters inside that ifcfg-xxxx file.

The second network device is currently hardcoded to 172.29.x.x; change it to fit your Swarm storage network.
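As a sketch of the rename, assuming the interface came up as ens192 instead of ens160 (both names here are illustrative):

Panel
bgColor#DEEBFF

mv /etc/sysconfig/network-scripts/ifcfg-ens160 /etc/sysconfig/network-scripts/ifcfg-ens192

sed -i 's/ens160/ens192/g' /etc/sysconfig/network-scripts/ifcfg-ens192    # updates NAME and DEVICE

ifup ens192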

Note

It is recommended to assign a static IP to the NIC facing the Swarm storage network.

Time Synchronization

Prometheus requires correct time synchronization to work and present data to Grafana.

The following has already been done on SwarmTelemetry VM, but is mentioned in case you need to re-apply it. 

Panel
bgColor#DEEBFF

timedatectl set-timezone UTC

Edit /etc/chrony.conf and, if it is missing, add "server 172.29.0.3 iburst" (set to your SCS IP).

Panel
bgColor#DEEBFF

systemctl stop chronyd 

hwclock --systohc 

systemctl start chronyd 
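To verify synchronization afterwards, both of the following are standard CentOS 7 commands and should show the SCS server as a time source and the clock as synchronized:

Panel
bgColor#DEEBFF

chronyc sources

timedatectl status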

Prometheus master configuration

We need to tell Prometheus which Swarm storage nodes we wish to collect metrics from.  

Inside the /etc/prometheus/prometheus.yml file you will see a list of Swarm nodes that you need to modify, in the following section:

Code Block
- job_name: 'swarm'
  scrape_interval: 30s
  static_configs:
    - targets: ['10.10.10.84:9100','10.10.10.85:9100','10.10.10.86:9100']

Make sure to change the targets to match your Swarm storage node IPs.

Info

Note: You can also use DNS names. In the absence of a DNS server, you can first add the desired names for each Swarm storage node to /etc/hosts and then use those names in the configuration file. This is highly recommended to avoid showing IP addresses on potentially public dashboards.
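As an illustration, a hypothetical /etc/hosts mapping (host names are examples only; use names that match your own nodes):

Code Block
10.10.10.84   swarm-node1
10.10.10.85   swarm-node2
10.10.10.86   swarm-node3

The targets list above could then read ['swarm-node1:9100','swarm-node2:9100','swarm-node3:9100'].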

If you have Content Gateways in your deployment, you can add them to prometheus.yml as follows:

Info

Note: if you have multiple gateways, just add them to the targets list.

Code Block
- job_name: 'swarmcontentgateway'
  scrape_interval: 30s
  static_configs:
    - targets: ['10.10.10.20:9100','10.10.10.21:9100']
  relabel_configs:
    - source_labels: [__address__]
      regex: "([^:]+):\\d+"
      target_label: instance

The job_name is what you will see in the gateway dashboard, so make sure it is human friendly.

Modify the swarmUI template in /etc/prometheus/alertmanager/template/basic-email.tmpl. This is used for the HTML email template, which shows a button linking to the chosen URL.

Change the URL to point at your Swarm UI:

Panel
bgColor#DEEBFF

{{ define "__swarmuiURL" }}https://172.30.10.222:91/_admin/storage/{{ end }} 

Modify the gateway job name in /etc/prometheus/alertmanager/alertmanager.yml; it must match what you chose in prometheus.yml:

Code Block
routes:
  - match:
      job: swarmcontentgateway

Modify the gateway job name in /etc/prometheus/alert.rules.yml 

Code Block
- alert: gateway_down
  expr: up{job="swarmcontentgateway"} == 0

To restart the service, type:

Panel
bgColor#DEEBFF

systemctl restart prometheus

To enable it across reboots, type:

Panel
bgColor#DEEBFF

systemctl enable prometheus

You can test that Prometheus is up by opening a browser and going to http://YourVMIP:9090/targets. This page shows which targets it is currently collecting metrics from and whether they are reachable.

You can also do this from a terminal by doing:

Panel
bgColor#DEEBFF

curl YOURVMIP:9090/api/v1/targets
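If the jq utility is installed on the VM (not required, and an assumption here), the same output can be reduced to just the target names and their health:

Panel
bgColor#DEEBFF

curl -s YOURVMIP:9090/api/v1/targets | jq '.data.activeTargets[] | {instance: .labels.instance, health: .health}'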

Elasticsearch exporter configuration

The Swarm Search v7 Dashboard requires a new elasticsearch_exporter service that runs locally on the Telemetry VM. You need to modify the systemd unit file to tell it the IP address of one of your Elasticsearch nodes.

Modify /usr/lib/systemd/system/elasticsearch_exporter.service if the Elasticsearch node IP is different.

The --uri flag needs to point at the IP address of one of your Elasticsearch nodes; the exporter will auto-discover the other nodes from the metrics.
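As a minimal sketch, assuming the exporter binary is installed at /usr/local/bin/elasticsearch_exporter and one Elasticsearch node listens on 10.10.10.90:9200 (both are assumptions; keep the binary path and flag name your unit file already uses), the relevant line in the unit file would look similar to:

Code Block
[Service]
ExecStart=/usr/local/bin/elasticsearch_exporter --uri=http://10.10.10.90:9200

After editing the unit file, run "systemctl daemon-reload" before restarting the service.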

The new elasticsearch_exporter needs its own job and replaces the old way of scraping metrics from elasticsearch nodes via plugins. 

The job to add, if missing, in /etc/prometheus/prometheus.yml is as follows:

Code Block
- job_name: 'elasticsearch'
  scrape_interval: 30s
  static_configs:
    - targets: ['127.0.0.1:9114']
  relabel_configs:
    - source_labels: [__address__]
      regex: "([^:]+):\\d+"
      target_label: instance

Make sure the elasticsearch exporter is running and configured to start on a reboot. 

Panel
bgColor#DEEBFF

systemctl enable elasticsearch_exporter 

systemctl start elasticsearch_exporter

Prometheus retention time

By default, Prometheus keeps metrics for 15 days; on this VM it has been configured to store 30 days. If you wish to change this, follow these instructions:

  • Edit the /root/prometheus.service file and choose your default retention time for the collected metrics. Modify the --storage.tsdb.retention.time=30d flag to your desired retention time. Note: 30 days is more than enough for POCs and demos.

  • The rule of thumb is 600 MB of disk space per Swarm node for 30 days of metrics. This VM comes with a 50 GB dedicated vmdk partition for Prometheus. (This means it can handle up to 32 chassis for 30 days.)

  • If you have modified the retention, you need to commit the change:

Panel
bgColor#DEEBFF

cp /root/prometheus.service /usr/lib/systemd/system 

systemctl daemon-reload 

promtool check config /etc/prometheus/prometheus.yml 

systemctl restart prometheus 
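To confirm the new retention value is active, the runtime flags can be queried from the Prometheus HTTP API:

Panel
bgColor#DEEBFF

curl -s localhost:9090/api/v1/status/flags | grep retention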

Prometheus security

It may be desirable to restrict the Prometheus server to only allow queries from the local host, since grafana-server runs on the same VM. This can be done by editing the prometheus.service file and adding the flag --web.listen-address=127.0.0.1:9090.

Note: if you decide to bind only to localhost, you will not be able to access the Prometheus built-in UI on port 9090 remotely.
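As a sketch, assuming a typical unit file layout (the binary path and the other flags below are placeholders; keep whatever your existing prometheus.service already defines and only append the listen-address flag), the ExecStart line would end up similar to:

Code Block
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.retention.time=30d \
  --web.listen-address=127.0.0.1:9090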

Grafana configuration 

The /etc/grafana/grafana.ini file should be modified to set the IP address the server listens on; by default, it binds to all local IPs on port 80.

Review the admin_password parameter. 

Note: The default admin password is "datacore" for Grafana. 
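A minimal sketch of the relevant grafana.ini settings (the IP shown is illustrative; http_addr, http_port, and admin_password are standard Grafana options):

Code Block
[server]
# IP to listen on; leave empty to bind to all interfaces
http_addr = 10.10.10.50
http_port = 80

[security]
# change this from the default "datacore"
admin_password = datacore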

Grafana has several authentication options, including Google auth, OAuth, and LDAP; by default, it uses basic HTTP auth.

See https://docs.grafana.org/ for more details. 

To start the service type "service grafana-server start" or "systemctl start grafana-server" 

To enable it for reboots type "systemctl enable grafana-server" 

Alertmanager configuration

We currently have 4 alerts defined in /etc/prometheus/alert.rules.yml 

Service_down: triggered if any swarm storage node is down for more than 30 mins 

Gateway_down: triggered if the cloudgateway service is down for more than 2 mins 

Elasticsearch_cluster_state: triggered if the cluster state changed to "red" after 5 mins 

Swarm_volume_missing: triggered if reported drive count is decreasing over a period of 10 mins. 

/etc/prometheus/prometheus.yml now contains a section that points to the alertmanager service on port 9093 as well as which alert.rules.yml file to use. 

The configuration for where to send alerts is defined in /etc/prometheus/alertmanager/alertmanager.yml 

By default, the route is disabled as it requires manual input from your environment (SMTP server, user, password, etc.).

Here is an example of a working route to email alerts via gmail: 

Code Block
- name: 'swarmtelemetry' 
  email_configs: 
  - to: swarmtelemetry@gmail.com 
    from: swarmtelemetry@gmail.com 
    smarthost: smtp.gmail.com:587 
    auth_username: swarmtelemetry@gmail.com 
    auth_identity: swarmtelemetry@gmail.com 
    auth_password: YOURGMAILPASSWORD or APPPASSWORD
    send_resolved: true
Info

Note: you need to configure this for both the swarmtelemetry and gatewaytelemetry routes; they are defined separately because they use their own custom email templates.

Note

Prometheus Alertmanager does not support SMTP NTLM authentication, so you cannot use it to send authenticated emails directly to Microsoft Exchange. Instead, configure the smarthost to connect to localhost:25 without authentication, where the default CentOS postfix server is running. It will know how to send the email to your corporate relay (auto-discovered via DNS). You will need to add require_tls: false to the email definition config section in alertmanager.yml.

Example configuration for a local SMTP relay in your enterprise environment

Code Block
- name: 'emailchannel' 
  email_configs: 
  - to: admin@acme.com 
    from: swarmtelemetry@acme.com
    smarthost: smtp.acme.com:25 
    require_tls: false
    send_resolved: true

Once configuration is complete, restart Alertmanager:

Panel
bgColor#DEEBFF

systemctl restart alertmanager 

To verify the alertmanager.yml has the correct syntax run: 

Panel
bgColor#DEEBFF

amtool check-config /etc/prometheus/alertmanager/alertmanager.yml 

It will output the following: 

Code Block
Checking '/etc/prometheus/alertmanager/alertmanager.yml'  SUCCESS 
Found: 
 - global config 
 - route 
 - 1 inhibit rules 
 - 2 receivers 
 - 1 templates 
  SUCCESS 

To show a list of active alerts run: 

Panel
bgColor#DEEBFF

amtool alert

To show which alert route is enabled, run:

Panel
bgColor#DEEBFF

amtool config routes show 

Routing tree: 

└── default-route  receiver: disabled 

Example Email Alert: 

...

The easiest way to trigger an alert for testing purposes is to shut down one gateway.
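For example, on one of the gateway servers (assuming the service name is cloudgateway, as used by the gateway_down alert above):

Panel
bgColor#DEEBFF

systemctl stop cloudgateway

Remember to start the service again once the alert has fired.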

For a complete installation guide, refer to Swarm 16.1.0 VM Bundle Deployment for Rocky Linux 8.

No other methods of installation are supported or recommended by DataCore.