
Overview

This guide shows how to set up regular backups of your Elasticsearch index without causing downtime. Backing up Elasticsearch indices is most beneficial for use cases where the data on Swarm is static, such as Write Once Read Many (WORM). Even then, a “Refresh Search Index” operation is still necessary after recovery: it catches the Elasticsearch index up with the latest state of the data so that the index and the cluster are fully synchronized.

For use cases with frequent updates, such as backup software (Commvault, NetBackup, Veeam, et al.), restoring an Elasticsearch index from backup is not suitable on its own. After restoring a point-in-time copy of the index, the backup client software may not be able to read data written after that point. A “Refresh Search Index” is then required to synchronize the Elasticsearch index with the current state of the data, and this refresh can take almost as long as creating a new search feed.

Prerequisites

  • A running Elasticsearch cluster.

  • Shared file system accessible by all Elasticsearch nodes.

  • Access to the cluster via curl or a similar HTTP client.

  • (Optional) Elasticsearch Curator for automating snapshots.

Step-by-Step Guide

  1. Create a Snapshot Repository

    You need to create a snapshot repository where the snapshots will be stored. This can be a shared file system or an S3 bucket.
    Using a Shared File System:
    First, specify the shared repository location in the elasticsearch.yml file:

    path.repo: "/mount/backups/my_backup"
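
    Changing elasticsearch.yml takes effect only after a (rolling) restart of each node, and the location must be the same shared volume on every node. As a minimal sketch, the volume could be mounted over NFS like this (the server name and export path are assumptions):

    # On every Elasticsearch node (hypothetical NFS server and export):
    mount -t nfs backup-server.example.com:/exports/backups /mount/backups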

    Then, create the repository using the following command:

    curl -X PUT "http://<es_node_ip>:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d'
    {
      "type": "fs",
      "settings": {
        "location": "/mount/backups/my_backup"
      }
    }'
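
    A successful request returns:

    {"acknowledged":true}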

    Using a DataCore Swarm S3 bucket:
    When the target repository is an S3 bucket on another Swarm cluster, create the snapshot repository as follows:

    curl -X PUT "http://<es_node_ip>:9200/_snapshot/my_s3_backup" -H 'Content-Type: application/json' -d'
    {
      "type": "s3",
      "settings": {
        "bucket": "my-elasticsearch-backup-bucket",
        "endpoint": "https://datacore-swarm.example.com",
        "access_key": "your_access_key",
        "secret_key": "your_secret_key",
        "protocol": "https"
      }
    }'

    Replace ‘my-elasticsearch-backup-bucket’, ‘https://datacore-swarm.example.com’, ‘your_access_key’, and ‘your_secret_key’ with your actual S3 bucket name, DataCore Swarm endpoint URL, and S3 credentials.
    NOTE: For the file-system repository type, ensure that the location path is accessible and writable by all nodes in the cluster.
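
    Depending on your Elasticsearch version, the s3 repository type may need extra setup: releases before 8.0 require the repository-s3 plugin on every node, and newer releases expect the S3 credentials in the Elasticsearch keystore rather than in the repository settings shown above. A sketch of both, each followed by a node restart:

    bin/elasticsearch-plugin install repository-s3
    bin/elasticsearch-keystore add s3.client.default.access_key
    bin/elasticsearch-keystore add s3.client.default.secret_key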

  2. Verify the Repository

    After creating the repository, verify it to ensure it is set up correctly:

    curl -X GET "http://<es_node_ip>:9200/_snapshot/my_backup"
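
    The GET request only returns the repository definition. To have Elasticsearch actively check that the repository is reachable and writable from all nodes, use the verify API:

    curl -X POST "http://<es_node_ip>:9200/_snapshot/my_backup/_verify"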
  3. Create a Snapshot

    Once the repository is set up and verified, create a snapshot of your index. Replace index_swarm.datacore.com.com0 in the commands below with your index name.

    curl -X PUT "http://<es_node_ip>:9200/_snapshot/my_backup/snapshot_$(date +%Y%m%d%H%M)" -H 'Content-Type: application/json' -d'
    {
      "indices": "index_swarm.datacore.com.com0",
      "ignore_unavailable": true,
      "include_global_state": false
    }'
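
    By default the request returns immediately and the snapshot runs in the background. For an ad-hoc snapshot, you can append wait_for_completion=true so the call blocks until the snapshot finishes (snapshot_test is a placeholder name):

    curl -X PUT "http://<es_node_ip>:9200/_snapshot/my_backup/snapshot_test?wait_for_completion=true" -H 'Content-Type: application/json' -d'
    {
      "indices": "index_swarm.datacore.com.com0",
      "ignore_unavailable": true,
      "include_global_state": false
    }'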
  4. Automate Snapshot Creation

    To automate the creation of snapshots, you can use cron jobs on Linux or scheduled tasks on Windows.

    Example using a cron job (runs daily at 2 AM):

    0 2 * * * curl -s -X PUT "http://<es_node_ip>:9200/_snapshot/my_backup/snapshot_$(date +\%Y\%m\%d\%H\%M)" -H 'Content-Type: application/json' -d '{"indices": "index_swarm.datacore.com.com0", "ignore_unavailable": true, "include_global_state": false}'
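
    Two cron-specific details: a crontab entry must be a single line, and the % characters are escaped as \% because cron treats an unescaped % as a line separator. If the command grows any further, a small wrapper script is easier to maintain; a minimal sketch (the script path and log file are assumptions):

    #!/bin/bash
    # /usr/local/bin/es_snapshot.sh - create a dated snapshot and log the result.
    ES="http://<es_node_ip>:9200"
    SNAP="snapshot_$(date +%Y%m%d%H%M)"
    curl -s -X PUT "$ES/_snapshot/my_backup/$SNAP?wait_for_completion=true" \
      -H 'Content-Type: application/json' \
      -d '{"indices": "index_swarm.datacore.com.com0", "ignore_unavailable": true, "include_global_state": false}' \
      >> /var/log/es_snapshot.log 2>&1

    The cron entry then becomes: 0 2 * * * /usr/local/bin/es_snapshot.sh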
  5. Monitor Snapshots

    Regularly check the status of your snapshots to ensure they are completing successfully:

    curl -X GET "http://<es_node_ip>:9200/_snapshot/my_backup/_all/_status"
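
    For a quick human-readable listing of every snapshot with its state and timing, the cat API is also available:

    curl -X GET "http://<es_node_ip>:9200/_cat/snapshots/my_backup?v"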
  6. Restoring a Snapshot (if needed)

    If you need to restore a snapshot, you can do so with the following command (see the note on closing the index after the options below):

    curl -X POST "http://<es_node_ip>:9200/_snapshot/my_backup/snapshot_<snapshot_date>/_restore" -H 'Content-Type: application/json' -d'
    {
      "indices": "index_swarm.datacore.com.com0",
      "ignore_unavailable": true,
      "include_global_state": false
    }'

    For the S3 bucket:

    curl -X POST "http://<es_node_ip>:9200/_snapshot/my_s3_backup/snapshot_<snapshot_date>/_restore" -H 'Content-Type: application/json' -d'
    {
      "indices": "index_swarm.datacore.com.com0",
      "ignore_unavailable": true,
      "include_global_state": false
    }'
    • '"include_global_state": false' means that only the data stored in the particular index is restored.

    • If you wan to restore everything from the cluster, including templates, persistent cluster settings, and more, set '"include_global_state": true'.
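
    Note that Elasticsearch will not restore into an index that is currently open. If the index still exists in the cluster, close (or delete) it first; the restore reopens it automatically:

    curl -X POST "http://<es_node_ip>:9200/index_swarm.datacore.com.com0/_close"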

Automating with Elasticsearch Curator

Elasticsearch Curator simplifies managing indices and snapshots. Here’s how to set it up:

  1. Install Curator

    pip install elasticsearch-curator
  2. Create a Curator Configuration File (curator.yml)

    client:
      hosts:
        - 127.0.0.1
      port: 9200
    logging:
      loglevel: INFO
      logfile: /var/log/curator.log
      logformat: default
      blacklist: ['elasticsearch', 'urllib3']
  3. Create a Curator Action File (snapshot.yml)

    actions:
      1:
        action: snapshot
        description: "Snapshot selected indices"
        options:
          repository: my_backup
          name: snapshot-%Y%m%d%H%M
          ignore_unavailable: false
          include_global_state: false
        filters:
        - filtertype: pattern
          kind: prefix
          value: index_swarm.datacore.com.com0
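
    Before scheduling it, you can run Curator by hand with --dry-run to log what each action would do without actually doing it:

    curator --dry-run --config /path/to/curator.yml /path/to/snapshot.yml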

  4. Create a Cron Job to Run Curator

    0 2 * * * curator --config /path/to/curator.yml /path/to/snapshot.yml
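
    Curator can also enforce the retention policy recommended under Best Practices. A second action file (for example delete_snapshots.yml; the 30-day window is an assumption to adapt) deletes old snapshots:

    actions:
      1:
        action: delete_snapshots
        description: "Delete snapshots older than 30 days"
        options:
          repository: my_backup
          retry_interval: 120
          retry_count: 3
        filters:
        - filtertype: age
          source: creation_date
          direction: older
          unit: days
          unit_count: 30

    A weekly cron job can run it, e.g.: 0 3 * * 0 curator --config /path/to/curator.yml /path/to/delete_snapshots.yml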

Best Practices

  • Test Snapshots: Regularly restore snapshots to a test cluster to ensure data integrity.

  • Monitor Resources: Monitor cluster resources during snapshot operations to ensure they do not impact performance.

  • Automate Alerts: Set up alerts to notify you if a snapshot operation fails.

  • Retention Policy: Implement a retention policy to manage storage, deleting older snapshots to save space.

By following these steps, you can enable regular backups of your Elasticsearch index without causing downtime, ensuring your data is safe and recoverable.
