Snapshot and Restore Search Data


This technique makes use of and requires the Content Gateway with S3 enabled.

Swarm builds and maintains your search data (index) through your Search Feed, and it regenerates the search index should it ever be lost. You can trigger this regeneration at any time by running the Refresh command for your search feed in the Swarm Storage UI (or legacy Admin Console, port 90). A complete refresh (which verifies all data) takes a long time, during which your listings are unavailable. 

If you need to guarantee that listings are never offline, you can take a snapshot of the index data using an Elasticsearch plugin so it can be restored for near-instant disaster recovery. Because the Gateway can function as an S3 endpoint, you can leverage the Elasticsearch repository-s3 (AWS Cloud) plugin to get snapshot and restore capability for your search data (index). To snapshot is to back up your search data to a file system or to S3 (Swarm); to restore is to place a snapshot back into production.

These are key reasons for using the AWS Cloud plugin:

  • Search Index Restoration: If your Search cluster has problems and the search index is lost, you can restore a snapshot so applications that depend on listings and collections are not interrupted.

  • Usage Snapshot: The usage metering indices that Swarm writes are temporary. To preserve usage data written since the last backup, you can set up frequent snapshots.

  • Data Move: If you are making changes to your Search cluster, you can restore a snapshot to the new location to minimize disruption in services.


Refresh the feed for the restored index, and allow time for Swarm to verify the index data. Until verification completes, any objects created, changed, or deleted after the last snapshot may be missing or appear erroneously.

Configuring the Plugin

These are required:

  • Elasticsearch cluster to back up

  • Content UI (Portal)

  • <domain> in the destination Swarm storage cluster

  • <bucket> within the <domain>

  • S3 Endpoint

  • Token ID (Access Key)

  • S3 Secret Key, generated when the token is created

Best Practice

Although you can back up Elasticsearch to the same Swarm cluster that is using it, it is best to use a separate Swarm cluster.

  1. In the Content UI (Portal) on the Swarm cluster storing the Elasticsearch snapshots, create an S3 token.

    1. Create or select the domain.

    2. Open its Settings (gear icon) and select the Tokens tab.

    3. Create a token that includes an S3 key.

    4. Record both the access key (token ID) and the secret (S3) key.

  2. On each node in your Elasticsearch cluster, install the AWS Cloud Plugin (repository-s3):

    1. Log in to the node as the root user using ssh.

    2. Install the plugin: 

      sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3
    3. To allow the keys to be specified in the repository settings, add the following JVM option to /etc/elasticsearch/jvm.options: 

      -Des.allow_insecure_settings=true

    4. Restart the Elasticsearch service.

  3. Configure an S3 repository using the token (see below).

  4. Test the plugin, as shown below:

    1. Take a snapshot.

    2. Delete the search index (which causes listings to fail).

    3. Restore your snapshot using the manifest file.

    4. Verify the listings are working again.

Configure the S3 Repository

In these examples, the Elasticsearch repository uses S3 to store the search data snapshots. 

  1. Create the S3 repository using a command like the following. The base_path can be empty, or set if this bucket is the backup destination for multiple Elasticsearch clusters.

    • The endpoint with the bucket in the host must be accessible from every Elasticsearch node (to verify, run 'curl -i'). This requires either explicit /etc/host entries or wildcard DNS for the domain. If any node fails to contact the endpoint, you must delete the repo with "curl -XDELETE 'http://elasticsearch:9200/_snapshot/myRepo'" and PUT it again.

    • These configuration values (endpoint, access_key, secret_key) can be stored in elasticsearch.yml instead of the JSON body (see the Elasticsearch docs for the config names).

      curl -XPUT -H 'Content-type: application/json' \
        'http://elasticsearch:9200/_snapshot/myRepo' -d '{
          "type": "s3",
          "settings": {
            "bucket": "essnapshots",
            "region": null,
            "endpoint": "",
            "protocol": "http",
            "base_path": "myswarmcluster",
            "access_key": "18f2423d738416f0e31b44fcf341ac1e",
            "secret_key": "BBgPFuLcO3T4d6gumaAxGalfuICcZkE3mK1iwKKs"
          }
        }'
  2. List information about the snapshot repository:                                                     
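A request like the following returns the repository definition; the host and the repository name myRepo match the earlier example, and ?pretty is optional output formatting:

```shell
# Show the settings of the 'myRepo' snapshot repository.
curl -XGET 'http://elasticsearch:9200/_snapshot/myRepo?pretty'
```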

  3. Verify the repository is created successfully:                                                                      
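The Elasticsearch verify API can confirm the repository from all nodes; a successful response lists the nodes that can reach the S3 endpoint (host and repository name follow the earlier example):

```shell
# Ask every Elasticsearch node to verify access to the repository.
curl -XPOST 'http://elasticsearch:9200/_snapshot/myRepo/_verify?pretty'
```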

Creating a Snapshot in the S3 Repository

  1. Create a new snapshot in the S3 repository, setting it to wait for completion:  

    If needed, you can restrict the indices (such as to Search only, if Metrics backups are not needed). See the Elasticsearch snapshot documentation for details on restricting indices.
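The step above might look like the following; the snapshot name snapshot1 and the index pattern in the commented variant are hypothetical examples, while the host and repository name follow the earlier examples:

```shell
# Create a snapshot of all indices; wait_for_completion=true blocks
# until the snapshot finishes rather than returning immediately.
curl -XPUT 'http://elasticsearch:9200/_snapshot/myRepo/snapshot1?wait_for_completion=true'

# To restrict which indices are included, pass an "indices" pattern
# (the pattern below is a placeholder; match it to your index names):
# curl -XPUT -H 'Content-type: application/json' \
#   'http://elasticsearch:9200/_snapshot/myRepo/snapshot2?wait_for_completion=true' \
#   -d '{ "indices": "<search-index-pattern>" }'
```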

  2. Allow several hours for this to complete, especially for an initial snapshot of an Elasticsearch cluster with many large indices.

Restoring from a Snapshot

  1. Always test restoring a backup before it is needed. Delete the search index in a test or staging environment to simulate a situation where restoring Elasticsearch data is needed:
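One way to simulate the loss, sketched below with a placeholder index name (list the indices first to find the actual name of your search index):

```shell
# CAUTION: test/staging environments only -- this deletes live index data.
# List all indices so you can identify the search index by name.
curl -XGET 'http://elasticsearch:9200/_cat/indices?v'

# Delete the search index (substitute the real index name).
curl -XDELETE 'http://elasticsearch:9200/<search-index-name>'
```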

  2. In the Storage UI:

    1. Open Cluster > Feeds, open the Swarm search feed, and select Actions (gear icon) > Pause to prevent a new index from being created.

    2. Open Settings > Cluster, Metrics and temporarily disable Swarm metrics (set metrics.targets to an empty value) to prevent those indices from being created during the restore.

  3. Restore the search index, renaming indices if they exist and are locked:
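A restore request might look like the following; the snapshot name snapshot1 is a hypothetical example, and the rename_pattern/rename_replacement settings append a suffix to any restored index whose original name is already in use:

```shell
# Restore all indices from the snapshot, renaming each restored index
# by appending '_restored' so existing locked indices are not overwritten.
curl -XPOST -H 'Content-type: application/json' \
  'http://elasticsearch:9200/_snapshot/myRepo/snapshot1/_restore' -d '{
    "rename_pattern": "(.+)",
    "rename_replacement": "$1_restored"
  }'
```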

  4. In the Storage UI:

    1. Open Settings > Cluster, Metrics and re-enable Swarm metrics (restore metrics.targets to its prior value).

    2. Open Cluster > Feeds, open your Swarm search feed, and select Actions (gear icon) > Unpause to reactivate the feed.

  5. Verify the listings are working as before:
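A quick check is to request a listing that failed while the index was deleted; the endpoint and domain below are placeholders for your own Gateway and storage domain:

```shell
# A domain listing should return results again once the restore
# completes and the feed has caught up.
curl -i 'http://<gateway>/?domain=<domain>&format=json'
```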

© DataCore Software Corporation. All rights reserved.