Snapshot and Restore Search Data
Note
This technique requires the Swarm Content Gateway with S3 enabled.
Swarm builds and maintains your search data (index) through your Search Feed, and it regenerates the search index should it ever be lost. You can trigger this regeneration at any time by running the Refresh command for your search feed in the Swarm Storage UI (or legacy Admin Console, port 90). A complete refresh (which verifies all data) takes a long time, during which your listings are unavailable.
If you need to ensure that listings are never offline, a method exists to take a snapshot of the index data so it can be restored for instant disaster recovery using an Elasticsearch plugin. Because the Gateway can function as an S3 repository (https://www.elastic.co/guide/en/elasticsearch/plugins/2.3/cloud-aws-repository.html), you can leverage the AWS Cloud plugin (https://www.elastic.co/guide/en/elasticsearch/plugins/2.3/cloud-aws.html) to get snapshot and restore capability (https://www.elastic.co/guide/en/elasticsearch/reference/2.3/modules-snapshots.html) for your search data (index). To snapshot is to back up your search data to a file system or to S3 (Swarm); to restore is to place a snapshot back into production.
These are key reasons for using the AWS Cloud plugin:
Search Index Restoration: If your Search cluster has problems and the search index is lost, you can restore a snapshot so applications that depend on listings and collections are not interrupted.
Usage Snapshot: The usage metering indices written are temporary. To preserve data written since the last backup, you can set up frequent snapshots (see the scheduling sketch after this list).
Data Move: If you are making changes to your Search cluster, you can restore a snapshot to the new location to minimize disruption in services.
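For example, frequent snapshots can be scheduled with cron. This is a minimal sketch, assuming an Elasticsearch node reachable at elasticsearch:9200 and a registered snapshot repository named myRepo (these names match the examples later in this section; adjust them to your environment):

```
# Take a date-stamped snapshot every 6 hours.
# In crontab, % must be escaped as \%.
0 */6 * * * curl -s -XPUT "http://elasticsearch:9200/_snapshot/myRepo/snap-$(date +\%Y\%m\%d\%H\%M)?wait_for_completion=true"
```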
Important
Refresh the feed (Managing Feeds) for the restored index, and allow time for Swarm to verify the index data. Until it completes, any objects created, changed, or deleted after the last snapshot may be missing or appear erroneously.
Configuring the Plugin
These are required:
| Elasticsearch to Backup | |
|---|---|
| Content UI (Portal) | |
| Domain | <domain> in destination Swarm storage cluster |
| Bucket | <bucket> within the domain |
| S3 Endpoint | |
| Token ID (Access Key) | UUID |
| S3 Secret Key | generated when token is created |
Best Practice
Although you can back up Elasticsearch to the same Swarm cluster that is using it, it is best to use a separate Swarm cluster.
In the Content UI (Portal) on the Swarm cluster storing the Elasticsearch snapshots, create an S3 token.
Create or select the domain.
Open its Settings (gear icon) and select the Tokens tab.
Create a token that includes an S3 key.
Record both the access key (token ID) and the secret (S3) key.
On each node in your Elasticsearch cluster, install the AWS Cloud Plugin (https://www.elastic.co/guide/en/elasticsearch/plugins/current/repository-s3.html):
Log in to the node as the root user using ssh.
Install the plugin:
```
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3
```
To allow the keys to be specified, set the following JVM option in /etc/elasticsearch/jvm.options:

```
-Des.allow_insecure_settings=true
```
Restart the Elasticsearch service.
Configure an S3 repository using the token (see below).
Test the plugin, as shown below:
Take a snapshot.
Delete the search index (which causes listings to fail).
Restore your snapshot using the manifest file.
Verify the listings are working again.
Configure the S3 Repository
In these examples, the Elasticsearch repository uses S3 to store the search data snapshots.
Create the S3 repository using a command like the following. The base_path can be empty, or set one if this bucket is the backup destination for multiple Elasticsearch clusters.
The endpoint, with the bucket in the host, must be accessible from every Elasticsearch node (to verify, run `curl -i http://essnapshots.mydomain.example.com/`). This requires either explicit /etc/hosts entries or wildcard DNS for the domain. If any node fails to contact the endpoint, delete the repo with `curl -XDELETE 'http://elasticsearch:9200/_snapshot/myRepo'` and PUT it again.

These configuration values (endpoint, access_key, secret_key) can be stored in elasticsearch.yml instead of the JSON body (see the Elasticsearch docs for the config names).

```
curl -XPUT -H 'Content-type: application/json' 'http://elasticsearch:9200/_snapshot/myRepo' -d '{
    "type": "s3",
    "settings": {
        "bucket": "essnapshots",
        "region": null,
        "endpoint": "http://mydomain.example.com/",
        "protocol": "http",
        "base_path": "myswarmcluster",
        "access_key": "18f2423d738416f0e31b44fcf341ac1e",
        "secret_key": "BBgPFuLcO3T4d6gumaAxGalfuICcZkE3mK1iwKKs"
    }
}'
```
List information about the snapshot repository:
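For example, a sketch using the repository name myRepo from the example above:

```
curl -XGET 'http://elasticsearch:9200/_snapshot/myRepo?pretty'
```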
Verify the repository is created successfully:
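Elasticsearch provides a verify API that confirms the repository is reachable from all nodes; again assuming myRepo:

```
curl -XPOST 'http://elasticsearch:9200/_snapshot/myRepo/_verify?pretty'
```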
Creating a Snapshot in the S3 Repository
Create a new snapshot in the S3 repository, setting it to wait for completion:
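For example, a sketch assuming the repository myRepo configured above and a hypothetical snapshot name snapshot_1:

```
curl -XPUT 'http://elasticsearch:9200/_snapshot/myRepo/snapshot_1?wait_for_completion=true&pretty'
```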
If needed, you can restrict the indices (such as to Search only, if Metrics backups are not needed). See https://www.elastic.co/guide/en/elasticsearch/reference/2.3/modules-snapshots.html for details on restricting indices.
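As a sketch, an indices setting in the request body limits what the snapshot includes; the pattern below is a placeholder to adapt to your search feed's actual index names:

```
curl -XPUT -H 'Content-type: application/json' \
  'http://elasticsearch:9200/_snapshot/myRepo/snapshot_2?wait_for_completion=true' \
  -d '{ "indices": "index_*" }'
```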
Allow several hours for this to complete, especially for an initial snapshot of an Elasticsearch cluster with many large indices.
Restoring from a Snapshot
Always test restoring a backup before it is needed. Delete the search index in a test or staging environment to simulate a situation where restoring Elasticsearch data is needed:
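A sketch of such a deletion; <search-index-name> is a placeholder for the index created by your search feed:

```
curl -XDELETE 'http://elasticsearch:9200/<search-index-name>'
```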
In the Storage UI:
Open Cluster > Feeds, open the Swarm search feed, and select Actions (gear icon) > Pause to prevent a new index from being created.
Open Settings > Cluster, Metrics and temporarily disable Swarm metrics (metrics.targets set to nothing) to prevent those indices from being created during the restore.
Restore the search index, renaming indices if they exist and are locked:
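One way to do this, as a sketch assuming the repository myRepo and snapshot snapshot_1 from the examples above; the rename_pattern and rename_replacement settings restore the indices under new names so that existing, locked indices are not overwritten:

```
curl -XPOST -H 'Content-type: application/json' \
  'http://elasticsearch:9200/_snapshot/myRepo/snapshot_1/_restore?wait_for_completion=true' \
  -d '{ "rename_pattern": "(.+)", "rename_replacement": "restored_$1" }'
```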
In the Storage UI:
Open Settings > Cluster, Metrics and re-enable Swarm metrics (metrics.targets set to its prior value).
Open Cluster > Feeds, open your Swarm search feed, and select Actions (gear icon) > Unpause to reactivate the feed.
Verify the listings are working as before:
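For example, check that the restored indices are present and healthy, then spot-check listings through the Content UI or an application that depends on them:

```
curl -XGET 'http://elasticsearch:9200/_cat/indices?v'
```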
© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.