The S3 Backup Restore Tool is a standalone utility for performing disaster recovery (DR) from your S3 backup bucket, either to the original cluster or to an empty cluster that is meant to replace it. See S3 Backup Feeds.
Once your data is backed up in S3, the restore tool lets you both examine a backup and control how, what, and where it is restored:
- List all domains and buckets, or the buckets within a domain, with the logical space used for each.
- List all objects within a bucket or unnamed objects in a domain, optionally with sizes and paging.
- Restore either the complete cluster contents or a list of domains, buckets, or individual objects.
- Rerun the restore, should any part of it fail to complete.
- Partition your restoration tasks across multiple instances of the command line tool, to run them in parallel.
Installing the Restore Tool
The S3 Backup Restore tool has its own install package, included in your Swarm download bundle. Install it on each system where you want to run the restore processes (more than one for parallel restores).
Required
The S3 Backup Restore Tool must be installed on a system that is running RHEL/CentOS 7.
Preparation (one-time only)
The swarmrestore package is delivered as a Python pip3 source distribution. You will need to prepare each machine to be able to install this and future versions of swarmrestore.
As root, run the following command:
yum install python3
Verify that you have version 3.6:
python3 --version
Installation
If you have the Python 2 generation of the tool (caringo-swarmrestore-1.0.x.tar.gz), first uninstall that version:
pip uninstall caringo-swarmrestore
From then on, whenever you get a new version of swarmrestore, rerun this installation:
- Copy the latest version of the swarmrestore package to your server.
- As root, run the following:
pip3 install caringo-swarmrestore-<version>.tar.gz
- At this point, swarmrestore should be in /usr/local/bin and is likely already on your path.
- Repeat for any additional servers if you plan to partition work for parallel restores.
Restore Tool Settings
The tool uses a configuration file, .swarmrestore.cfg. Because the file contains sensitive passwords, the tool warns you if the configuration file is not access-protected (chmod mode 600 or 400).
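Restricting the file's permissions is a one-line operation; a minimal sketch, assuming the configuration file is already in your home directory:

```shell
# Restrict the config file to owner read/write only (mode 600),
# so other users cannot read the stored passwords.
chmod 600 ~/.swarmrestore.cfg

# Confirm the resulting mode (should print 600):
stat -c '%a' ~/.swarmrestore.cfg
```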
The configuration file follows the format of Swarm Storage settings files, using sections listing name = value pairs. These setting names map to the S3 Backup feed definition, where the values have the same meaning.
Locate the sample configuration file where it is installed:
/usr/local/sample-.swarmrestore.cfg
Copy the file into your home directory, rename it, and open it for editing:
cp /usr/local/sample-.swarmrestore.cfg ~/.swarmrestore.cfg
vi ~/.swarmrestore.cfg   # Edit config settings
# This is a sample configuration file for the swarmrestore utility.
# Save this file as ~/.swarmrestore.cfg and chmod 600 ~/.swarmrestore.cfg to keep passwords private.

# S3 host must be a fully qualified host name. The virtual host access style is supported if
# the host's first component is the bucket name.
# See https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for Amazon S3 endpoints.
[s3]
host=s3.amazonaws.com
port=443
accessKeyID=<youraccesskeyid>
secretAccessKey=<yoursecretaccesskey>
bucketName=<yourbucketname>
region=us-east-1
# The option below uses HTTPS for access. For HTTP, set sslOption=none and adjust port.
sslOption=trusted
# The 4 options below are for swarmrestore initiating archival restore of content, such as GLACIER.
performArchiveRetrieval=false
retrievalTier=Standard
accountID=<ninedigitaccountid>
activeLifetimeDays=7

# Use these only if you need a forward proxy to reach the S3 service.
[forwardProxy]
host=
port=80
username=
password=

# The log file can be /dev/null, but logs are useful for diagnosing problems.
[log]
filename=swarmrestore.log
level=30

# The Swarm cluster must either be directly accessible or accessible via
# a proxy. The password below is the administrative password for the cluster.
[swarm]
host=<space separated list of swarm host IPs or gateway host>
password=ourpwdofchoicehere
cluster=<yourclustername>
Section | Settings |
---|---|
[s3] | Connection settings for the S3 backup bucket: host, port, accessKeyID, secretAccessKey, bucketName, region, sslOption |
[s3], archival only | If you are using an S3 bucket with an archival storage class (Glacier, Glacier Deep Archive), set these additional parameters: performArchiveRetrieval, retrievalTier, accountID, activeLifetimeDays |
[forwardProxy] | This section is for use only with an optional forward proxy: host, port, username, password |
[log] | Logging settings: filename, level. You may use the same log settings as your Swarm cluster; if you do so, identify the restore tool's entries by their component name. |
[swarm] | Settings for reaching the Swarm cluster: host, password, cluster |
Additional Restore Configuration
- Gateway — If communicating with a Swarm cluster via Gateway, add the IP of the machine where the Restore tool will run to the Gateway configuration setting scsp.allowSwarmAdminIP.
Using the Restore Tool
Full cluster restore
Before undertaking a restore of a large cluster, contact DataCore Support. They will help you balance the speed of your restore with your bandwidth constraints by examining the space used by the S3 backup bucket, estimating the bandwidth needed, and recommending the best use of the -p command line option (for multiple simultaneously running restore commands on different hosts). They will also advise you on whether you need a forward proxy to reduce bandwidth usage.
If you are using an AWS Glacier storage class, you may want to pull your AWS bucket out of cold storage before your full cluster restore by changing the storage class to Standard.
The restoration tool runs in batch style, with commands given on the command line. The tool logs its actions to the log file or server specified in the [log] configuration section. The restoration tool uses the following command format:
swarmrestore [<tool option>...] <command> [<command option> ...] [<objectspec> …]
Specifying objects
<objectspec>, or object specification, refers to how you reference the path to the Swarm object that you want to target. It may be a domain name, a bucket name, a named object, an unnamed UUID, or a historical version of an object.
Options:
--help
— Displays usage help for the tool:
>> swarmrestore --help
usage: swarmrestore [-h] [-v] {ls,restore} ...

Explore or restore objects stored in an S3 backup of a Swarm cluster.

positional arguments:
  {ls,restore}
    ls        list the contents of the S3 bucket, optionally recursively or
              using a long format
    restore   restore the contents of the S3 bucket, optionally recursively
              or including prior versions

optional arguments:
  -h, --help     show this help message and exit
  -v, --version  show program's version number and exit

Uses ~/.swarmrestore.cfg for configuration.
--version
— Reports the version of the tool.
ls --help
— Displays help on the ls command, for listing and enumerating:
>> swarmrestore ls --help
usage: swarmrestore [-h] [-v] {ls,restore} ...

Explore or restore objects stored in an S3 backup of a Swarm cluster.

positional arguments:
  {ls,restore}
    ls        list the contents of the S3 bucket, optionally recursively or
              using a long format
    restore   restore the contents of the S3 bucket, optionally recursively
              or including prior versions

optional arguments:
  -h, --help     show this help message and exit
  -v, --version  show program's version number and exit

Uses ~/.swarmrestore.cfg for configuration.
restore --help
— Displays help on the restore command, for selective restore and disaster recovery:
>> swarmrestore restore --help
usage: swarmrestore restore [-h] [-R] [-v] [-n] [-p count/total] [-f FILE]
                            [objectspec [objectspec ...]]

positional arguments:
  objectspec            any number of object specifications to restore

optional arguments:
  -h, --help            show this help message and exit
  -R, --recursive       recursively traverse the objectspecs
  -v, --versions        also restore prior versions
  -n, --noop            perform checking but do not actually restore
  -p count/total, --partition count/total
                        partition the work <count> from among <total>
  -f FILE, --file FILE  use the specified file for objectspecs, one per line
ls subcommand
Enumeration and selection are handled by the ls command, which is modeled on the Linux ls command and writes its results to standard stdout. Use the command to visualize what domains and buckets you have backed up in S3 and available to be restored. By default, the output is sorted by name and interactively paginated to help you manage large result sets.
The ls subcommand has this format:
ls [<command option> ...] [<objectspec> …]
Command options, which can be combined (for example, -Rvl):
- -R or --recursive — Recursively lists the given domain or bucket, or else the entire cluster. Without this option, the command lists only the top-level contents of the object.
- -v or --versions — Lists previous versions of versioned objects. Versions are not listed by default.
- -l or --long — Lists details for each item returned in the output:
  - Creation date
  - Content length of the body
  - ETag
  - Archive status:
    - AN — Archived; not available for restoration
    - AR — Archived with an archive restore in progress; not available for restoration
    - AA — Archived with a copy available for restoration
    - OK — Not archived and fully available
  - Objectspec
  - Alias UUID, if the object is a domain or bucket
- <objectspec> — If none, the command runs across the entire contents of the S3 backup. If present, filters the command to a specific domain or bucket (context object) in Swarm. Use this format:

Cluster | |
---|---|
Domain | mydomain/ |
Bucket | mydomain/mybucket/ |
Named object | mydomain/mybucket/myobject/name/with/slashes.jpg |
Named version | mydomain/mybucket/myobject/name/with/slashes.jpg//645f3912802bb4c31311afc46de2cfc3 |
Unnamed object | mydomain/06ea262a860af23504261f50c09a6b29 (no domain if untenanted) |
Unnamed version | mydomain/06ea262a860af23504261f50c09a6b29//137a88d550041ecda9b8ec4bc36ebea2 |
Note
Use the double-slash format (//) before including a specific version ID for an object. Newlines separate objects.
When you run the command without any options, it returns the list of domains that are included in this S3 bucket for your Swarm cluster:
>>> swarmrestore ls
domain1/
domain2/
www.testdomain.com/
If you want a complete accounting of every object backed up for a specific domain, run a command like this, redirecting to an output file:
>>> swarmrestore ls -Rvl mydomain/ > mydomaincontents
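With the listing captured to a file, ordinary shell tools can summarize it. A minimal sketch, assuming the output file from the command above (with the long format, each line also carries dates, sizes, and status fields, so treat the patterns as illustrative):

```shell
# Count all entries captured for the domain:
wc -l < mydomaincontents

# Count only the entries that mention one bucket (the pattern is
# illustrative; with -l output the objectspec is one field among
# the listed details rather than the whole line):
grep -c 'mydomain/mybucket/' mydomaincontents
```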
restore subcommand
Object restoration and verification are handled by the restore subcommand, which has the following format:
restore [<command option> ...] [<objectspec> …]
<objectspec> — If none, applies the command to the entire cluster backup. If present, filters the command to a specific domain, bucket, object, or object version.
To target a command to a specific context (domain/bucket) or content object in Swarm, format the type of object as follows:
Cluster | |
---|---|
Domain | mydomain/ |
Bucket | mydomain/mybucket/ |
Named object | mydomain/mybucket/myobject/name/with/slashes.jpg |
Named version | mydomain/mybucket/myobject/name/with/slashes.jpg//645f3912802bb4c31311afc46de2cfc3 |
Unnamed object | mydomain/06ea262a860af23504261f50c09a6b29 (no domain if untenanted) |
Unnamed version | mydomain/06ea262a860af23504261f50c09a6b29//137a88d550041ecda9b8ec4bc36ebea2 |
Note
Use the double-slash format (//) before including a specific version ID for an object. Newlines separate objects.
You can use any number of command options, and you may combine the short forms with a single dash (-Rv). The <objectspecs>, -R, and -v options iterate over objects the same way as the ls command.
Options:
- -R or --recursive — Recursively restores domains, buckets, or the entire cluster with an empty objectspec. See above for what is iterated over when -R is not used.
- -v or --versions — Includes previous versions of versioned objects. They are not included by default.
- -f <file> or --file <file> — Uses objectspecs from a file instead of the command line.
- -p <count>/<total> or --partition <count>/<total> — Partitions the work for a large restore job (but every instance restores buckets and domains before objects). Example: To run 4 instances in parallel, give each instance one of the series: -p 1/4, -p 2/4, -p 3/4, -p 4/4
- -n or --noop — Performs the checking of a restore but does not restore any objects. It does not change the cluster state, so it can be used before and after a restore, as both a pre-check and a verification.
- <objectspecs> — Any number; newlines separate objects. If none, the top level of the cluster's backup contents is the scope. Using no object specification with the command options -Rv causes Swarm to restore all backed-up objects in the entire cluster, including any historical versions of versioned objects.
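The partition series can be generated mechanically. A minimal sketch that only prints the command lines, one per instance; the -R flag and the TOTAL of 4 are illustrative assumptions:

```shell
# Print one partitioned restore command per instance; run each
# printed command on a different server to restore in parallel.
TOTAL=4
for i in $(seq 1 "$TOTAL"); do
  echo "swarmrestore restore -R -p ${i}/${TOTAL}"
done
```

Each partition covers a disjoint share of the objects, so the instances overlap only on the domains and buckets that every instance restores first.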
What gets restored: Restore will copy an object from S3 to the cluster only if the cluster object is missing or else older than the S3 object. Note that context objects always restore before the content they contain: restore will first restore any domains or buckets needed before restoring objects within them.
Output of Restore — At the end of the restoration, the tool reports the number of objects restored and the number of objects skipped for being either identical to or newer than the backed-up copy. The command output lists each objectspec with its status:
- current — The object was not restored because the target cluster already has the same version of the object.
- older — The object was not restored because it is older than the one in the target cluster.
- obsolete — The object was not restored because the cluster does not allow the object to be written. Usually this means the object has been deleted.
- needed — The object needs restoration, but the -n option was used.
- restored — The object was successfully restored.
- nocontext — The object cannot be restored because its parent domain or bucket cannot be restored.
- failure — The object cannot be restored. Consult the logs for details.
- archived — The object is archived and the restore tool is not configured for archive restoration. This is a failure condition.
- initiated — The object is archived and the tool has issued an object restoration request. See the Amazon S3 API RestoreObject Request Syntax. This is also a failure condition, but the object is counted in the archive retrieval initiated stats. These are the only restore tool operations that incur expense for the bucket owner.
- ongoing — The object is in archive and a restoration request has already been initiated. Restoration from archive is in progress. This is also a failure condition.
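Because the tool can be rerun and can take objectspecs from a file (-f), the status output lends itself to building a rerun list. A hypothetical sketch, assuming the restore output was saved to restore.out with one "<status> <objectspec>" pair per line; the file names and line layout here are assumptions, not a documented format:

```shell
# Collect the objectspecs whose status indicates they still need work,
# so a follow-up run can target just those objects with -f rerun.list.
grep -E '^(failure|needed|archived) ' restore.out | awk '{print $2}' > rerun.list
cat rerun.list
```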
Rate of Restore — Restoration might take a long time to run, especially if recursion (-R) is used on domains or buckets. To boost the rate of restore, you can install the S3 Backup Restore tool on multiple servers and then run the restore command with partitioning parameters (-p) across all the instances of the tool, which allows restoring faster in parallel, with minimal overlap.
Headers for Audit — When your S3 Backup feed writes an object to the S3 bucket, it adds to the S3 copy a header (Castor-System-Tiered) that captures when and from where the object was tiered. When the S3 Backup Restore tool writes the S3 object back to Swarm, it includes that S3 header and then adds another header of the same name, to capture when and from where the object was restored. These paired headers (both named Castor-System-Tiered) provide the audit trail of the object's movement to and from S3. Swarm persists these headers but does not include them in Entity-MD5 or Header-MD5 calculations. The dates are of the same format as Last-Modified (RFC 7232, section 2.2). See SCSP Headers.
Castor-System-Tiered: <date-of-backup> <cluster-name>/<cluster-settings-uuid>
Castor-System-Tiered: <date-of-restore> <S3-service-host>/<bucket-name>