The S3 Backup Restore Tool is the standalone utility for performing DR from the S3 backup bucket, either to the original cluster or to an empty cluster that is meant to replace the original. See S3 Backup Feeds.

Once the data is backed up in S3, the restore tool allows both examining a backup and controlling how, what, and where it is restored:

  • List all domains and buckets, or the buckets within a domain, with the logical space used for each.

  • List all objects within a bucket or unnamed objects in a domain, optionally with sizes and paging.

  • Restore either the complete cluster contents or a list of domains, buckets, or individual objects.

  • Rerun the restore, should any part of it fail to complete.

  • Partition the restoration tasks across multiple instances of the command line tool, to run them in parallel.

...

The S3 Backup Restore tool has a separate install package included in the Swarm download bundle. Install it on one or more systems (for parallel restores) where the restore processes run.

Info

Required

The S3 Backup Restore Tool must be installed on a system that is running RHEL/CentOS 7.

Preparation (one-time)

The swarmrestore package is delivered as a Python pip3 source distribution. Each machine needs to be prepared to install this and future versions of swarmrestore.

  1. As root, run the following command:

    Code Block
    languagebash
    yum install python3
  2. Verify that version 3.6 is installed:

    Code Block
    languagebash
    python3 --version

Installation

Uninstall the Python 2 generation of the tool (caringo-swarmrestore-1.0.x.tar.gz), if installed:

Code Block
languagebash
pip uninstall caringo-swarmrestore

Rerun this installation whenever a new version of swarmrestore is obtained:

  1. Copy the latest version of the swarmrestore package to the server.

  2. Run the following as root:

    Code Block
    languagebash
    pip3 install caringo-swarmrestore-<version>.tar.gz
  3. At this point, swarmrestore is likely in /usr/local/bin and already in the path; a quick check is shown after this list.

  4. Repeat for any additional servers if planning to perform partitioning for parallel restores.
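To confirm each installation, a check along these lines can be run (pip3 show reports the installed package metadata; command -v confirms the executable is on the path):

Code Block
languagebash
# Confirm the package is installed and the tool is on the path
pip3 show caringo-swarmrestore
command -v swarmrestore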

...

The tool uses a configuration file, .swarmrestore.cfg. Because the file contains sensitive passwords, the tool warns you if the configuration file is not access-protected (chmod mode 600 or 400).
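For example, assuming the file resides in the current user's home directory:

Code Block
languagebash
# Make the configuration file readable and writable by its owner only
chmod 600 ~/.swarmrestore.cfg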

...

The configuration file contains the following sections and settings (a sample file sketch follows the listing):

[s3] 

  • host — The hostname of the S3 service.

  • port — The port to use for the S3 service. Use 443, or 80 if SSL (sslOption) is disabled.

  • accessKeyID — The S3 access key ID.

  • secretAccessKey — The S3 secret access key.

  • bucketName — The name of the destination bucket in S3.

  • sslOption — The S3 connection constraint, with one of two values:

    • "trusted" (the default) specifies use of SSL and requires a trusted server certificate from the destination server.

    • "none" disables use of SSL. Use only for testing and troubleshooting, and change the port to 80.

[s3] (archival only)

Set these additional parameters if using an S3 bucket with an archival storage class (Glacier, Glacier Deep Archive):

  • performArchiveRetrieval — Whether restoration from archival storage is needed. If false (the default), performing a restore does not incur any expenses for the bucket owner.

  • retrievalTier — Which S3 Glacier retrieval tier to use for restoration: 'Standard' (default), 'Expedited', or 'Bulk'. Each tier has its own cost and expected restoration time; see Amazon S3 Storage Classes.

  • accountID — Specifies the 12-digit AWS account ID of the bucket owner, granting the tool permission to incur archive restoration expenses at the tier requested. This setting appears in the x-amz-expected-bucket-owner header on the restore object request.

  • activeLifetimeDays — How many days an object restored from archive should remain active before expiring (returning to archival storage). The default is 7 (1 week).

[forwardProxy] 

This section is for use only with an optional forward proxy:

  • host — The forward proxy hostname or IP address.

  • port — The forward proxy port to use.

  • username — (optional) The user name.

  • password — (optional) The password.

[log] 

The same log settings as the Swarm cluster may be used; if so, identify the logs by looking for those with the component "RESTORE".

  • host — The log host. Leave blank to disable logging.

  • port — (optional) The log port. Defaults to 514.

  • file — (optional) The log filename. Accepts the value of “stdout” for logging to the console screen. Defaults to /dev/null.

  • level — The log level. Defaults to 30 (Warning). Levels are the same as those used by Swarm: 20 (Info), 15 (Audit), 10 (Debug).

[swarm] 

  • host — A list of host names or IP addresses of Swarm nodes or Gateway nodes.

  • port — (optional) The SCSP port. Defaults to 80.

  • user — The cluster administrator user name, usually "admin".

  • password — The cluster administrator password.
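As a reference, here is a minimal sketch of what a .swarmrestore.cfg might look like. The section and setting names come from the listing above; all values are placeholders, and the exact value syntax (such as the separator for the Swarm host list) is an assumption:

Code Block
languagetext
[s3]
host = s3.amazonaws.com
port = 443
accessKeyID = AKIAEXAMPLEKEYID
secretAccessKey = exampleSecretAccessKey
bucketName = swarm-cluster-backup
sslOption = trusted

[log]
host = syslog.example.com
port = 514
level = 30

[swarm]
host = swarm1.example.com swarm2.example.com
port = 80
user = admin
password = examplePassword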

...

  • Gateway — Add the IP of the machine where the Restore tool runs to the Gateway configuration setting scsp.allowSwarmAdminIP if communicating with a Swarm cluster via Gateway.

...

Info

Full cluster restore

Before undertaking a restore of a large cluster, contact DataCore Support. They will help balance the speed of the restore with bandwidth constraints by examining the space used by the S3 backup bucket, estimating the bandwidth needed, and recommending best use of the -p command line option (for multiple simultaneously running restore commands on different hosts). They will also advise on whether a forward proxy is needed to reduce bandwidth usage.

The AWS bucket may be pulled out of cold storage before the full cluster restore by changing the storage class to Standard if using an AWS Glacier storage class.

The restoration tool runs using batch-style operation, with commands given on the command line. The tool logs its actions to the log file or server named in the log configuration section. The restoration tool uses the following command format:

...

Info

Specifying objects

<objectspec>, or object specification, refers to how the path to the Swarm object to be targeted is referenced. It may be a domain name, a bucket name, a named object, an unnamed UUID, or a historical version of an object.
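For illustration only (the full syntax follows), an object specification can take forms like these, ranging from a whole domain down to a single historical version; all names here are hypothetical:

Code Block
languagetext
domain1/                                 an entire domain
domain1/bucket1/                         a bucket within a domain
domain1/bucket1/file.txt                 a named object
41a140b5271dc8d22ff8d027176a0821         an unnamed object, by UUID
domain1/bucket1/file.txt//<versionID>    a historical version (double-slash format)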

...

Enumeration and selection are handled by the ls command, which is modeled after the Linux command ls and whose results are captured with standard Linux stdout. Use the command to visualize which domains and buckets have been backed up in S3 and are available to be restored. By default, the output is sorted by name and interactively paginated to help manage large result sets.

The ls subcommand has this format:

...

  • -R or --recursive — Recursively lists the given domain or bucket, or the entire cluster. Without this option, the command lists only the top-level contents of the object.

  • -v or --versions — Lists previous versions of versioned objects. Versions are not listed by default.

  • -l or --long — Lists details for each item returned in the output:

    • Creation date

    • Content length of the body

    • ETag

    • Archive status:

      • AN — Archived; not available for restoration

      • AR — Archived with an archive restore in progress; not available for restoration

      • AA — Archived with a copy available for restoration

      • OK — Not archived and fully available

    • Objectspec

    • Alias UUID, if the object is a domain or bucket

  • <objectspec> — If none, the command runs across the entire contents of the S3 backup. If present, filters the command to a specific domain or bucket (context object) in Swarm. Use this format:

...

Info

Note

Use the double-slash format (//) before including a specific version ID for an object. Newlines separate objects.

When running the command without any options, it returns the list of domains that are included in this S3 bucket for the Swarm cluster:

Code Block
languagetext
>>> swarmrestore ls
domain1/
domain2/
www.testdomain.com/

Run a command like this if wanting a complete accounting of every object backed up for a specific domain, redirecting to an output file:
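A plausible form of such a command, using the www.testdomain.com domain from the listing above (the exact invocation may differ):

Code Block
languagebash
# Recursively inventory everything backed up for one domain, saved to a file
swarmrestore ls -R www.testdomain.com/ > testdomain-backup-inventory.txt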

...

Info

Note

Use the double-slash format (//) before including a specific version ID for an object. Newlines separate objects.

Any number of command options can be used, and the short forms may be combined with a single dash (-Rv). The <objectspecs>, -R, and -v options iterate over objects the same way as the ls command.

...

  • -R or --recursive — Recursively restore domains, buckets, or the entire cluster with an empty object spec. See above for what is iterated over when -R is not used.

  • -v or --versions — Include previous versions of versioned objects. They are not included by default.

  • -f <file> or --file <file> — Use objectspecs from a file instead of the command line.

  • -p <count>/<total> or --partition <count>/<total> — Partition work for a large restore job (but every instance restores buckets and domains before objects).

    • Example: To run 4 instances in parallel, configure each option to be one of the series: -p 1/4, -p 2/4, -p 3/4, -p 4/4. A sketch follows this list.

  • -n or --noop — Perform the checking of a restore, but do not restore any objects.

    • Does not change the cluster state. The option can be used before and after a restore, as both a pre-check and a verification.

  • <objectspecs> — Any number; newlines separate objects. If none, the top level of the cluster’s backup contents is the scope.

    • Using no object specification with the command options -Rv causes Swarm to restore all backed up objects in the entire cluster, including any historical versions of versioned objects.
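As a sketch of how these options combine, assuming a restore subcommand that parallels ls and using the hypothetical domain1 from the listing earlier:

Code Block
languagebash
# Pre-check a recursive restore of domain1 without writing anything (-n)
swarmrestore restore -Rn domain1/

# Restore domain1, including historical versions, as the first of four
# partitions; run -p 2/4, -p 3/4, and -p 4/4 on the other servers
swarmrestore restore -Rv -p 1/4 domain1/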

What gets restored: Restore copies an object from S3 to the cluster only if the cluster object is missing or older than the S3 object. Note that context objects always restore before the content they contain: restore first restores any domains or buckets needed before restoring objects within them.

...

  • current — The object was not restored because the target cluster already has the same version of the object.

  • older — The object was not restored because it is older than the one in the target cluster.

  • obsolete — The object was not restored because the cluster does not allow the object to be written. Usually, it means the object has been deleted.

  • needed — The object needs restoration, but the -n option was used.

  • restored — The object was successfully restored.

  • nocontext — The object cannot be restored because its parent domain or bucket cannot be restored.

  • failure — The object cannot be restored. Consult the logs for details.

  • archived — The object is archived and the restore tool is not configured for archive restoration. This is a failure condition.

  • initiated — The object is archived and the tool has issued an object restoration request. See the Amazon S3 API RestoreObject Request Syntax. This is also a failure condition, but the object is counted in the archive retrieval initiated stats. These are the operations by the restore tool that incur expense for the bucket owner.

  • ongoing — The object is in archive and a restoration request has already been initiated. Restoration from archive is in progress. This is also a failure condition.

Rate of Restore — Restoration may take a long time to run, especially if recursion (-R) is used on domains or buckets. To boost the rate of restore, install the S3 Backup Restore tool on multiple servers and run the restore command with partitioning parameters (-p) across all instances of the tool, which allows restoring faster in parallel, with minimal overlap.

Headers for Audit — When the S3 Backup feed writes an object to the S3 bucket, it adds to the S3 copy a header (Castor-System-Tiered) that captures when and from where the object was tiered. When the S3 Backup Restore tool writes the S3 object back to Swarm, it includes that S3 header and then adds another header of the same name, to capture when and from where the object was restored. These paired headers (both named Castor-System-Tiered) provide the audit trail of the object's movement to and from S3. Swarm persists these headers but does not include them in Entity-MD5 or Header-MD5 calculations. The dates are of the same format as Last-Modified (RFC 7232, section 2.2). See SCSP Headers.
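Purely as an illustration of that pairing (the header name and date format are as documented above; the remainder of each value shown here is an assumption):

Code Block
languagetext
Castor-System-Tiered: Fri, 05 Mar 2021 18:20:05 GMT, from cluster.example.com
Castor-System-Tiered: Tue, 17 Aug 2021 10:02:44 GMT, from swarm-cluster-backup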

...