
...

  • Cold storage offers the lowest monthly prices per byte stored compared to the standard storage classes.

  • Standard storage classes have low-latency retrieval times, which can allow a Swarm Restore to complete in a single run.

  • Cold storage has longer retrieval latency, as much as 12 to 48 hours for S3 Glacier Deep Archive, to pull content from archival storage. Depending on how a restore is performed, the Swarm Restore tool may need to be run multiple times over several hours to complete a restoration.

  • Cold storage incurs additional charges for egress and API requests to access the backup, so it is best suited to low-touch use cases.

  • S3 Glacier Deep Archive rounds up small objects to a minimum billable size, so the overall footprint being charged may be larger because of Swarm's use of metadata objects.

...

While these instruction steps are for AWS S3 (see also S3 Backup Feeds to Wasabi), S3-based public cloud providers have a similar setup process:

  1. Service — Sign up for Amazon S3 if needed.

    1. Navigate to aws.amazon.com/s3 and choose Get started with Amazon S3.

    2. Follow the on-screen instructions.

    3. AWS sends an email notification when the account is active and ready to use.

    4. Note: The S3 console is used to create the new bucket, but the separate IAM service is used to create the new user (see step 3).

  2. Bucket — Create a bucket dedicated to backing up the Swarm cluster.

    1. Sign in and open the S3 console: console.aws.amazon.com/s3

    2. Choose Create bucket. (See S3 documentation: Creating a Bucket.) 

    3. On tab 1 - Name and region, make the initial entries:

      1. For Bucket name, enter a DNS-compliant name for the new bucket. This cannot be changed later, so choose well:

        1. The name must be unique across all existing bucket names in Amazon S3.

        2. The name must be a valid DNS name, containing lowercase letters and numbers (and internal periods and hyphens), between 3 and 63 characters. (See S3 documentation: Rules for Bucket Naming.)
          Tip: For easier identification, incorporate the name of the Swarm cluster that this bucket is dedicated to backing up.

      2. For Region, choose the one that is appropriate for business needs. (See S3 documentation: Regions and Endpoints.)

    4. On tab 2 - Configure options, take the defaults. (See S3 documentation: Creating a Bucket, step 4.)
      Best practice: Do not enable versioning or any other optional features, unless it is required for the organization.

    5. On tab 3 - Set permissions, keep the default selection Block all public access; the bucket owner account retains full access.
      Best practice: Do not use the bucket owner account to provide Swarm's access to the bucket; instead, create a new, separate IAM user that holds the credentials to share with Swarm. 

    6. Choose Create, and record the fully qualified bucket name (such as "arn:aws:s3:::example.cluster1.backup") for use later, in policies.

    7. Record these values for configuring the S3 Backup feed in Swarm:

      • Bucket Name

      • Region
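
    For reference, a bucket with these settings can also be created programmatically. Below is a minimal sketch using Python and boto3, assuming configured AWS credentials; the bucket name and region are example values to replace with the ones recorded above.

      Create bucket (sketch)

      Code Block
      languagepy
      # Minimal sketch, assuming boto3 is installed and AWS credentials
      # are configured. BUCKET and REGION are example values.
      import boto3

      BUCKET = "example.cluster1.backup"   # DNS-compliant, globally unique
      REGION = "us-east-1"                 # region chosen for business needs

      s3 = boto3.client("s3", region_name=REGION)

      # Regions other than us-east-1 require a LocationConstraint.
      if REGION == "us-east-1":
          s3.create_bucket(Bucket=BUCKET)
      else:
          s3.create_bucket(
              Bucket=BUCKET,
              CreateBucketConfiguration={"LocationConstraint": REGION},
          )

      # Keep the bucket private, matching the console default
      # "Block all public access".
      s3.put_public_access_block(
          Bucket=BUCKET,
          PublicAccessBlockConfiguration={
              "BlockPublicAcls": True,
              "IgnorePublicAcls": True,
              "BlockPublicPolicy": True,
              "RestrictPublicBuckets": True,
          },
      )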

  3. User — Create a programmatic (non-human) user dedicated to Swarm access.

    1. From the AWS console, open the IAM (Identity and Access Management) service and click Users.

    2. Add a dedicated user, such as caringo_backup, to provide Programmatic access for Swarm.

    3. The IAM console generates an access key (an access key ID + secret access key), which must be recorded immediately.
      (See S3 documentation: Managing Access Keys for IAM Users and Understanding and Getting Your Security Credentials.)

      • This is the sole opportunity to view or download the secret access key, so save it in a secure place.

    4. Record the fully qualified user (such as "arn:aws:iam::123456789012:user/caringo_backup") for use later, in policies.

    5. Record these values for configuring the S3 Backup feed in Swarm:

      • Access Key ID

      • Secret Access Key
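
    For reference, the dedicated user and its access key can also be created programmatically. Below is a minimal sketch using Python and boto3, mirroring the caringo_backup example above.

      Create IAM user and access key (sketch)

      Code Block
      languagepy
      # Minimal sketch, assuming boto3 is installed and AWS credentials
      # are configured. The user name mirrors the example above.
      import boto3

      iam = boto3.client("iam")

      iam.create_user(UserName="caringo_backup")

      # The secret access key appears only in this response;
      # record it immediately and store it securely.
      key = iam.create_access_key(UserName="caringo_backup")["AccessKey"]
      print("Access Key ID:    ", key["AccessKeyId"])
      print("Secret Access Key:", key["SecretAccessKey"])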

  4. Policies — Create policies on both the user and the bucket so the programmatic user has exclusive rights to the S3 bucket. Use the policy generators provided or enter edited versions of the examples below.

    1. Create an IAM policy for this user, allowing it all S3 actions on the backup bucket, which needs to be specified as a fully qualified Resource (recorded above), starting with arn:aws:s3:::

      IAM policy

      Code Block
      languagejson
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "s3:*",
                  "Resource": "arn:aws:s3:::example.cluster1.backup"
              }
          ]
      }


    2. Create a matching bucket policy to grant access to the dedicated backup user, which needs to be specified as a fully qualified Principal: the user ARN (recorded above), starting with arn:aws:iam:: (See S3 documentation: Using Bucket Policies.)
      Using the Policy Generator, allow all S3 actions for the bucket, using the full ARN name:

      Bucket policy

      Code Block
      languagejson
      {
        "Id": "Policy1560809845679",
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "Stmt1560809828003",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::example.cluster1.backup",
            "Principal": {
              "AWS": [
                "arn:aws:iam::123456789012:user/caringo_backup"
              ]
            }
          }
        ]
      }
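
    For reference, both policies can also be applied programmatically. Below is a minimal sketch using Python and boto3; the policy name SwarmBackupBucketAccess is a hypothetical example, and the ARNs are the example values from the policies above.

      Apply policies (sketch)

      Code Block
      languagepy
      # Minimal sketch, assuming boto3 is installed and AWS credentials
      # are configured. The policy name is a hypothetical example.
      import json
      import boto3

      BUCKET = "example.cluster1.backup"
      USER = "caringo_backup"
      USER_ARN = "arn:aws:iam::123456789012:user/caringo_backup"

      iam = boto3.client("iam")
      s3 = boto3.client("s3")

      # IAM policy on the user: all S3 actions on the backup bucket.
      iam.put_user_policy(
          UserName=USER,
          PolicyName="SwarmBackupBucketAccess",
          PolicyDocument=json.dumps({
              "Version": "2012-10-17",
              "Statement": [{
                  "Effect": "Allow",
                  "Action": "s3:*",
                  "Resource": "arn:aws:s3:::" + BUCKET,
              }],
          }),
      )

      # Matching bucket policy granting the dedicated user access.
      s3.put_bucket_policy(
          Bucket=BUCKET,
          Policy=json.dumps({
              "Version": "2012-10-17",
              "Statement": [{
                  "Action": "s3:*",
                  "Effect": "Allow",
                  "Resource": "arn:aws:s3:::" + BUCKET,
                  "Principal": {"AWS": [USER_ARN]},
              }],
          }),
      )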


  5. Best practice for security: After implementing the S3 Backup feed in Swarm, write a script that rotates the S3 secret access key on a regular basis and updates the key in the S3 Backup feed definition in Swarm (using the management API call given in Rotating the S3 Access Key, below).

...

  1. Through the public cloud provider, create a new S3 access key and grant the correct permissions for the target S3 bucket.

  2. Using Swarm's management API, update the access credentials for the existing S3 backup feed.

  3. Upon confirming successful feed operations with the new credentials, expire/remove the old S3 access key.
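
The AWS side of this rotation can be scripted. Below is a minimal sketch using Python and boto3; the user name and old key ID are example values, and the Swarm feed update itself uses the management API command template that follows.

Rotate access key, AWS side (sketch)

Code Block
languagepy
# Minimal sketch, assuming boto3 is installed and AWS credentials
# are configured. USER and OLD_KEY_ID are example values.
import boto3

iam = boto3.client("iam")
USER = "caringo_backup"   # example dedicated backup user
OLD_KEY_ID = "AKIA..."    # placeholder: the key currently in the feed

# Step 1: create the new access key; permissions carry over because
# the policies are attached to the user, not to the key.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New Access Key ID:", new_key["AccessKeyId"])

# Step 2: update the S3 Backup feed in Swarm with the new credentials
# (see the management API command template below).

# Step 3: after confirming successful feed operations, deactivate,
# then delete, the old key.
iam.update_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID, Status="Inactive")
iam.delete_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID)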

The following command template demonstrates how to use the Swarm management API to update the access credentials for an existing S3 backup feed:

...