...
- Cold storage offers the lowest monthly price per byte stored, compared to the standard storage classes.
- Standard storage classes have low-latency retrieval times, which can allow a Swarm Restore to complete in a single run.
- Cold storage has longer retrieval latency, as much as 12-48 hours for S3 Glacier Deep Archive, to pull content from archival storage. Depending upon how a restore is performed, you may need to run the Swarm Restore tool multiple times over several hours in order to complete a restoration.
- Cold storage incurs additional charges for egress and API requests to access your backup, so it is best suited to low-touch use cases.
- S3 Glacier Deep Archive rounds up small objects, so the overall footprint being charged may be larger because of Swarm's use of metadata objects.
...
While these instructions are for AWS S3 (see also S3 Backup Feeds to Wasabi), other S3-based public cloud providers have a similar setup process:
- Service — If needed, sign up for Amazon S3.
- Go to aws.amazon.com/s3 and choose Get started with Amazon S3.
- Follow the on-screen instructions.
- AWS will notify you by email when your account is active and ready to use.
- Note that you will use the S3 console to create your new bucket but the separate IAM service to create your new user.
- Bucket — Create a bucket that will be dedicated to backing up your Swarm cluster.
- Sign in and open the S3 console: console.aws.amazon.com/s3
- Choose Create bucket. (See S3 documentation: Creating a Bucket.)
- On tab 1 - Name and region, make your initial entries:
- For Bucket name, enter a DNS-compliant name for your new bucket. You will not be able to change it later, so choose well:
- The name must be unique across all existing bucket names in Amazon S3.
The name must be a valid DNS name, containing only lowercase letters and numbers (and internal periods, hyphens, underscores), between 3 and 64 characters. (See S3 documentation: Rules for Bucket Naming.)
Tip: For easier identification, incorporate the name of the Swarm cluster that this bucket will be dedicated to backing up.
- For Region, choose the one that is appropriate for your business needs. (See S3 documentation: Regions and Endpoints.)
- On tab 2 - Configure options, take the defaults. (See S3 documentation: Creating a Bucket, step 4.)
Best practice: Do not enable versioning or any other optional features unless your organization requires them.
- On tab 3 - Set permissions, take the default, Block all public access; now only the bucket owner account has full access.
Best practice: Do not use the bucket owner account to provide Swarm's access to the bucket; instead, you will create a new, separate IAM user that will hold the credentials to share with Swarm.
- Choose Create, and record the fully qualified bucket name (such as "arn:aws:s3:::example.cluster1.backup") for use later, in policies.
- Record these values for configuring your S3 Backup feed in Swarm:
- Bucket Name
- Region
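If you script bucket creation, the naming rules above can be checked before calling AWS. A minimal Python sketch (the regex encodes only the rules listed here, not Amazon's complete validation, and the function name is illustrative):

```python
import re

# Rules from above: 3-64 characters, lowercase letters and numbers, with
# periods, hyphens, and underscores allowed only in the interior. This is a
# simplified check, not AWS's full bucket-name validator.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9._-]{1,62}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if the name satisfies the DNS-style rules listed above."""
    return BUCKET_NAME_RE.fullmatch(name) is not None

print(is_valid_bucket_name("example.cluster1.backup"))  # True
print(is_valid_bucket_name("Example_Backup"))           # False: uppercase
print(is_valid_bucket_name("ab"))                       # False: too short
```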
- User — Create a programmatic (non-human) user that will be dedicated to Swarm access.
- From the AWS console, open the IAM (Identity and Access Management) service and click Users.
- Add a dedicated user, such as caringo_backup, to provide Programmatic access for Swarm.
- The IAM console generates an access key (an access key ID + secret access key), which you must record immediately. (See S3 documentation: Managing Access Keys for IAM Users and Understanding and Getting Your Security Credentials.)
This is your only opportunity to view or download the secret access key, so save it in a secure place.
- Record the fully qualified user ARN (such as "arn:aws:iam::123456789012:user/caringo_backup") for use later, in policies.
- Record these values for configuring your S3 Backup feed in Swarm:
- Access Key ID
- Secret Access Key
- Policies — Create policies on both the user and the bucket so that the programmatic user has exclusive rights to your S3 bucket. You may use the policy generators provided or enter edited versions of the examples below.
- Create an IAM policy for this user, allowing it all S3 actions on the backup bucket, which you need to specify as a fully qualified Resource (which you recorded above), starting with arn:aws:s3:::

IAM policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::example.cluster1.backup"
    }
  ]
}
```
- Create a matching bucket policy to grant access to the dedicated backup user, which you need to specify as a fully qualified Principal, which is the User ARN (which you recorded above), starting with arn:aws:iam:: (See S3 documentation: Using Bucket Policies.)

Using the Policy Generator, be sure to allow all S3 actions for your bucket, using the full ARN name:

Bucket policy:

```json
{
  "Id": "Policy1560809845679",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1560809828003",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::example.cluster1.backup",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:user/caringo_backup"
        ]
      }
    }
  ]
}
```
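If you manage these policies with a script rather than the console generators, both documents can be produced from the ARNs you recorded earlier. A minimal Python sketch (the function names and the Sid value are illustrative, not part of any AWS SDK):

```python
import json

def make_iam_policy(bucket_arn: str) -> str:
    """IAM policy allowing the backup user all S3 actions on the bucket."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:*", "Resource": bucket_arn}
        ],
    }, indent=2)

def make_bucket_policy(bucket_arn: str, user_arn: str) -> str:
    """Matching bucket policy naming the backup user as Principal."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SwarmBackupAccess",  # illustrative statement ID
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": bucket_arn,
                "Principal": {"AWS": [user_arn]},
            }
        ],
    }, indent=2)

print(make_iam_policy("arn:aws:s3:::example.cluster1.backup"))
print(make_bucket_policy(
    "arn:aws:s3:::example.cluster1.backup",
    "arn:aws:iam::123456789012:user/caringo_backup",
))
```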
- Best practice for security: After you implement the S3 Backup feed in Swarm, write a script to automate rotation of the S3 secret access key on a regular basis, including updating the S3 Backup feed definition in Swarm (using the management API call given in Rotating the S3 Access Key, below).
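The rotation script itself must call the AWS IAM API and the Swarm management API, which are environment-specific; the scheduling decision, though, can be sketched independently. A minimal Python helper, assuming a 90-day interval (an example value, not a Swarm requirement):

```python
from datetime import datetime, timedelta, timezone

# Example rotation interval; choose one that fits your security policy.
ROTATION_INTERVAL = timedelta(days=90)

def rotation_due(created: datetime, now: datetime) -> bool:
    """Return True when the access key is older than the rotation interval."""
    return now - created >= ROTATION_INTERVAL

key_created = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(rotation_due(key_created, datetime(2024, 5, 1, tzinfo=timezone.utc)))  # True
print(rotation_due(key_created, datetime(2024, 1, 15, tzinfo=timezone.utc)))  # False
```

When the helper returns True, the script would create a new access key for the IAM user, update the feed definition in Swarm, verify the feed, and then deactivate the old key.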
...