...
Install s3cmd: on OS X, use brew or Python pip; on Windows, install Python 2.7 and pip first. For more info, see the s3cmd README.
sudo pip install s3cmd
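Alternatively, on OS X with Homebrew already installed, a one-line install (assumes the s3cmd formula is available in your taps):
brew install s3cmd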
Verify that s3cmd is version 1.5.2 or later:
s3cmd --version
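It prints the installed version, for example (the version number here is illustrative; yours will differ):
s3cmd version 1.6.1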
Edit your /etc/hosts file (c:\Windows\System32\drivers\etc\hosts on Windows) and add a mapping for your domain to your Content Gateway IP address.
192.168.99.100 mydomain.example.com
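A quick way to confirm the mapping resolves (use "ping -n 1" on Windows); the IP reported should match the entry above:
ping -c 1 mydomain.example.com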
Edit your ~/.s3cfg file and paste all of these settings into it. Note: if you don't increase the part size here, pass --multipart-chunk-size-mb=100 on each s3cmd put/sync:
# This should be your ~/.s3cfg file. It configures the s3cmd utility
# to access your Swarm Content Gateway domain.
[default]
access_key = {access-key-for-token}
secret_key = {secret-key-for-token}
# Must use default port 80 to avoid "S3 error: 403 (SignatureDoesNotMatch)".
# Or you can use a custom S3 port if you configure V2 signatures below.
host_base = mydomain.example.com:80
host_bucket = mydomain.example.com:80
# Below format might be needed under older s3cmd versions, but requires wildcard dns.
#host_bucket = %(bucket)s.mydomain.example.com:80
signature_v2 = True
check_ssl_certificate = False
use_https = False
# Important for improving Swarm performance and reducing storage overhead!
multipart_chunk_size_mb = 100
Remember to replace "mydomain.example.com:80" everywhere with your actual Content Gateway domain and S3 port!
Generate a new access key (token) via the Content Portal or a command-line curl, e.g.:
# Create an S3 token that expires in 90 days; assumes the gateway's SCSP port is 8081
$ curl -v -u "caringoadmin" -X POST --data-binary "" -H "X-User-Secret-Key-Meta: secret" -H "X-User-Token-Expires-Meta: +90" "http://mydomain.example.com:8081/.TOKEN/"
Set access_key to the 32-character token UUID returned by the request and set secret_key to the secret string you supplied (here, "secret").
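For example, your credentials section would then look like this (hypothetical values; substitute your own token UUID and secret):
# Hypothetical credentials -- replace with the values from your own token request
access_key = 0e71169c9ab10b293bda2b454bf20c35
secret_key = secret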
You're now ready to use s3cmd to list and create buckets, and copy files in or out.
# List all your buckets in the domain
$ s3cmd ls ...
# Problems connecting or a signature mismatch? Show debug
# output to see exactly what is sent and returned.
$ s3cmd ls -d
# Download all the files from your "images" bucket
$ mkdir headshots && s3cmd get -r s3://images headshots
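To copy files in, create a bucket and upload into it (a short sketch; the bucket and file names are hypothetical):
# Create a new bucket and upload a local file into it
$ s3cmd mb s3://uploads
$ s3cmd put photo1.jpg s3://uploads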
# Generate a signed url that expires in an hour
$ s3cmd signurl s3://mybucket/file.html +3600
http://mybucket.mydomain.example.com:80/file.html?AWSAccessKeyId=0e71169c9ab10b293bda2b454bf20c35&Expires=1447998649&Signature=KKwTgl0x%2Fk96jaPzp60LQ97ozO0%3D
The bucket can be moved from the hostname into the path, as shown below. The command always outputs "http", but you can use "https" -- just make sure your front-end proxy routes requests carrying the "AWSAccessKeyId" query arg to the Content Gateway S3 port.
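For example, the same signed URL in path style (values copied from the output above; with V2 signatures the same signature is valid in either form, since the canonicalized resource is identical):
http://mydomain.example.com:80/mybucket/file.html?AWSAccessKeyId=0e71169c9ab10b293bda2b454bf20c35&Expires=1447998649&Signature=KKwTgl0x%2Fk96jaPzp60LQ97ozO0%3D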
# List S3 multipart uploads in progress that were begun in 2015 and delete them,
# including parts. The sed encodes spaces in object names so xargs splits correctly;
# awk pulls the tab-separated path and upload-id columns for "s3cmd abortmp".
$ s3cmd multipart s3://inbox | grep '^2015-' | sed 's/ /%20/g' | awk -F$'\t' '{print $2, $3}' | xargs -p -r -t -n 2 s3cmd abortmp
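Afterwards, re-run the listing to confirm no multipart uploads remain in progress:
$ s3cmd multipart s3://inbox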
...