rclone is a fast and stable open source command-line utility for listing and copying files between storage systems and local file systems. It is also cross-platform, available for Linux, OS X, and Microsoft Windows.
http://rclone.org/
http://linoxide.com/file-system/configure-rclone-linux-sync-cloud/
Download rclone for your platform, unzip, and put the binary in your PATH. http://rclone.org/downloads/
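For example, a minimal install sketch on Linux amd64 (the "current" zip name is taken from the rclone downloads page; adjust the file name for your platform and release):
$ curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
$ unzip rclone-current-linux-amd64.zip
$ sudo cp rclone-*-linux-amd64/rclone /usr/local/bin/   # put the binary somewhere in your PATH
$ rclone version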
You can skip "rclone config" by using this template.
[caringo]
type=s3
access_key_id=${S3_ACCESS_KEY}
secret_access_key=${S3_SECRET_KEY}
endpoint=${S3_PROTOCOL}://${DOMAIN}:${S3_PORT}
location_constraint=
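One way to fill in that template is with envsubst (from GNU gettext). This is just a sketch: you export the variables yourself, and the template file name rclone.conf.template is a hypothetical example:
$ export S3_ACCESS_KEY=c63d5b1034c6b41b119683a5e264abd0   # token created in the example below
$ export S3_SECRET_KEY=secret
$ export S3_PROTOCOL=https DOMAIN=mydomain.cloud.caringo.com S3_PORT=443
$ envsubst < rclone.conf.template > ~/.rclone.conf        # substitute the ${...} placeholders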
For example, if your S3 domain/endpoint is "https://mydomain.cloud.caringo.com", you can create a token with:
$ curl -i -u johndoe -X POST --data-binary '' -H 'X-User-Secret-Key-Meta: secret' \
  -H 'X-User-Token-Expires-Meta: +90' https://mydomain.cloud.caringo.com/.TOKEN/
HTTP/1.1 201 Created
...
Token c63d5b1034c6b41b119683a5e264abd0 issued for johndoe in [root] with secret secret
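If you are scripting this, you can capture just the token value. The sketch below assumes the response body contains the "Token ... issued for ..." line shown above:
$ TOKEN=$(curl -s -u johndoe -X POST --data-binary '' -H 'X-User-Secret-Key-Meta: secret' \
    -H 'X-User-Token-Expires-Meta: +90' https://mydomain.cloud.caringo.com/.TOKEN/ \
    | awk '/^Token / {print $2}')   # second field of the "Token ..." line is the token id
$ echo "$TOKEN"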
Then add this entry to a ~/.rclone.conf file (or, with newer rclone versions, ~/.config/rclone/rclone.conf):
[caringo]
type = s3
# Do NOT use V2 sigs, have seen signature problems.
# region = other-v2-signature
access_key_id = c63d5b1034c6b41b119683a5e264abd0
secret_access_key = secret
endpoint = https://mydomain.cloud.caringo.com
location_constraint =
If you prefer a GUI client to manage your copy and sync jobs, try https://martins.ninja/RcloneBrowser/. Just download the binary from https://github.com/mmozeiko/RcloneBrowser/releases and point it to your rclone.conf. It's very flexible; you can configure any of the options below.
Here are some example commands:
- List the buckets in your domain
$ rclone lsd caringo:
-1 2015-03-16 20:13:52 -1 public
-1 2015-11-28 23:10:32 -1 inbox
Transferred: 0 Bytes ( 0.00 kByte/s)
Errors: 0
Checks: 0
Transferred: 0
Elapsed time: 5.653212245s
- Copy your Pictures directory (recursively) to an "old-pics" bucket. It will be created if it does not exist.
$ rclone copy --s3-upload-concurrency 10 --s3-chunk-size 100M '/Volumes/Backup/Pictures/' caringo:old-pics
2016/01/12 13:55:47 S3 bucket old-pics: Building file list
2016/01/12 13:55:48 S3 bucket old-pics: Waiting for checks to finish
2016/01/12 13:55:48 S3 bucket old-pics: Waiting for transfers to finish
2016/01/12 13:56:45
Transferred: 2234563 Bytes ( 36.36 kByte/s)
Errors: 0
Checks: 0
Transferred: 1
Elapsed time: 1m0.015171105s
Transferring: histomapwider.jpg
...
- List the files in the bucket
$ rclone ls caringo:old-pics
6148 .DS_Store
4032165 histomapwider.jpg
...
- Quickly see the size of the objects in a bucket:
$ rclone size caringo:old-pics
Total objects: 173
Total size: 9.550 GBytes (10254108727 Bytes)
- Verify all files were uploaded (note the trailing slash is necessary on the local directory!). The check command can also compare two buckets.
$ rclone check ~/Pictures/test/ caringo:old-pics
2016/01/12 14:01:18 S3 bucket old-pics: Building file list
2016/01/12 14:01:18 S3 bucket old-pics: 1 files not in Local file system at /Users/jamshid/Pictures/test
2016/01/12 14:01:18 .DS_Store: File not in Local file system at /Users/jamshid/Pictures/test
2016/01/12 14:01:18 Local file system at /Users/jamshid/Pictures/test: 0 files not in S3 bucket old-pics
2016/01/12 14:01:18 S3 bucket old-pics: Waiting for checks to finish
2016/01/12 14:01:18 S3 bucket old-pics: 1 differences found
2016/01/12 14:01:18 Failed to check: 1 differences found
Note that "check" appears to be confused by the Mac OS X hidden directory ".DS_Store". - Tips: use "
-v
" and "--dump headers
" or "--dump bodies
" to see verbose details. - Increase the part size with
--s3-chunk-size 100M
(defaults to 5M) to improve the speed and storage efficiency of resulting large streams. - Speed up large transfers with "
--
transfers=10
" and "--s3-upload-concurrency 4
". - You might want to use
--s3-disable-checksum
when uploading huge files. - Unfortunately rclone does not copy or let you add metadata, though there are some enhancement requests on github.