Created 1/12/2016 jamshid.afshar · Updated 4/14/2017 jamshid.afshar
The open source command-line tool "rclone" is a fast and stable utility for listing and copying files between storage systems and local file systems. It is also cross-platform, available for Linux, OS X, and Microsoft Windows.
http://rclone.org/
http://linoxide.com/file-system/configure-rclone-linux-sync-cloud/
Download rclone for your platform, unzip it, and put the binary in your PATH: http://rclone.org/downloads/
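For example, on 64-bit Linux the install might look like the following (the download URL and install path here are typical assumptions; pick the build that matches your platform from the downloads page):
$ curl -OL https://downloads.rclone.org/rclone-current-linux-amd64.zip
$ unzip rclone-current-linux-amd64.zip
$ sudo cp rclone-*-linux-amd64/rclone /usr/local/bin/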
You can skip "rclone config" by using this template.
[caringo]
type = s3
region = other-v2-signature
access_key_id = ${S3_ACCESS_KEY}
secret_access_key = ${S3_SECRET_KEY}
endpoint = ${S3_PROTOCOL}://${DOMAIN}:${S3_PORT}
location_constraint =
part_size = 52428800
*Increasing the part_size from the default 5MB is necessary to improve the speed and storage efficiency of the resulting large streams. It is not configurable in the current rclone client; contact Support for a patched version.
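If you keep the ${...} placeholders in the template, they must be expanded before rclone reads the file; one option is envsubst from GNU gettext, sketched below with placeholder values and a hypothetical template file name:
$ export S3_PROTOCOL=https DOMAIN=mydomain.cloud.caringo.com S3_PORT=443
$ export S3_ACCESS_KEY=mytoken S3_SECRET_KEY=mysecret
$ envsubst < rclone.conf.template > ~/.rclone.conf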
For example, if your S3 domain / endpoint is "https://mydomain.cloud.caringo.com", you can create a token with:
$ curl -i -u johndoe -X POST --data-binary '' -H 'X-User-Secret-Key-Meta: secret' \
-H 'X-User-Token-Expires-Meta: +90' https://mydomain.cloud.caringo.com/.TOKEN/
...
HTTP/1.1 201 Created ... Token c63d5b1034c6b41b119683a5e264abd0 issued for johndoe in [root] with secret secret
Then add this entry to a ~/.rclone.conf file:
[caringo]
type = s3
region = other-v2-signature
access_key_id = c63d5b1034c6b41b119683a5e264abd0
secret_access_key = secret
endpoint = https://mydomain.cloud.caringo.com
location_constraint =
Here are some example commands:
- List the buckets in your domain
$ rclone lsd caringo:
          -1 2015-03-16 20:13:52        -1 public
          -1 2015-11-28 23:10:32        -1 inbox
Transferred: 0 Bytes ( 0.00 kByte/s)
Errors: 0
Checks: 0
Transferred: 0
Elapsed time: 5.653212245s
- Copy your Pictures directory (recursively) to an "old-pics" bucket. It will be created if it does not exist.
$ rclone copy '/Volumes/Backup/Pictures/' caringo:old-pics
2016/01/12 13:55:47 S3 bucket old-pics: Building file list
2016/01/12 13:55:48 S3 bucket old-pics: Waiting for checks to finish
2016/01/12 13:55:48 S3 bucket old-pics: Waiting for transfers to finish
2016/01/12 13:56:45
Transferred: 2234563 Bytes ( 36.36 kByte/s)
Errors: 0
Checks: 0
Transferred: 1
Elapsed time: 1m0.015171105s
Transferring: histomapwider.jpg
...
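- To make a bucket exactly mirror a local directory, "rclone sync" can be used instead of "copy"; note that sync deletes files from the destination that are missing from the source, so a "--dry-run" pass is a reasonable first step. A minimal sketch reusing the paths from the copy example above:
$ rclone sync --dry-run '/Volumes/Backup/Pictures/' caringo:old-pics
$ rclone sync '/Volumes/Backup/Pictures/' caringo:old-pics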
- List the files in the bucket
$ rclone ls caringo:old-pics
     6148 .DS_Store
  4032165 histomapwider.jpg
...
- Verify all files were uploaded (note the trailing slash is necessary on the local directory!)
$ rclone check ~/Pictures/test/ caringo:old-pics
2016/01/12 14:01:18 S3 bucket old-pics: Building file list
2016/01/12 14:01:18 S3 bucket old-pics: 1 files not in Local file system at /Users/jamshid/Pictures/test
2016/01/12 14:01:18 .DS_Store: File not in Local file system at /Users/jamshid/Pictures/test
2016/01/12 14:01:18 Local file system at /Users/jamshid/Pictures/test: 0 files not in S3 bucket old-pics
2016/01/12 14:01:18 S3 bucket old-pics: Waiting for checks to finish
2016/01/12 14:01:18 S3 bucket old-pics: 1 differences found
2016/01/12 14:01:18 Failed to check: 1 differences found
Note that "check" appears to be confused by the Mac OS X hidden file ".DS_Store".
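One way to keep such files out of the comparison (or out of the upload in the first place) is rclone's filter flags; for example, excluding ".DS_Store" everywhere (a sketch, adjust the pattern as needed):
$ rclone check --exclude '.DS_Store' ~/Pictures/test/ caringo:old-pics
The same "--exclude" flag also works with "rclone copy" and "rclone sync".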
- Tips: use "-v" and "--dump-headers" or "--dump-bodies" to see verbose details, and you can try speeding up large transfers with "--transfers=10".
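For example, a larger upload combining those flags might look like this (paths reused from the earlier example; the transfer count is just a starting point):
$ rclone copy -v --transfers=10 '/Volumes/Backup/Pictures/' caringo:old-pics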
- Unfortunately rclone does not copy or let you add metadata, though there are some enhancement requests [https://github.com/ncw/rclone/issues/111] on github.