...

[caringo]
type=s3
access_key_id=${S3_ACCESS_KEY}
secret_access_key=${S3_SECRET_KEY}
endpoint=${S3_PROTOCOL}://${DOMAIN}:${S3_PORT}
location_constraint=part_size=52428800

*Increasing part_size from the default 5 MB is necessary to improve the speed and storage efficiency of the resulting large streams. It is not configurable in the current rclone client; contact Support for a patched version.
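
For reference, 52428800 bytes is 50 MiB (50 × 1024 × 1024). If you keep the ${...} placeholders above as a template, one way to fill them in is with envsubst (from GNU gettext); the template file name and the credential values below are only illustrative:

$ export S3_PROTOCOL=https DOMAIN=mydomain.cloud.caringo.com S3_PORT=443
$ export S3_ACCESS_KEY=yourtoken S3_SECRET_KEY=yoursecret
$ envsubst < rclone.conf.template > ~/.rclone.conf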

For example, if your S3 domain/endpoint is "https://mydomain.cloud.caringo.com", you can create a token with:

$ curl -i -u johndoe -X POST --data-binary '' -H 'X-User-Secret-Key-Meta: secret' \
  -H 'X-User-Token-Expires-Meta: +90' https://mydomain.cloud.caringo.com/.TOKEN/
HTTP/1.1 201 Created
...
Token c63d5b1034c6b41b119683a5e264abd0 issued for johndoe in [root] with secret secret

Then add this entry to your ~/.rclone.conf file (or, with newer rclone versions, ~/.config/rclone/rclone.conf):

[caringo]
type = s3
region = other-v2-signature
access_key_id = c63d5b1034c6b41b119683a5e264abd0
secret_access_key = secret
endpoint = https://mydomain.cloud.caringo.com
location_constraint =

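The "region = other-v2-signature" line tells rclone to sign requests with AWS signature version 2, which this gateway expects. With newer rclone releases you can sanity-check the saved entry (assuming you named the remote "caringo" as above):

$ rclone config show caringo
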
Here are some example commands:

  • List the buckets in your domain
    $ rclone lsd caringo:
             -1 2015-03-16 20:13:52        -1 public
             -1 2015-11-28 23:10:32        -1 inbox
    Transferred:            0 Bytes (   0.00 kByte/s)
    Errors:                 0
    Checks:                 0
    Transferred:            0
    Elapsed time:  5.653212245s
  • Copy your Pictures directory (recursively) to an "old-pics" bucket; it will be created if it does not exist.
    $ rclone copy --s3-upload-concurrency 10 --s3-chunk-size 100M '/Volumes/Backup/Pictures/' caringo:old-pics
    2016/01/12 13:55:47 S3 bucket old-pics: Building file list
    2016/01/12 13:55:48 S3 bucket old-pics: Waiting for checks to finish
    2016/01/12 13:55:48 S3 bucket old-pics: Waiting for transfers to finish
    2016/01/12 13:56:45 
    Transferred:      2234563 Bytes (  36.36 kByte/s)
    Errors:                 0
    Checks:                 0
    Transferred:            1
    Elapsed time:  1m0.015171105s
    Transferring:  histomapwider.jpg
    ...
  • List the files in the bucket
    $ rclone ls caringo:old-pics
        6148 .DS_Store
     4032165 histomapwider.jpg
    ...
  • Verify all files were uploaded (note that the trailing slash on the local directory is necessary!)
    $ rclone check ~/Pictures/test/ caringo:old-pics
    2016/01/12 14:01:18 S3 bucket old-pics: Building file list
    2016/01/12 14:01:18 S3 bucket old-pics: 1 files not in Local file system at /Users/jamshid/Pictures/test
    2016/01/12 14:01:18 .DS_Store: File not in Local file system at /Users/jamshid/Pictures/test
    2016/01/12 14:01:18 Local file system at /Users/jamshid/Pictures/test: 0 files not in S3 bucket old-pics
    2016/01/12 14:01:18 S3 bucket old-pics: Waiting for checks to finish
    2016/01/12 14:01:18 S3 bucket old-pics: 1 differences found
    2016/01/12 14:01:18 Failed to check: 1 differences found

    Note that "check" appears to be confused by the Mac OS X hidden directory ".DS_Store".
  • Tips: use "-v" plus "--dump-headers" or "--dump-bodies" to see verbose request and response details.
    Increase the part size with "--s3-chunk-size" from the default 5M to improve the speed and storage efficiency of the resulting large streams.
    Speed up large transfers with "--transfers=10" and "--s3-upload-concurrency 10".
    You might want to use "--s3-disable-checksum" when uploading huge files.
    See the combined example after this list.
  • Unfortunately rclone does not copy or let you add metadata, though there are some enhancement requests on GitHub.
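
Putting the tips together, here is a sketch of a tuned upload. All flags appear elsewhere in this document; the concurrency and chunk-size values are examples to adjust for your environment:

$ rclone copy -v --transfers=10 --s3-upload-concurrency 10 \
    --s3-chunk-size 100M --s3-disable-checksum \
    '/Volumes/Backup/Pictures/' caringo:old-pics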