...
- List the buckets in your domain

    $ rclone lsd caringo:
              -1 2015-03-16 20:13:52        -1 public
              -1 2015-11-28 23:10:32        -1 inbox

    Transferred:            0 Bytes (   0.00 kByte/s)
    Errors:                 0
    Checks:                 0
    Transferred:            0
    Elapsed time:   5.653212245s
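- Buckets can also be created ahead of time; a minimal sketch using the same "caringo:" remote (rclone rmdir removes an empty bucket again):

    $ rclone mkdir caringo:old-pics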
- Copy your Pictures directory (recursively) to an "old-pics" bucket. It will be created if it does not exist.

    $ rclone copy --s3-upload-concurrency 10 --s3-chunk-size 100M '/Volumes/Backup/Pictures/' caringo:old-pics
    2016/01/12 13:55:47 S3 bucket old-pics: Building file list
    2016/01/12 13:55:48 S3 bucket old-pics: Waiting for checks to finish
    2016/01/12 13:55:48 S3 bucket old-pics: Waiting for transfers to finish
    2016/01/12 13:56:45
    Transferred:      2234563 Bytes (  36.36 kByte/s)
    Errors:                 0
    Checks:                 0
    Transferred:            1
    Elapsed time:     1m0.015171105s
    Transferring:  histomapwider.jpg
    ...
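- To keep the bucket in step with the local directory on later runs, rclone sync can be used instead of copy; a sketch (note that sync deletes files on the destination that are no longer present locally, so a --dry-run first is prudent):

    $ rclone sync --dry-run '/Volumes/Backup/Pictures/' caringo:old-pics
    $ rclone sync '/Volumes/Backup/Pictures/' caringo:old-pics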
- List the files in the bucket

    $ rclone ls caringo:old-pics
        6148 .DS_Store
     4032165 histomapwider.jpg
    ...
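- For a listing that also shows modification times, rclone lsl works the same way; a sketch against the same bucket:

    $ rclone lsl caringo:old-pics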
- Quickly see the size of the objects in a bucket:

    $ rclone size jam:old-pics
    Total objects: 173
    Total size: 9.550 GBytes (10254108727 Bytes)
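- rclone size also accepts a path within a bucket, so a single prefix can be measured; a sketch with a hypothetical "2015" prefix:

    $ rclone size jam:old-pics/2015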
- Verify all files were uploaded (note that the trailing slash is necessary on the local directory!). The check command can also compare two buckets.

    $ rclone check ~/Pictures/test/ caringo:old-pics
    2016/01/12 14:01:18 S3 bucket old-pics: Building file list
    2016/01/12 14:01:18 S3 bucket old-pics: 1 files not in Local file system at /Users/jamshid.../Pictures/test
    2016/01/12 14:01:18 .DS_Store: File not in Local file system at /Users/jamshid.../Pictures/test
    2016/01/12 14:01:18 Local file system at /Users/jamshid..../Pictures/test: 0 files not in S3 bucket old-pics
    2016/01/12 14:01:18 S3 bucket old-pics: Waiting for checks to finish
    2016/01/12 14:01:18 S3 bucket old-pics: 1 differences found
    2016/01/12 14:01:18 Failed to check: 1 differences found
Note that "check" appears to be confused by the Mac OS X hidden directory ".DS_Store". - Tips: use "
-v
" and "--dump headers
" or "--dump bodies
" to see verbose details. - To ignore system files you don't want compared or uploaded use something like:
--excludes '.DS_Store' --exclude '.Trashes**' --exclude '.fseventsd**' --exclude '.Spotlight**' --exclude '._*'
- Increase the part size with
--s3-chunk-size 100M
(defaults to 5M) to improve the speed and storage efficiency of resulting large streams. - Speed up large transfers with "
--
transfers=10
" and "--s3-upload-concurrency 4
". - You might want to use
--s3-disable-checksum
when uploading huge files. - Unfortunately rclone does not copy or let you add metadata, though there are some enhancement requests on github.
- Copy a file from a plain http website into Swarm by streaming it directly. The command below uses an "http" remote configured as shown (here it copies to a local directory; point the destination at a bucket such as caringo:old-pics to stream into Swarm instead):

    # rclone -v --dump headers copy commondatastorage:gtv-videos-bucket/sample/ElephantsDream.mp4 /tmp/

    [commondatastorage]
    type = http
    url = https://commondatastorage.googleapis.com
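Such an "http" remote can also be created non-interactively; a sketch using rclone config create (the remote name here is just an example):

    $ rclone config create commondatastorage http url https://commondatastorage.googleapis.com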