
Note this is only supported via Gateway: https://connect.caringo.com/system/files/docs/c/MultipartMIMEPOST.html

curl -v -u "USER:PASSWORD" -F upload=@/tmp/myhugefile.zip -F upload=@/tmp/foo.gif "http://mydomain.example.com/mybucket/"

Don't forget the "@" before the filename! You can specify multiple files, but remember that these files will use the Gateway spool directory. Note that the URL is only the bucket (or a subdirectory-like path); the stream name will be based on the uploaded filename.
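For scripting, the same multipart MIME POST body can be built by hand. Below is a minimal Python sketch using only the standard library; the field name `upload`, the file names, and the bucket URL mirror the curl example above, and the request itself is constructed but not sent:

```python
import io
import uuid

def build_multipart_body(files, field="upload"):
    """Build a multipart/form-data body like `curl -F upload=@file`.

    `files` is a list of (filename, bytes) pairs.
    Returns (content_type, body) ready to POST.
    """
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, data in files:
        # Each file becomes one part, separated by the boundary line.
        buf.write(b"--" + boundary.encode() + b"\r\n")
        buf.write(
            f'Content-Disposition: form-data; name="{field}"; '
            f'filename="{name}"\r\n\r\n'.encode()
        )
        buf.write(data)
        buf.write(b"\r\n")
    # Closing boundary marks the end of the multipart body.
    buf.write(b"--" + boundary.encode() + b"--\r\n")
    return f"multipart/form-data; boundary={boundary}", buf.getvalue()

content_type, body = build_multipart_body(
    [("myhugefile.zip", b"zip bytes"), ("foo.gif", b"gif bytes")]
)
# POST `body` with header `Content-Type: <content_type>` to
# http://mydomain.example.com/mybucket/ (plus Basic auth, as in the
# curl example) to get the same result.
```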

This type of upload results in streams that are either a single object (replicated according to policy.replicas) or erasure-coded (EC; see https://connect.caringo.com/system/files/docs/s/WorkingwithLargeObjects.html), depending on factors such as the file size and EC settings. Whether a file is uploaded with Transfer-Encoding: chunked can also influence how it is written.

SCSP MULTIPART (PARALLEL WRITE)

This is useful for uploading large files. You "initiate" the upload, upload each part of the file, then make a "complete" request. See https://connect.caringo.com/system/files/docs/s/ParallelWriteExample.html
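Because each part is an independent request, the client decides how to slice the file. A hedged sketch of that slicing in Python (the part size and file size below are made up for illustration; the actual initiate/part/complete requests are described in the example linked above):

```python
def part_ranges(total_size, part_size):
    """Split an object of `total_size` bytes into (offset, length) ranges,
    one per part, for a parallel (multipart) write.

    Every part is `part_size` bytes except possibly the last.
    """
    ranges = []
    offset = 0
    while offset < total_size:
        length = min(part_size, total_size - offset)
        ranges.append((offset, length))
        offset += length
    return ranges

# e.g. a 2.5 GiB file sliced into 100 MiB parts
parts = part_ranges(total_size=2_684_354_560, part_size=104_857_600)
```

Each (offset, length) range can then be read from the file and uploaded concurrently before the final "complete" request.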

This type of upload always results in an EC stream, even if the final object is smaller than the EC minimum setting.

S3 MULTIPART UPLOAD

The S3 protocol is only supported via Gateway, but the implementation of S3 multipart uses Swarm SCSP multipart (parallel write) and behaves similarly. The s3cmd utility provides a good way to do a multipart upload, but rclone is faster because it uploads the parts in parallel. If your bucket allows "anonymous" writes, you can use curl. See http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
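One practical constraint from the S3 protocol: every part except the last must be at least 5 MiB, and an upload may have at most 10,000 parts. A sketch of picking a part size within those limits (the doubling strategy here is an illustrative assumption, not what s3cmd or rclone actually do):

```python
MIN_PART = 5 * 1024 * 1024   # S3 minimum part size (all parts but the last)
MAX_PARTS = 10_000           # S3 maximum number of parts per upload

def choose_part_size(total_size, preferred=MIN_PART):
    """Pick a part size >= 5 MiB that keeps the upload within
    S3's 10,000-part limit."""
    part_size = max(preferred, MIN_PART)
    # Grow the part size until the whole file fits in MAX_PARTS parts.
    while (total_size + part_size - 1) // part_size > MAX_PARTS:
        part_size *= 2
    return part_size

# A 100 GiB object needs parts larger than the 5 MiB minimum
# to stay under 10,000 parts.
size = choose_part_size(100 * 1024 ** 3)
```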

This type of upload always results in an EC stream, even if the final object is smaller than the EC minimum setting.