
If you're using Swarm without the Gateway proxy, you must add the "--post301 --location-trusted" curl options. You do not need to pass user credentials with Swarm.
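As a sketch of a direct-to-Swarm write with those options (the node hostname is illustrative): "--location-trusted" makes curl follow redirects, and "--post301" keeps the method POST instead of degrading to GET when a 301 is returned.

```shell
# Direct-to-Swarm write (no Gateway). --location-trusted follows redirects;
# --post301 preserves the POST method across a 301 redirect.
# No -u option: Swarm itself does not require user credentials.
# The node hostname is illustrative.
curl -v --post301 --location-trusted -XPOST \
     -H "Content-type: application/zip" \
     -T /tmp/myhugefile.zip \
     "http://swarmnode.example.com/mybucket/myhugefile.zip"
```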


Regular POST

curl -v -u "USER:PASSWORD" -T /tmp/myhugefile.zip -XPOST -H "Content-type: application/zip" "http://mydomain.example.com/mybucket/myhugefile.zip"

Tip

If you need to write an unnamed object, you must input the file via stdin and use "-T -" to prevent curl from appending the filename.
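For example, piping the file through stdin (credentials and domain are illustrative):

```shell
# Unnamed object write from stdin: "-T -" reads the body from standard input,
# so curl has no filename to append to the URL and Swarm names the object.
cat /tmp/myhugefile.zip | curl -v -u "USER:PASSWORD" -XPOST \
    -H "Content-type: application/zip" \
    -T - "http://mydomain.example.com/mybucket/"
```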

The following also works, but unless you use "-T", curl loads the entire file into memory.

curl -v -u "USER:PASSWORD" -XPOST -H "Content-type: application/zip" --data-binary @/tmp/myhugefile.zip "http://mydomain.example.com/mybucket/myhugefile.zip"

HTTP

Multipart MIME (Form) POST (only via Gateway)

Info

This is only supported via Gateway: see Multipart MIME POST.

Multiple files are uploaded in a single POST with Content-Type: multipart/form-data; boundary=----X.

curl -v -u "USER:PASSWORD" -F upload=@/tmp/myhugefile.zip -F upload=@/tmp/foo.gif "http://mydomain.example.com/mybucket/"

Don't forget the "@" before the filename!

Info

You can specify multiple files, but remember these files will use the Gateway spool directory.


The URL is only the bucket (or a subdirectory-like path); the object name is based on the filename uploaded.

This type of upload results in objects that are either replicated (per policy.replicas) or EC (see Working with Large Objects), depending on factors such as the file size and EC settings. Whether a file is uploaded with Transfer-Encoding: chunked can also influence how it is written.
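For reference, the body that curl assembles from those "-F" options looks roughly like the following sketch. The boundary, field names, and payloads are stand-ins; curl generates its own boundary and streams the real file bytes.

```python
# Sketch of a multipart/form-data body like the one curl builds from the
# "-F upload=@..." options above. Boundary and payloads are stand-ins.
BOUNDARY = "----X"

def multipart_body(parts):
    """Build a multipart/form-data body from (field, filename, payload) tuples."""
    lines = []
    for field, filename, payload in parts:
        lines.append("--" + BOUNDARY)                # part delimiter
        lines.append('Content-Disposition: form-data; '
                     'name="%s"; filename="%s"' % (field, filename))
        lines.append("Content-Type: application/octet-stream")
        lines.append("")                             # blank line before payload
        lines.append(payload)
    lines.append("--" + BOUNDARY + "--")             # closing delimiter
    return "\r\n".join(lines)

body = multipart_body([
    ("upload", "myhugefile.zip", "zip-bytes"),
    ("upload", "foo.gif", "gif-bytes"),
])
```

Each "-F" option contributes one part with its own Content-Disposition header carrying the filename, which is how the Gateway derives the object names.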

SCSP

Multipart (Parallel Write)

This is useful for uploading large files. You "initiate" the upload, upload each part of the file, then make a "complete" request. See Multipart Write Example.

This type of upload always results in an EC object, even if the final object is smaller than the EC minimum setting.
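As a hypothetical outline of the three steps (the exact query arguments and response headers are defined in Multipart Write Example; the names used here — uploads, uploadid, partnumber — and the upload ID are assumptions, not the documented syntax):

```shell
# Hypothetical SCSP parallel-write outline; query argument names and the
# UPLOADID placeholder are assumptions — consult Multipart Write Example
# for the real syntax. Domain and part files are illustrative.
URL="http://mydomain.example.com/mybucket/myhugefile.zip"

# 1. Initiate the upload; the response identifies the upload.
curl -XPOST "$URL?uploads"

# 2. Write each part, referencing that upload ID.
curl -XPUT -T part.1 "$URL?uploadid=UPLOADID&partnumber=1"
curl -XPUT -T part.2 "$URL?uploadid=UPLOADID&partnumber=2"

# 3. Complete the upload so Swarm assembles the parts into one object.
curl -XPOST "$URL?uploadid=UPLOADID"
```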

S3

Multipart Upload

The S3 protocol is only supported via Gateway, but the S3 multipart implementation uses Swarm SCSP multipart (parallel writes) and behaves similarly. The s3cmd utility provides a good way to do a multipart upload, but rclone is faster because it uploads the parts in parallel. If your bucket allows "anonymous" writes, you can use curl. See http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
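Against a bucket that allows anonymous writes, the three S3 multipart REST calls can be sketched with curl as follows. The endpoint, part files, UPLOADID, and ETag values are placeholders; the real UploadId comes from the initiate response and the ETags from each part response.

```shell
# S3 multipart upload outline against an anonymous-write bucket.
# Endpoint, part files, UPLOADID, and ETags are placeholders.
ENDPOINT="http://mydomain.example.com/mybucket/myhugefile.zip"

# 1. Initiate: the XML response body contains an <UploadId>.
curl -XPOST "$ENDPOINT?uploads"

# 2. Upload each part (every part except the last must be at least 5 MB);
#    save the ETag header from each response.
curl -XPUT -T part.1 "$ENDPOINT?partNumber=1&uploadId=UPLOADID"
curl -XPUT -T part.2 "$ENDPOINT?partNumber=2&uploadId=UPLOADID"

# 3. Complete: POST the part list with the ETags collected in step 2.
curl -XPOST -d '<CompleteMultipartUpload>
  <Part><PartNumber>1</PartNumber><ETag>"etag1"</ETag></Part>
  <Part><PartNumber>2</PartNumber><ETag>"etag2"</ETag></Part>
</CompleteMultipartUpload>' "$ENDPOINT?uploadId=UPLOADID"
```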
