There are a few different ways to upload files into Swarm. The examples assume "mydomain.example.com" is your domain name and that your DNS resolves that host name to your Gateway. Alternatively, you can specify the domain with the "domain" query argument.
If you're using Swarm without the Gateway proxy, you must add the "--post301 --location-trusted" curl options so that curl follows Swarm's write redirects without downgrading the POST to a GET. You do not need to pass user credentials when writing directly to Swarm.
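For example, a direct-to-Swarm write might look like the sketch below. "SWARM_NODE" is a placeholder for the IP address or host name of any storage node in the cluster, and the "domain" query arg supplies the domain name since no Gateway is resolving it:

curl -v --post301 --location-trusted -T /tmp/myhugefile.zip -XPOST -H "Content-type: application/zip" "http://SWARM_NODE/mybucket/myhugefile.zip?domain=mydomain.example.com"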
REGULAR POST
curl -v -u "USER:PASSWORD" -T /tmp/myhugefile.zip -XPOST -H "Content-type: application/zip" "http://mydomain.example.com/mybucket/myhugefile.zip"
The following also works, but without "-T" curl loads the entire file into memory before sending it, so avoid this form for very large files.
curl -v -u "USER:PASSWORD" -XPOST -H "Content-type: application/zip" --data-binary @/tmp/myhugefile.zip "http://mydomain.example.com/mybucket/myhugefile.zip"
HTTP MULTIPART MIME (FORM) POST (ONLY VIA GATEWAY)
Note this is only supported via Gateway: https://connect.caringo.com/system/files/docs/c/MultipartMIMEPOST.html
curl -v -u "USER:PASSWORD" -F upload=@/tmp/myhugefile.zip -F upload=@/tmp/foo.gif "http://mydomain.example.com/mybucket"
Don't forget the "@" before the filename! You can specify multiple files, but remember that these uploads pass through the Gateway spool directory. Also note the URL names only the bucket (or a subdirectory-like path); the stream name is taken from the uploaded filename.
SCSP PARALLEL WRITE
This is useful for uploading large files. You "initiate" the upload, upload each part of the file (the parts can be written in parallel), then make a "complete" request. See https://connect.caringo.com/system/files/docs/s/ParallelWriteExample.html
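The outline below is a rough sketch of that flow with curl. The exact query arguments ("uploads", "partNumber", "uploadId") and how the upload ID is returned are assumptions modeled on the S3-style flow, so consult the linked example for the authoritative syntax; "UPLOAD_ID" is a placeholder for the ID returned by the initiate request.

# Split the file into parts using standard tools.
split -b 100M /tmp/myhugefile.zip /tmp/part-

# 1. Initiate the upload; the response identifies the upload (assumed "uploads" query arg).
curl -v -u "USER:PASSWORD" -XPOST "http://mydomain.example.com/mybucket/myhugefile.zip?uploads"

# 2. Upload each part; these requests can run concurrently.
curl -v -u "USER:PASSWORD" -T /tmp/part-aa -XPUT "http://mydomain.example.com/mybucket/myhugefile.zip?partNumber=1&uploadId=UPLOAD_ID"
curl -v -u "USER:PASSWORD" -T /tmp/part-ab -XPUT "http://mydomain.example.com/mybucket/myhugefile.zip?partNumber=2&uploadId=UPLOAD_ID"

# 3. Complete the upload so Swarm assembles the parts into the final stream.
curl -v -u "USER:PASSWORD" -XPOST "http://mydomain.example.com/mybucket/myhugefile.zip?uploadId=UPLOAD_ID"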
S3 MULTIPART UPLOAD
The S3 protocol is only supported via Gateway but the implementation uses Swarm parallel writes and behaves similarly. The s3cmd utility provides a good way to do a multipart upload. If your bucket allows "anonymous" writes, you can use "curl". See http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
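As a minimal sketch, s3cmd switches to multipart automatically once a file exceeds its chunk size (the "--multipart-chunk-size-mb" option). The command below assumes the Gateway's S3 listener answers at mydomain.example.com over plain HTTP (drop "--no-ssl" if it uses HTTPS); ACCESS_KEY and SECRET_KEY are placeholders for your S3 token credentials:

s3cmd --host=mydomain.example.com --host-bucket=mydomain.example.com --access_key=ACCESS_KEY --secret_key=SECRET_KEY --no-ssl put --multipart-chunk-size-mb=100 /tmp/myhugefile.zip s3://mybucket/myhugefile.zip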