Note

Multipart Write was previously referred to as Parallel Write; the functionality is the same.

Multipart Write allows parts of a large object to be uploaded from multiple clients at the same time. It lets client applications split a large file into multiple pieces, transfer the pieces to Swarm concurrently, and then request that Swarm combine the separately uploaded parts into a single object, minimizing the upload time.

Multipart write requires erasure coding (EC). The health processor (HP) can consolidate segments of erasure-coded objects with sub-optimal segment usage, which can happen when performing SCSP or S3 multipart writes of objects using small parts. DataCore recommends configuring clients to use parts of 50 MB to 100 MB. Set the configuration setting ec.segmentConsolidationFrequency to 10 (recommended), which performs all consolidations over the span of 20 HP cycles if consolidation is needed.
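
For illustration, the setting might appear in the cluster configuration as follows; the file name shown is an assumption, and where the setting is applied depends on how the cluster is configured:

  # node.cfg (example location, assumed): spread EC segment consolidation
  # work across HP cycles when small multipart parts produce
  # sub-optimal segment usage
  ec.segmentConsolidationFrequency = 10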

Tip

Every multipart write must be erasure-coded for upload; if the uploaded object does not meet the current policy for EC encoding, the HP converts it to a replicated object. To maintain erasure coding for the lifetime of the object, add a lifepoint to that effect.
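
As a sketch, such a lifepoint header might look like the following; the reps=k:p values (here 5 data segments and 2 parity segments) are illustrative and must match an EC encoding allowed by the cluster's policy:

  Lifepoint: [] reps=5:2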

Three distinct actions must be performed in this order to upload a large object in parts using multipart write (an S3-protocol sketch of the full flow follows the list):

  1. Initiate a multipart write.

  2. Upload or copy the parts.

  3. Complete or cancel the procedure.
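
The sketch below walks through these three steps using the S3 protocol with Python and boto3. The endpoint URL, credentials, bucket, key, and part size are assumptions to adjust for a specific deployment; it is an illustration of the flow, not a definitive client implementation.

  # Sketch of the three-step multipart flow over the S3 protocol with boto3.
  # Endpoint, credentials, bucket, key, and part size are placeholders.
  import boto3

  s3 = boto3.client(
      "s3",
      endpoint_url="https://swarm-gateway.example.com",  # assumed gateway URL
      aws_access_key_id="ACCESS_KEY",
      aws_secret_access_key="SECRET_KEY",
  )

  bucket, key = "mybucket", "large-object.bin"
  part_size = 64 * 1024 * 1024  # 64 MB, within the recommended 50-100 MB range

  # 1. Initiate the multipart write.
  upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
  upload_id = upload["UploadId"]

  # 2. Upload the parts (shown sequentially here; clients may send them concurrently).
  parts = []
  with open("large-object.bin", "rb") as f:
      part_number = 1
      while True:
          chunk = f.read(part_size)
          if not chunk:
              break
          resp = s3.upload_part(
              Bucket=bucket, Key=key, UploadId=upload_id,
              PartNumber=part_number, Body=chunk,
          )
          parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
          part_number += 1

  # 3. Complete the procedure (or call abort_multipart_upload to cancel it).
  s3.complete_multipart_upload(
      Bucket=bucket, Key=key, UploadId=upload_id,
      MultipartUpload={"Parts": parts},
  )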
