
There are three types of storage policies in Swarm: replication, erasure coding, and versioning. They can be customized at the level of domains and buckets, but this section concerns the Swarm settings that control your cluster-wide requirements. In the Swarm UI, they appear in the Policy section of the Cluster Settings:

...

Settings that show an SNMP name are persisted settings, which means they can be updated dynamically, without a cluster restart.
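Because persisted settings are exposed over SNMP, they can be changed at runtime with a standard SNMP SET. The sketch below uses the pysnmp library; the host name, community string, and MIB module name are assumptions, so verify them against your cluster's MIB files and credentials before adapting it.

```python
# Sketch: dynamically raising ecConversionPercentage on a running node.
# Assumptions: SNMP write access is enabled, the read-write community
# string is 'ourpwdofchoicehere', and the Swarm MIB module is named
# 'CARINGO-CASTOR-MIB' -- verify all three against your deployment.
from pysnmp.hlapi import (
    setCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, Integer,
)

error_indication, error_status, error_index, var_binds = next(
    setCmd(
        SnmpEngine(),
        CommunityData('ourpwdofchoicehere'),           # read-write community (assumption)
        UdpTransportTarget(('swarm-node.example.com', 161)),
        ContextData(),
        ObjectType(
            ObjectIdentity('CARINGO-CASTOR-MIB', 'ecConversionPercentage', 0),
            Integer(25),                               # convert faster, at the cost of load
        ),
    )
)
if error_indication or error_status:
    raise RuntimeError(error_indication or error_status.prettyPrint())
```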

See Swarm Storage Policies

...

Setting

Default

Description

ec.conversionPercentage

SNMP: ecConversionPercentage

0

Percentage, 1-100; 0 stops all conversion. Adjusts the rate at which the Health Processor consolidates multi-set erasure-coded objects each HP cycle. Lower it to reduce cluster load; raise it to convert a large number of eligible objects faster, at the cost of additional load on the cluster. If enabled, requires policy.eCEncoding to be specified.

ec.maxManifests

6

Range, 3-36. The maximum number of manifests written for an EC object. Usually, p+1 manifests are written for a k:p encoding.

Requirement: Manifests must all be written to different nodes, even when using ec.protectionLevel=volume.

Do not set above 6 unless directed by Support.

ec.minParity

-1

Range -1 or 1-4; the default of -1 means max(policyminreps - 1, 1), where policyminreps is the minimum value in policy.replicas. The minimum number of parity segments the cluster requires. This is the lower limit on p for EC content protection, regardless of the parity value expressed on individual objects through query arguments or lifepoints.
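As a worked example of that default, the formula is shown in code below; the policy.replicas minimum values are illustrative.

```python
# Default for ec.minParity when left at -1:
# max(policyminreps - 1, 1), where policyminreps is the minimum value
# configured in policy.replicas (illustrative values below).
def default_min_parity(policy_replicas_min: int) -> int:
    return max(policy_replicas_min - 1, 1)

assert default_min_parity(2) == 1   # min of 2 replicas -> at least 1 parity segment
assert default_min_parity(3) == 2   # min of 3 replicas -> at least 2 parity segments
```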

ec.protectionLevel

SNMP: ecProtectionLevel

node

Either 'node', 'subcluster', or 'volume'. The level at which segments must be distributed for an EC write to succeed; multiple segments are allowed per level, if needed. 'node' (default) distributes segments across the cluster's physical/virtual machines. 'subcluster' requires node.subcluster to be defined across sets of nodes. You must have at least (k+p)/p (rounded up) nodes or subclusters for those levels; at minimum, you must have k+p volumes.

See details below.

ec.segmentConsolidationFrequency

SNMP: ecSegmentConsolidationFrequency

10

Percentage, 1-100, 0 to disable. How quickly the health processor consolidates object segments after ingest. Increase this value (such as to 25, to consolidate over 4 HP cycles) to make new content readable sooner by clients. For multipart uploads via S3 clients, 10 is recommended; for SwarmNFS, 100 is recommended, with extra space allowances for trapped space.

Consolidation changes the ETag (which affects If-Match requests) and Castor-System-Version headers, but the Content-MD5 and Composite-Content-MD5 headers are unchanged. Therefore, have clients use the hash and last-modified date, rather than the ETag, to determine whether an object has changed.
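For instance, a client-side change check might compare the hash and modification headers rather than the ETag. This sketch uses the requests library; the endpoint and object name are hypothetical.

```python
# Sketch: detect object changes without relying on ETag, which segment
# consolidation can alter. Endpoint and object name are hypothetical.
import requests

def object_fingerprint(url: str) -> tuple:
    resp = requests.head(url)
    resp.raise_for_status()
    # Content-MD5 and Last-Modified survive segment consolidation;
    # ETag and Castor-System-Version may not.
    return (resp.headers.get('Content-MD5'),
            resp.headers.get('Last-Modified'))

before = object_fingerprint('http://swarm.example.com/mybucket/myobject')
# ... later ...
after = object_fingerprint('http://swarm.example.com/mybucket/myobject')
changed = before != after
```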

ec.segmentSize

-1

In bytes; default of -1 implies 200 MB, with recommended minimum of 100 MB. The maximum size allowed for an EC segment before triggering another level of erasure coding. For mostly large (1+ GB) objects, increase to minimize the number of EC sets, which reduces index memory usage. Alternatively, increase the size as needed per write request using the 'segmentsize' query argument.
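To override the segment size on a single write, the 'segmentsize' query argument can be added to the request. The sketch below assumes a plain SCSP PUT issued with the requests library; the host, bucket, object, and size value are illustrative.

```python
# Sketch: raising the EC segment size for one large write via the
# 'segmentsize' query argument (value in bytes; illustrative).
import requests

with open('large-video.mp4', 'rb') as body:
    resp = requests.put(
        'http://swarm.example.com/mybucket/large-video.mp4',
        params={'segmentsize': 500 * 1024 * 1024},  # 500 MB segments
        data=body,
    )
resp.raise_for_status()
```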

policy.eCEncoding

SNMP: policyECEncoding

unspecified anchored

The cluster-wide setting for the EC (erasure coding) encoding policy. Valid values: unspecified, disabled, k:p (a tuple such as 5:2 that specifies the data (k) and parity (p) encoding to use). Add 'anchored' to set this cluster-wide; remove it to allow domains and buckets to have custom encodings.
Examples:
5:2
6:3 anchored

policy.ecMinStreamSize

SNMP: policyECMinStreamSize

1MB anchored

In integer units of megabytes (MB) or gigabytes (GB); must be 1MB or greater. The size that triggers an object to be erasure-coded, if specified (by eCEncoding, lifepoint, query argument) and allowed by policy. Below this threshold, objects are replicated unless they are multipart or chunked writes. Add 'anchored' to set this cluster-wide; remove it to allow domains and buckets to have custom values.
Examples:
100MB
1GB anchored


What EC Protection Level is needed?

The EC protection level determines how strictly EC segments must be distributed for a write to succeed; writes that cannot meet the level fail with an error (412 Precondition Failed) returned to the writing application. After Swarm writes an object to the cluster, the health processor tries to maintain the requested protection level. If cluster resources become unavailable, it degrades gracefully. When this occurs, the health processor logs errors, alerting you that the requested protection cannot be maintained and that your data may be at risk.

Regardless of the protection level you set, Swarm always makes a best effort to distribute segments as broadly as possible across your hardware, to protect your data.

ec.protectionLevel

Cluster requirements

Effect

subcluster

>= (k+p)/p subclusters (rounded up)

Requires a subcluster for every p segments. Use only if you have geographical or systems-based subclusters defined that you need to factor into content protection.

node (default)

>= (k+p)/p nodes (rounded up)

Requires a node for every p segments. Use for most situations.

Important: When working with a small number of nodes, be sure that your EC encoding can support what you have; the sketch after this table shows the arithmetic.

  • With 3 nodes, you can use 3:2 encoding ((3 + 2) ÷ 2 = 2.5, rounded up to 3 nodes required), but not 3:1 encoding ((3 + 1) ÷ 1 = 4 nodes required).

  • With 4 nodes, you can use 4:2 encoding ((4 + 2) ÷ 2 = 3 nodes required), but not 4:1 encoding ((4 + 1) ÷ 1 = 5 nodes required).

volume

>= k+p volumes

Least protection. Requires k+p volumes, but p+1 nodes are still needed because the p+1 manifests must be written to separate nodes. Use only if you have insufficient nodes for node-based protection.


< k+p volumes

Unsupportable. EC writes will fail.
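The arithmetic behind these requirements can be summarized in a few lines. The function below is an illustrative sketch that mirrors the rules in the table above, not Swarm's internal logic.

```python
import math

def ec_requirements(k: int, p: int) -> dict:
    """Minimum footprint for a k:p encoding at each protection level.
    Illustrative only -- mirrors the rules in the table above."""
    return {
        'subclusters (subcluster level)': math.ceil((k + p) / p),
        'nodes (node level)': math.ceil((k + p) / p),
        'volumes (volume level)': k + p,
        'nodes for manifests (any level)': p + 1,
    }

print(ec_requirements(3, 2))  # 3:2 -> 3 nodes, 5 volumes, 3 manifest nodes
print(ec_requirements(3, 1))  # 3:1 -> 4 nodes, 4 volumes, 2 manifest nodes
```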

Info

Deprecated

The setting ec.subclusterLossTolerance has been deprecated and must be removed from configurations when upgrading to Swarm 10.

Versioning Policy

Swarm has policy support for object versioning. Versioning can be enabled for specific contexts (domains and buckets) after the cluster is configured to permit versioning of objects.
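Once the cluster permits versioning, a context can opt in by updating its metadata. The sketch below assumes an SCSP COPY carrying a 'Policy-Versioning' header; the header name, host, domain, and bucket are assumptions to confirm against the documentation for your Swarm version.

```python
# Sketch: opting a bucket into versioning once the cluster permits it.
# The 'Policy-Versioning' header name and the metadata-update COPY
# pattern are assumptions -- confirm against your Swarm documentation.
import requests

resp = requests.request(
    'COPY',
    'http://swarm.example.com/mybucket?domain=mydomain.example.com',
    headers={'Policy-Versioning': 'enabled'},
)
resp.raise_for_status()
```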

...