
You can use optional lifepoint headers to define object-specific Swarm replication and retention policies, which can be as simple or complex as your situation requires.

See SCSP Headers.

Understanding Storage Policies

Each node in a storage cluster includes a Health Processor (HP) that continuously cycles through the list of content objects it stores on disk to determine what is considered "healthy" for each object at this particular point in its lifecycle. For example, the Health Processor may determine that an object needs to have at least three replicas of itself stored within Swarm. This requirement, referred to as a content constraint or simply a constraint, enables the Health Processor to take the appropriate action when needed to ensure disk-level and lifecycle data protection.

You can specify a constraint when you first store the object in the storage cluster. For mutable or named objects, the constraint can be changed with a COPY or a PUT.

Constraints can also be grouped together and given an expiration date. This type of constraint group is called a lifepoint because it represents a point where the health requirements of an object will change. When you create a sequence of lifepoints, they are collectively called a storage policy or a content lifecycle.

Lifepoints to prevent deletion

An important use of lifepoints is to protect objects from deletion. However, deleting a bucket that contains such protected objects will generate errors and orphan those named objects.

Best practice

If you want to maintain a bucket for undeletable objects, make the bucket object itself undeletable.

See "DELETE for domains and buckets" in SCSP DELETE.

Lifecycle evaluation example

Assume that an object was written to Swarm on June 12, 2015. In the first six months of its life, the object must have at least three replicas and cannot be deleted by any user. In the second six months of its life, the object needs just two replicas, and client applications can delete the object. After a year, the object is deleted.

Complete lifecycle policy
Lifepoint: [Wed, 12 Dec 2015 15:59:02 GMT] reps=3, deletable=no 
Lifepoint: [Sun, 08 Jun 2016 15:59:02 GMT] reps=2, deletable=yes 
Lifepoint: [] delete
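
For reference, a write request carrying this policy might look like the following minimal Python sketch. It uses only the standard library so that the three Lifepoint headers can be sent as separate header lines; the host, port, and unnamed-object POST path are assumptions, not part of this page.

Writing the example policy (Python sketch)
import http.client

# Hypothetical Swarm node; the host and port are assumptions.
conn = http.client.HTTPConnection("swarm.example.com", 80)

body = b"example object content"
conn.putrequest("POST", "/")  # SCSP WRITE of an unnamed object
conn.putheader("Content-Type", "application/octet-stream")
conn.putheader("Content-Length", str(len(body)))
# One Lifepoint header per lifepoint, in chronological order:
conn.putheader("Lifepoint", "[Wed, 12 Dec 2015 15:59:02 GMT] reps=3, deletable=no")
conn.putheader("Lifepoint", "[Sun, 08 Jun 2016 15:59:02 GMT] reps=2, deletable=yes")
conn.putheader("Lifepoint", "[] delete")
conn.endheaders()
conn.send(body)

resp = conn.getresponse()
print(resp.status, resp.reason)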

Note

If there is one replica of an object in a cluster, then there is one instance of the object. Replica and instance are synonymous in this context.

Each time the Health Processor (HP) examines the object, it checks the current date to see how to apply the lifepoint policies:

Before the first lifepoint date

  • Swarm refuses SCSP DELETE requests.
  • HP maintains at least three replicas of the object in the cluster.

Between the first and second lifepoint dates

  • Swarm accepts SCSP DELETE requests.
  • HP allows the number of replicas in the cluster to decrease.

Because this lifepoint specifies the deletable constraint, a client can delete the content by sending an SCSP DELETE message with the object's name or UUID.

After the second lifepoint date

  • Swarm accepts SCSP DELETE requests.
  • HP deletes the object at the first checkup.

When the last lifepoint has no end date, it is in effect indefinitely once it comes in range.
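
To make the date logic concrete, the following Python sketch walks an ordered list of lifepoints and returns the first one whose end date has not yet passed; an empty end date matches from then on. This is an illustration of the evaluation described above, not Swarm's internal implementation.

Lifepoint evaluation (illustrative sketch)
from datetime import datetime, timezone

# Each lifepoint is (end_date_or_None, constraints), ordered chronologically.
POLICY = [
    (datetime(2015, 12, 12, 15, 59, 2, tzinfo=timezone.utc), "reps=3, deletable=no"),
    (datetime(2016, 6, 8, 15, 59, 2, tzinfo=timezone.utc), "reps=2, deletable=yes"),
    (None, "delete"),  # no end date: in effect indefinitely once in range
]

def active_lifepoint(policy, now=None):
    """Return the constraints in effect at the given time."""
    now = now or datetime.now(timezone.utc)
    for end_date, constraints in policy:
        if end_date is None or now < end_date:
            return constraints
    return policy[-1][1]  # fall back to the final lifepoint (illustrative choice)

print(active_lifepoint(POLICY, datetime(2016, 1, 1, tzinfo=timezone.utc)))
# -> reps=2, deletable=yes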

Specifying Lifepoints and Lifecycles

You can use a simple syntax to specify a complete object lifecycle made up of one or more lifepoints. You do this by attaching one lifepoint entity header per lifepoint to an SCSP WRITE message.

The entity header is shown below in Augmented Backus-Naur Form (ABNF) syntax:

lifepoint = "lifepoint" ":" end-date 1#constraint end-date = "[" [HTTP-date] "]" 
	constraint = replication-constraint | delete-constraint | deletable-constraint replication-constraint = 
		"reps" ["=" (1*DIGIT | 1*DIGIT:1*DIGIT)] delete-constraint = "delete" ["=" ("yes" | "no")] 
			deletable-constraint = "deletable" ["=" ("yes" | "no")]
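
As a worked example of this grammar, the hypothetical Python helper below assembles a header value from an optional end date and a list of constraints. The function is illustrative only and is not part of any Swarm SDK.

Building a lifepoint header value (hypothetical helper)
from datetime import datetime, timezone
from email.utils import format_datetime

def format_lifepoint(constraints, end_date=None):
    """Build a Lifepoint header value: '[<HTTP-date>] constraint, ...'."""
    # usegmt=True renders an RFC 1123 date ending in 'GMT', as HTTP requires.
    date = format_datetime(end_date, usegmt=True) if end_date else ""
    return "[%s] %s" % (date, ", ".join(constraints))

print(format_lifepoint(["reps=3", "deletable=no"],
                       datetime(2015, 12, 12, 15, 59, 2, tzinfo=timezone.utc)))
# [Sat, 12 Dec 2015 15:59:02 GMT] reps=3, deletable=no
print(format_lifepoint(["delete"]))
# [] delete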

Guidelines for lifepoints

When you create a lifepoint, follow these guidelines:

Make every lifepoint stand alone

Lifepoints do not build upon one another: they stand alone as a complete specification of the constraints that apply to the object in a given date range. Be sure to include the complete set of constraints for a given end date in the lifepoint header.

Correct lifepoint
Lifepoint: [] reps=1,deletable=no
Give time in GMT

For HTTP-date, adhere to the full-date format in Section 3.3.1 of the HTTP/1.1 specification, which means that the indicated time must be specified in Greenwich Mean Time (GMT). When dealing with Swarm, GMT is exactly equal to UTC (Coordinated Universal Time).
Do not use deletable= without reps=

A deletable constraint must appear in the same lifepoint as the reps constraint; splitting them into separate lifepoints is incorrect:

Incorrect lifepoint
Lifepoint: [] reps=1
Lifepoint: [] deletable=no
Do not delete contexts by lifepoint

To protect content objects from being orphaned, Swarm does not allow lifepoint-triggered deletes of contexts (domains and bucket objects).

See SCSP DELETE for guidance on deleting domains and buckets.

Do not replicate chunked uploads

Chunked uploads are erasure-coded automatically, so a request will fail if it is chunked and the current lifepoint specifies replication.

To convert a chunked upload, specify two lifepoints: have the first specify an EC encoding that expires in one day, and have the second specify the number of replicas you want going forward (a request sketch follows this set of guidelines):

Converting chunked to replication
Transfer-Encoding: chunked
Lifepoint: [Wed, 12 Dec 2016 15:59:02 GMT] reps=5:2 
Lifepoint: [] reps=3
Do not expect Swarm to validate lifepoints

To maximize performance, Swarm does not validate lifepoints when they are added to the cluster. Swarm accepts an invalid lifepoint and later logs an error only if the HP cannot parse the lifepoint.
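
The chunked-to-replication conversion above can be exercised with a request like the following Python sketch, which frames the body by hand so that Transfer-Encoding: chunked and both Lifepoint headers appear exactly as shown; the host, port, and path are assumptions.

Chunked write with a conversion policy (Python sketch)
import http.client

# Hypothetical Swarm node; the host, port, and path are assumptions.
conn = http.client.HTTPConnection("swarm.example.com", 80)
conn.putrequest("POST", "/")
conn.putheader("Content-Type", "application/octet-stream")
conn.putheader("Transfer-Encoding", "chunked")
# First lifepoint: EC 5:2 until the end date; then convert to 3 replicas.
conn.putheader("Lifepoint", "[Wed, 12 Dec 2016 15:59:02 GMT] reps=5:2")
conn.putheader("Lifepoint", "[] reps=3")
conn.endheaders()

# Send the body with chunked framing: size in hex, CRLF, data, CRLF.
for chunk in (b"first part", b"second part"):
    conn.send(b"%x\r\n%s\r\n" % (len(chunk), chunk))
conn.send(b"0\r\n\r\n")  # zero-length chunk terminates the body

resp = conn.getresponse()
print(resp.status, resp.reason)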

Constraints for Replication and Deletion

Constraint names and values are parsed by Swarm object classes called ConstraintSpecialists that maintain one or more related constraints. For example, the reps constraint is parsed and maintained by the ReplicationConstraintSpecialist. In general, constraint names are case-sensitive, and constraint names not recognized by any of the ConstraintSpecialists are ignored. As a result, the set of allowable constraints is extensible, and new constraint types may be added to the system in future releases.

Constraint names and arguments recognized by the ConstraintSpecialists in Swarm include:

  • ReplicationConstraintSpecialist
  • DeletionConstraintSpecialist

ReplicationConstraintSpecialist

The ReplicationConstraintSpecialist maintains the desired level of redundancy of content objects and ensures they are stored in the most efficient manner. It understands one constraint name: reps, which is set by protection type:

  • Replicas – a single integer value
  • EC – a tuple of k:p integers (such as 5:2)

The ReplicationConstraintSpecialist does this by ensuring that the actual number of replicas or segments for an object is equal to reps at all times. If a replication constraint is missing from the lifepoint, a default value is supplied from the node or cluster configuration. Cluster administrators have control over some aspects of replication behaviors through Swarm configuration parameters:

  • Replicas – Place limits on the number of replicas that can be specified by defining policy.replicas min and max.
  • EC – Specify ec.minParity to ensure that all objects have a minimum number of parity segments included for protection.

If invalid or conflicting values of the reps constraint are found in a lifepoint, they are ignored, defaults are used, and warnings are written to the log. Lifepoints with erasure coding define what EC level to apply. For example, lifepoint = [] reps=5:2 expresses an erasure-coded level of 5 data segments and 2 parity segments.
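
Because reps carries either a single integer or a k:p tuple, a consumer must distinguish the two forms. The small sketch below is a hypothetical illustration of that distinction, not a Swarm API.

Parsing the reps value (illustrative)
def parse_reps(value):
    """Return ('replicas', n) for 'n', or ('ec', (k, p)) for 'k:p'."""
    if ":" in value:
        k, p = value.split(":")
        return ("ec", (int(k), int(p)))  # k data + p parity segments
    return ("replicas", int(value))

print(parse_reps("3"))    # ('replicas', 3)
print(parse_reps("5:2"))  # ('ec', (5, 2))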

Supported conversion methods

As of v6.5, storage policies with multiple lifepoints that include the following conversion methods are supported:

  • Replication to EC
  • EC to replication
  • One EC encoding to a different encoding 

Important

The object size value must be greater than the policy.ecMinStreamSize setting, regardless of the specified lifepoint. Otherwise, the object will not be erasure-coded and will instead be protected with p+1 replicas.

DeletionConstraintSpecialist

The DeletionConstraintSpecialist completely removes a content object at a certain point in time and allows or disallows client applications to delete the content object using the SCSP DELETE request.

DeletionConstraintSpecialist understands two constraint names: deletable and delete.

  • The deletable constraint is set to yes|true or no|false:
    • yes|true (default) indicates that the object is deletable by any client that knows its name or UUID. The DELETE method must also be included in the Allow header for a client delete to be allowed.
    • no|false prevents any agent from deleting the object during the effective period of the lifepoint. Any attempt to delete the object results in a 403 (Forbidden) response (see the sketch at the end of this section).
  • The delete constraint does not accept a value. This constraint causes DeletionConstraintSpecialist to delete the content object from the cluster. The result is the same as if a client application had deleted the object.

To avoid ambiguity, when delete is present in a lifepoint specification, it must be the only constraint in that lifepoint because other conditions on a deleted object may not be applicable. Additionally, a delete lifepoint must be specified with an empty end date.

Incorrect delete constraint
Lifepoint: [Wed, 08 Jun 2012 15:59:02 GMT] reps=3, deletable=no, delete
Correct delete constraint
Lifepoint: [Fri, 12 Dec 2011 15:59:02 GMT] reps=3, deletable=no 
Lifepoint: [] delete

Important

Do not use deletable=no and delete in the same lifepoint.
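
From the client side, a deletable=no lifepoint is visible as the 403 described above. The sketch below attempts an SCSP DELETE against a hypothetical node and object UUID and reports the outcome.

Observing deletable=no from a client (Python sketch)
import http.client

# Hypothetical node and object UUID; both are assumptions.
conn = http.client.HTTPConnection("swarm.example.com", 80)
conn.request("DELETE", "/0a1b2c3d4e5f60718293a4b5c6d7e8f9")
resp = conn.getresponse()

if resp.status == 403:
    print("Object is protected by a deletable=no lifepoint")
elif resp.status == 200:
    print("Object deleted")
else:
    print(resp.status, resp.reason)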