Use optional lifepoint headers to define object-specific Swarm replication and retention policies, which can be as simple or as complex as the situation requires.
See SCSP Headers.
Understanding Storage Policies
Each node in a storage cluster includes a Health Processor that continuously cycles through the list of content objects it stores on disk to determine what is considered "healthy" for each object at this particular point in its lifecycle. For example, the Health Processor may determine that an object needs to have at least three replicas of itself stored within Swarm. This requirement, referred to as a content constraint or simply a constraint, enables the Health Processor to take the appropriate action when needed to ensure disk-level and lifecycle data protection.
Specify a constraint when first storing the object in the storage cluster. For mutable or named objects, the constraint can be changed with a COPY or a PUT.
Constraints can also be grouped together and given an expiration date. This type of constraint group is called a lifepoint because it represents a point where the health requirements of an object change. A sequence of lifepoints is collectively called a storage policy or a content lifecycle.
Lifepoints to Prevent Deletion
An important use of lifepoints is to protect objects from deletion. However, deleting a bucket containing such protected objects generates errors and orphans those named objects.
Best practice: If maintaining a bucket for indelible objects, make the bucket object itself indelible.
See "DELETE for domains and buckets" in SCSP DELETE.
Lifecycle Evaluation Example
Consider an object written to Swarm on June 12, 2015. In the first six months of its life, the object must have at least three replicas and cannot be deleted by any user. In the second six months, the object needs just two replicas, and client applications can delete it. After a year, the object is deleted.
Complete Lifecycle Policy
```
Lifepoint: [Wed, 12 Dec 2015 15:59:02 GMT] reps=3, deletable=no
Lifepoint: [Sun, 08 Jun 2016 15:59:02 GMT] reps=2, deletable=yes
Lifepoint: [] delete
```
Note: If there is one replica of an object in a cluster, there is one instance of the object; replica and instance are synonymous in this context.
Each time the Health Processor (HP) examines the object, it checks the current date to determine how to apply the lifepoint policies:
Time Frame | Lifepoint Effects | Notes |
---|---|---|
Before the first lifepoint date | Swarm refuses SCSP DELETE requests. HP maintains at least three replicas of the object in the cluster. | |
Between the first and second lifepoint dates | Swarm accepts SCSP DELETE requests. HP allows the number of replicas in the cluster to decrease. | The deletable constraint enables a client to delete the content by sending an SCSP DELETE message with the object's name or UUID. |
After the second lifepoint date | Swarm accepts SCSP DELETE requests. HP deletes the object at the first checkup. | The last lifepoint, which has no end date, is in effect indefinitely once it comes in range. |
Specifying Lifepoints and Lifecycles
Use a simple syntax to specify a complete object lifecycle with one or more lifepoints. Attach lifepoint entity headers to an SCSP WRITE message.
```
lifepoint = "lifepoint" ":" end-date 1#constraint
end-date = "[" [HTTP-date] "]"
constraint = replication-constraint | delete-constraint | deletable-constraint
replication-constraint = "reps" ["=" (1*DIGIT | 1*DIGIT:1*DIGIT)]
delete-constraint = "delete" ["=" ("yes" | "no")]
deletable-constraint = "deletable" ["=" ("yes" | "no")]
```
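As an illustration of this grammar, here is a minimal Python sketch that formats Lifepoint header values. The helper function is hypothetical (not part of Swarm or any SDK); it simply assembles the end date and constraint list in the shape the grammar describes.

```python
from datetime import datetime, timezone

def format_lifepoint(end_date=None, **constraints):
    """Build a Lifepoint header value per the grammar above (sketch).

    end_date: an aware datetime, or None for an empty end date ([]).
    constraints: e.g. reps=3, deletable="no", or delete=True.
    """
    # HTTP-date must use the HTTP/1.1 Full Date form, in GMT.
    date = end_date.strftime("%a, %d %b %Y %H:%M:%S GMT") if end_date else ""
    parts = []
    for name, value in constraints.items():
        # The delete constraint takes no value; emit the bare name.
        parts.append(name if value is True else f"{name}={value}")
    return f"[{date}] " + ", ".join(parts)

# A three-replica, undeletable lifepoint with a fixed end date:
end = datetime(2015, 12, 12, 15, 59, 2, tzinfo=timezone.utc)
print(format_lifepoint(end, reps=3, deletable="no"))
# A terminal delete lifepoint with an empty end date:
print(format_lifepoint(None, delete=True))  # → [] delete
```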
Guidelines for Lifepoints
Follow these guidelines when creating a lifepoint:
Guideline | Explanation |
---|---|
Make every lifepoint stand alone | Lifepoints do not build upon one another; each stands alone as a complete specification of the constraints that apply to the object in a given date range. Include the complete set of constraints for a given end date in the lifepoint header. |
Provide time in GMT | Adhere to the Full Date form (Section 3.3.1 of the HTTP/1.1 specification) for HTTP-date. The indicated time must be specified in Greenwich Mean Time (GMT). GMT is exactly equal to UTC (Coordinated Universal Time) when dealing with Swarm. |
Do not use deletable= without reps= | The delete constraint does not store a value and cannot include an end date. |
Do not delete contexts by lifepoint | To protect content objects from being orphaned, Swarm does not allow lifepoint-triggered deletes of contexts (domains and bucket objects). See SCSP DELETE for guidance on deleting domains and buckets. |
Do not replicate chunked uploads | Chunked uploads are erasure-coded automatically; a request fails if it is chunked and the current lifepoint specifies replication. To convert a chunked upload, specify two lifepoints: the first specifies an EC encoding expiring in one day, and the second specifies the number of replicas going forward. |
Do not expect Swarm to validate lifepoints | To maximize performance, Swarm does not validate lifepoints when they are added to the cluster. Swarm accepts an invalid lifepoint and later logs an error if the HP cannot parse it. |
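The "Converting Chunked to Replication" example referenced in the guideline on chunked uploads did not survive extraction. A hedged sketch consistent with that description might look like the following; the 5:2 encoding, the replica count of 3, and the date are illustrative values, not the original example:

```
Lifepoint: [Sat, 13 Jun 2015 15:59:02 GMT] reps=5:2
Lifepoint: [] reps=3
```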
Constraints for Replication and Deletion
Constraint names and values are parsed by Swarm object classes called ConstraintSpecialists, each of which maintains one or more related constraints. For example, the reps constraint is parsed and maintained by the ReplicationConstraintSpecialist. Constraint names are case-sensitive, and constraint names not recognized by any of the ConstraintSpecialists are ignored. As a result, the set of allowable constraints is extensible, and new constraint types may be added to the system in future releases.
Constraint names and arguments recognized by the ConstraintSpecialists in Swarm include:
ReplicationConstraintSpecialist
DeletionConstraintSpecialist
ReplicationConstraintSpecialist
The ReplicationConstraintSpecialist maintains the desired level of redundancy of content objects and ensures they are stored in the most efficient manner. It understands one constraint name, reps, which is set by protection type:
Replicas – a single integer value
EC – a tuple of k:p integers (such as 5:2)
The ReplicationConstraintSpecialist does this by ensuring that the actual number of replicas or segments for an object equals reps at all times. If a replication constraint is missing from the lifepoint, a default value is supplied from the node or cluster configuration. Cluster administrators control some aspects of replication behavior through Swarm configuration parameters:
Replicas – Place limits on the number of replicas that can be specified by defining policy.replicas min and max.
EC – Specify ec.minParity to ensure all objects have a minimum number of parity segments included for protection.
If invalid or conflicting values of the reps constraint are found in a lifepoint, they are ignored, defaults are used, and warnings are written to the log. Lifepoints with erasure coding define which EC level to apply. For example, lifepoint = [] reps=5:2 expresses an erasure coding of 5 data segments and 2 parity segments.
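To make the trade-off between the two reps forms concrete, here is a back-of-envelope sketch of the raw storage footprint each implies. The helper is hypothetical and ignores metadata, segment padding, and manifest overhead:

```python
def storage_overhead(reps):
    """Raw storage footprint multiplier for a reps constraint value (sketch).

    reps is either an int (whole-object replicas) or a "k:p" string
    (erasure coding with k data segments and p parity segments).
    """
    if isinstance(reps, int):
        return float(reps)          # each replica stores the full object
    k, p = map(int, reps.split(":"))
    return (k + p) / k              # k+p segments, each 1/k of the object size

print(storage_overhead(3))      # reps=3 → 3.0x raw footprint
print(storage_overhead("5:2"))  # reps=5:2 → 1.4x raw footprint
```

This is why EC lifepoints are attractive for large objects: 5:2 tolerates two lost segments while storing far less raw data than three full replicas.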
Supported Conversion Methods
As of v6.5, a storage policy with multiple lifepoints that include the following conversion methods is supported:
Replication to EC
EC to replication
One EC encoding to a different encoding
Important: The object size must be greater than the policy.ecMinStreamSize setting, regardless of the specified lifepoint; otherwise, the object is not erasure-coded and is instead protected with p+1 replicas.
DeletionConstraintSpecialist
The DeletionConstraintSpecialist understands two constraint names: deletable and delete.
The deletable constraint is set to yes|true or no|false:
yes|true (default) indicates the object is deletable by any client that knows the name or UUID. The DELETE method must also be included in the Allow header for a client delete to be allowed.
no|false prevents any agent from deleting the object during the effective period of the lifepoint. Any attempt to delete the object results in a 403 (Forbidden) response.
The delete constraint does not accept a value. This constraint causes DeletionConstraintSpecialist to delete the content object from the cluster. The result is the same as if a client application had deleted the object.
To avoid ambiguity, when delete is present in a lifepoint specification, it must be the sole constraint in that lifepoint, because other conditions on a deleted object are not applicable. Additionally, a delete lifepoint must be specified with an empty end date.
Incorrect Delete Constraint
```
Lifepoint: [Wed, 08 Jun 2012 15:59:02 GMT] reps=3, deletable=no, delete
```
Correct Delete Constraint
```
Lifepoint: [Fri, 12 Dec 2011 15:59:02 GMT] reps=3, deletable=no
Lifepoint: [] delete
```
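These two rules can be illustrated with a small checker. This is a hypothetical sketch, not the actual Swarm parser (which, as noted above, accepts invalid lifepoints and logs errors later):

```python
import re

def check_delete_lifepoint(header_value):
    """Return a list of problems with 'delete' usage in a Lifepoint
    header value, per the rules above (illustrative sketch only)."""
    m = re.match(r"\[(?P<date>[^\]]*)\]\s*(?P<constraints>.*)", header_value)
    if not m:
        return ["not a lifepoint"]
    constraints = [c.strip() for c in m.group("constraints").split(",")]
    problems = []
    if "delete" in constraints:
        if len(constraints) > 1:
            problems.append("delete must be the sole constraint")
        if m.group("date"):
            problems.append("delete requires an empty end date")
    return problems

print(check_delete_lifepoint("[] delete"))  # → [] (no problems)
print(check_delete_lifepoint(
    "[Wed, 08 Jun 2012 15:59:02 GMT] reps=3, deletable=no, delete"))
```

Run against the incorrect example above, the checker reports both violations; the correct two-lifepoint form passes cleanly.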
Important: Do not use deletable=no and delete in the same lifepoint.