Table of Contents
...
Overview
Swarm Configurator is a tool that determines the hardware specifications needed for the various components in a Swarm cluster. Customer inputs are required with respect to cluster specifications; therefore, some typical data needs to be collected through the DataCore Cloud UI, including:
Storage Characteristics
Data Protection Requirements
Cluster configuration
Protection Scheme configuration (for example, Erasure Coding scheme)
Client Characteristics
Access the configurator tool here: Swarm Configurator Tool
...
Note: The Swarm Configurator tool is not infallible. Customers and partners are always encouraged to engage with DataCore for any questions about tool results or to address special requirements not outlined in the tool.
Customer Inputs
There are three types of inputs (Storage Characteristics, Data Protection, and Client Characteristics) required from the customer to determine hardware components. The outcome is displayed under the Results tab in tabular format and is downloadable in YAML format if needed.
Storage Characteristics
...
Number of logical objects in millions
Average size of the object in MB
Data Protection
There are two types of data protection methods available in the Swarm Configurator; only one protection method can be applied at a time, so choose an option accordingly.
Replication
Number of Replicas – Capacity is based on replicas; therefore, it is recommended to use a small number of replicas (maximum 2-3). Maintaining more replicas requires more memory, which reduces I/O capacity and slows data access. A minimum of 1 replica is required.
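The capacity cost of replication can be illustrated with a short sketch. This is standard replication arithmetic, not Swarm Configurator's published formula, and the function name is hypothetical:

```python
# Illustrative only: Swarm Configurator's internal formulas are not
# published; this shows the general relationship between replicas and
# raw capacity.

def raw_capacity_tb(logical_capacity_tb: float, num_replicas: int) -> float:
    """Raw storage consumed when each object is stored as N full copies."""
    if num_replicas < 1:
        raise ValueError("a minimum of 1 replica is required")
    return logical_capacity_tb * num_replicas

# 100 TB of logical data kept with 3 replicas consumes 300 TB of raw storage.
print(raw_capacity_tb(100, 3))
```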
Erasure Coding
Erasure Data Segments – The number of data segments created for each logical object.
Erasure Parity Segments – The number of parity segments for the given number of data segments.
Segment Size (MB) – The maximum size of each segment in megabytes. The object is split into further segments if a segment would otherwise exceed this maximum size.
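The storage overhead of a k:p erasure-coding scheme follows directly from these inputs. The sketch below uses standard erasure-coding arithmetic under the stated assumptions, not the tool's internal logic, and the function names are hypothetical:

```python
import math

# Hedged illustration: for a scheme with k data segments and p parity
# segments, stored data occupies (k + p) / k times its logical size.

def ec_raw_size_mb(object_mb: float, k: int, p: int) -> float:
    """Raw footprint of one object under a k:p erasure-coding scheme."""
    return object_mb * (k + p) / k

def segment_sets(object_mb: float, k: int, segment_size_mb: float) -> int:
    """Number of k-segment sets an object is split into when each data
    segment is capped at segment_size_mb (assumed splitting model)."""
    return max(1, math.ceil(object_mb / (k * segment_size_mb)))

# A 100 MB object under a 5:2 scheme occupies 140 MB of raw storage.
print(ec_raw_size_mb(100, 5, 2))
```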
Client Characteristics
...
Number of concurrent clients – The number of clients concurrently connected to the storage for accessing data.
Write throughput per client – The rate at which the client writes data, in megabytes per second.
...
in megabytes per second.
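The client characteristics combine into an aggregate load the cluster must sustain. The check below is a hypothetical illustration (names and formula are assumptions, not the tool's logic):

```python
# Assumed sizing relationship: total client load is roughly the number of
# concurrent clients times the per-client throughput.

def aggregate_throughput_mb_s(num_clients: int, per_client_mb_s: float) -> float:
    """Total write (or read) load in MB/s across all concurrent clients."""
    return num_clients * per_client_mb_s

# 200 concurrent clients writing at 5 MB/s each generate 1000 MB/s of load.
print(aggregate_throughput_mb_s(200, 5))
```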
Hardware Components
Inputs from the customer are optional for hardware components. If not provided, Swarm Configurator calculates the required hardware components based on other inputs such as Storage Characteristics, Data Protection Requirements, and Client Characteristics, and represents them in three categories:
Storage Nodes
Elasticsearch
Gateway
...
Results
This menu provides the complete result, including data collected from the customer and the configurations calculated by Swarm Configurator. The result is displayed in tabular format and can be exported to a YAML file.
...
What is Swarm Configurator - Reverse?
Swarm Configurator - Reverse works in the opposite direction from Swarm Configurator: inputs for the data protection method and hardware specifications are collected from the customer. Based on the given inputs, this tool determines how much storage is available for the given hardware. It includes:
Logical capacity per node (TB)
Number of objects per node
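The reverse direction can be sketched with the same erasure-coding arithmetic run backwards: given node hardware and a protection scheme, estimate the logical capacity per node. Function names and the formula are illustrative assumptions, not Swarm Configurator - Reverse's own implementation:

```python
# Assumed model: a node's raw capacity is drives * drive size, and the
# (k + p) / k erasure-coding overhead is subtracted to get logical capacity.

def logical_capacity_per_node_tb(drives_per_node: int,
                                 drive_size_tb: float,
                                 data_segments: int,
                                 parity_segments: int) -> float:
    """Logical capacity a node can hold under a k:p erasure-coding scheme."""
    raw_tb = drives_per_node * drive_size_tb
    return raw_tb * data_segments / (data_segments + parity_segments)

# 12 drives of 14 TB each under a 5:2 scheme hold 120 TB of logical data.
print(logical_capacity_per_node_tb(12, 14, 5, 2))
```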
The customer can move back and forth between Swarm Configurator and Swarm Configurator - Reverse by clicking on the home icon and reverse icon.
Customer Inputs
Data Protection
Replication
Number of Replicas – The number of copies of data to be maintained. It is recommended to use a maximum of two or three replicas for faster I/O and lower memory load.
Erasure Coding
Erasure Data Segments – The number of data segments created for each logical object.
Erasure Parity Segments – The number of parity segments for the given number of data segments.
Hardware Components
The input is collected from the customer for the following hardware components:
...
...
Hardware Specification
The following inputs are collected from the customer:
...
Hard drive size in terabytes (TB)
Number of hard drives required for each node
Network speed in Gbps
Number of network ports on each node
CPU cores on each node (e.g., 32, 64)
Total RAM available on each node in gigabytes (GB)
Results
Based on customer inputs for data protection and hardware components, Swarm Configurator - Reverse calculates the storage characteristics.
...
Info: Verify that the data under Settings is not updated or reset for any section in the Swarm Configurator. It is managed solely by the DataCore Swarm Team.
...