Hardware Selection
Storage CPU
Swarm Storage supports standard x86-64 CPUs (Intel, AMD)
Single- or multi-socket (and multi-core) configurations supported
CPUs with AES New Instructions (AES-NI) support are recommended
Used by Swarm to improve the performance of Encryption at Rest (EAR)
Most server processors manufactured since 2010 include AES-NI
Storage Memory
Supported object capacities per storage node, by installed RAM:
RAM per Node | 16 GB | 32 GB | 64 GB | 128 GB |
---|---|---|---|---|
Storage Node RAM Index Slots | 268M | 536M | 1073M | 2146M |
Immutable Objects | 268M | 536M | 1073M | 2146M |
Mutable Objects | 134M | 268M | 536M | 1073M |
5:2 Erasure Coded Objects | 26M | 53M | 107M | 214M |
Info
Memory required is a function of object count, object type, and data protection scheme chosen.
Larger clusters need additional memory for the Overlay Index and other features that may require more resources.
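The table above implies a roughly linear relationship between node RAM and index slots (about 16.75 million slots per GB, i.e. 268M slots for 16 GB), with one slot per immutable object and two per mutable object. The sketch below illustrates that arithmetic; the slot-per-GB constant and per-object slot costs are assumptions inferred from the table, not published constants:

```python
# Rough capacity estimate derived from the RAM table above.
# Assumptions (inferred from the table, not official constants):
#   - ~16.75M RAM index slots per GB of node RAM (268M slots / 16 GB)
#   - 1 slot per immutable object, 2 slots per mutable object

SLOTS_PER_GB = 268_000_000 / 16  # inferred from the 16 GB column

def object_capacity(ram_gb: float, slots_per_object: int) -> int:
    """Approximate number of objects a node's RAM index can hold."""
    return int(ram_gb * SLOTS_PER_GB) // slots_per_object

print(object_capacity(32, 1))  # immutable objects on a 32 GB node -> 536000000
print(object_capacity(32, 2))  # mutable objects on a 32 GB node  -> 268000000
```

Both results match the 32 GB column of the table (536M immutable, 268M mutable), which supports the linear-scaling reading; actual capacity also depends on the data protection scheme, as noted above.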
Storage Drives
Direct Attached
Controllers: SAS or SATA JBOD HBAs (SAS preferred)
“Hot plug” connector/backplane support
Disks: “Enterprise Grade”
Designed for 24x7 continuous duty cycles
Typically 5 years of warranty
Examples: Seagate “Exos”, Western Digital “Gold”
Storage Networking
Best Practice
Maintain the same network speed for all devices within the Swarm cluster; mixing speeds requires additional configuration to avoid performance problems.
Ethernet (with appropriate connector type)
1 Gb to 10 Gb (or higher if needed)
Bonding of multiple ports supported for throughput and redundancy
Including 802.3ad (LAG/LACP) if switch redundancy is required
Jumbo Frame support
Typical vendor choices are Intel, Broadcom, etc.
Info
Multiport network cards (two or more ports per card) do not by themselves provide failover redundancy for the storage hosts: all ports on a card share common failure modes that can disconnect a Swarm host completely. An 'active-active' design uses separate NICs in each storage host to meet the redundancy requirement.
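As one illustrative example of the 802.3ad bonding mentioned above (the exact method depends on the host OS and deployment tooling; the file path and interface name below are hypothetical), an LACP bond under systemd-networkd might look like:

```ini
# /etc/systemd/network/bond0.netdev  (illustrative path and names)
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=802.3ad
TransmitHashPolicy=layer3+4
MIIMonitorSec=0.1
```

The switch ports on the other side must be configured as a matching LACP port channel, and per the note above, the bonded ports should come from separate physical NICs for true redundancy.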
Minimum Hardware for Storage
Appropriate for functional design and testing
3 or more nodes (chassis) in a cluster
Can be deployed as virtual machines (VMware guests)
As a rule of thumb, minimum physical memory is 2 GB + (0.5 GB * number of volumes); more memory improves cluster operation
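The rule of thumb above works out as follows (a minimal sketch of the stated formula):

```python
def min_node_ram_gb(volume_count: int) -> float:
    """Rule-of-thumb minimum physical RAM for a storage node:
    2 GB base plus 0.5 GB per attached volume."""
    return 2.0 + 0.5 * volume_count

print(min_node_ram_gb(4))   # node with 4 volumes  -> 4.0 GB
print(min_node_ram_gb(12))  # node with 12 volumes -> 8.0 GB
```

This is a floor for functional testing; as noted above, production nodes benefit from considerably more memory.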
Production Hardware for Storage
Multi-socket / Multi-core x86-64 CPUs
“Enterprise Grade” SAS drives
RAM depends on object counts and other factors
Minimum of 4 nodes (chassis) in the cluster (scale up / scale out)
Typically physical servers, but can be virtual machines (VMware)
Hardware for Other Components
Component | Platform Server | Elasticsearch | Content Gateway | SwarmFS |
---|---|---|---|---|
Purpose | Boot, monitor, and manage the Storage cluster | Query and list objects in Storage | Protocol and authentication/authorization gateway to Storage | NFS protocol gateway to Storage |
CPU | x86-64 (multi-socket/core, 2 cores) | x86-64 (multi-socket/core) | x86-64 (multi-socket/core) | x86-64 (multi-socket/core, 4+ cores) |
Memory | 8 GB RAM | 64 GB RAM per 1 billion distinct objects | 4+ GB RAM | 4+ GB RAM (16 GB recommended) |
Drive | 80+ GB (large clusters: more for logs) | 1.5 TB per 1 billion distinct objects | 4+ GB plus OS install footprint | 40+ GB plus OS install footprint |
Network | 1 Gb Ethernet | 1 Gb Ethernet | 1 Gb Ethernet | 1 Gb Ethernet (10 Gb heavy traffic) |
Servers | 1 | 3 to 4 (for redundancy and performance) | Scale to support client sessions | Scale to support client sessions |
Virtualize | Yes (OVA available) | Yes | Yes | Yes |
Notes | | Assumes a full index of object metadata (custom metadata) | | Scale RAM and CPU with concurrent writes |
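The Elasticsearch column above implies a per-object sizing rule (64 GB RAM and 1.5 TB of disk per 1 billion distinct objects). A minimal sketch of that arithmetic, assuming the rule scales linearly with object count:

```python
# Linear sizing estimate for the Elasticsearch tier, based on the table above:
# 64 GB RAM and 1.5 TB disk per 1 billion distinct objects (assumed linear).

def es_sizing(distinct_objects: int) -> tuple[float, float]:
    """Return (ram_gb, disk_tb) totals for the Elasticsearch cluster."""
    billions = distinct_objects / 1_000_000_000
    return 64 * billions, 1.5 * billions

ram_gb, disk_tb = es_sizing(2_500_000_000)  # 2.5 billion objects
print(ram_gb, disk_tb)  # -> 160.0 3.75
```

These totals are for the cluster as a whole and would be spread across the 3 to 4 servers recommended in the table; assuming a full metadata index, per the Notes row.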
© DataCore Software Corporation. · https://www.datacore.com · All rights reserved.