...
Impacts for 11.3
...
Volumes newly formatted in Swarm 11.1, 11.2, or 11.3 to use encryption-at-rest cannot be downgraded to Swarm 11.0 or earlier without a special procedure to prevent data loss. Contact DataCore Support before any such downgrade with encrypted volumes. (SWAR-8941)
Infrequent WARNING messages, "Node/Volume entry not published due to lock contention (...); action will be retried," may appear in logs. Unless they are frequent, they may be ignored. (SWAR-8802)
If a node mounts an encrypted volume that is missing the encryption key in the configuration, the node fails to mount all of the disks in the node. (SWAR-8762)
S3 Backup feeds do not yet back up logical objects larger than 5 GB. (SWAR-8554)
When restarting a cluster of virtual machines that are UEFI-booted (versus legacy BIOS), the chassis shut down but do not come back up. (SWAR-8054)
With multipath-enabled hardware, the Swarm console Disk Volume Menu may erroneously show too many disks, having multiplied the actual disks in use by the number of possible paths to them. (SWAR-7248)
...
If you downgrade from Swarm 11.0, CRITICAL errors may appear on your feeds. To stop the errors, edit the existing feed definition names via the Swarm UI or legacy Admin Console. (SWAR-8543)
If you wipe your Elasticsearch cluster, the Storage UI shows no NFS config. Contact DataCore Support for help repopulating your SwarmFS config information. (SWAR-8007)
If you delete a bucket, any incomplete multipart upload into that bucket leaves its parts (unnamed streams) in the domain. To find and delete them, use the s3cmd utility (search the Support site for "s3cmd" for guidance), as sketched below. (SWAR-7690)
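As a hedged illustration only (the authoritative procedure is in the Support article), s3cmd's standard multipart subcommands can list and abort the stranded uploads. The bucket name, object name, and upload ID below are placeholders, and this sketch assumes the bucket still exists or has been re-created under the same name:

# List in-progress multipart uploads in the bucket (placeholder name).
s3cmd multipart s3://examplebucket
# Abort one upload using its ID from the listing above; this removes its stranded parts.
s3cmd abortmp s3://examplebucket/exampleobject UPLOAD_ID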
If you remove subcluster assignments in the CSN UI, doing so creates invalid config parameters that prevent the unassigned nodes from booting. (SWAR-7675)
Logs showed the error "FEEDS WARNING: calcFeedInfo(etag=xxx) cannot find domain xxx, which is needed for a domains-specific replication feed". The root cause is fixed; if you received such warnings, contact DataCore Support so the issue can be resolved. (SWAR-7556)
If a feed is subject to a prolonged outage, a node reboot may be required for it to resume progress after the outage is cleared. If progress does not resume after the reboot, contact DataCore Support. This is resolved in 12.1.0. (SWAR-9062)
If Elasticsearch 6.8.6 blocks an index due to low disk space, the index is placed in the read_only_allow_delete state, and the following command needs to be issued against each affected index (index_*, csmeter*, metrics*) to clear the block. This is no longer an issue after upgrading to Swarm 12 / Elasticsearch 7, which automatically unblocks indices when disk space frees up. (SWAR-8944)
curl -i -XPUT "<ESSERVERIP>:9200/<INDEXNAME>/_settings" -d '{"index.blocks.read_only_allow_delete" : null}' -H "Content-Type: application/json"
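As a convenience, a minimal shell sketch for clearing the block on all three index patterns at once is shown below; it assumes the Elasticsearch node is reachable at <ESSERVERIP>:9200 without authentication (adjust to your environment):

# Clear the read_only_allow_delete block on each affected index pattern.
for INDEXNAME in "index_*" "csmeter*" "metrics*"; do
  curl -i -XPUT "<ESSERVERIP>:9200/${INDEXNAME}/_settings" \
       -H "Content-Type: application/json" \
       -d '{"index.blocks.read_only_allow_delete": null}'
done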
...