...
During heavy recursive S3 delete operations, Swarm storage nodes can fail to delete all the metadata entries from Elasticsearch.
This leads to listing requests returning ghost entries – objects or versions that were already deleted.
Any attempt to access these ghost entries returns an HTTP 404 response, which leads to failed Veeam jobs.
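As an illustration only (not part of the official remediation), ghost entries can be spotted by comparing a bucket listing against per-object HEAD requests. The sketch below uses the standard aws CLI; the bucket name and endpoint URL are placeholders, and it assumes the CLI is already configured with credentials for the affected bucket.
Code Block
# Hypothetical check: keys that appear in the listing but fail HEAD are candidate ghost entries.
# BUCKET and ENDPOINT are placeholders - replace them with the Veeam bucket and Swarm S3 endpoint.
BUCKET="veeam-sobr"
ENDPOINT="https://swarm-s3.example.com"

aws s3api list-objects-v2 --bucket "$BUCKET" --endpoint-url "$ENDPOINT" \
    --query 'Contents[].Key' --output text | tr '\t' '\n' | while read -r key; do
  if ! aws s3api head-object --bucket "$BUCKET" --key "$key" \
        --endpoint-url "$ENDPOINT" > /dev/null 2>&1; then
    echo "possible ghost entry: $key"
  fi
done
This checks current object keys only; versioned listings (aws s3api list-object-versions) can be compared the same way.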
...
Failed Veeam offload jobs will show the following error message (truncated for readability):
REST API error: S3 error: The specified key does not exist. Failed to load object.
Solution:
If you are already experiencing this issue, please open a ticket with DataCore Swarm support and include the Veeam logs containing the 404 errors, as it requires a manual cleanup procedure.
Whether or not you have experienced this issue, please apply the setting changes below to avoid new occurrences of this bug.
Run the following four commands on your Swarm Cluster Services (SCS) server. This enables synchronous indexing in Swarm and increases the wait time so that multi-delete operations are more likely to keep Elasticsearch in sync.
Code Block
/root/dist/swarmctl -d SwarmStorageIP -C scsp.autoSynchronousIndex -V 1 -p <swarm_admin>:<swarm_password> -a
Code Block
scsctl storage config set -d "scsp.autoSynchronousIndex=true"
...
Code Block
/root/dist/swarmctl -d SwarmStorageIP -C scsp.defaultSynchronousIndexWait -V 60 -p <swarm_admin>:<swarm_password> -a
Code Block
scsctl storage config set -d "scsp.defaultSynchronousIndexWait=60"
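Optionally, read the values back to confirm the runtime change reached every node. This is a sketch under an assumption: swarmctl run with -C but without -V is expected to display the current value of the named setting rather than change it; verify that behaviour against the swarm-support-tools version installed in /root/dist on your SCS server.
Code Block
# Assumed read-back behaviour: -C without -V displays the current value of the setting.
# -d and -a are used exactly as in the commands above; only -V is omitted.
/root/dist/swarmctl -d SwarmStorageIP -C scsp.autoSynchronousIndex -p <swarm_admin>:<swarm_password> -a
/root/dist/swarmctl -d SwarmStorageIP -C scsp.defaultSynchronousIndexWait -p <swarm_admin>:<swarm_password> -a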
...
Info
The scsctl commands record the settings so they persist after storage nodes are rebooted.
Info
The ... was released in March 2023 with Swarm 15.2.
...
Info
The