DataCore Swarm Support sometimes provides customers with a script like the one below to automatically upload rolled log files from an SCS server to a bucket using the rclone utility. This is easier and faster than manually gathering and uploading support bundles.

Just install rclone on the SCS and copy the provided script to /etc/cron.hourly/.

# yum install -y https://downloads.rclone.org/v1.62.2/rclone-v1.62.2-linux-amd64.rpm
# cp -p support-upload-logs /etc/cron.hourly/
# chmod +x /etc/cron.hourly/support-upload-logs
# run-parts --test /etc/cron.hourly   # verify the new script is listed (the --test flag does not work on the CSN, but that is not critical)
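
Once installed, you can run the script by hand to confirm credentials and connectivity before waiting for cron; this check is a suggestion and not part of the provided script.

# sh /etc/cron.hourly/support-upload-logs
# grep support-upload-logs /var/log/messages   # any ALERT lines indicate a failed upload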

The script will look like the one below, but the copy you receive contains a customer-specific domain, bucket, and key.

#!/bin/sh
#
# This script copies recent DataCore Swarm logs (castor, cloudgateway, haproxy) to a
# bucket configured to temporarily and privately make logs accessible to Support staff
# without relying on techsupport-bundle-grab.sh and uploading bundles to a ticket.
#
# Copy this into /etc/cron.hourly/support-upload-logs and verify "systemctl status crond".
# The keys, endpoint and bucket will be provided by DataCore Support. They are embedded in
# this self-contained script so it does not rely on ~/.rclone.conf.

S3_ENDPOINT=https://customer-demo.cloud.datacore.com
BUCKET=logs
# Expires 2024-05-15
S3_ACCESS_KEY=ec1246520a9574bf278d376732abcfc7
S3_SECRET_KEY=secret

# Set and uncomment if a proxy is needed to reach the S3_ENDPOINT.
# Reference: https://rclone.org/faq/#can-i-use-rclone-with-an-http-proxy
# export https_proxy=

# Increase MAX_AGE to upload older rolled logs; only files newer than this are copied
MAX_AGE=7d

SYSLOG_HOST=127.0.0.1

# Uses "timeout" to prevent multiple copies of rclone from running at the same time
timeout 55m rclone -vv copy --transfers 1 --s3-no-head --s3-upload-cutoff 1G --s3-chunk-size 100M --max-age "${MAX_AGE}" --max-depth 1 \
    --include "cloudgateway_*.gz" --include "castor*.gz" --include "haproxy.log*.gz" /var/log/datacore/ \
    ":s3,provider=Other,endpoint='${S3_ENDPOINT}',access_key_id=${S3_ACCESS_KEY},secret_access_key=${S3_SECRET_KEY}:${BUCKET}"
EXITVALUE=$?
if [ "$EXITVALUE" -ne 0 ]; then
    # Logs to /var/log/messages
    logger -n "${SYSLOG_HOST}" -p user.notice -t support-upload-logs "ALERT: the DataCore support rclone cron job exited abnormally with [$EXITVALUE]"
fi
exit 0
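
To confirm that objects are arriving, the same on-the-fly remote syntax the script uses can list the bucket; substitute the endpoint, keys, and bucket from your copy of the script (placeholders shown here).

# rclone ls ":s3,provider=Other,endpoint='https://customer-demo.cloud.datacore.com',access_key_id=<access key>,secret_access_key=<secret key>:logs"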

Installing on the CSN involves changing /var/log/datacore/ to /var/log/caringo/ in the script.
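
If you prefer not to edit the file by hand, a one-line sed substitution (assuming the script has already been copied to /etc/cron.hourly/) makes that change:

# sed -i 's|/var/log/datacore/|/var/log/caringo/|' /etc/cron.hourly/support-upload-logs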

You can use a similar script to archive your logs to your own Swarm. Just create a domain and bucket, choose a user to be the owner of the bucket, and create an S3 token for that user, then point the variables at the top of the script at your own endpoint, as sketched below.
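
For example, the variables at the top of the script might look like the following; every value here is a placeholder, not a real endpoint or credential.

S3_ENDPOINT=https://archive.example.com
BUCKET=log-archive
S3_ACCESS_KEY=<token ID for the bucket owner>
S3_SECRET_KEY=<token secret>

A quick "rclone lsd" with the same on-the-fly remote syntax confirms the token can list buckets in the domain:

# rclone lsd ":s3,provider=Other,endpoint='https://archive.example.com',access_key_id=<token ID>,secret_access_key=<token secret>:"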

Tip: force a logrotate to compress the latest log files so they are eligible for upload:

# logrotate --force /etc/logrotate.conf

This script is not intended for Gateway servers: their log directory is different, and the script only looks for compressed logs.
