
It is recommended that the SCS be installed with a dedicated /var/log partition so that excessive logging cannot overrun the root partition. A dedicated /var partition alone is not sufficient to prevent problems from excessive logging, because the Podman containers live under /var/lib; filling the /var partition will, among other problems, prevent storage nodes from booting. The following instructions cover only how to add a new disk to a VMware-based SCS installation as a dedicated /var/log partition; they do not cover expanding or reconfiguring existing logical volumes.

To verify if you have a dedicated partition for /var/log, run this on your SCS:

df -h 

If you see a line for /var/log under the “Mounted on” column, the SCS already has a dedicated /var/log partition. In the following example, there is a dedicated /var partition, but no dedicated /var/log partition:

[root@ace-scs2-scs1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.9M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   15G  2.2G   13G  15% /
/dev/sda1               1014M  146M  869M  15% /boot
/dev/mapper/centos-var    30G  228M   30G   1% /var
tmpfs                    379M     0  379M   0% /run/user/0

Make a note of the output of the above command in your environment as it may be useful later.
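The same check can be scripted. Here is a minimal sketch (not part of the product) that reads /proc/mounts directly and reports whether /var/log is its own mount point:

```shell
# Report whether /var/log is a dedicated mount point.
# Reading /proc/mounts avoids depending on mountpoint(1) being installed.
if awk '$2 == "/var/log" { found = 1 } END { exit !found }' /proc/mounts; then
    echo "dedicated /var/log partition: yes"
else
    echo "dedicated /var/log partition: no"
fi
```

On the example system above this would print "no", since only / and /var are separate filesystems.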

Proceed with the instructions below to create a dedicated /var/log partition. The instructions presume:

  • You are an administrator of your VMware environment (or can have someone perform the VMware actions for you).

  • You have a maintenance window with approved SCS downtime; this procedure should not impact cluster operations.

  • You have SSH access to the SCS, Content Gateways, and any other devices in the Swarm architecture that are sending logs to the SCS.

  • You have rsync installed on the SCS or can install it beforehand.

📘 Instructions

Follow these steps during an approved maintenance window:

  1. Before you continue, back up your SCS virtual machine in case anything goes wrong.

  2. If the SCS is receiving logs from other sources (Content Gateways, Elasticsearch nodes, etc.), temporarily stop those sources from sending logs to the SCS.

  3. Power down the SCS and have the VMware administrator add a new disk to the SCS VM. 20 GB is a reasonable size; choose a larger disk if you want to keep more log history.

  4. Power on the SCS after the new disk has been added.

  5. Stop rsyslog on the SCS: systemctl stop rsyslog

  6. Ensure that you can see the new disk with: fdisk -l - if you had a single disk previously, the new disk is likely /dev/sdb, and the reported size should match the disk you added.

  7. Create a new temporary directory where we will mount the disk: mkdir /mnt/var_new

  8. The following steps assume the new disk is /dev/sdb. Start fdisk on it: fdisk /dev/sdb

    1. type n to create the new partition

    2. type p for primary partition on this disk

    3. type 1 for partition number

    4. keep defaults for First sector and Last sector

    5. type w to save

  9. Next, we will format the new partition with XFS: mkfs.xfs -f -L varlog -b size=1024 /dev/sdb1

  10. Now, we will mount the new partition on our temporary directory: mount /dev/sdb1 /mnt/var_new/

  11. Next we will rsync the data from the existing /var/log directory onto the new mount point: rsync -avHPSAX /var/log/ /mnt/var_new/

  12. Verify that the temporary mount point, /mnt/var_new, has the existing log files: ls -al /mnt/var_new/ - you can compare them with: diff -r /var/log/ /mnt/var_new/

  13. Now we will move the old /var/log directory out of the way so that we can mount the new partition in its place: mv /var/log /var/log_old

  14. Recreate /var/log: mkdir /var/log

  15. Add this entry to /etc/fstab so that the new partition mounts to /var/log:

    /dev/sdb1       /var/log                        xfs     defaults        0 0
  16. Unmount the temporary mountpoint /mnt/var_new since we will mount it to /var/log in the following step: umount /mnt/var_new

  17. Mount the disk as defined in /etc/fstab by simply typing: mount /var/log

  18. At this point, you should see all of your log files in /var/log: ls -la /var/log

  19. df -h should show that your /var/log directory has the expected space.

    # df -h | egrep '^Filesystem|log'
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/sdb1                 20G   52M   20G   1% /var/log
  20. Restart rsyslog: systemctl start rsyslog

  21. Resume logging from external sources (Content Gateways, etc.) if applicable.

  22. Remove the /var/log_old directory after verifying that your logging is working as expected on the new disk: rm -rf /var/log_old
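A note on the fstab entry in step 15: device names like /dev/sdb can change if disks are later added to or removed from the VM. Because the mkfs.xfs command in step 9 labels the filesystem varlog, an equivalent and more robust entry mounts by label:

    LABEL=varlog    /var/log                        xfs     defaults        0 0

The kernel resolves the label at mount time, so the entry keeps working even if the disk is renumbered.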
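If the fdisk -l output in step 6 is hard to read, lsblk (also part of util-linux, installed by default on CentOS) shows the same disk layout more compactly; a brand-new disk appears with no partitions and no mount point:

```shell
# Compact view of disks, partitions, sizes, and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```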
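The interactive fdisk dialogue in step 8 can also be done non-interactively. A sketch using sfdisk, assuming the new disk really is /dev/sdb (this is destructive, so double-check the device name with fdisk -l first):

```shell
# DESTRUCTIVE: writes a new partition table to /dev/sdb, creating one Linux
# partition (/dev/sdb1) spanning the whole disk - the same result as the
# interactive answers in step 8 (n, p, 1, defaults, w).
echo ',,L' | sfdisk /dev/sdb
```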
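Once everything is back up (and before removing /var/log_old), the post-migration state can be sanity-checked with a few commands. A sketch; the log file name /var/log/messages is the CentOS default and may differ in your environment:

```shell
# Post-migration sanity checks
systemctl is-active rsyslog                        # should report "active"
df -h /var/log                                     # should show the new filesystem
logger "var-log migration test"                    # send a test message via syslog
grep "var-log migration test" /var/log/messages    # confirm it reached the new disk
```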
