Installing a Physical Node
Introduction
We are installing Ubuntu + Perifery Single Node Swarm (SNS) on a fresh host as an example. This will take us from a base Ubuntu OS to a fully deployed containerized Swarm on K3s.
Hardware
The hardware used in this example has the following characteristics.
Component | Count | Type | Comment |
---|---|---|---|
processor | 1 | Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz | |
memory | 4 | 64 GB (256 GB total) | |
disk group | | 1 TB NVMe | |
disk group | | 3.5 TB NVMe | |
disk swarm | 8 | 18.2 TB SATA | |
network | | 10/25 GbE | |
This is the required partition layout.
We are using the IPMI interface to deploy the system. The steps are similar for a USB-booted host.
This guide uses Supermicro-based hardware.
Accessing the IPMI Console and Booting From a Local ISO
Navigate to the IPMI IP address. We are using https://172.30.8.221 as an example.
Select Remote Control from the left pane.
Click Launch Console.
We are using the Java plug-in because it can mount an ISO. HTML5 can also do this, but ISO mounting there is a licensed feature.
There might be a warning as shown below:
Click Continue.
Click Run.
Select Virtual Storage.
Expand the Logical Drive Type and select ISO Image.
Click Open Image to upload the downloaded ISO image.
Select the ISO image downloaded earlier, and click Open.
Click OK.
Press F11 to invoke the boot menu.
BIOS may not be set to use “virtual media” as a boot option. If so, enable it as an extra boot option in the BIOS.
Virtual Media is displayed as shown below:
Select the virtual media as a boot device.
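If you prefer the command line, the boot override can usually be set through the BMC with ipmitool instead of pressing F11. This is a minimal sketch, using the example BMC address above with placeholder credentials (ADMIN stands in for your BMC user, and whether virtual media maps to `cdrom` depends on the BMC firmware):

```
# Check the power state of the node through the BMC (lanplus = IPMI v2.0 over LAN)
ipmitool -I lanplus -H 172.30.8.221 -U ADMIN -P '<bmc-password>' chassis power status

# Request a one-time boot from CD/DVD (the mounted virtual ISO) on the next boot
ipmitool -I lanplus -H 172.30.8.221 -U ADMIN -P '<bmc-password>' chassis bootdev cdrom

# Power-cycle the node so it picks up the boot override
ipmitool -I lanplus -H 172.30.8.221 -U ADMIN -P '<bmc-password>' chassis power reset
```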
Initial Steps for Ubuntu Install
The first screen when booting from the Ubuntu 22.04.4 ISO is:
Select Ubuntu Server with the HWE Kernel.
Select the preferred language. We have used English as an example.
Select the layout and variant as per your preferred language.
Select Done.
Select Ubuntu Server > Done.
Select the network connection > Done. This is the connection that will be used to talk to the server.
Provide the proxy address > select Done.
Provide the archive mirror address > select Done.
Storage Configuration
Ubuntu's installer chooses the largest disk to deploy by default, which is not what we want for this installation. Therefore, press return and select an alternate drive for root.
There is a large selection of drives here, so we will go with the 893.750G drive and delete its existing partitions.
Press return on the highlighted drive.
Select Done.
Select the new LVM volume group and edit it.
Select Leave unmounted from the Mount drop-down. This unmounts the partition so the amount of free space can be changed.
Select Save.
Once the size has been adjusted, change the mount point back to /.
Use the remaining space for /var.
Create a new logical volume group for the containers. This will be on the 3.4tb volume.
Provide a name for the LVM volume group. Here, we have used vg0 as an example.
Select Create.
When finished, you will have a new device called vg0.
Highlight the free space.
Select Create Logical Volume.
Select /var/lib from the Mount drop-down. This is used to dedicate all the free space to /var/lib where the containers live.
Select Create.
A warning is displayed; select Continue.
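After the installation finishes and you have a shell on the host, the resulting layout can be double-checked. This is a minimal sketch using standard util-linux and LVM tools:

```
# Block devices, sizes, and mount points (root on the ~893G drive, vg0 on the larger NVMe)
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# Volume groups and logical volumes created above
sudo vgs
sudo lvs

# Free space for /, /var, and /var/lib, where the containers live
df -h / /var /var/lib
```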
User Creation
Provide the following information to set up a user profile:
Name - Name of the user.
Server name - The name your server uses when talking to other devices/computers.
Username - The username to log into the system.
Password - It is recommended to have a strong password.
Select Done.
Upgrade to Ubuntu Pro
Select Enable Ubuntu Pro.
Select Continue.
Select Install OpenSSH server, and set Import SSH identity to No.
Select Done.
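Because the OpenSSH server is installed here, the remaining command-line steps can be done over SSH rather than through the IPMI console once the host has rebooted. A minimal sketch, with placeholders for the username and address configured above:

```
# From your workstation: log in with the user created during the install
ssh <username>@<server-ip>

# On the host: confirm the OpenSSH service is up
systemctl status ssh
```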
Ignore the following screens if they are displayed on your device. After a successful installation, the system attempts to eject the CD-ROM, which fails.
Reboot the machine manually to boot into the fresh OS. While the host reboots, a lot of text scrolls across the screen; press return and it displays as shown below:
Now it shows your hostname. Here, the hostname is Godzilla; yours will be different.
Logging into the system takes you to the command prompt.
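From this first shell, the network settings chosen during the install can be sanity-checked. A minimal sketch using standard Ubuntu tools (archive.ubuntu.com below stands in for whatever mirror you configured):

```
# Interface addresses, default route, and DNS picked up during the install
ip addr show
ip route
resolvectl status

# Confirm the configured archive mirror is reachable
ping -c 3 archive.ubuntu.com
```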
Installing periferyssa.deb and api_server
Switch to the root user via su.
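For example (sudo -i is an equivalent route on installs where no root password has been set):

```
su -        # switch to root with a login shell (root's environment and PATH)
sudo -i     # alternative: a root login shell via sudo, using your own password
```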
Installation is done via script. Please contact your Solutions Architect for the SNS code installation.
The initial output will look like this:
Answer yes to the prompt. Some messages will pop up on the screen, which is normal.
The last update is displayed like this:
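For background only: the section title indicates the script ultimately installs a Debian package, and a manual package install on Ubuntu would look roughly like the sketch below. This is not the supported path; use the script from your Solutions Architect.

```
# Illustrative only - the supported installation is the script mentioned above.
# Assumes the package file has already been copied to the host.
sudo apt install ./periferyssa.deb

# Equivalent older form: dpkg, followed by a dependency fix-up
sudo dpkg -i periferyssa.deb
sudo apt -f install
```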
When the machine has finished rebooting, go to http://ip-address:9010/ui.
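If the page does not load, a quick check that the UI port is answering can help. A minimal sketch, reusing the ip-address placeholder from the URL above:

```
# A HEAD request against the management UI; any HTTP response means the port is up
curl -I http://ip-address:9010/ui
```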
Click Create Cluster.
Fill in all the required fields.
Cluster Name - The name of your Swarm cluster. This needs to be unique, but will not be seen on the network.
Domain Name - The first domain created for the appliance.
Bucket Name - The name of the bucket created within that domain to test the system.
License - If you have a license file, you can upload it here. If not, a default 2TB license will automatically be deployed with the cluster.
Click Next.
Now it asks you to confirm the cluster details. Click Confirm to start the deployment.
This screen shows up when the base tools for the deployment have been installed. At this point, there is no Elasticsearch, Swarm, Gateway, etc.
Click Done.
Click on Show details. This provides the current state of the process.
This shows the deployment state of each piece of the setup.
When the deployment is complete, it takes you to the login screen.
The default credentials to log in are:
Username - periferyadmin
Password - password
The system asks to update the default password on the first login.
Use a unique and strong password, then click Update.
After a successful password update, log into the system with the updated password. It displays the dashboard:
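Beyond the dashboard, the state of the underlying containers can be checked from the node itself. A minimal sketch, assuming (as in the introduction) that the deployment runs on K3s and that the bundled kubectl is available to root:

```
# Pods backing the deployment (Swarm, Gateway, Elasticsearch, and so on)
sudo k3s kubectl get pods -A

# Node status as seen by K3s
sudo k3s kubectl get nodes
```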