
...

Run the install-ssa script.

Code Block
sudo su -c "bash <(wget -qO- https://apt.cloud.datacore.com/installssa/install-ssa.sh)" root

...

The script must be run as root, so use sudo su to switch to the root user first.

Code Block
datacore@demo00:~$ sudo su
root@demo00:/home/datacore#

You may or may not need to enter a password to switch to root, depending on the permissions set when the system was installed.

Installation is done via script. Please contact your Solutions Architect for the SNS code installation.

The script prompts you to accept a few prechecks. It then adds the apt repository and installs the latest periferyssa package, as shown in the example below.
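
Once the script completes, you can optionally confirm that apt can see the repository and the periferyssa package. This is a minimal sanity-check sketch; the location of the repository entry may differ on your system:

Code Block
# Confirm the DataCore repository is configured (location of the entry may differ)
grep -r "apt.cloud.datacore.com" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

# Confirm the periferyssa package is installed and which version apt resolves
apt-cache policy periferyssa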

...

Info

Make sure the physical node is installed before installing the SNS 1.1.1 base packages.

Follow the steps below to install SNS 1.1.1 base packages:

  1. Use SSH to log in to the server. Please provide the user password when prompted.

    Code Block
    ssh <username>@<server_ip>
  2. Once logged in, switch to the root user. Please provide the password when prompted.

    Code Block
    sudo su
  3. To install the base packages, execute the following command.

    Code Block
    sudo su -c "bash <(wget -qO- https://apt.cloud.datacore.com/dev jammy InRelease [1,164 B]
    Hit:4 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease
    Hit:5 http://security.ubuntu.com/ubuntu jammy-security InRelease
    Get:6 https://apt.cloud.datacore.com/dev jammy/main amd64 Packages [1,204 B]
    Fetched 2,368 B in 1s (4,586 B/s)
    Reading package lists... Done
    W: https://apt.cloud.datacore.com/dev/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
    W: https://apt.cloud.datacore.com/dev/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
    installssa/install-ssa.sh)" root
  4. The script runs a few prechecks; once they pass, it asks for confirmation to proceed with the installation. Enter yes to proceed.

    Code Block
    The apt repository for the Swarm Software Appliance has now been added to this machine.
    The script will now install the application pending some environmental checks
    Do you want to proceed? (yes/no)
    
    
  5. Accepting those prechecks adds the Swarm deb repo.

  6. Select yes; the script proceeds to install the bootstrapper.

    Code Block
    Do you want to proceed? (yes/no) yes
    ok, attempting to install
    installing ssa bootstrapper
    Extracting templates from packages: 100%
    Scanning processes...
    Scanning processor microcode...
    Scanning linux images...
    
    Running kernel seems to be up-to-date.
    
    The processor microcode seems to be up-to-date.
    
    No services need to be restarted.
    
    No containers need to be restarted.
    
    No user sessions are running outdated binaries.
    
    No VM guests are running outdated hypervisor (qemu) binaries on this host.
    2024-03-19 16:49:39 - Installing pre-requisites
    Hit:1 https://apt.cloud.datacore.com/dev jammy InRelease
    Hit:2 http://us.archive.ubuntu.com/ubuntu jammy InRelease
    Hit:3 http://security.ubuntu.com/ubuntu jammy-security InRelease
    Hit:4 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease
    Hit:5 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease
    Reading package lists... Done
    W: https://apt.cloud.datacore.com/dev/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    ca-certificates is already the newest version (20230311ubuntu0.22.04.1).
    ca-certificates set to manually installed.
    curl is already the newest version (7.81.0-1ubuntu1.15).
    curl set to manually installed.
    gnupg is already the newest version (2.2.27-3ubuntu2.1).
    gnupg set to manually installed.
    The following NEW packages will be installed:
      apt-transport-https
    0 upgraded, 1 newly installed, 0 to remove and 62 not upgraded.
    Need to get 1,510 B of archives.
    After this operation, 170 kB of additional disk space will be used.
    Get:1 http://us.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 apt-transport-https all 2.4.11 [1,510 B]
    Fetched 1,510 B in 0s (8,571 B/s)
    Selecting previously unselected package apt-transport-https.
    (Reading database ... 174603 files and directories currently installed.)
    Preparing to unpack .../apt-transport-https_2.4.11_all.deb ...
    Unpacking apt-transport-https (2.4.11) ...
    Setting up apt-transport-https (2.4.11) ...
    Scanning processes...
    Scanning processor microcode...
    Scanning linux images...
    
    Running kernel seems to be up-to-date.
    
    The processor microcode seems to be up-to-date.
    
    No services need to be restarted.
    
    No containers need to be restarted.
    
    No user sessions are running outdated binaries.
    
    No VM guests are running outdated hypervisor (qemu) binaries on this host.
    Hit:1 https://apt.cloud.datacore.com/dev jammy InRelease
    Hit:2 http://us.archive.ubuntu.com/ubuntu jammy InRelease
    Get:3 https://deb.nodesource.com/node_18.x nodistro InRelease [12.1 kB]
    Hit:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
    Hit:5 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease
    Hit:6 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease
    Get:7 https://deb.nodesource.com/node_18.x nodistro/main amd64 Packages [7,386 B]
    Fetched 19.5 kB in 0s (44.8 kB/s)
    Reading package lists... Done
    W: https://apt.cloud.datacore.com/dev/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
    2024-03-19 16:49:44 - Repository configured successfully. To install Node.js, run: apt-get install nodejs -y
    Scanning processes...
    Scanning processor microcode...
    Scanning linux images...
    
    Running kernel seems to be up-to-date.
    
    The processor microcode seems to be up-to-date.
    
    No services need to be restarted.
    
    No containers need to be restarted.
    
    No user sessions are running outdated binaries.
    
    No VM guests are running outdated hypervisor (qemu) binaries on this host.
    Scanning processes...
    Scanning processor microcode...
    Scanning linux images...
    
    Running kernel seems to be up-to-date.
    
    The processor microcode seems to be up-to-date.
    
    No services need to be restarted.
    
    No containers need to be restarted.
    
    No user sessions are running outdated binaries.
    
    No VM guests are running outdated hypervisor (qemu) binaries on this host.
    npm notice
    npm notice New major version of npm available! 9.8.1 -> 10.5.0
    npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.5.0
    npm notice Run npm install -g npm@10.5.0 to update!
    npm notice

If the installer pauses at the “Scanning linux images…” stage, press Return and it should proceed.

Once the bootstrapper is installed, the installer sets up Node.js, TypeScript, and some npm packages for the UI.

Code Block
Reading package lists... Done
W: https://apt.cloud.datacore.com/dev/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
2024-03-20 15:26:43 - Repository configured successfully. To install Node.js, run: apt-get install nodejs -y
Reading package lists...
Building dependency tree...
Reading state information...
nodejs is already the newest version (18.18.2-1nodesource1).
The following packages were automatically installed and are no longer required:
  libavahi-client-dev libavahi-common-dev libavahi-compat-libdnssd-dev libavahi-compat-libdnssd1 libdbus-1-dev pkg-config wireguard wireguard-tools
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 58 not upgraded.
Reading package lists...
Building dependency tree...
Reading state information...
node-typescript is already the newest version (4.5.4+ds1-1).
The following packages were automatically installed and are no longer required:
  libavahi-client-dev libavahi-common-dev libavahi-compat-libdnssd-dev libavahi-compat-libdnssd1 libdbus-1-dev pkg-config wireguard wireguard-tools
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 59 not upgraded.
Scanning processes...
Scanning processor microcode...
Scanning linux images...

The processor microcode seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
The bootstrapper and api services have now been installed you can now go to http://172.30.17.64:9010/ui to continue configuration
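
At this point you can optionally confirm that the runtime components are in place by checking their versions from the shell. This is just a sanity check; the exact versions reported will differ from the sample output above.

Code Block
node --version     # Node.js runtime installed by the script
tsc --version      # TypeScript compiler from the node-typescript package
npm --version      # npm client bundled with Node.js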

When the pop-up below appears, you are ready to go to the UI to configure the system.
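
If you prefer to verify from the terminal that the UI endpoint is reachable before opening a browser, a simple HTTP request can be made against the URL printed above. The IP address below is the example address from that output; substitute your server's IP:

Code Block
curl -I http://172.30.17.64:9010/ui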

...

  1. Click “Create New” to create a new cluster.

    1. Fill in all the required fields:

      image-20241106-105154.png

      1. Cluster Name - The name of your new Swarm cluster. This needs to be unique, but will not be seen on the network.

      2. Virtual IP Address - The IP address for the new cluster.

      3. Network Services Addresses - The Cilium IP ranges. These can be individual IPs, IP ranges, or CIDR-formatted IP blocks (see the examples after this list). These IPs must be unique and must not be assigned to any machine.

      4. License - If you have a license file, you can upload it here. If not, a default 2TB license will automatically be deployed with the cluster.

      5. Click “Advanced” to add a new domain and bucket on the default cluster.

        image-20241106-112328.png
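
For reference, the following illustrates the formats accepted for the Network Services Addresses field. The addresses are placeholders, and the dash syntax for a range is an assumption; pick unused addresses from your own network:

Code Block
10.0.10.50               # a single IP address
10.0.10.60-10.0.10.70    # an IP address range
10.0.20.0/28             # a CIDR-formatted IP block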

Clicking Next starts the deployment, which includes the following (a terminal-side progress check follows this list):

  • Installing k3s

  • Installing flux

  • Adding base overseer

  • Adding bucket login details

  • Using hosttype to push the configuration

  • Helm reconciliation via Flux
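
Deployment progress can also be followed from the terminal. As a rough sketch, assuming kubectl has been configured by the K3s install and that Flux HelmRelease resources are used as listed above, you can check the Flux controllers and their reconciliation status:

Code Block
kubectl get pods -n flux-system      # Flux controllers installed by the deployment
kubectl get helmreleases -A          # HelmRelease objects being reconciled by Flux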

image-20241106-112801.png
  1. The K3s install takes a few seconds to complete; you should then be able to use kubectl to query the cluster.

    image-20240320-153324.png

    When cluster manifests are deployed, the terminal looks like this:

    Code Block
    root@demo00:/var/perifery# kubectl get pods -A --watch
    NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
    kube-system   helm-install-traefik-q6wjs   0/1     Pending   0          0s
    kube-system   helm-install-traefik-crd-28bgc   0/1     Pending   0          0s
    kube-system   helm-install-traefik-q6wjs       0/1     Pending   0          0s
    kube-system   helm-install-traefik-crd-28bgc   0/1     Pending   0          0s
    kube-system   helm-install-traefik-q6wjs       0/1     ContainerCreating   0          0s
    kube-system   helm-install-traefik-crd-28bgc   0/1     ContainerCreating   0          0s
    kube-system   svclb-overseer-lb-265778c4-sq6k8   0/1     Pending             0          0s
    kube-system   svclb-overseer-lb-265778c4-sq6k8   0/1     Pending             0          0s
    kube-system   svclb-overseer-lb-265778c4-sq6k8   0/1     ContainerCreating   0          0s
    default       overseer-77bcc8795c-zhd2k          0/1     Pending             0          0s
    kube-system   metrics-server-5f8b4ffd8-hlg9l     0/1     Pending             0          0s
    kube-system   local-path-provisioner-79ffd768b5-v742d   0/1     Pending             0          0s
    kube-system   coredns-77ccd57875-bsn64                  0/1     Pending             0          0s
    kube-system   metrics-server-5f8b4ffd8-hlg9l            0/1     Pending             0          0s
    kube-system   local-path-provisioner-79ffd768b5-v742d   0/1     Pending             0          1s
    kube-system   coredns-77ccd57875-bsn64                  0/1     Pending             0          1s
    kube-system   metrics-server-5f8b4ffd8-hlg9l            0/1     ContainerCreating   0          1s
    kube-system   local-path-provisioner-79ffd768b5-v742d   0/1     ContainerCreating   0          1s
    kube-system   coredns-77ccd57875-bsn64                  0/1     ContainerCreating   0          1s
    flux-system   helm-controller-74b9b95b88-7r586          0/1     Pending             0          0s
    flux-system   helm-controller-74b9b95b88-7r586          0/1     Pending             0          0s
    flux-system   kustomize-controller-696657b79c-rlb28     0/1     Pending             0          0s
    flux-system   kustomize-controller-696657b79c-rlb28     0/1     Pending             0          0s
    flux-system   helm-controller-74b9b95b88-7r586          0/1     ContainerCreating   0          0s
    flux-system   notification-controller-6cb7b4f4bf-c4tnr   0/1     Pending             0          0s
    flux-system   notification-controller-6cb7b4f4bf-c4tnr   0/1     Pending             0          0s
    flux-system   source-controller-5c69c74b57-tn8vl         0/1     Pending             0          0s
    flux-system   kustomize-controller-696657b79c-rlb28      0/1     ContainerCreating   0          0s
    flux-system   source-controller-5c69c74b57-tn8vl         0/1     Pending             0          0s
    flux-system   notification-controller-6cb7b4f4bf-c4tnr   0/1     ContainerCreating   0          0s
    flux-system   source-controller-5c69c74b57-tn8vl         0/1     ContainerCreating   0          0s
    kube-system   svclb-overseer-lb-265778c4-sq6k8           1/1     Running             0          5s
    kube-system   coredns-77ccd57875-bsn64                   0/1     Running             0          6s
    kube-system   coredns-77ccd57875-bsn64                   1/1     Running             0          6s
    kube-system   helper-pod-create-pvc-9341ef6d-81aa-4332-9d1b-29cc631e1b1a   0/1     Pending             0          0s
    kube-system   helper-pod-create-pvc-9341ef6d-81aa-4332-9d1b-29cc631e1b1a   0/1     ContainerCreating   0          0s
    kube-system   local-path-provisioner-79ffd768b5-v742d                      1/1     Running             0          7s
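
    To confirm from the terminal that the K3s node itself is up, you can also query the node directly; it should report a Ready status:

    Code Block
    kubectl get nodes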
  2. When K3s is online, the UI looks like this:

    image-20241106-113506.png

    image-20241106-114349.png

    It may take a few minutes for the storage nodes to come online.

    image-20240320-154321.png

    The gateway is created once the storage nodes are up; at that point you are about 30 seconds away from a running cluster.

    Code Block
    root@demo00:/var/perifery# kubectl get pods -n swarm
    NAME                                                 READY   STATUS    RESTARTS   AGE
    elastic-operator-0                                   1/1     Running   0          3m50s
    swarm-operators-controller-manager-c7cbfb844-nhp6m   1/1     Running   0          2m53s
    ssa-stack-ldap-deploy-84d58c844d-kt7v6               1/1     Running   0          2m53s
    ssa-stack-syslog-deploy-796bbb577f-njtgn             1/1     Running   0          2m53s
    ssa-stack-gatewayoobe-job-r2h4z                      1/1     Running   0          2m37s
    ssa-stack-es-es-microa-1                             1/1     Running   0          2m52s
    ssa-stack-es-es-microa-0                             1/1     Running   0          2m52s
    ssa-stack-es-es-microa-2                             1/1     Running   0          2m52s
    ssa-stack-castor-microa-demo00-0                     0/1     Running   0          77s
    ssa-stack-castor-microa-demo00-2                     0/1     Running   0          77s
    ssa-stack-castor-microa-demo00-1                     0/1     Running   0          77s
    ssa-stack-castor-microa-demo00-3                     0/1     Running   0          77s
    

    When the pods are in this state, you are ready to log in.

    Code Block
    root@demo00:/var/perifery# kubectl get pods -n swarm
    NAME                                                 READY   STATUS    RESTARTS   AGE
    elastic-operator-0                                   1/1     Running   0          6m16s
    swarm-operators-controller-manager-c7cbfb844-nhp6m   1/1     Running   0          5m19s
    ssa-stack-ldap-deploy-84d58c844d-kt7v6               1/1     Running   0          5m19s
    ssa-stack-syslog-deploy-796bbb577f-njtgn             1/1     Running   0          5m19s
    ssa-stack-gatewayoobe-job-r2h4z                      1/1     Running   0          5m3s
    ssa-stack-es-es-microa-1                             1/1     Running   0          5m18s
    ssa-stack-es-es-microa-0                             1/1     Running   0          5m18s
    ssa-stack-es-es-microa-2                             1/1     Running   0          5m18s
    ssa-stack-castor-microa-demo00-0                     1/1     Running   0          3m43s
    ssa-stack-castor-microa-demo00-2                     1/1     Running   0          3m43s
    ssa-stack-castor-microa-demo00-1                     1/1     Running   0          3m43s
    ssa-stack-castor-microa-demo00-3                     1/1     Running   0          3m43s
    ssa-stack-gateway-hpqbt                              1/1     Running   0          73s
    

    Now, the UI will switch to the following screen:

    image-20241106-115640.png

    Login credentials for the first-time install are:

    1. username – periferyadmin

    2. password – password

The UI asks you to update the default password. Once updated, you can log in to the system as a normal user and see the dashboard below.

...

  1. The installation may take a while to finish. Once it is complete, a URL is provided to access the UI (see the screenshot below).

    image-20250130-123752.png
Info

Make sure the template file has been generated before accessing the UI to create a cluster. Run ls /etc/hosttype to verify that the file exists.

If the file exists, run cat /etc/hosttype to check its contents. If the file is missing, empty, or contains the placeholder template name "test", halt the cluster creation process and contact a DataCore Customer Support Engineer to resolve the issue. Once the issue is fixed, you can proceed with cluster creation as described below. This avoids having to reinstall the Debian package.
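
The check can be run as a short sequence; the expected contents depend on your deployment, so treat the comments as guidance rather than exact output:

Code Block
ls /etc/hosttype     # the file must exist
cat /etc/hosttype    # must not be empty and must not contain the placeholder name "test"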

Tip

Next, see [DRAFT] SWARM Containerized Stack Installation