K3s installation in HA

Run the installation steps below if you want to deploy a high availability (HA) setup. An HA setup requires 3 master nodes and at least 1 worker node to run K3s in HA mode.

Note: The nodes must be homogeneous, with the same number of CPUs and the same amount of RAM and disk space.
Steps 1 and 2 must be performed on all nodes. Step 3 is performed once at the end, after K3s has been installed on the last node.
  1. (IF USING PROXY) Set the local proxy variables in the script k3s.sh. You must also provide a product name via the PRODUCT_NAME argument, which instructs the installer to test your current environment against that product's requirements. Allowed product names are: synergy, dspm, enterprise, and ultimate. The name is case-sensitive. If you provide an unrecognized name, or no name at all, the script defaults to PRODUCT_NAME="dspm".
    export http_proxy="$PROXY_IP"
    export https_proxy="$PROXY_IP"
    export no_proxy="$NODE_IP,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
  2. (IF USING PROXY) Make sure the k3s service has the proper proxy variables in the file /etc/systemd/system/k3s.service.env. They should already contain the required values; if not, update them in the file:
    http_proxy="$PROXY_IP"
    https_proxy="$PROXY_IP"
    no_proxy="$NO_PROXY"
  3. Contact Forcepoint Technical Support and provide the values of your proxy variables from the previous steps. Forcepoint Technical Support will add the proxy variables to the Rancher setup. Before proceeding further, wait for Forcepoint Technical Support to confirm that they have added the proxy variables to Rancher.
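The proxy setup in Steps 1 and 2 can be sketched as follows. The PROXY_IP and NODE_IP values below are placeholders, and the environment file is written to the current directory for illustration; on a real node the file is /etc/systemd/system/k3s.service.env, followed by a k3s service restart.

```shell
# Placeholder values -- substitute your real proxy URL and node private IP
PROXY_IP="http://10.0.0.254:3128"
NODE_IP="10.0.0.5"
NO_PROXY="$NODE_IP,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"

# Write the variables in the key="value" format k3s expects in its systemd
# environment file (real path: /etc/systemd/system/k3s.service.env)
printf 'http_proxy="%s"\nhttps_proxy="%s"\nno_proxy="%s"\n' \
  "$PROXY_IP" "$PROXY_IP" "$NO_PROXY" > k3s.service.env

cat k3s.service.env
```

After editing the real file, apply it with `systemctl daemon-reload` and a restart of the k3s service.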

1st master node

To get started, launch a server node using the --cluster-init flag:
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master1 --cluster-init

Check your first master node's status; it should be in the Ready state:

kubectl get nodes

Use the following command to print the TOKEN that will be used to join the other nodes to the cluster:

cat /var/lib/rancher/k3s/server/node-token

Also note the private IP address of the 1st master node; the other nodes will use it to join the cluster.
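The token and private IP collected above feed into every join command that follows. A minimal sketch of assembling the join URL; the token and IP below are placeholders, not real values:

```shell
# Placeholders -- on a real cluster these come from
# /var/lib/rancher/k3s/server/node-token and master1's private IP
K3S_TOKEN="SHARED_SECRET"
MASTER1_IP="10.0.0.5"

# Every subsequent node joins through the k3s API server on port 6443
JOIN_URL="https://${MASTER1_IP}:6443"
echo "$JOIN_URL"
```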

2nd master node

SSH into the 2nd server to join it to the cluster:
  1. Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
  2. Set --node-name to master2.
  3. Set --server to the private static IP address of the 1st master node.
    curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master2 --server https://<ip or hostname of master1>:6443

Check the node status:

kubectl get nodes

3rd master node

SSH into the 3rd server to join it to the cluster:
  1. Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
  2. Set --node-name to master3.
  3. Set --server to the private static IP address of the 1st master node.
    curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master3 --server https://<ip or hostname of master1>:6443

Check the node status:

kubectl get nodes
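Once all three masters report Ready, the embedded etcd cluster has quorum. A small sketch of checking this from `kubectl get nodes --no-headers` output; the sample output below is illustrative, not captured from a real cluster:

```shell
# Count lines whose STATUS column (field 2) is "Ready"
count_ready() { awk '$2 == "Ready"' | wc -l; }

# Illustrative sample of `kubectl get nodes --no-headers` on a 3-master cluster
sample='master1   Ready   control-plane,etcd,master   5m   v1.26.10+k3s1
master2   Ready   control-plane,etcd,master   3m   v1.26.10+k3s1
master3   Ready   control-plane,etcd,master   1m   v1.26.10+k3s1'

printf '%s\n' "$sample" | count_ready
```

On a live cluster you would pipe the real output instead: `kubectl get nodes --no-headers | count_ready`; a result of 3 means all masters are up.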

1st worker node

SSH into the 4th server to join it to the cluster:
  1. Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
  2. Set --node-name to worker1.
  3. Set --server to the private static IP address of the 1st master node.
    curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=worker1 --server https://<ip or hostname of any master node>:6443

Joining additional worker nodes

You may create as many additional worker nodes as you want.

SSH into the server to join it to the cluster:
  1. Replace K3S_TOKEN with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
  2. Update --node-name with your worker node name (e.g. worker2, worker3, etc.).
  3. Set --server to the private static IP address of the 1st master node.
    curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=workerX --server https://<ip or hostname of any master node>:6443

Check the node status:

kubectl get nodes
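Since each additional worker runs the same command with only the node name changing, generating the join commands can be sketched as below. The token and master IP are placeholders for the real values from the 1st master node:

```shell
K3S_TOKEN="SHARED_SECRET"   # placeholder for the real node-token contents
MASTER_IP="10.0.0.5"        # placeholder for any master node's private IP

# Emit one join command per extra worker (worker2..worker4 here)
cmds=$(for i in 2 3 4; do
  echo "curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_VERSION=\"v1.26.10+k3s1\" K3S_KUBECONFIG_MODE=\"644\" sh -s - agent --node-name=worker$i --server https://$MASTER_IP:6443"
done)
printf '%s\n' "$cmds"
```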


Register HA K3s Cluster to Rancher
Note: This section is internal only and is performed by Technical Support (TS).

Run the registration command that you generated through the Rancher UI or through the license manager. Once registered, you should see all master and worker nodes of your cluster under Machine Pools on the Rancher dashboard.