K3s installation in HA
Run the installation steps below if you want to deploy a high availability (HA) setup. An HA setup requires 3 master nodes and at least 1 worker node to run K3s in HA mode.
- (IF USING PROXY) Set the local proxy variables in the script k3s.sh:

```
export http_proxy="$PROXY_IP"
export https_proxy="$PROXY_IP"
export no_proxy="$NODE_IP,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
```

- Provide the product name as a `PRODUCT_NAME` argument. This instructs the installer to test your current environment against the product requirements. Allowed product names are: synergy, dspm, enterprise, and ultimate. Capitalization of the name is important. If you provide a name that cannot be recognized, or do not provide a product name at all, the script defaults to `PRODUCT_NAME="dspm"`.
- (IF USING PROXY) Make sure the k3s service has the proper proxy variables in the file /etc/systemd/system/k3s.service.env. They should already show the required values; if not, change them in the file:

```
http_proxy="$PROXY_IP"
https_proxy="$PROXY_IP"
no_proxy="$NO_PROXY"
```
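After editing the env file, a quick sanity check can confirm all three variables are present. In the sketch below, a temporary file and placeholder proxy values stand in for the real /etc/systemd/system/k3s.service.env (which needs root to read):

```shell
# Stand-in for /etc/systemd/system/k3s.service.env with placeholder values
env_file=$(mktemp)
printf 'http_proxy="%s"\nhttps_proxy="%s"\nno_proxy="%s"\n' \
  "http://proxy.example:3128" "http://proxy.example:3128" \
  "10.0.0.5,localhost,127.0.0.0/8" > "$env_file"

# Report any of the three proxy variables that are missing from the file
for var in http_proxy https_proxy no_proxy; do
  grep -q "^$var=" "$env_file" && echo "$var: present" || echo "$var: MISSING"
done
```

If you do change the file, the k3s service must pick up the new values, typically via `systemctl restart k3s`.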
- Contact Forcepoint Technical Support and inform them the values of your proxy variables from Step 5. Forcepoint Technical Support adds the proxy variables to the Rancher setup. Before proceeding further, wait for Forcepoint Technical Support to confirm that they have added the procxy variables to Rancher.
1st master node
Run the installer with the `--cluster-init` flag:

```
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master1 --cluster-init
```
Check your first master node's status; it should be in the Ready state:

```
kubectl get nodes
```
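The Ready check can also be scripted. The sketch below parses `kubectl get nodes` output; a captured sample stands in for a live cluster, and on a real node you would pipe the live command instead:

```shell
# Sample `kubectl get nodes` output (stand-in for the live command)
nodes="NAME      STATUS   ROLES                       AGE   VERSION
master1   Ready    control-plane,etcd,master   2m    v1.26.10+k3s1"

# Fail if any node's STATUS column is not exactly "Ready"
if echo "$nodes" | awk 'NR>1 && $2!="Ready" {bad=1} END {exit bad}'; then
  echo "all nodes Ready"
else
  echo "some nodes are not Ready"
fi
```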
Use the following command to copy the TOKEN that will be used to join the other nodes to the cluster:

```
cat /var/lib/rancher/k3s/server/node-token
```
Do not forget to copy the private IP address of the 1st master node; the other nodes will use it to join the cluster.
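The token and IP copied above are typically stashed in variables so the later join commands can reference them. A minimal sketch with placeholder values (on master1 the real token comes from /var/lib/rancher/k3s/server/node-token):

```shell
K3S_TOKEN="K10exampletoken::server:secret"   # stand-in token, not a real one
MASTER1_IP="10.0.0.5"                        # stand-in private IP of master1

# The join URL every other node will point its --server flag at
SERVER_URL="https://$MASTER1_IP:6443"
echo "$SERVER_URL"
```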
2nd master node
- Replace `K3S_TOKEN` with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
- Set `--node-name` to master2.
- Set `--server` to the private static IP address of the 1st master node.

```
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master2 --server https://<ip or hostname of master1>:6443
```
Check the node status:

```
kubectl get nodes
```
3rd master node
- Replace `K3S_TOKEN` with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
- Set `--node-name` to master3.
- Set `--server` to the private static IP address of the 1st master node.

```
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master3 --server https://<ip or hostname of master1>:6443
```
Check the node status:

```
kubectl get nodes
```
1st worker node
- Replace `K3S_TOKEN` with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
- Set `--node-name` to worker1.
- Set `--server` to the private static IP address of the 1st master node.

```
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=worker1 --server https://<ip or hostname of any master node>:6443
```
Joining additional worker nodes
You may create as many additional worker nodes as you want.
- Replace `K3S_TOKEN` with the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node installation.
- Update `--node-name` with your worker node name (e.g. worker2, worker3, etc.).
- Set `--server` to the private static IP address of the 1st master node.

```
curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh | K3S_TOKEN=SHARED_SECRET INSTALL_K3S_VERSION="v1.26.10+k3s1" K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=workerX --server https://<ip or hostname of any master node>:6443
```
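To avoid retyping the command for each worker, the join line can be generated per hostname. A sketch with placeholder token and master address; it only prints the commands, it does not run them:

```shell
K3S_TOKEN="SHARED_SECRET"   # placeholder: real token from master1's node-token file
MASTER="10.0.0.5"           # placeholder: IP or hostname of any master node

# Print one join command per additional worker; run each on the matching host
for host in worker2 worker3; do
  echo "[$host] curl -sfL https://assets.master.k3s.getvisibility.com/k3s/k3s.sh |" \
       "K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_VERSION=\"v1.26.10+k3s1\"" \
       "K3S_KUBECONFIG_MODE=\"644\" sh -s - agent --node-name=$host --server https://$MASTER:6443"
done
```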
Check the node status:

```
kubectl get nodes
```
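As a final sanity check, you can count control-plane and worker nodes. The sketch below parses sample `kubectl get nodes` output (a stand-in for the live command) and expects the HA shape described above: 3 masters and at least 1 worker:

```shell
# Sample output standing in for `kubectl get nodes` on a live HA cluster
nodes="NAME      STATUS   ROLES                       AGE   VERSION
master1   Ready    control-plane,etcd,master   10m   v1.26.10+k3s1
master2   Ready    control-plane,etcd,master   8m    v1.26.10+k3s1
master3   Ready    control-plane,etcd,master   6m    v1.26.10+k3s1
worker1   Ready    <none>                      3m    v1.26.10+k3s1"

# Count nodes by whether their ROLES column includes control-plane
masters=$(echo "$nodes" | awk 'NR>1 && $3 ~ /control-plane/ {c++} END {print c+0}')
workers=$(echo "$nodes" | awk 'NR>1 && $3 !~ /control-plane/ {c++} END {print c+0}')
echo "masters=$masters workers=$workers"   # HA needs masters=3 and workers>=1
```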
Register HA K3s Cluster to Rancher
Note: This section is internal only and is performed by Technical Support (TS).
You may run the registration command that you generated using the Rancher UI or through the license manager. You should see all master and worker nodes in your cluster under the Machine Pools on the Rancher dashboard.
Steps 1 and 2 must be done on all nodes. Step 3 must be done once, at the end, after installing K3s on the last node.