Multiple Node Installation (High Availability)
Prerequisites
Firewall Rules for Internal Communication
Protocol | Port | Description |
---|---|---|
TCP | 6443 | Kubernetes API Server |
UDP | 8472 | Required for Flannel VXLAN |
TCP | 2379-2380 | embedded etcd |
TCP | 10250 | metrics-server for HPA |
TCP | 9796 | Prometheus node exporter |
TCP | 80 | Private Docker Registry |
- Domain Name Service (DNS) configured
- Network Time Protocol (NTP) configured
- Fixed private IPv4 address
- Globally unique node name (use `--node-name` when installing K3s in a VM to set a static node name)
Firewall Rules for External Communication
Protocol | Port | Description |
---|---|---|
TCP | 443 | FDC backend |
Users must not access the K3s nodes directly; instead, a load balancer must sit between the end users and all the K3s nodes (master and worker nodes):
The load balancer must operate at Layer 4 of the OSI model and listen for connections on port 443. After the load balancer receives a connection request, it selects a target from the target group (which can be any of the master or worker nodes in the cluster) and then attempts to open a TCP connection to the selected target (node) on port 443.
The load balancer must have health checks enabled to monitor the health of the registered targets (the cluster nodes) so that requests are sent to healthy nodes only. Use the following health check settings:
- Timeout: 10 seconds
- Healthy threshold: 3 consecutive health check successes
- Unhealthy threshold: 3 consecutive health check failures
- Interval: 30 seconds
- Balance mode: round-robin
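As an illustration only, here is a minimal Layer 4 configuration sketch assuming HAProxy is used as the load balancer (any TCP load balancer with equivalent settings works); the file path, section names and `<node-ip>` placeholders are assumptions to adapt to your environment:

```
# Sketch: HAProxy in TCP (Layer 4) mode forwarding port 443 to the K3s nodes.
cat > /etc/haproxy/haproxy.cfg << 'EOF'
defaults
    mode tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m

frontend k3s_https
    bind *:443
    default_backend k3s_nodes

backend k3s_nodes
    balance roundrobin          # Balance mode: round-robin
    timeout check 10s           # Health check timeout: 10 seconds
    # Interval 30s, 3 successes to become healthy, 3 failures to become unhealthy
    server master1 <master1-ip>:443 check inter 30s rise 3 fall 3
    server master2 <master2-ip>:443 check inter 30s rise 3 fall 3
    server master3 <master3-ip>:443 check inter 30s rise 3 fall 3
    server worker1 <worker1-ip>:443 check inter 30s rise 3 fall 3
EOF
```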
VM Count
At least 4 machines are required to provide high availability of the Forcepoint platform. The HA setup tolerates the failure of a single node.
Install K3s
- Make sure you have `/usr/local/bin` in your PATH (`export PATH=$PATH:/usr/local/bin`). All the commands must be executed as the `root` user.
- The commands have been tested on Ubuntu Server 20.04 LTS, SUSE Linux Enterprise Server 15 SP4 and RHEL 8.6.
- For RHEL, K3s needs the following package to be installed: `k3s-selinux` (repo rancher-k3s-common-stable) and its dependencies `container-selinux` (repo rhel-8-appstream-rhui-rpms) and `policycoreutils-python-utils` (repo rhel-8-baseos-rhui-rpms). Also, `firewalld`, `nm-cloud-setup.service` and `nm-cloud-setup.timer` must be disabled and the server restarted before the installation (see the command sketch after this list).
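The RHEL preparation above can be scripted roughly as follows. This is a sketch that assumes the repositories listed above are configured and reachable; in a fully air-gapped environment the RPMs must be provided locally instead:

```
# Install the SELinux policy packages required by K3s on RHEL 8.6
yum install -y container-selinux policycoreutils-python-utils k3s-selinux
# Disable firewalld and nm-cloud-setup, then reboot before installing K3s
systemctl disable --now firewalld
systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer
reboot
```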
- Create at least 4 VMs with the same specs.
- Extract the downloaded file on all the VMs: `tar -xf gv-platform-$VERSION.tar`
- Create a local DNS entry `private-docker-registry.local` on all the nodes resolving to the master1 node:
```
cat >> /etc/hosts << EOF
<Master1_node_VM_IP> private-docker-registry.local
EOF
```
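Optionally, you can confirm that the entry resolves on each node; this check is an addition to the procedure, not part of the bundle:

```
$ getent hosts private-docker-registry.local
# Should print the master1 node IP followed by private-docker-registry.local
```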
- Prepare the K3s files for air-gap installation:
```
$ mkdir -p /var/lib/rancher/k3s/agent/images/
$ gunzip -c assets/k3s-airgap-images-amd64.tar.gz > /var/lib/rancher/k3s/agent/images/airgap-images.tar
$ cp assets/k3s /usr/local/bin && chmod +x /usr/local/bin/k3s
$ tar -xzf assets/helm-v3.8.2-linux-amd64.tar.gz && cp linux-amd64/helm /usr/local/bin
```
- Update the `registries.yaml` file on all the nodes:
```
$ mkdir -p /etc/rancher/k3s
$ cp assets/registries.yaml /etc/rancher/k3s/
```
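If you want to verify the copy, print the file on one of the nodes. The mirror entry shown in the comments is only an illustration of the expected shape; the actual contents ship with the bundle in `assets/registries.yaml`:

```
$ cat /etc/rancher/k3s/registries.yaml
# Illustrative example only:
# mirrors:
#   "private-docker-registry.local":
#     endpoint:
#       - "http://private-docker-registry.local:80"
```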
- Install K3s on the 1st master node. To get started, launch a server node using the `--cluster-init` flag:
```
cat scripts/k3s.sh | INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master1 --cluster-init
```
Check the status of your first master node; it should be in the `Ready` state:
```
kubectl get nodes
```
Copy the token from this node, which will be used to join the other nodes to the cluster:
```
cat /var/lib/rancher/k3s/server/node-token
```
Also, copy the IP address of the 1st master node, which will be used by the other nodes to join the cluster.
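The join commands in the next steps read the token from the `K3S_TOKEN` environment variable. A sketch of setting it on each joining node (the value is the token you just copied from master1):

```
# Run on master2, master3 and the worker node before their install commands:
$ export K3S_TOKEN=<token copied from /var/lib/rancher/k3s/server/node-token on master1>
```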
- Install K3s on the 2nd master node. Run the following command, assigning the contents of the file `/var/lib/rancher/k3s/server/node-token` from the 1st master node to the `K3S_TOKEN` variable.
  - Set `--node-name` to "master2"
  - Set `--server` to the IP address of the 1st master node
```
cat scripts/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master2 --server https://<ip or hostname of any master node>:6443
```
Check the node status:
```
kubectl get nodes
```
- Install K3s on the 3rd master node. Run the following command, assigning the contents of the file `/var/lib/rancher/k3s/server/node-token` from the 1st master node to the `K3S_TOKEN` variable.
  - Set `--node-name` to "master3"
  - Set `--server` to the IP address of the 1st master node
```
cat scripts/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master3 --server https://<ip or hostname of any master node>:6443
```
Check the node status:
```
kubectl get nodes
```
- Install K3s on the 1st worker node. Use the same approach to install K3s and connect the worker node to the cluster; only the installation parameters differ. Run the following command:
  - Set `--node-name` to "workerN", where N is the number of the worker node (here "worker1")
```
cat scripts/k3s.sh | INSTALL_K3S_SKIP_DOWNLOAD=true K3S_TOKEN=$K3S_TOKEN K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=worker1 --server https://<ip or hostname of any master node>:6443
```
Check the node status:
```
kubectl get nodes
```
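At this point all four nodes should be listed as `Ready`. The output below is only an illustration: node names follow the `--node-name` values used above, while ages and versions will differ in your environment:

```
$ kubectl get nodes
# NAME      STATUS   ROLES                       AGE   VERSION
# master1   Ready    control-plane,etcd,master   25m   v1.xx.x+k3s1
# master2   Ready    control-plane,etcd,master   18m   v1.xx.x+k3s1
# master3   Ready    control-plane,etcd,master   12m   v1.xx.x+k3s1
# worker1   Ready    <none>                      5m    v1.xx.x+k3s1
```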
Deploy Private Docker Registry and Import Docker images
- Extract and import the Docker images locally on the `master1` node:
```
$ mkdir /tmp/import
$ for f in images/*.gz; do IMG=$(basename "${f}" .gz); gunzip -c "${f}" > /tmp/import/"${IMG}"; done
$ for f in /tmp/import/*.tar; do ctr -n=k8s.io images import "${f}"; done
```
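Optionally, confirm that the images are now present in containerd (this check is an addition to the procedure):

```
$ ctr -n=k8s.io images ls -q | head
```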
- Install the `gv-private-registry` Helm chart on the master1 node. Replace `$VERSION` with the version that is present in the downloaded bundle. To check all the charts that have been downloaded, run `ls charts`.
```
$ helm upgrade --install gv-private-registry charts/gv-private-registry-$VERSION.tgz --wait \
  --timeout=10m0s \
  --kubeconfig /etc/rancher/k3s/k3s.yaml
```
- Tag and push the Docker images to the local private Docker registry deployed on the `master1` node:
```
$ sh scripts/push-docker-images.sh
```
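To optionally verify the push, you can query the standard Docker Registry v2 catalog endpoint on the private registry, which listens on port 80 as per the firewall table above:

```
$ curl -s http://private-docker-registry.local/v2/_catalog
```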
Install Helm charts
- Perform the following steps on the master1 node.
- Replace `$VERSION` with the version that is present in the downloaded bundle. To check all the charts that have been downloaded, run `ls charts`.
- Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups. If you are installing Enterprise, append `--set eck-operator.enabled=true` to the command in order to enable Elasticsearch.
```
$ helm upgrade --install gv-essentials charts/gv-essentials-$VERSION.tgz --wait \
  --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set global.high_available=true \
  --set eck-operator.enabled=true \
  --set minio.replicas=4 \
  --set minio.mode=distributed \
  --set consul.server.replicas=3 \
  --set updateclusterid.enabled=false \
  --set backup.hour=1
```
- Install the Monitoring CRD:
```
$ helm upgrade --install rancher-monitoring-crd charts/rancher-monitoring-crd-$VERSION.tgz --wait \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --namespace=cattle-monitoring-system \
  --create-namespace
```
- Install Monitoring:
```
$ helm upgrade --install rancher-monitoring charts/rancher-monitoring-$VERSION.tgz --wait \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set global.high_available=true \
  --namespace=cattle-monitoring-system \
  --set loki-stack.loki.replicas=2 \
  --set prometheus.prometheusSpec.replicas=2
```
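Optionally, watch the monitoring stack come up in its own namespace before checking the whole cluster (an extra check, not part of the original steps):

```
$ kubectl get pods -n cattle-monitoring-system
```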
- Check that all pods are `Running` with the command:
```
kubectl get pods -A
```
Install FDC Helm Chart
Replace:
- `$VERSION` with the version that is present in the downloaded bundle
- `$RESELLER` with the reseller code (either `getvisibility` or `forcepoint`)
- `$PRODUCT` with the product being installed (`synergy` or `enterprise`)
```
$ helm upgrade --install gv-platform charts/gv-platform-$VERSION.tgz --wait \
  --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --set high_available=true \
  --set-string clusterLabels.environment=prod \
  --set-string clusterLabels.cluster_reseller=$RESELLER \
  --set-string clusterLabels.cluster_name=mycluster \
  --set-string clusterLabels.product=$PRODUCT
```
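For example, assuming an Enterprise installation resold as forcepoint, the placeholders could be exported as shell variables before running the command above (the version must match your bundle; check it with `ls charts`):

```
$ export VERSION=<version suffix of gv-platform-<version>.tgz in charts/>
$ export RESELLER=forcepoint   # or getvisibility
$ export PRODUCT=enterprise    # or synergy
```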
Install Kube-fledged
- Install the `gv-kube-fledged` Helm chart. Replace `$VERSION` with the version that is present in the downloaded bundle. To check all the charts that have been downloaded, run `ls charts`.
```
$ helm upgrade --install gv-kube-fledged charts/gv-kube-fledged-$VERSION.tgz -n kube-fledged \
  --timeout=10m0s \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --create-namespace
```
- Create and deploy `imagecache.yaml`:
```
$ sh scripts/create-imagecache-file.sh
$ kubectl apply -f scripts/imagecache.yaml
```
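To optionally confirm that the image cache was created and is being processed by kube-fledged (the resource name assumes the standard kube-fledged `ImageCache` CRD):

```
$ kubectl get imagecaches -A
```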
Install custom artifacts
Models and other artifacts, such as custom agent versions or a custom Consul configuration, can be shipped inside auto-deployable bundles. The procedure for installing custom artifact bundles on an HA cluster is the same as for a single-node cluster; see the single-node cluster guide above.