Upgrading K3s to 1.26

Note for the Platform Team: Local Cluster K3s Upgrade

If you are upgrading K3s on the local cluster, you must first remove the existing PodSecurityPolicy resources, because the PodSecurityPolicy API was removed in Kubernetes v1.25.

We have only one of them, created by the aws-node-termination-handler chart:
  1. Patch the HelmChart to disable the PSP resource:
    kubectl patch helmchart aws-node-termination-handler -n kube-system --type='json' -p='[{"op": "add", "path": "/spec/set/rbac.pspEnabled", "value": "false"}]'
  2. This will trigger the removal of the PSP resource.
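Before starting the upgrade, you can confirm the PSP is actually gone. A minimal sketch; the helper name psp_removed is hypothetical, and the PSP name is assumed to match the chart name (check `kubectl get podsecuritypolicy` first if unsure):

```shell
# Sketch: succeed only when the named PSP is absent.
# Reads the output of `kubectl get podsecuritypolicy -o name` on stdin.
psp_removed() {
  ! grep -q "podsecuritypolicy.policy/$1"
}

# Against the live cluster (requires kubectl access):
#   kubectl get podsecuritypolicy -o name | psp_removed aws-node-termination-handler
```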

Traefik is deployed as a DaemonSet in the local clusters, so restart the DaemonSet instead when following the steps given in Post Upgrade Patch (broken link).
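As a sketch, the DaemonSet restart would look like this; the resource name traefik and namespace kube-system are assumed from a default K3s install and may differ in your cluster:

```shell
# Restart the traefik DaemonSet (name/namespace assumed: traefik in kube-system)
kubectl rollout restart daemonset/traefik -n kube-system

# Wait for the rollout to finish before proceeding
kubectl rollout status daemonset/traefik -n kube-system --timeout=300s
```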

  • Deploy the system-upgrade-controller:
    kubectl apply -f https://assets.master.k3s.getvisibility.com/system-upgrade-controller/v0.13.1/system-upgrade-controller.yaml
  • Create the upgrade plan
    Note: The version key specifies the K3s version that the cluster will be upgraded to.
    cat > upgrade-plan-server.yaml << EOF
    ---
    # Server plan
    apiVersion: upgrade.cattle.io/v1
    kind: Plan
    metadata:
      name: server-plan
      namespace: system-upgrade
    spec:
      concurrency: 1
      cordon: true
      nodeSelector:
        matchExpressions:
        - key: node-role.kubernetes.io/control-plane
          operator: In
          values:
          - "true"
      serviceAccountName: system-upgrade
      upgrade:
        image: rancher/k3s-upgrade
      version: v1.26.10+k3s1
    EOF

    If you are also running worker nodes, create an agent plan as well:

    cat > upgrade-plan-agent.yaml << EOF
    ---
    # Agent plan
    apiVersion: upgrade.cattle.io/v1
    kind: Plan
    metadata:
      name: agent-plan
      namespace: system-upgrade
    spec:
      concurrency: 1
      cordon: true
      nodeSelector:
        matchExpressions:
        - key: node-role.kubernetes.io/control-plane
          operator: DoesNotExist
      prepare:
        args:
        - prepare
        - server-plan
        image: rancher/k3s-upgrade
      serviceAccountName: system-upgrade
      upgrade:
        image: rancher/k3s-upgrade
      version: v1.26.10+k3s1
    EOF
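Before applying the plans, you can preview which nodes each selector will match. A sketch using the same label expressions as the plans above, assuming the standard K3s control-plane node label:

```shell
# Nodes the server-plan will target (control-plane label set to "true")
kubectl get nodes -l 'node-role.kubernetes.io/control-plane=true'

# Nodes the agent-plan will target (control-plane label absent)
kubectl get nodes -l '!node-role.kubernetes.io/control-plane'
```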
  • Run the upgrade plan:
    kubectl apply -f upgrade-plan-server.yaml
    If you created the agent plan for worker nodes, apply it too:
    kubectl apply -f upgrade-plan-agent.yaml
  • Once the plan is executed, all pods will restart and take a few minutes to recover. Check the status of all the pods:
    watch kubectl get pods -A
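Instead of watching manually, you can block until pods report Ready. A sketch; the timeout value is an arbitrary choice, and the field selector excludes completed pods, which never become Ready:

```shell
# Block until all running pods in all namespaces are Ready (or the timeout expires)
kubectl wait --for=condition=Ready pods --all --all-namespaces \
  --field-selector=status.phase!=Succeeded --timeout=600s
```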
  • Check if the K3s version has been upgraded:
    kubectl get nodes
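For a non-interactive check, a small sketch; the helper name nodes_upgraded is hypothetical, and the target version matches the version key in the plans above:

```shell
# nodes_upgraded: succeed only if every "name version" line on stdin
# reports the wanted version.
nodes_upgraded() {
  ! awk -v want="$1" '$2 != want' | grep -q .
}

# Against the live cluster:
#   kubectl get nodes --no-headers \
#     -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion \
#     | nodes_upgraded v1.26.10+k3s1
```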
  • Delete the system-upgrade-controller:
    kubectl delete -f https://assets.master.k3s.getvisibility.com/system-upgrade-controller/v0.13.1/system-upgrade-controller.yaml

Reference: Apply upgrade: https://docs.k3s.io/upgrades/automated#install-the-system-upgrade-controller