This article applies when a cluster shows as "unavailable" after a proxy has been configured on the server.
Note: Replace $PROXY_IP with the IP:PORT of the corporate proxy server and $NODE_IP with the IP or CIDR of the server running Kubernetes.
Steps
- Run env on the user's server to determine the proxy IP, and check whether an http_proxy entry is already present in the output.
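For example, any proxy-related variables can be listed with a quick filter (a minimal sketch; the lowercase and uppercase variable names are conventions, so a case-insensitive match is used):

```shell
# List any proxy-related variables currently set in the environment.
# Exits non-zero when no proxy variables are set.
env | grep -i proxy
```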
- Open the file /etc/systemd/system/k3s.service.env and append the following lines:
Note: Use correct values in place of the placeholders $PROXY_IP and $NODE_IP below; $PROXY_IP must be a full URL, in the form http://X.X.X.X:PORT.
http_proxy="$PROXY_IP"
https_proxy="$PROXY_IP"
no_proxy="$NODE_IP,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
- Restart k3s:
systemctl restart k3s.service
- Go to Cluster Management > Clusters in the Rancher dashboard and click Edit Config for the cluster.
- Go to Advanced Options.
- Configure the following Agent Environment Variables and press Save:
Note: Remember to use correct values in place of the placeholders $PROXY_IP and $NODE_IP below.
HTTP_PROXY: $PROXY_IP
HTTPS_PROXY: $PROXY_IP
NO_PROXY: $NODE_IP,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
- Run the command:
kubectl edit deployment -n cattle-system cattle-cluster-agent -o yaml
- Type "i" to enter insert mode and, in the env section, add the following lines:
  - name: HTTP_PROXY
    value: $PROXY_IP
  - name: HTTPS_PROXY
    value: $PROXY_IP
  - name: NO_PROXY
    value: $NODE_IP,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
Save and exit by pressing Esc and typing ":wq".
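With hypothetical values filled in, the edited env section of the cattle-cluster-agent deployment might look like the following (the 192.0.2.x addresses are illustrative placeholders only):

```yaml
# Illustrative only: 192.0.2.10:3128 and 192.0.2.20 are placeholders
# for the real proxy URL and node IP.
env:
- name: HTTP_PROXY
  value: http://192.0.2.10:3128
- name: HTTPS_PROXY
  value: http://192.0.2.10:3128
- name: NO_PROXY
  value: 192.0.2.20,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
```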
- Do the same on the fleet-agent: run the analogous kubectl edit command, add the same environment variables to its env section, and save.
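A sketch of the fleet-agent edit, assuming the agent runs in the cattle-fleet-system namespace (the namespace varies by Rancher version, so verify it first):

```shell
# The namespace for fleet-agent varies by Rancher version; find it first.
kubectl get deployments --all-namespaces | grep fleet-agent

# Edit the deployment and add the same HTTP_PROXY/HTTPS_PROXY/NO_PROXY
# entries to its env section. cattle-fleet-system is an assumption;
# use the namespace found by the command above.
kubectl edit deployment -n cattle-fleet-system fleet-agent
```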
- After applying all the changes, wait for the cluster to show as Online in Rancher.
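Agent pickup of the new variables can also be checked from the command line (a sketch, using the deployment name from the steps above):

```shell
# Wait for the cluster agent rollout to complete with the new variables.
kubectl -n cattle-system rollout status deployment/cattle-cluster-agent

# Confirm the proxy variables are set inside the agent pod.
kubectl -n cattle-system exec deploy/cattle-cluster-agent -- env | grep -i PROXY
```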
Configure Dashboard
For the connectors to support proxy settings, you will need to enable proxy support on the configuration page.