Changing a cluster CIDR isn't a simple task. I managed to reproduce your scenario and change the CIDR using the following steps.
Changing an IP pool
The process is as follows:
- Install calicoctl as a Kubernetes pod (Source)
- Add a new IP pool (Source).
- Disable the old IP pool. This prevents new IPAM allocations from the old IP pool without affecting the networking of existing workloads.
- Change the nodes' podCIDR parameter (Source)
- Change --cluster-cidr in kube-controller-manager.yaml on the master node. (Credits to OP for that)
- Recreate all existing workloads that were assigned an address from the old IP pool.
- Remove the old IP pool.
Let’s get started.
In this example, we are going to replace 192.168.0.0/16 with 10.0.0.0/8.
- Installing calicoctl as a Kubernetes pod
$ kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
Setting an alias:
$ alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl "
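To make sure the calicoctl pod is running and the alias works, a quick sanity check helps (the exact version output will vary with your install):
$ kubectl get pod -n kube-system calicoctl
$ calicoctl version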
Add a new IP pool:
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-pool
spec:
  cidr: 10.0.0.0/8
  ipipMode: Always
  natOutgoing: true
EOF
We should now have two enabled IP pools, which we can see when running calicoctl get ippool -o wide:
NAME                  CIDR             NAT    IPIPMODE   DISABLED
default-ipv4-ippool   192.168.0.0/16   true   Always     false
new-pool              10.0.0.0/8       true   Always     false
Disable the old IP pool.
First save the IP pool definition to disk:
calicoctl get ippool -o yaml > pool.yaml
pool.yaml should look like this:
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: default-ipv4-ippool
  spec:
    cidr: 192.168.0.0/16
    ipipMode: Always
    natOutgoing: true
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: new-pool
  spec:
    cidr: 10.0.0.0/8
    ipipMode: Always
    natOutgoing: true
Note: Some extra cluster-specific information has been redacted to improve readability.
Edit the file, adding disabled: true to the default-ipv4-ippool IP pool:
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: Always
  natOutgoing: true
  disabled: true
Apply the changes:
calicoctl apply -f pool.yaml
We should see the change reflected in the output of calicoctl get ippool -o wide:
NAME                  CIDR             NAT    IPIPMODE   DISABLED
default-ipv4-ippool   192.168.0.0/16   true   Always     true
new-pool              10.0.0.0/8       true   Always     false
Change the nodes' podCIDR parameter:
Override the podCIDR parameter on each Kubernetes Node resource with the new IP range, using the following commands:
$ kubectl get no kubeadm-0 -o yaml > file.yaml; sed -i "s~192.168.0.0/24~10.0.0.0/16~" file.yaml; kubectl delete no kubeadm-0 && kubectl create -f file.yaml
$ kubectl get no kubeadm-1 -o yaml > file.yaml; sed -i "s~192.168.1.0/24~10.1.0.0/16~" file.yaml; kubectl delete no kubeadm-1 && kubectl create -f file.yaml
$ kubectl get no kubeadm-2 -o yaml > file.yaml; sed -i "s~192.168.2.0/24~10.2.0.0/16~" file.yaml; kubectl delete no kubeadm-2 && kubectl create -f file.yaml
We had to perform this action for every node in the cluster. Pay attention to the IP ranges: they differ from one node to the next.
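With more than a handful of nodes, the same action can be scripted. A minimal sketch, assuming the node names kubeadm-0 through kubeadm-2 and the per-node subnet pattern from the commands above:
# Hypothetical loop generalizing the three commands above.
for i in 0 1 2; do
  kubectl get no "kubeadm-$i" -o yaml > "node-$i.yaml"
  sed -i "s~192.168.$i.0/24~10.$i.0.0/16~" "node-$i.yaml"
  kubectl delete no "kubeadm-$i" && kubectl create -f "node-$i.yaml"
done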
Change the CIDR in the kubeadm-config ConfigMap and in kube-controller-manager.yaml
Edit the kubeadm-config ConfigMap and change podSubnet to the new IP range:
kubectl -n kube-system edit cm kubeadm-config
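After the edit, the networking stanza inside the ClusterConfiguration should look roughly like this (the dnsDomain and serviceSubnet values are cluster-specific and stay unchanged):
networking:
  dnsDomain: cluster.local
  podSubnet: 10.0.0.0/8
  serviceSubnet: 10.96.0.0/12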
Also, change the --cluster-cidr flag in /etc/kubernetes/manifests/kube-controller-manager.yaml on the master node:
$ sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.0.0.0/8
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
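The kubelet watches the static pod manifests, so kube-controller-manager restarts on its own after the file is saved. To confirm the new flag is live (the pod carries the component=kube-controller-manager label shown in the manifest above):
$ kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep cluster-cidr
      - --cluster-cidr=10.0.0.0/8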
Recreate all existing workloads using IPs from the disabled pool. In this example, kube-dns is the only workload networked by Calico:
kubectl delete pod -n kube-system kube-dns-6f4fd4bdf-8q7zp
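If more workloads were still holding addresses from the old pool, a sketch like the following would recreate all of them at once. It assumes every such pod is managed by a controller (Deployment, DaemonSet, etc.) that will recreate it; standalone pods would simply be deleted:
# Delete every pod whose IP (column 7 of the wide output) is in 192.168.0.0/16.
kubectl get pods --all-namespaces -o wide --no-headers \
  | awk '$7 ~ /^192\.168\./ {print $1, $2}' \
  | while read ns pod; do kubectl delete pod -n "$ns" "$pod"; done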
Check that the new workload now has an address in the new IP pool by running calicoctl get wep --all-namespaces:
NAMESPACE     WORKLOAD                   NODE      NETWORKS       INTERFACE
kube-system   kube-dns-6f4fd4bdf-8q7zp   vagrant   10.0.24.8/32   cali800a63073ed
Delete the old IP pool:
calicoctl delete pool default-ipv4-ippool
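Running calicoctl get ippool -o wide one last time should now show only the new pool:
NAME       CIDR         NAT    IPIPMODE   DISABLED
new-pool   10.0.0.0/8   true   Always     false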
Creating it correctly from scratch
To deploy a cluster under a specific IP range using kubeadm and Calico, you need to init the cluster with --pod-network-cidr=192.168.0.0/24 (where 192.168.0.0/24 is your desired range) and then tune the Calico manifest before applying it to your fresh cluster.
To tune Calico before applying it, you have to download its YAML file and change the network range.
- Download the Calico networking manifest.
$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
- If you are using pod CIDR 192.168.0.0/16, skip to the next step. If you are using a different pod CIDR, use the following commands to set an environment variable called POD_CIDR containing your pod CIDR and to replace 192.168.0.0/16 in the manifest with your pod CIDR.
$ POD_CIDR="<your-pod-cidr>" \
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
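To confirm the substitution took effect, grep the manifest for your range (assuming POD_CIDR is still set in the shell):
$ grep "$POD_CIDR" calico.yaml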
- Apply the manifest using the following command.
$ kubectl apply -f calico.yaml
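Once applied, it's worth waiting for the Calico pods to come up healthy before scheduling workloads; the calico-node DaemonSet carries the k8s-app=calico-node label:
$ kubectl get pods -n kube-system -l k8s-app=calico-node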