I'm trying to limit the number of pods per node in my cluster. I managed to set a global per-node limit at kubeadm init with a config file:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: <subnet>
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10
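For context, this is how I'm applying that file at init (kubeadm-config.yaml is just my name for the file above):

# initialize the control plane using the combined configuration above
sudo kubeadm init --config kubeadm-config.yaml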
This isn't quite what I want, because the limit also applies to the master node (where multiple kube-system pods run, and their count may grow beyond 10). I would like to keep the default value at init and set the limit at join time on each worker node. I found the following:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "<api_endpoint>"
    token: "<token>"
    unsafeSkipCAVerification: true
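For reference, I'm running the join on the worker node with that config file (join-config.yaml is just my name for it):

# join the worker node using the configuration above
sudo kubeadm join --config join-config.yaml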
but even though no error/warning appears, the value of maxPods seems to be ignored: I can create more than 10 pods on that specific node.
Also, kubectl get node <node> -o yaml returns status.capacity.pods with its default value (110).
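In case it's relevant, this is how I've been checking the limit (assuming kubeadm puts the kubelet config in its usual place on the node):

# pod capacity as reported by the API server
kubectl get node <node> -o jsonpath='{.status.capacity.pods}'

# effective kubelet configuration on the node itself
grep maxPods /var/lib/kubelet/config.yaml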
How can I proceed in order to have this pod limit applied per node?
I should mention that I have only basic/limited knowledge of Kubernetes.
Thank you!