How to change a running pod's limits in Kubernetes?

I have a self-made Kubernetes cluster consisting of VMs. My problem is that the coredns pods keep going into the CrashLoopBackOff state, and after a while they go back to Running as if nothing had happened. One solution I found but have not been able to try yet is changing the default memory limit from 170Mi to something higher. As I'm not an expert in this, I thought it would not be a hard thing to do, but I don't know how to change a running pod's configuration. It may be impossible, but there must be a way to recreate the pods with a new configuration. I tried kubectl patch and looked up rolling-update too, but I just can't figure it out. How can I change the limit?

Here is the relevant part of the pod's data:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 176.16.0.12/32
  creationTimestamp: 2018-11-18T10:29:53Z
  generateName: coredns-78fcdf6894-
  labels:
    k8s-app: kube-dns
    pod-template-hash: "3497892450"
  name: coredns-78fcdf6894-gnlqw
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: coredns-78fcdf6894
    uid: e3349719-eb1c-11e8-9000-080027bbdf83
  resourceVersion: "73564"
  selfLink: /api/v1/namespaces/kube-system/pods/coredns-78fcdf6894-gnlqw
  uid: e34930db-eb1c-11e8-9000-080027bbdf83
spec:
  containers:
  - args:
    - -conf
    - /etc/coredns/Corefile
    image: k8s.gcr.io/coredns:1.1.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /health
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 60
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: coredns
    ports:
    - containerPort: 53
      name: dns
      protocol: UDP
    - containerPort: 53
      name: dns-tcp
      protocol: TCP
    - containerPort: 9153
      name: metrics
      protocol: TCP
    resources:
      limits:
        memory: 170Mi
      requests:
        cpu: 100m
        memory: 70Mi

EDIT: It turned out that on Ubuntu, NetworkManager's dnsmasq instance was driving the coredns pods crazy, so I commented out the dnsmasq line in /etc/NetworkManager/NetworkManager.conf, rebooted, and everything is okay.
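
For reference, the change is roughly the following, assuming the stock Ubuntu layout of /etc/NetworkManager/NetworkManager.conf (only the dns= line in the [main] section is touched):

[main]
#dns=dnsmasq

Restarting NetworkManager (sudo systemctl restart NetworkManager) or rebooting makes the change take effect.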

Allayne answered 23/11, 2018 at 14:38 Comment(0)

You must edit the coredns Pod template in the coredns Deployment definition:

kubectl edit deployment -n kube-system coredns

Once your default editor opens with the coredns Deployment, look in the Pod template (spec.template.spec.containers); the resources section there is the part responsible for setting the memory and CPU limits.
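
For example, in the Pod template you would change the resources block under the coredns container to something like the following (256Mi is only an illustrative value, not a recommendation; pick a limit that fits your nodes):

spec:
  template:
    spec:
      containers:
      - name: coredns
        resources:
          limits:
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 70Mi

Saving and closing the editor triggers a rolling update of the Deployment, so the existing coredns pods are replaced by new ones that carry the higher limit. If you prefer a non-interactive change, the same field can be modified with kubectl patch (the value is again just an example):

kubectl -n kube-system patch deployment coredns --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/resources/limits/memory","value":"256Mi"}]'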

Magdamagdaia answered 23/11, 2018 at 14:44 Comment(2)
Thanks, I could not find an easy description like this.Allayne
If you used kubeadm to bootstrap your cluster, note that the control-plane components (kube-apiserver, kube-controller-manager, kube-scheduler) are deployed as static pods: their manifest files are physically stored on the master under /etc/kubernetes/manifests, and modifying a static pod manifest automatically restarts that pod. CoreDNS itself, however, is managed by a regular Deployment, which is why editing the Deployment is the right approach here.Teresa
