FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) were unschedulable

I'm just getting started with Kubernetes. I have encountered FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) were unschedulable and am not sure what is happening.

Is it because there are not enough resources for a new pod on my node, and I need to increase them? It doesn't look like the node is using 100% of its memory or CPU yet, though.

Here is my pod, from kubectl describe pods plex-kube-plex-986cc6d98-lwns7 --namespace plex:

Name:           plex-kube-plex-986cc6d98-lwns7
Namespace:      plex
Priority:       0
Node:           <none>
Labels:         app=kube-plex
                pod-template-hash=986cc6d98
                release=plex
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/plex-kube-plex-986cc6d98
Init Containers:
  kube-plex-install:
    Image:      quay.io/munnerz/kube-plex:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      cp
      /kube-plex
      /shared/kube-plex
    Environment:  <none>
    Mounts:
      /shared from shared (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from plex-kube-plex-token-txkbn (ro)
Containers:
  plex:
    Image:       plexinc/pms-docker:1.16.0.1226-7eb2c8f6f
    Ports:       32400/TCP, 32400/TCP, 32443/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Liveness:    http-get http://:32400/identity delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:   http-get http://:32400/identity delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      TZ:                    Europe/London
      PLEX_CLAIM:            [claim-PooPBMsbyEjyigT-_hec]
      PMS_INTERNAL_ADDRESS:  http://plex-kube-plex:32400
      PMS_IMAGE:             plexinc/pms-docker:1.16.0.1226-7eb2c8f6f
      KUBE_NAMESPACE:        plex (v1:metadata.namespace)
      TRANSCODE_PVC:         plex-kube-plex-transcode
      DATA_PVC:              plex-kube-plex-data
      CONFIG_PVC:            plex-kube-plex-config
    Mounts:
      /config from config (rw)
      /data from data (rw)
      /shared from shared (rw)
      /transcode from transcode (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from plex-kube-plex-token-txkbn (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  plex-kube-plex-data
    ReadOnly:   false
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  plex-kube-plex-config
    ReadOnly:   false
  transcode:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  plex-kube-plex-token-txkbn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  plex-kube-plex-token-txkbn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) were unschedulable.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) were unschedulable.

Here is my node in minikube, from kubectl describe node minikube --namespace plex:

Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_05_03T16_34_44_0700
                    minikube.k8s.io/version=v1.9.2
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 03 May 2020 16:34:38 +1000
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Thu, 18 Jun 2020 18:02:37 +1000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 18 Jun 2020 18:01:11 +1000   Sun, 03 May 2020 16:34:33 +1000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 18 Jun 2020 18:01:11 +1000   Sun, 03 May 2020 16:34:33 +1000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 18 Jun 2020 18:01:11 +1000   Sun, 03 May 2020 16:34:33 +1000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 18 Jun 2020 18:01:11 +1000   Sun, 03 May 2020 16:34:58 +1000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  120997584Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8037176Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  120997584Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8037176Ki
  pods:               110
System Info:
  Machine ID:                 21e345365a7e45a8ad5560eb273be8e5
  System UUID:                4b9e17f2-ea81-436d-bff9-1db34db18512
  Boot ID:                    6d7e3f0c-ce11-4860-a479-2d6dbfd72779
  Kernel Version:             4.15.0-101-generic
  OS Image:                   Ubuntu 19.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.2
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (11 in total)
  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-66bff467f8-4psrb                      100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     46d
  kube-system                 coredns-66bff467f8-jgpgh                      100m (2%)     0 (0%)      70Mi (0%)        170Mi (2%)     46d
  kube-system                 etcd-minikube                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         46d
  kube-system                 kindnet-jzf4m                                 100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      46d
  kube-system                 kube-apiserver-minikube                       250m (6%)     0 (0%)      0 (0%)           0 (0%)         46d
  kube-system                 kube-controller-manager-minikube              200m (5%)     0 (0%)      0 (0%)           0 (0%)         46d
  kube-system                 kube-proxy-hffcf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46d
  kube-system                 kube-scheduler-minikube                       100m (2%)     0 (0%)      0 (0%)           0 (0%)         46d
  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         46d
  kubernetes-dashboard        dashboard-metrics-scraper-84bfdf55ff-2jc84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43d
  kubernetes-dashboard        kubernetes-dashboard-bc446cc64-kfk8z          0 (0%)        0 (0%)      0 (0%)           0 (0%)         43d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (21%)  100m (2%)
  memory             190Mi (2%)  390Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
Ramp asked 18/6, 2020 at 8:47

This is because the node is marked Unschedulable: true and has the taint node.kubernetes.io/unschedulable:NoSchedule.

You can remove that taint and try scheduling again:

kubectl taint node minikube node.kubernetes.io/unschedulable:NoSchedule-
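
To confirm the change took, you can check the node again (the grep is just a convenience for picking the taint line out of the output):

$ kubectl describe node minikube | grep -i taints
$ kubectl get nodes

One caveat: the node also has Unschedulable: true in its spec, and Kubernetes re-adds the node.kubernetes.io/unschedulable taint as long as that flag is set. So kubectl uncordon minikube is usually the cleaner fix, since it clears the flag and the taint together.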
Brede answered 18/6, 2020 at 9:02. Comments (6):
But should I have a taint? I was just deploying a Helm chart for a Kubernetes project; why would this taint be here in the first place? – Ramp
How did you set up Kubernetes? Master nodes generally have a taint by default to prevent workloads from being scheduled on them. – Brede
It was a default minikube installation. Should I add more nodes via minikube? – Ramp
If you have followed this, the master node should not actually have a taint: kubernetes.io/docs/setup/learning-environment/minikube – Brede
It keeps putting the taint back even though I have used the remove-taint command. How can I check why it's happening? I have also deployed an SQL database and the minikube dashboard on Kubernetes without problems. – Ramp
Check the CPU and memory capacity of the master node; if there is no capacity, it can also become unschedulable. A quick way to check this is shown below. – Brede
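
One way to run the check from that last comment (a sketch; kubectl top relies on the metrics-server addon, which minikube ships but does not enable by default):

$ kubectl describe node minikube | grep -A 7 'Allocated resources'
$ minikube addons enable metrics-server
$ kubectl top node minikube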

I got the same issue, and @Arghya Sadhu's answer does help. In my case, running

$ kubectl patch nodes minikube --patch '{"spec":{"unschedulable": false}}'

was enough on its own to mark the node schedulable again; the node.kubernetes.io/unschedulable taint is removed along with it.

Right after, I noticed that kubectl has the cordon and uncordon subcommands to mark a node as unschedulable or schedulable.
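
For example, against the single-node minikube cluster from the question:

$ kubectl cordon minikube     # node STATUS becomes Ready,SchedulingDisabled
$ kubectl get nodes           # confirm the status change
$ kubectl uncordon minikube   # node is schedulable again; the taint is removed too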

Marcenemarcescent answered 15/7, 2021 at 8:50
