A pod from my Deployment was evicted due to memory consumption:
Type     Reason   Age  From                               Message
----     ------   ---- ----                               -------
Warning  Evicted  1h   kubelet, gke-XXX-default-pool-XXX  The node was low on resource: memory. Container my-container was using 1700040Ki, which exceeds its request of 0.
Normal   Killing  1h   kubelet, gke-XXX-default-pool-XXX  Killing container with id docker://my-container:Need to kill Pod
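(For context, the events above are trimmed from the pod's event stream; something like the commands below should reproduce them, with the pod name as a placeholder.)

# Show the pod's events, including the eviction (pod name is a placeholder)
kubectl describe pod <my-pod>

# Or list recent events for the namespace, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp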
I tried to grant it more memory by adding the following to my Deployment YAML:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
      - name: my-container
        image: my-container:latest
        ...
        resources:
          requests:
            memory: "3Gi"
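For reference, this is the general shape a resources stanza can take with both a request and a limit; the limit below is purely illustrative, the Deployment above only sets the request:

resources:
  requests:
    memory: "3Gi"   # amount the scheduler must find free on a node
  limits:
    memory: "3Gi"   # illustrative; would cap the container at the same amount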
However, with that request in place, the pod failed to schedule:
Type     Reason             Age               From                Message
----     ------             ----              ----                -------
Warning  FailedScheduling   4s (x5 over 13s)  default-scheduler   0/3 nodes are available: 3 Insufficient memory.
Normal   NotTriggerScaleUp  0s                cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added)
The Deployment specifies only a single container. I'm using GKE with autoscaling; the nodes in the default (and only) pool have 3.75 GB of memory each.
From trial and error, I found that the maximum memory I can request is "2Gi". Why can't I utilize the full 3.75 GB of a node with a single pod? Do I need nodes with a larger memory capacity?
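For reference, I assume something like the following would show what a node actually offers to pods (the node name is a placeholder):

# List the nodes in the pool (names are placeholders)
kubectl get nodes

# Describe one node; its Capacity and Allocatable sections show how much
# memory pods can actually request on it
kubectl describe node gke-XXX-default-pool-XXX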