How to stop AWS EKS Worker Instances

I wonder whether it is possible to temporarily stop the worker VM instances so they are not running at night, when I am not working on cluster development. So far the only way I am aware of to "stop" the instances from running is to delete the cluster itself, which I don't want to do. Any suggestions are highly appreciated.

P.S. Edited later

The cluster was created following steps outlined in this guide.

Soapberry answered 16/7, 2019 at 0:44 Comment(2)
How did you create your EKS worker nodes? Did you use the CloudFormation template provided by this guide? If so, I think you can just update your CloudFormation stack and set NodeAutoScalingGroupDesiredCapacity to 0 (zero).Sapanwood
I created the cluster by following the steps outlined in this guide: docs.aws.amazon.com/eks/latest/userguide/…Soapberry
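
A hedged sketch of what the first comment suggests, done from the CLI: update the worker-node CloudFormation stack and set NodeAutoScalingGroupDesiredCapacity to 0. The stack name eks-worker-nodes is a placeholder, and every other parameter in the template would need a ParameterKey=<name>,UsePreviousValue=true entry so it keeps its current value:

# Scale the worker-node stack's desired capacity to zero (placeholder stack name;
# add UsePreviousValue=true entries for the template's other parameters).
aws cloudformation update-stack \
  --stack-name eks-worker-nodes \
  --use-previous-template \
  --parameters ParameterKey=NodeAutoScalingGroupDesiredCapacity,ParameterValue=0 \
  --capabilities CAPABILITY_IAM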

Go to the EC2 Instances dashboard for your node group, open Auto Scaling Groups from the navigation panel (near the bottom), select your group's checkbox, click Edit, and change the Desired, Min, and Max capacities to 0.

Auto Scaling groups -> Group size
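
A rough CLI equivalent of those console steps (a sketch; the Auto Scaling group name is a placeholder, and if the console or API refuses a maximum of 0, leave --max-size at 1 and only drop the minimum and desired values):

# Set the worker node Auto Scaling group's sizes to zero (placeholder group name).
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name eks-worker-nodegroup-asg \
  --min-size 0 \
  --max-size 0 \
  --desired-capacity 0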

Venose answered 30/7, 2021 at 7:25 Comment(2)
This no longer seems to be possible... The message from the AWS console says: "The minimum allowed size for maximum number of nodes is 1."Holystone
I changed all the capacities above to 0. It turned out that all of my worker nodes were terminated. @FisalCounterchange

I'm just learning myself but this might help. If you have eksctl installed, you can use it from the command line to scale your cluster. I scale mine down to the min size when I'm not using it:

eksctl get cluster
eksctl get nodegroup --cluster CLUSTERNAME
eksctl scale nodegroup --cluster CLUSTERNAME --name NODEGROUPNAME --nodes NEWSIZE

To scale the node group all the way down to zero, use this (setting --nodes-max to 0 threw errors):

eksctl scale nodegroup --cluster CLUSTERNAME --name NODEGROUPNAME --nodes 0 --nodes-max 1 --nodes-min 0
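
To bring the nodes back in the morning, the same command with nonzero sizes should work (a sketch; 2 is just an example size):

# Restore the node group to two nodes (placeholder cluster, node group, and size).
eksctl scale nodegroup --cluster CLUSTERNAME --name NODEGROUPNAME --nodes 2 --nodes-max 2 --nodes-min 0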

Impetrate answered 1/3, 2020 at 17:59 Comment(0)

Edit the Auto Scaling group and set the instance counts to 0; this will shut down all worker nodes. You can then use AWS Systems Manager Automation documents to schedule a repetitive action that stops and starts the nodes at given times. You can't stop the master nodes, as they are managed by AWS.
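
The scheduling can also be done with the Auto Scaling group's own scheduled actions rather than Automation documents; a sketch (group and action names are placeholders, recurrence is cron syntax in UTC):

# Scale the worker ASG to zero every night at 22:00 UTC...
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name eks-worker-nodegroup-asg \
  --scheduled-action-name nightly-scale-down \
  --recurrence "0 22 * * *" \
  --min-size 0 --max-size 0 --desired-capacity 0

# ...and bring one node back every morning at 07:00 UTC.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name eks-worker-nodegroup-asg \
  --scheduled-action-name morning-scale-up \
  --recurrence "0 7 * * *" \
  --min-size 1 --max-size 1 --desired-capacity 1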

Eboni answered 28/5, 2020 at 8:47 Comment(0)

Take a look at kube-downscaler, which can be deployed to the cluster to scale deployments in and out based on the time of day.

More cost-reduction techniques are covered in this blog.
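
A sketch of how kube-downscaler is typically driven, assuming the downscaler/uptime annotation documented by that project (my-app is a placeholder Deployment); outside the given window the downscaler scales the Deployment to zero:

# Keep my-app running only on weekdays 08:00-20:00 UTC; scale it down otherwise.
kubectl annotate deployment my-app 'downscaler/uptime=Mon-Fri 08:00-20:00 UTC'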

Breadstuff answered 8/9, 2020 at 20:12 Comment(0)

You could use a CronJob that scales every Deployment to zero replicas, stopping all the Pods:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-down-all-deployments
spec:
  schedule: "0 22 * * *"  # Run every day at 10:00 PM
  jobTemplate:
    spec:
      template:
        spec:
          # Hypothetical service account; it needs RBAC permission to list
          # namespaces and scale Deployments (see the sketch below).
          serviceAccountName: deployment-scaler
          containers:
          - name: scale-down-container
            # The bare "kubectl" image does not exist; any image that ships
            # kubectl and a shell works, e.g. bitnami/kubectl.
            image: bitnami/kubectl:latest
            command: ["sh", "-c", "for ns in $(kubectl get namespaces -o=jsonpath='{.items[*].metadata.name}'); do for dp in $(kubectl get deployments -n $ns -o=jsonpath='{.items[*].metadata.name}'); do kubectl scale --replicas=0 deployment $dp -n $ns; done; done"]
          restartPolicy: OnFailure
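
For that job to actually run, the pod's service account must be allowed to list namespaces and scale Deployments in every namespace. A minimal sketch using the hypothetical deployment-scaler name from above (it assumes the CronJob lives in the default namespace; the verbs apply to every listed resource, so this is slightly broader than strictly necessary):

# Create the service account referenced by the CronJob (placeholder name).
kubectl create serviceaccount deployment-scaler

# Grant it list access to namespaces and the ability to scale Deployments cluster-wide.
kubectl create clusterrole deployment-scaler \
  --verb=get,list,update,patch \
  --resource=deployments,deployments/scale,namespaces
kubectl create clusterrolebinding deployment-scaler \
  --clusterrole=deployment-scaler \
  --serviceaccount=default:deployment-scaler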
Corybantic answered 31/5 at 11:05 Comment(0)
