How to completely uninstall Kubernetes

I installed a Kubernetes cluster using kubeadm, following this guide. After some time I decided to reinstall K8s, but I ran into trouble removing all the related files, and I couldn't find any docs on the official site about how to remove a cluster installed via kubeadm. Has anybody run into the same problem and found the proper way to remove all the files and dependencies? Thank you in advance.

For more information: I removed kubeadm, kubectl and kubelet using apt-get purge/remove, but when I started installing the cluster again I got the following errors:

[preflight] Some fatal errors occurred:
    Port 6443 is in use
    Port 10251 is in use
    Port 10252 is in use
    /etc/kubernetes/manifests is not empty
    /var/lib/kubelet is not empty
    Port 2379 is in use
    /var/lib/etcd is not empty
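
For anyone debugging the same preflight failures, a quick way to see what is still holding those ports and directories (a diagnostic sketch using standard Linux tools):

# Show which processes still listen on the kubernetes ports
sudo ss -tlnp | grep -E ':(6443|10251|10252|2379)'

# Inspect the leftover state directories the preflight check complains about
sudo ls -la /etc/kubernetes/manifests /var/lib/kubelet /var/lib/etcd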
Antler answered 22/6, 2017 at 11:35 Comment(1)
In Ubuntu 20.04, "snap remove microk8s" seems to do the job. – Ioannina

Use the kubeadm reset command. This will un-configure the Kubernetes cluster.
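
A minimal invocation (a sketch; the -f flag skips the confirmation prompt):

# Tear down everything kubeadm init/join set up on this node
sudo kubeadm reset -f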

Savitt answered 22/6, 2017 at 13:56 Comment(3)
Thank you, but I am looking for a complete uninstall of kubeadm and all related dependencies to solve my root problem (#44717722). Before the reinstallation everything worked fine and I was able to see logs, so I suspect some incorrectly installed dependencies were left behind and made the same issue appear after each subsequent installation, which is why I want to remove K8s completely from my machine. – Antler
Then you need to remove the Kubernetes and Docker RPMs and re-install them. – Savitt
My containers kept restarting. The -f flag forced the reset and stopped the container restarts: kubeadm reset -f – Intercommunion

In my "Ubuntu 16.04", I use next steps to completely remove and clean Kubernetes (installed with "apt-get"):

# Un-configure the cluster created by kubeadm
kubeadm reset
# Purge all the Kubernetes packages and their configuration
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
# Remove packages that were only installed as dependencies
sudo apt-get autoremove
# Delete the local kubectl configuration
sudo rm -rf ~/.kube

Then restart the computer.
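
To verify the packages are really gone, one quick sanity check (both commands should come back empty if the purge succeeded):

dpkg -l | grep -i kube
which kubeadm kubectl kubelet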

Morava answered 13/3, 2018 at 10:3 Comment(4)
I followed these steps, but now every time I open the terminal this message appears: "kubectl: command not found. Command 'minikube' not found, did you mean: command 'minitube' from deb minitube. Try: sudo apt install <deb name>" – Pomiferous
@MichaelPacheco You probably have some remnants of minikube in .bashrc or another configuration file. – Diarrhea
How do I remove the Docker-related images in one go? They all start with k8s.* – Encyclopedist
The restart is important as it will clear the iptables rules. – Gossipmonger

If you are clearing the cluster so that you can start again, then, in addition to what @rib47 said, I also do the following to ensure my systems are in a state ready for kubeadm init again:

kubeadm reset -f
# Remove leftover CNI, kubernetes, etcd, and kubelet state
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/*
# Flush and delete the chains in every table that kubernetes and the CNI plugins touch
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X
systemctl restart docker

You then need to re-install docker.io, kubeadm, kubectl, and kubelet to make sure they are at the latest versions for your distribution before you re-initialize the cluster.
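
For example, on Debian/Ubuntu the re-install might look like this (a sketch, assuming the Kubernetes apt repository is already configured):

# Re-install the runtime and the kubernetes tooling at the latest packaged versions
sudo apt-get update
sudo apt-get install -y docker.io kubeadm kubectl kubelet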

EDIT: Discovered that Calico adds firewall rules to the raw table, so that needs clearing out as well.

Gabriello answered 30/6, 2020 at 13:8 Comment(0)

kubeadm reset

# On Debian-based operating systems:
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove

# On CentOS-based operating systems:
sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube*
sudo yum autoremove

# For all:
sudo rm -rf ~/.kube
Loopy answered 18/3, 2020 at 6:14 Comment(2)
While this code may solve the question, including an explanation of how and why this solves the problem would really help to improve the quality of your post, and probably result in more up-votes. Remember that you are answering the question for readers in the future, not just the person asking now. Please edit your answer to add explanations and give an indication of what limitations and assumptions apply. – Dronski
Lacks a little bit of explanation, but this answer should be at the top. – Iconoscope

If you want to make it easily repeatable, it makes sense to turn this into a script. This assumes you are using a Debian-based OS:

#!/bin/sh
# Kube Admin Reset
kubeadm reset

# Remove all packages related to Kubernetes
apt remove -y kubeadm kubectl kubelet kubernetes-cni 
apt purge -y kube*

# Remove docker containers/images (optional, if using docker)
docker image prune -a
systemctl restart docker
apt purge -y docker-engine docker docker.io docker-ce docker-ce-cli containerd containerd.io runc --allow-change-held-packages

# Remove packages that are no longer needed

apt autoremove -y

# Remove all folders associated with kubernetes, etcd, and docker
rm -rf ~/.kube
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/lib/etcd2/ /var/run/kubernetes ~/.kube/* 
rm -rf /var/lib/docker /etc/docker /var/run/docker.sock
rm -f /etc/apparmor.d/docker /etc/systemd/system/etcd* 

# Delete docker group (optional)
groupdel docker

# Clear the iptables
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X

NOTE:

This will destroy everything related to Kubernetes, etcd, and Docker on the node/server this script is run against!
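
To use it, save the script to a file and run it as root (the file name below is just an example):

chmod +x k8s-teardown.sh
sudo ./k8s-teardown.sh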

Algeria answered 16/3, 2022 at 19:15 Comment(0)

The guide you linked now has a Tear Down section:

Talking to the master with the appropriate credentials, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
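
The <node name> placeholder is the name reported by:

kubectl get nodes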

Then, on the node being removed, reset all kubeadm installed state:

kubeadm reset
Flatworm answered 13/4, 2018 at 10:41 Comment(1)
Installed a new Ubuntu 18.04 and I see Kubernetes running; I don't know how it got installed. How do I delete it? I don't have kubeadm or kubectl on the system (that I can find). – Bruit

I use the following commands to completely uninstall an existing Kubernetes cluster and its running Docker containers:

sudo kubeadm reset

sudo apt purge kubectl kubeadm kubelet kubernetes-cni -y
sudo apt autoremove
sudo rm -fr /etc/kubernetes/; sudo rm -fr ~/.kube/; sudo rm -fr /var/lib/etcd; sudo rm -rf /var/lib/cni/

sudo systemctl daemon-reload

# Flush the iptables rules left behind by kubernetes and the CNI plugin
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

# remove all running docker containers
docker rm -f `docker ps -a | grep "k8s_" | awk '{print $1}'`
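
If the cluster uses containerd rather than Docker as its container runtime, a similar cleanup can be done with crictl (a sketch; assumes crictl is installed and configured to talk to containerd):

# Force-remove every container the runtime still knows about
sudo crictl ps -aq | xargs -r sudo crictl rm -f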
Suasion answered 12/4, 2021 at 8:38 Comment(0)
