To reinstall my local cluster with kubeadm, I ran the following commands:
sudo apt-get purge -y kubeadm kubectl kubelet kubernetes-cni kube-apiserver kube-scheduler kube-controller-manager kube-proxy
sudo apt-get autoremove -y
Then I rebooted my machine.
Strangely, ps still shows many kube processes:
(base) ➜ ~ ps -ef | grep kube
root 9198 8816 3 12:46 ? 00:01:11 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.23.0.2 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.9 --provider-id=kind://docker/kind/kind-control-plane --fail-swap-on=false --cgroup-root=/kubelet
root 9480 9312 2 12:46 ? 00:00:52 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kind --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --enable-hostpath-provisioner=true --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --use-service-account-credentials=true
root 9513 9313 0 12:46 ? 00:00:11 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
root 9550 9352 6 12:46 ? 00:02:13 kube-apiserver --advertise-address=172.23.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --runtime-config= --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root 9557 9344 3 12:46 ? 00:01:06 etcd --advertise-client-urls=https://172.23.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://172.23.0.2:2380 --initial-cluster=kind-control-plane=https://172.23.0.2:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://172.23.0.2:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://172.23.0.2:2380 --name=kind-control-plane --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root 10082 10017 0 12:46 ? 00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane
But even as root, I cannot find these binaries (e.g. kube-proxy, kubelet, kube-apiserver, and kube-scheduler) anywhere in the expected directories:
(base) ➜ ~ ll /usr/bin/kubelet
ls: cannot access '/usr/bin/kubelet': No such file or directory
(base) ➜ ~ kubectl
zsh: command not found: kubectl
(base) ➜ ~
(base) ➜ ~ sudo kube-apiserver --help
sudo: kube-apiserver: command not found
(base) ➜ ~ cd /usr/bin
(base) ➜ bin ls kubelet
ls: cannot access 'kubelet': No such file or directory
(base) ➜ bin ll kube-controller-manager
ls: cannot access 'kube-controller-manager': No such file or directory
(base) ➜ bin ll kube-apiserver
ls: cannot access 'kube-apiserver': No such file or directory
Can anyone help explain how these processes are being started? Thanks.
The challenge you are facing is that even though you removed the Kubernetes packages with apt-get purge and apt-get autoremove, the systemd services that manage these processes are still running and trying to start them.
In other words, apt-get purge did successfully remove the Kubernetes binaries such as kubelet and kube-apiserver. However, the systemd services that manage those binaries may still be enabled and attempting to run them, even though the underlying files are gone.
/etc/systemd/system/ is the typical location for the systemd unit files of Kubernetes components.
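To check whether any such units are still registered on your machine, you can list them first (a quick sketch; the grep pattern assumes the units follow the usual kube naming):
systemctl list-unit-files | grep -i kube
systemctl status kubelet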
To deal with this situation, run
sudo systemctl stop kubelet kube-apiserver kube-controller-manager kube-scheduler kube-proxy etcd
to terminate any running service processes. Also consider disabling them. To disable these services, simply type
sudo systemctl disable kubelet kube-apiserver kube-controller-manager kube-scheduler kube-proxy etcd
so that they are no longer started when the PC boots. Additionally, after stopping Kubernetes, check whether any leftover configuration files or folders remain. These would live in directories such as /etc/kubernetes or /var/lib/kubernetes.
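For example, you could remove the usual kubeadm state directories (a sketch based on a standard kubeadm layout; verify each path on your machine before deleting anything):
sudo rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd
rm -rf ~/.kube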
If you follow these steps carefully, properly disabling all remaining Kubernetes services and cleaning everything up, the machine will be ready for a fresh installation via kubeadm.
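For reference, a typical reinstall afterwards would look something like this (assuming the Kubernetes apt repository is already configured on your system):
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo kubeadm init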