Xu Shijie's questions

Xu Shijie
Asked: 2024-07-27 13:26:09 +0800 CST

Where are kubelet, kube-scheduler, and etcd installed after purging them?

  • 5

To reinstall my local cluster with kubeadm, I ran the following commands:

sudo apt-get purge -y kubeadm kubectl kubelet kubernetes-cni  kubelet kube-apiserver kube-scheduler kube-controller-manager kube-proxy
sudo apt-get autoremove -y

Then I rebooted my machine.

Strangely, I can still see many kube processes with ps:

(base) ➜  ~ ps -ef | grep kube 
root        9198    8816  3 12:46 ?        00:01:11 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.23.0.2 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.9 --provider-id=kind://docker/kind/kind-control-plane --fail-swap-on=false --cgroup-root=/kubelet
root        9480    9312  2 12:46 ?        00:00:52 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kind --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --enable-hostpath-provisioner=true --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --use-service-account-credentials=true
root        9513    9313  0 12:46 ?        00:00:11 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
root        9550    9352  6 12:46 ?        00:02:13 kube-apiserver --advertise-address=172.23.0.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --runtime-config= --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root        9557    9344  3 12:46 ?        00:01:06 etcd --advertise-client-urls=https://172.23.0.2:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://172.23.0.2:2380 --initial-cluster=kind-control-plane=https://172.23.0.2:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://172.23.0.2:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://172.23.0.2:2380 --name=kind-control-plane --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root       10082   10017  0 12:46 ?        00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane

But even as root, I cannot find these files (e.g. kube-proxy, kubelet, kube-apiserver, and kube-scheduler) in those directories:

(base) ➜  ~ ll /usr/bin/kubelet                
ls: cannot access '/usr/bin/kubelet': No such file or directory
(base) ➜  ~ kubectl 
zsh: command not found: kubectl
(base) ➜  ~ 
(base) ➜  ~ sudo kube-apiserver --help            
sudo: kube-apiserver: command not found
(base) ➜  ~ cd /usr/bin          
(base) ➜  bin ls kubelet 
ls: cannot access 'kubelet': No such file or directory
(base) ➜  bin ll kube-controller-manager                
ls: cannot access 'kube-controller-manager': No such file or directory
(base) ➜  bin ll kube-apiserver                       
ls: cannot access 'kube-apiserver': No such file or directory

Can anyone help explain how these processes were started? Thanks.
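
For what it's worth, the --provider-id=kind://docker/kind/kind-control-plane flag in the kubelet command line above suggests these processes belong to a kind node running inside a Docker container, so the binaries live in that container's filesystem rather than under /usr/bin on the host. A minimal way to check, assuming Docker is available and the node container is named kind-control-plane as in the ps output:

# list kind node containers running on the host
docker ps --filter "name=kind-control-plane"

# look for the kubelet binary inside the node container rather than on the host
docker exec kind-control-plane ls -l /usr/bin/kubelet

If that is the case, kind delete cluster (not apt-get purge on the host) is what stops and removes these processes.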

kubernetes
  • 1 answer
  • 60 Views
Xu Shijie
Asked: 2024-07-26 15:02:55 +0800 CST

Ray Cluster setup fails with "proxyconnect tcp: dial tcp: connection refused"

  • 6

While setting up a Ray Cluster I repeatedly get "proxyconnect tcp: dial tcp 127.0.0.1:1082: connect: connection refused". Where do I configure the proxy that Kubernetes uses?

I followed the Ray Cluster quickstart instructions:

helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm repo update

# Install both CRDs and KubeRay operator v1.1.1.
helm install kuberay-operator kuberay/kuberay-operator --version 1.1.1

# Confirm that the operator is running in the namespace `default`.
kubectl get pods
# NAME                                READY   STATUS    RESTARTS   AGE
# kuberay-operator-7fbdbf8c89-pt8bk   1/1     Running   0          27s 

At step 2 I got a pod in ErrImagePull status; here is the actual output:

(base) ➜  ~ helm install kuberay-operator kuberay/kuberay-operator --version 1.0.0
NAME: kuberay-operator                                                                                                                                                                                         
LAST DEPLOYED: Fri Jul 26 08:56:30 2024                                                                                                                                                                        
NAMESPACE: default                                                                                                                                                                                             
STATUS: deployed                                                                                                                                                                                               
REVISION: 1
TEST SUITE: None        
(base) ➜  ~ kubectl get pods                                                                           
NAME                                READY   STATUS         RESTARTS   AGE
kuberay-operator-5d64d88fdb-shrkv   0/1     ErrImagePull   0          10s
(base) ➜  ~ kubectl describe pod kuberay-operator-5d64d88fdb-shrkv 
Name:             kuberay-operator-5d64d88fdb-shrkv 
Namespace:        default              
Priority:         0                
Service Account:  kuberay-operator                                                                     
Node:             kind-control-plane/172.23.0.2                                                        
Start Time:       Fri, 26 Jul 2024 08:56:31 +0800
Labels:           app.kubernetes.io/component=kuberay-operator    
                  app.kubernetes.io/instance=kuberay-operator     
                  app.kubernetes.io/name=kuberay-operator                                                                                                                                                      
                  pod-template-hash=5d64d88fdb         
.....
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  22s               default-scheduler  Successfully assigned default/kuberay-operator-5d64d88fdb-shrkv to kind-control-plane
  Normal   BackOff    21s               kubelet            Back-off pulling image "kuberay/operator:v1.0.0"
  Warning  Failed     21s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    6s (x2 over 21s)  kubelet            Pulling image "kuberay/operator:v1.0.0"
  Warning  Failed     6s (x2 over 21s)  kubelet            Failed to pull image "kuberay/operator:v1.0.0": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kuberay/operator:v1.0.0": failed to resolve reference "docker.io/kuberay/operator:v1.0.0": failed to do request: Head "https://registry-1.docker.io/v2/kuberay/operator/manifests/v1.0.0": proxyconnect tcp: dial tcp 127.0.0.1:1082: connect: connection refused
  Warning  Failed     6s (x2 over 21s)  kubelet            Error: ErrImagePull

The puzzling part is the message: proxyconnect tcp: dial tcp 127.0.0.1:1082: connect: connection refused

I checked the following places but found no proxy configuration:

(base) ➜  ~  echo $HTTP_PROXY

(base) ➜  ~  echo $HTTPS_PROXY

(base) ➜  ~ cat /etc/environment                 
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
(base) ➜  ~ cat /etc/docker/daemon.json 
{
    "registry-mirrors": [
        "https://5wxalzzb.mirror.aliyuncs.com",    
        "https://hub-mirror.c.163.com",
        "https://mirror.iscas.ac.cn",
        "https://docker.m.daocloud.io"
    ],
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}

To work around the pod issue, I started an HTTP proxy with no AuthZ/AuthN on local port 1082 and reinstalled kuberay/operator again, but the proxy error message in the events is the same.
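
Since this looks like a kind cluster (see the previous question), the image pull is performed by containerd inside the kind node container, which can carry its own proxy settings independent of the host shell and of /etc/docker/daemon.json. A minimal sketch of where to look, assuming a node container named kind-control-plane; the drop-in path below is an assumption, not something confirmed by the output above:

# proxy variables seen inside the node container and by containerd
docker exec kind-control-plane env | grep -i proxy
docker exec kind-control-plane systemctl show containerd --property=Environment

# a common (assumed) location for a containerd proxy drop-in inside the node
docker exec kind-control-plane cat /etc/systemd/system/containerd.service.d/http-proxy.conf

If kind picked up HTTP_PROXY/HTTPS_PROXY from the shell when the cluster was created, recreating the cluster with those variables unset (or pointing at a working proxy) is one way to change what containerd uses.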

proxy
  • 1 answer
  • 154 Views
