Questions tagged [kubernetes] (unix)

Marc Le Bihan
Asked: 2025-04-18 00:16:46 +0800 CST

Is a multi-machine Vagrant setup a good choice for simulating a Kubernetes cluster?

  • 4

I am reading a book on teaching myself Kubernetes.

Many chapters of the book walk through operating a Kubernetes cluster, and it suggests that readers create an account with a cloud provider if they can, or alternatively build a cluster out of a few Raspberry Pis. I don't want, and can't afford, either of those options. I have a single computer at home, and that's it.

Is there anything that would stop me from creating a multi-machine Vagrant setup to provide all the machines my book talks about?
I believe this should work...

My question is simple, even naive. But if I am bound to hit roadblocks or major difficulties, I would rather know right away, before going down the wrong path. Thanks!
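
If the single machine has enough RAM, a minimal multi-machine Vagrantfile along the following lines is one way to sketch the setup such books describe. This is a hedged sketch, not the book's example: the box name, private-network IPs and VM sizes are placeholder assumptions.

# Write a minimal multi-machine Vagrantfile and bring the VMs up on one host
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"        # any box with kubeadm support works

  config.vm.define "master" do |node|
    node.vm.hostname = "master"
    node.vm.network "private_network", ip: "192.168.56.10"
    node.vm.provider "virtualbox" do |vb|
      vb.memory = 2048
      vb.cpus   = 2
    end
  end

  (1..2).each do |i|
    config.vm.define "worker-#{i}" do |node|
      node.vm.hostname = "worker-#{i}"
      node.vm.network "private_network", ip: "192.168.56.#{10 + i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 2048
      end
    end
  end
end
EOF

vagrant up            # creates master, worker-1 and worker-2
vagrant ssh master    # then install kubeadm/kubelet inside each VM as the book describes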

kubernetes
  • 1 Answer
  • 39 Views
Rafael Mora
Asked: 2025-03-10 10:17:12 +0800 CST

Why are my network connections refused, and why does ping between my servers not work?

  • 5

Cluster information:

kubectl version
Client Version: v1.29.14
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.14
Cloud being used: bare-metal
Installation method:
Host OS: AlmaLinux 8
CNI and version: Flannel ver: 0.26.4
CRI and version: cri-dockerd ver: 0.3.16

I have one master node and I created my first worker node. Before running kubeadm join on the worker, I could ping from the worker to the master and vice versa without any problem. Now that I have run kubeadm join ..., I can no longer ping between them, and I get this error:

[root@worker-1 ~]# kubectl get nodes -o wide
E0308 19:38:31.027307   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:32.051145   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:33.075350   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:34.099160   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:35.123011   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
The connection to the server 198.58.126.88:6443 was refused - did you specify the right host or port?

Pinging the master from the worker:

[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
From 198.58.126.88 icmp_seq=1 Destination Port Unreachable
From 198.58.126.88 icmp_seq=2 Destination Port Unreachable
From 198.58.126.88 icmp_seq=3 Destination Port Unreachable

If I run this:

[root@worker-1 ~]# iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

ping starts working:

[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
64 bytes from 198.58.126.88: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 198.58.126.88: icmp_seq=2 ttl=64 time=0.025 ms

(ping works against the IPv6 address but not the IPv4 address) but after about a minute it gets blocked again:

[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
From 198.58.126.88 icmp_seq=1 Destination Port Unreachable
From 198.58.126.88 icmp_seq=2 Destination Port Unreachable
[root@worker-1 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1
[root@worker-1 ~]# cd /etc/systctl.d/
-bash: cd: /etc/systctl.d/: No such file or directory

Port 6443/TCP on the worker node is closed; I tried to open it but without success:

nmap 172.235.135.144 -p 6443                                                                                            ✔  2.7.4   06:19:47
Starting Nmap 7.95 ( https://nmap.org ) at 2025-03-11 16:22 -05
Nmap scan report for 172-235-135-144.ip.linodeusercontent.com (172.235.135.144)
Host is up (0.072s latency).

PORT     STATE  SERVICE
6443/tcp closed sun-sr-https

Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds
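
Incidentally, 6443 is the API server port, and a "connection refused" on it usually means kube-apiserver is not listening or a host firewall is filtering it. A hedged diagnostic sketch for an AlmaLinux/firewalld host, using the master address from the error messages above:

# On the master: is the API server actually listening on 6443?
ss -tlnp | grep 6443

# Is firewalld filtering it? (AlmaLinux 8 ships firewalld by default)
firewall-cmd --list-ports
firewall-cmd --add-port=6443/tcp --permanent && firewall-cmd --reload

# From the worker: test the TCP path to the master's API server
curl -k https://198.58.126.88:6443/healthz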

Master node:

[root@master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 1312K packets, 202M bytes)
 pkts bytes target     prot opt in     out     source               destination
1301K  201M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0
1311K  202M KUBE-IPVS-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
1311K  202M KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
1311K  202M KUBE-NODE-PORT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check rules */
   40  3520 ACCEPT     icmp --  *      *       198.58.126.88        0.0.0.0/0
    0     0 ACCEPT     icmp --  *      *       172.233.172.101      0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  950  181K KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
  950  181K KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
  212 12626 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
  212 12626 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-09363fc9af47  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   20  1068 DOCKER     all  --  *      br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-09363fc9af47 !br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    4   184 DOCKER     all  --  *      br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-05a2ea8c281b !br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-05a2ea8c281b br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-032fd1b78367  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    9   504 ACCEPT     all  --  br-032fd1b78367 !br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-032fd1b78367 br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
  132  7920 ACCEPT     all  --  br-ae1997e801f3 !br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-ae1997e801f3 br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   14   824 DOCKER     all  --  *      br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
    4   240 ACCEPT     all  --  br-9f6d34f7e48a !br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-9f6d34f7e48a br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
   29  1886 FLANNEL-FWD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* flanneld forward */

Chain OUTPUT (policy ACCEPT 1309K packets, 288M bytes)
 pkts bytes target     prot opt in     out     source               destination
1298K  286M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0
1308K  288M KUBE-IPVS-OUT-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */

Chain DOCKER (6 references)
 pkts bytes target     prot opt in     out     source               destination
   14   824 ACCEPT     tcp  --  !br-9f6d34f7e48a br-9f6d34f7e48a  0.0.0.0/0            172.24.0.2           tcp dpt:3001
    0     0 ACCEPT     tcp  --  !br-ae1997e801f3 br-ae1997e801f3  0.0.0.0/0            172.21.0.2           tcp dpt:3000
    4   184 ACCEPT     tcp  --  !br-05a2ea8c281b br-05a2ea8c281b  0.0.0.0/0            172.22.0.2           tcp dpt:4443
   12   700 ACCEPT     tcp  --  !br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            172.19.0.2           tcp dpt:4443
    8   368 ACCEPT     tcp  --  !br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            172.19.0.3           tcp dpt:443

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
  212 12626 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FLANNEL-FWD (1 references)
 pkts bytes target     prot opt in     out     source               destination
   29  1886 ACCEPT     all  --  *      *       10.244.0.0/16        0.0.0.0/0            /* flanneld forward */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.244.0.0/16        /* flanneld forward */

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
  212 12626 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-NODE-PORT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* Kubernetes health check node port */ match-set KUBE-HEALTH-CHECK-NODE-PORT dst

Chain KUBE-PROXY-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-SOURCE-RANGES-FIREWALL (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-IPVS-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER dst,dst
    2   104 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-HEALTH-CHECK-NODE-PORT dst
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable

Chain KUBE-IPVS-OUT-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *      !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

Worker node:

[root@worker-1 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
18469 1430K KUBE-IPVS-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
10534  954K KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
10534  954K KUBE-NODE-PORT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check rules */
10767 1115K KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
    0     0 KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
18359 1696K KUBE-IPVS-OUT-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
18605 1739K KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *      !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-NODE-PORT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* Kubernetes health check node port */ match-set KUBE-HEALTH-CHECK-NODE-PORT dst

Chain KUBE-PROXY-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-SOURCE-RANGES-FIREWALL (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-IPVS-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-HEALTH-CHECK-NODE-PORT dst
   45  2700 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable

Chain KUBE-IPVS-OUT-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination

If I run iptables -F INPUT on the worker, ping starts working again:

[root@worker-1 ~]# iptables -F INPUT
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
64 bytes from 198.58.126.88: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 198.58.126.88: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 198.58.126.88: icmp_seq=3 ttl=64 time=0.037 ms
64 bytes from 198.58.126.88: icmp_seq=4 ttl=64 time=0.039 ms
64 bytes from 198.58.126.88: icmp_seq=5 ttl=64 time=0.023 ms
64 bytes from 198.58.126.88: icmp_seq=6 ttl=64 time=0.022 ms
64 bytes from 198.58.126.88: icmp_seq=7 ttl=64 time=0.070 ms
64 bytes from 198.58.126.88: icmp_seq=8 ttl=64 time=0.072 ms
^C
--- 198.58.126.88 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7197ms
rtt min/avg/max/mdev = 0.022/0.045/0.072/0.017 ms

strace output from the worker:

[root@worker-1 ~]# iptables -F INPUT
[root@worker-1 ~]# strace -eopenat kubectl version
openat(AT_FDCWD, "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", O_RDONLY) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
openat(AT_FDCWD, "/usr/bin/kubectl", O_RDONLY|O_CLOEXEC) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
openat(AT_FDCWD, "/usr/local/sbin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/local/bin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/sbin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/bin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/root/bin", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/root/.kube/config", O_RDONLY|O_CLOEXEC) = 3
Client Version: v1.29.14
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server 198.58.126.88:6443 was refused - did you specify the right host or port?
+++ exited with 1 +++

nftables on the worker before and after running the kubeadm join command (screenshot not reproduced here):

Chain KUBE-IPVS-FILTER (0 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere             match-set KUBE-LOAD-BALANCER dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-CLUSTER-IP dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-EXTERNAL-IP dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-HEALTH-CHECK-NODE-PORT dst
REJECT     all  --  anywhere             anywhere             ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable
[root@worker-1 ~]# sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N KUBE-FIREWALL
-N KUBE-KUBELET-CANARY
-N KUBE-FORWARD
-N KUBE-NODE-PORT
-N KUBE-PROXY-FIREWALL
-N KUBE-SOURCE-RANGES-FIREWALL
-N KUBE-IPVS-FILTER
-N KUBE-IPVS-OUT-FILTER
-A INPUT -m comment --comment "kubernetes ipvs access filter" -j KUBE-IPVS-FILTER
-A INPUT -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check rules" -j KUBE-NODE-PORT
-A FORWARD -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A OUTPUT -m comment --comment "kubernetes ipvs access filter" -j KUBE-IPVS-OUT-FILTER
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-NODE-PORT -m comment --comment "Kubernetes health check node port" -m set --match-set KUBE-HEALTH-CHECK-NODE-PORT dst -j ACCEPT
-A KUBE-SOURCE-RANGES-FIREWALL -j DROP
-A KUBE-IPVS-FILTER -m set --match-set KUBE-LOAD-BALANCER dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-CLUSTER-IP dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-EXTERNAL-IP dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-EXTERNAL-IP-LOCAL dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-HEALTH-CHECK-NODE-PORT dst -j RETURN
-A KUBE-IPVS-FILTER -m conntrack --ctstate NEW -m set --match-set KUBE-IPVS-IPS dst -j REJECT --reject-with icmp-port-unreachable

As soon as the kubelet service starts running, connections from the worker to the master start being blocked; if the kubelet service is stopped, I can ping the master from the worker again.

What is causing the blocking on the worker node? Thanks.
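
The counters in the worker's KUBE-IPVS-FILTER chain above show new connections hitting the REJECT rule that matches the KUBE-IPVS-IPS ipset (45 packets on that line), and kube-proxy re-creates its rules on every sync, which would explain why a flush only helps for about a minute. A hedged diagnostic sketch, assuming kube-proxy is running in IPVS mode:

# On the worker: which addresses has kube-proxy put in the ipset the REJECT rule matches?
ipset list KUBE-IPVS-IPS

# Confirm the rejecting rule and watch whether its counters climb while pinging the master
iptables -nvL KUBE-IPVS-FILTER
watch -n1 'iptables -nvL KUBE-IPVS-FILTER | tail -n1'

# On the master (where kubectl still works): which proxy mode is configured, ipvs or iptables?
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A1 'mode:'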

kubernetes
  • 2 Answers
  • 67 Views
Emad Khavaninzadeh
Asked: 2024-12-28 20:33:36 +0800 CST

Checking that the etcd cluster is working properly

  • 5

When I run the command below, I get an error with exit code 1. Can someone tell me why I get this error and how to fix it? Thanks.

kubectl exec etcd-master -- etcdctl member list
{"level":"warn","ts":"2024-12-28T12:28:32.978637Z","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:63","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000370000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"error reading server preface: EOF\""}
Error: context deadline exceeded
command terminated with exit code 1

My etcd logs also look like this. My firewall is off and SELinux is permissive. My OS is Rocky Linux 9.3.

{"level":"warn","ts":"2024-12-28T12:42:13.018454Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52198","server-name":"","error":"tls: first record does not look like a TLS handshake"}
{"level":"warn","ts":"2024-12-28T12:42:14.022364Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52214","server-name":"","error":"tls: first record does not look like a TLS handshake"}
{"level":"warn","ts":"2024-12-28T12:42:15.330400Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"tls: first record does not look like a TLS handshake"}
{"level":"warn","ts":"2024-12-28T12:44:21.880909Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.56.101:42196","server-name":"","error":"tls: first record does not look like a TLS handshake"}
{"level":"warn","ts":"2024-12-28T12:44:22.883450Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.56.101:42210","server-name":"","error":"tls: first record does not look like a TLS handshake"}
{"level":"warn","ts":"2024-12-28T12:44:24.622810Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.56.101:42220","server-name":"","error":"tls: first record does not look like a TLS handshake"}
{"level":"warn","ts":"2024-12-28T12:44:26.821834Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.56.101:42222","server-name":"","error":"tls: first record does not look like a TLS handshake"}
{"level":"info","ts":"2024-12-28T12:46:45.062979Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":228269}
{"level":"info","ts":"2024-12-28T12:46:45.079467Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":228269,"took":"10.947531ms","hash":4017906930,"current-db-size-bytes":3809280,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1859584,"current-db-size-in-use":"1.9 MB"}
{"level":"info","ts":"2024-12-28T12:46:45.079800Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4017906930,"revision":228269,"compact-revision":227809}
kubernetes
  • 1 Answer
  • 13 Views
Dolphin
Asked: 2024-07-04 23:24:35 +0800 CST

Unable to fetch a Kubernetes ConfigMap

  • 5

In a Kubernetes 1.29.x cluster. First, confirm that the namespace contains the ConfigMaps:

➜  migration kubectl get configmaps -n reddwarf-cache

NAME                           DATA   AGE
cruise-redis-configuration     3      2y334d
cruise-redis-health            6      2y334d
cruise-redis-scripts           2      2y334d
kube-root-ca.crt               1      2y334d
reddwarf-redis-configuration   3      451d
reddwarf-redis-health          6      451d
reddwarf-redis-scripts         2      451d

When I try to fetch the ConfigMaps with this command:

➜  migration kubectl get configmaps -o yaml -n reddwarf-cache- > reddwarf-cache-configmap.yaml

➜  migration cat reddwarf-cache-configmap.yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The output is empty. I am sure I can access the ConfigMaps, so why can't I get the ConfigMap YAML from the cluster? I also tried:

kubectl get cm -o yaml -n reddwarf-cache- > reddwarf-cache-configmap.yaml
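
Worth noting: the working listing above uses the namespace reddwarf-cache, while both failing commands pass -n reddwarf-cache- with a trailing hyphen; listing resources in a namespace that does not exist returns an empty List rather than an error. A quick comparison, as a sketch:

# Namespaces that actually exist (a trailing '-' names a different, likely nonexistent one)
kubectl get ns | grep reddwarf

# Same export with the namespace spelled exactly as in the first listing
kubectl get configmaps -n reddwarf-cache -o yaml > reddwarf-cache-configmap.yaml
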
kubernetes
  • 1 Answer
  • 11 Views
Emad Khavaninzadeh
Asked: 2024-03-07 03:37:49 +0800 CST

What does {} mean in the containers section of a Kubernetes manifest?

  • 5

I would like to know what resources: {} in pod.spec.containers.resources means in Kubernetes.
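
For context, resources: {} is simply an empty resources object: no CPU or memory requests/limits are set for that container. A short sketch contrasting it with explicit values (the pod name and numbers are made up for illustration):

# Field documentation straight from the API schema
kubectl explain pod.spec.containers.resources

# resources: {} leaves requests/limits unset; the same container with explicit
# values would instead look like this (validated client-side only):
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: resources-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
EOF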

kubernetes
  • 1 Answer
  • 20 Views
user3553913
Asked: 2020-01-02 22:55:13 +0800 CST

A Linux process is sending garbage characters to STDOUT, and no controlling terminal is attached to it

  • 0

I have a containerized unimrcp server running as a Kubernetes pod. When I exec into the container and run ps -ef, the output looks like this:

[root@unimrcp-0 fd]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0 99 13:13 ?        01:07:38 ./unimrcpserver
root        75     1  0 13:13 ?        00:00:00 [arping] <defunct>
root        76     1  0 13:13 ?        00:00:00 [arping] <defunct>
root       154     0  0 13:14 pts/0    00:00:00 /bin/bash
root       209   154  0 14:21 pts/0    00:00:00 ps -ef

Also, if I run cat /proc/[pid]/fd/1, I see some corrupted output like this:

Unknown directive: ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ … (long runs of garbled bytes continue)

Why does the process have no controlling terminal attached? I have disabled unimrcp logging to stdout. CPU utilization is also at 99%. Can anyone help with this?
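
For background, PID 1 in a container normally has no controlling terminal: its stdout and stderr are pipes held by the container runtime, which is what kubectl logs reads. A few hedged checks along those lines (the pod name is the one visible in the prompts above):

# Inside the container: stdout of PID 1 is a pipe to the runtime, not a TTY
readlink /proc/1/fd/1          # e.g. pipe:[17601930], as in the fd listing below
tty                            # reports "not a tty" when no pts is attached

# From outside the pod: read what the runtime captured from that pipe
kubectl logs unimrcp-0

# Profile the 99% CPU per thread while the pod is running
top -H -p 1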

This is the container's entrypoint:

#!/bin/sh
source /ip-conf.sh; set_control_media_network "UNIMRCP"
CONTROL_IP=$(get_control_ipv4)
MEDIA_IP=$(get_media_ipv4)
LOG_LEVEL=$(echo $LOG_LEVEL | tr -s " " | xargs)
LOG_OUTPUT=$(echo $LOG_OUTPUT | tr -s " " | xargs)
LOG_HEADERS=$(echo $LOG_HEADERS | tr -s " " | xargs)
sed -i 's+<priority>.*</priority>+''<priority>'$LOG_LEVEL'</priority>+g' /usr/local/unimrcp/conf/logger.xml
sed -i 's+<output>.*</output>+''<output>'$LOG_OUTPUT'</output>+g' /usr/local/unimrcp/conf/logger.xml
sed -i 's+<headers>.*</headers>+''<headers>'$LOG_HEADERS'</headers>+g' /usr/local/unimrcp/conf/logger.xml
sed -i 's+<!-- <ip>.*</ip> -->+''<ip>'$CONTROL_IP'</ip>+g' /usr/local/unimrcp/conf/unimrcpserver.xml
sed -i 's+<!-- <rtp-ip>.*</rtp-ip> -->+''<rtp-ip>'$MEDIA_IP'</rtp-ip>+g' /usr/local/unimrcp/conf/unimrcpserver.xml
cd /usr/local/unimrcp/bin/
exec ./unimrcpserver

This is the output of ls -l in /proc/1/fd/ inside the unimrcp container:

total 0
lrwx------ 1 root root 64 Jan  2 12:04 0 -> /dev/null
l-wx------ 1 root root 64 Jan  2 12:04 1 -> pipe:[17601930]
l-wx------ 1 root root 64 Jan  2 12:04 10 -> pipe:[17605635]
lrwx------ 1 root root 64 Jan  2 12:04 11 -> socket:[17605636]
lrwx------ 1 root root 64 Jan  2 12:04 12 -> anon_inode:[eventpoll]
lrwx------ 1 root root 64 Jan  2 12:04 13 -> anon_inode:[eventfd]
lrwx------ 1 root root 64 Jan  2 12:04 14 -> anon_inode:[eventpoll]
lrwx------ 1 root root 64 Jan  2 12:04 15 -> anon_inode:[eventfd]
lrwx------ 1 root root 64 Jan  2 12:04 16 -> anon_inode:[eventpoll]
lrwx------ 1 root root 64 Jan  2 12:04 17 -> socket:[17602110]
lrwx------ 1 root root 64 Jan  2 12:04 18 -> socket:[17602111]
lrwx------ 1 root root 64 Jan  2 12:04 19 -> anon_inode:[eventpoll]
l-wx------ 1 root root 64 Jan  2 12:04 2 -> pipe:[17601931]
lrwx------ 1 root root 64 Jan  2 12:04 20 -> socket:[17603083]
lrwx------ 1 root root 64 Jan  2 12:04 21 -> socket:[17603084]
lr-x------ 1 root root 64 Jan  2 12:04 22 -> /dev/urandom
lrwx------ 1 root root 64 Jan  2 12:04 23 -> socket:[17603087]
lrwx------ 1 root root 64 Jan  2 12:04 24 -> socket:[17603088]
l-wx------ 1 root root 64 Jan  2 12:04 3 -> /usr/local/unimrcp/log/unimrcpserver_2020.01.02_12.04.08.988860.log
lrwx------ 1 root root 64 Jan  2 12:04 4 -> anon_inode:[eventpoll]
lr-x------ 1 root root 64 Jan  2 12:04 5 -> pipe:[17605633]
l-wx------ 1 root root 64 Jan  2 12:04 6 -> pipe:[17605633]
lrwx------ 1 root root 64 Jan  2 12:04 7 -> socket:[17605634]
lrwx------ 1 root root 64 Jan  2 12:04 8 -> anon_inode:[eventpoll]
lr-x------ 1 root root 64 Jan  2 12:04 9 -> pipe:[17605635]
linux kubernetes
  • 2 Answers
  • 215 Views
PersianGulf
Asked: 2019-12-08 09:54:33 +0800 CST

Error converting YAML to JSON: yaml: line 10: did not find expected key

  • 0

I have the following YAML file:

---
apiVersion: v1
kind: pod
metadata:
    name: Tesing_for_Image_pull -----------> 1
    spec:
        containers:
        - name: mysql ------------------------> 2
          image: mysql ----------> 3
          imagePullPolicy: Always ------------->4
          command: ["echo", "SUCCESS"]  -------------------> 5

After running kubectl create -f my_yaml.yaml I get the following error:

error: error converting YAML to JSON: yaml: line 10: did not find expected key

Update: with yamllint I get the following errors:

root@debian:~# yamllint my_yaml.yaml
my_yaml.yaml
  8:9       error    wrong indentation: expected 12 but found 8  (indentation)
  11:41     error    syntax error: expected <block end>, but found '<scalar>'

Where is my problem, and how do I fix it?
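
For comparison, a minimal pod manifest of this shape, written the way the API usually expects it, might look like the sketch below: kind is Pod (capitalized), spec sits at the same level as metadata rather than nested under it, metadata.name must be a lowercase DNS-1123 name, and the arrow annotations have to go. This is a hedged reconstruction, not the original example.

cat > my_yaml.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: testing-for-image-pull
spec:
  containers:
  - name: mysql
    image: mysql
    imagePullPolicy: Always
    command: ["echo", "SUCCESS"]
EOF

kubectl create -f my_yaml.yaml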

kubernetes yaml
  • 1 Answer
  • 30642 Views
elp
Asked: 2019-01-08 11:23:16 +0800 CST

iptables priority

  • 0

After deploying all my Kubernetes resources, I wanted to open port 443. I added it to my whitelist chain, but it stayed closed. The same thing happened with port 80. Only after flushing all tables, deleting all the Kubernetes resources, setting up the firewall from scratch (including whitelisting port 80) and then deploying Kubernetes again did port 80 finally open.

Now I would rather understand why I cannot open port 443 than go through all of that again. I found that there is a chain, KUBE-FIREWALL (see below), that blocks everything by default.

Here is my main question:

Do the KUBE-FIREWALL rules have higher priority than my TCP chain? If so, how can I change the priority?


INPUT

Chain INPUT (policy DROP)
target     prot opt source               destination         
cali-INPUT  all  --  anywhere             anywhere             /* cali:Cz_u1IQiXIMmKD4c */
f2b-sshd   tcp  --  anywhere             anywhere             multiport dports ssh
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     icmp --  anywhere             anywhere             icmp echo-request ctstate NEW
UDP        udp  --  anywhere             anywhere             ctstate NEW
TCP        tcp  --  anywhere             anywhere             tcp flags:FIN,SYN,RST,ACK/SYN ctstate NEW
REJECT     udp  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             anywhere             reject-with tcp-reset
REJECT     all  --  anywhere             anywhere             reject-with icmp-proto-unreachable

cali-INPUT

Chain cali-INPUT (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             /* cali:msRIDfJRWnYwzW4g */ mark match 0x10000/0x10000
cali-wl-to-host  all  --  anywhere             anywhere            [goto]  /* cali:y4fKWmWkTnYGshVX */
MARK       all  --  anywhere             anywhere             /* cali:JnMb-hdLugWL4jEZ */ MARK and 0xfff0ffff
cali-from-host-endpoint  all  --  anywhere             anywhere             /* cali:NPKZwKxJ-5imzORj */
ACCEPT     all  --  anywhere             anywhere             /* cali:aes7S4xZI-7Jyw63 */ /* Host endpoint policy accepted packet. */ mark match 0x10000/0x10000

KUBE-FIREWALL

claus@vmd33301:~$ sudo iptables -L KUBE-FIREWALL
Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

TCP

Chain TCP (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https
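
As an aside, iptables has no per-chain "priority": within INPUT the rules are evaluated top to bottom, so the jumps to cali-INPUT, KUBE-EXTERNAL-SERVICES and KUBE-FIREWALL are consulted before the jump to the custom TCP chain (and the KUBE-FIREWALL chain shown above only drops packets carrying the 0x8000 mark). A hedged sketch of how one might inspect and reorder that; the rule position is a placeholder:

# Show INPUT with positions, to see where the TCP chain is jumped to
iptables -L INPUT --line-numbers

# Insert an ACCEPT for 443 near the top of INPUT (position 1 is only an example),
# instead of relying on the later TCP chain
iptables -I INPUT 1 -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
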
iptables kubernetes
  • 1 Answer
  • 2719 Views
Volker Raschek
Asked: 2018-03-07 02:35:12 +0800 CST

Exposing ports in Kubernetes

  • 1

I want to bind a service to ports 80 and 443 on all nodes, so that via the DNS name (kubernetes) I am redirected over HTTP/S to whichever node answers, from there directly to the service, and then on to the deployment (nginx). However, I don't see how this can work, since NodePorts only range from 30000 to 32xxx.

This is my setup:

DNS-Name      IPv4
k8s-master    172.25.35.47
k8s-node-01   172.25.36.47
k8s-node-02   172.25.36.8
kubernetes    172.25.36.47
kubernetes    172.25.36.8

My YAML file:

apiVersion: v1
kind: Service
metadata:
  name: proxy
spec:
  ports:
  - name: http
    nodePort: 80     
    port: 80            
    protocol: TCP      
    targetPort: 80     
  - name: https
    nodePort: 443     
    port: 443           
    protocol: TCP       
    targetPort: 443     
  selector:
    name: proxy
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy
  labels:
    name: proxy
spec:
  selector:
    matchLabels:
      name: proxy
  replicas: 1
  template:
    metadata:
     labels:
       name: proxy
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP

Which service type gives me the ability to expose these ports, or how else can I achieve the setup I have in mind?

Volker
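
For reference, two common ways around the default NodePort range are to widen the range on the API server or to bind the proxy pod to host ports directly. A hedged sketch of both, assuming a kubeadm-style control plane (file path and values are typical defaults, not taken from this setup):

# Option 1: widen the allowed NodePort range (kubeadm static pod manifest);
# add under "command:" in /etc/kubernetes/manifests/kube-apiserver.yaml:
#     - --service-node-port-range=80-32767

# Option 2: bind the proxy pod directly to host ports 80/443 via hostPort
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy
spec:
  selector:
    matchLabels:
      name: proxy
  template:
    metadata:
      labels:
        name: proxy
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
EOF

With replicas left at 1 only one node answers on 80/443; ingress controllers typically run this kind of pod as a DaemonSet so that every node listens.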

port-forwarding kubernetes
  • 1 Answer
  • 664 Views
Mohammed Ali
Asked: 2017-12-24 06:21:23 +0800 CST

Unable to pull images from a private registry in Kubernetes

  • 2

I have set up a private registry in Docker, reachable at the domain "makdom.ddns.net". I can log in, push and pull images locally, and I can do the same from the Kubernetes nodes.

But when I write a Kubernetes deployment file, it fails to pull the image from the private registry.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ssh-deployment
spec:
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: ssh-demo
        image: makdom.ddns.net/my-ubuntu
        imagePullPolicy: IfNotPresent
        ports:
        - name: nodejs-port
          containerPort: 22
      imagePullSecrets:
      - name: myregistrykey

The secret:

DOCKER_REGISTRY_SERVER="https://makdom.ddns.net/v1/"
DOCKER_USER="user"
DOCKER_PASSWORD="password"
DOCKER_EMAIL="[email protected]" 

kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL  

The error:

Events:
  Type     Reason                 Age               From                  Message
  ----     ------                 ----              ----                  -------
  Normal   Scheduled              1m                default-scheduler     Successfully assigned ssh-deployment-7b7c7bf977-m6stk to kubes-slave
  Normal   SuccessfulMountVolume  1m                kubelet, kubes-slave  MountVolume.SetUp succeeded for volume "default-token-mx7qq"
  Normal   Pulled                 1m (x3 over 1m)   kubelet, kubes-slave  Container image "makdom.ddns.net/my-ubuntu" already present on machine
  Normal   Created                1m (x3 over 1m)   kubelet, kubes-slave  Created container
  Normal   Started                1m (x3 over 1m)   kubelet, kubes-slave  Started container
  Normal   Pulling                34s (x2 over 1m)  kubelet, kubes-slave  pulling image "makdom.ddns.net/my-ubuntu"
  Warning  Failed                 34s (x2 over 1m)  kubelet, kubes-slave  Failed to pull image "makdom.ddns.net/my-ubuntu": rpc error: code = Unknown desc = Error: image my-ubuntu:latest not found
  Warning  Failed                 34s (x2 over 1m)  kubelet, kubes-slave  Error: ErrImagePull
  Warning  BackOff                19s (x6 over 1m)  kubelet, kubes-slave  Back-off restarting failed container
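
The failure string "image my-ubuntu:latest not found" suggests the node reaches a registry but finds no latest tag for that repository. A few hedged checks against a Docker registry's v2 API (credentials are placeholders), plus recreating the secret against the bare host name, since the /v1/ URL form in DOCKER_REGISTRY_SERVER above is a common source of mismatches:

# What repositories and tags does the registry actually serve?
curl -u user:password https://makdom.ddns.net/v2/_catalog
curl -u user:password https://makdom.ddns.net/v2/my-ubuntu/tags/list

# Pull the fully qualified, explicitly tagged reference on the node itself
docker pull makdom.ddns.net/my-ubuntu:latest

# Recreate the pull secret with the bare registry host
kubectl create secret docker-registry myregistrykey \
  --docker-server=makdom.ddns.net \
  --docker-username=user --docker-password=password
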
docker kubernetes
  • 1 Answer
  • 3330 Views
