
laimison's questions

laimison
Asked: 2021-11-09 06:01:04 +0800 CST

Can clients run an IPIP (protocol 4) tunnel over WireGuard, as they can over OpenVPN?

  • 0

When two subnets are connected via WireGuard, clients can talk to each other over TCP/UDP/ICMP. Can clients also run an IPIP (protocol 4) tunnel over WireGuard, the way they can over OpenVPN? I am planning to migrate from OpenVPN to WireGuard and want to check whether this works.

Thanks
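
For context, the setup I have in mind looks roughly like the sketch below. The WireGuard addresses 10.10.0.1/10.10.0.2 are hypothetical; since WireGuard is a layer-3 tunnel, it should carry any IP protocol (including protocol 4) whose outer addresses are matched by AllowedIPs, so AllowedIPs on each peer must cover the other peer's WireGuard address.

# On peer A (WireGuard address 10.10.0.1):
ip tunnel add ipip0 mode ipip local 10.10.0.1 remote 10.10.0.2
ip addr add 172.16.0.1/30 dev ipip0
ip link set ipip0 up

# On peer B (WireGuard address 10.10.0.2):
ip tunnel add ipip0 mode ipip local 10.10.0.2 remote 10.10.0.1
ip addr add 172.16.0.2/30 dev ipip0
ip link set ipip0 up

# Test the inner tunnel from peer A:
ping 172.16.0.2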

networking wireguard
  • 1 Answer
  • 113 Views
laimison
Asked: 2021-03-23 17:29:27 +0800 CST

Cannot start a VM/domain in KVM: Failed to get "write" lock

  • 1

After a host reboot, I cannot start the VM:

user@server-1:~$ virsh start docker-1
error: Failed to start domain docker-1
error: internal error: process exited while connecting to monitor: 2021-03-23T01:21:58.149079Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Failed to get "write" lock
Is another process using the image [/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2]?

The file is not in use:

user@server-1:~$ sudo fuser -u /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2
user@server-1:~$ sudo lsof | grep qcow
user@server-1:~$ virsh list
 Id   Name   State
--------------------

user@server-1:~$
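
QEMU also takes OFD (open file description) locks on disk images, which fuser and lsof may not report; /proc/locks lists them. A quick check (sketch):

# Note the inode of the image, then look for it among the kernel's locks;
# QEMU image locks show up as OFDLCK entries:
ls -i /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2
grep OFDLCK /proc/locks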

I tried this on Ubuntu 18.04 / QEMU 2.11 and then upgraded to Ubuntu 20.04 / QEMU 4.2.1.

The upgrade did not help.

The VM is very large, so I cannot easily create a new VM from it: there is no free space available.

Is there any way to recover from this situation and start the domain?

Thanks


Update

Attaching the lock output:

user@server-1:~$ sudo lslocks -u
COMMAND           PID  TYPE SIZE MODE  M      START        END PATH
blkmapd           583 POSIX   4B WRITE 0          0          0 /run/blkmapd.pid
rpcbind          1181 FLOCK      WRITE 0          0          0 /run/rpcbind.lock
lxcfs            1312 POSIX   5B WRITE 0          0          0 /run/lxcfs.pid
atd              1456 POSIX   5B WRITE 0          0          0 /run/atd.pid
whoopsie         1454 FLOCK      WRITE 0          0          0 /run/lock/whoopsie/lock
virtlogd         6143 POSIX   4B WRITE 0          0          0 /run/virtlogd.pid
multipathd       1106 POSIX   4B WRITE 0          0          0 /run/multipathd.pid
containerd       1401 FLOCK 128K WRITE 0          0          0 /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db
tracker-miner-f  1561 POSIX 3.6M READ  0 1073741826 1073742335 /var/lib/gdm3/.cache/tracker/meta.db
tracker-miner-f  1561 POSIX  32K READ  0        128        128 /var/lib/gdm3/.cache/tracker/meta.db-shm
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/network/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/interface/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/secrets/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/storage/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/nodedev/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/nwfilter/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/qemu/driver.pid
tracker-miner-f  8956 POSIX 3.6M READ  0 1073741826 1073742335 /home/user/.cache/tracker/meta.db
tracker-miner-f  8956 POSIX  32K READ  0        128        128 /home/user/.cache/tracker/meta.db-shm
dmeventd          581 POSIX   4B WRITE 0          0          0 /run/dmeventd.pid
cron             1445 FLOCK   5B WRITE 0          0          0 /run/crond.pid
gnome-shell      1713 FLOCK      WRITE 0          0          0 /run/user/126/wayland-0.lock
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirtd.pid

And the process list:

user@server-1:~$ ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 01:11 ?        00:00:03 /sbin/init
root           2       0  0 01:11 ?        00:00:00 [kthreadd]
root           3       2  0 01:11 ?        00:00:00 [rcu_gp]
root           4       2  0 01:11 ?        00:00:00 [rcu_par_gp]
root           6       2  0 01:11 ?        00:00:00 [kworker/0:0H-kblockd]
root           9       2  0 01:11 ?        00:00:00 [mm_percpu_wq]
root          10       2  0 01:11 ?        00:00:00 [ksoftirqd/0]
root          11       2  0 01:11 ?        00:00:01 [rcu_sched]
root          12       2  0 01:11 ?        00:00:00 [migration/0]
root          13       2  0 01:11 ?        00:00:00 [idle_inject/0]
root          14       2  0 01:11 ?        00:00:00 [cpuhp/0]
root          15       2  0 01:11 ?        00:00:00 [cpuhp/1]
root          16       2  0 01:11 ?        00:00:00 [idle_inject/1]
root          17       2  0 01:11 ?        00:00:00 [migration/1]
root          18       2  0 01:11 ?        00:00:00 [ksoftirqd/1]
root          20       2  0 01:11 ?        00:00:00 [kworker/1:0H-kblockd]
root          21       2  0 01:11 ?        00:00:00 [cpuhp/2]
root          22       2  0 01:11 ?        00:00:00 [idle_inject/2]
root          23       2  0 01:11 ?        00:00:00 [migration/2]
root          24       2  0 01:11 ?        00:00:00 [ksoftirqd/2]
root          26       2  0 01:11 ?        00:00:00 [kworker/2:0H-kblockd]
root          27       2  0 01:11 ?        00:00:00 [cpuhp/3]
root          28       2  0 01:11 ?        00:00:00 [idle_inject/3]
root          29       2  0 01:11 ?        00:00:00 [migration/3]
root          30       2  0 01:11 ?        00:00:00 [ksoftirqd/3]
root          32       2  0 01:11 ?        00:00:00 [kworker/3:0H-events_highpri]
root          33       2  0 01:11 ?        00:00:00 [kdevtmpfs]
root          34       2  0 01:11 ?        00:00:00 [netns]
root          35       2  0 01:11 ?        00:00:00 [rcu_tasks_kthre]
root          36       2  0 01:11 ?        00:00:00 [kauditd]
root          38       2  0 01:11 ?        00:00:00 [khungtaskd]
root          39       2  0 01:11 ?        00:00:00 [oom_reaper]
root          40       2  0 01:11 ?        00:00:00 [writeback]
root          41       2  0 01:11 ?        00:00:00 [kcompactd0]
root          42       2  0 01:11 ?        00:00:00 [ksmd]
root          43       2  0 01:11 ?        00:00:00 [khugepaged]
root          89       2  0 01:11 ?        00:00:00 [kintegrityd]
root          90       2  0 01:11 ?        00:00:00 [kblockd]
root          91       2  0 01:11 ?        00:00:00 [blkcg_punt_bio]
root          93       2  0 01:11 ?        00:00:00 [tpm_dev_wq]
root          94       2  0 01:11 ?        00:00:00 [ata_sff]
root          95       2  0 01:11 ?        00:00:00 [md]
root          96       2  0 01:11 ?        00:00:00 [edac-poller]
root          97       2  0 01:11 ?        00:00:00 [devfreq_wq]
root          98       2  0 01:11 ?        00:00:00 [watchdogd]
root         101       2  0 01:11 ?        00:00:00 [kswapd0]
root         102       2  0 01:11 ?        00:00:00 [ecryptfs-kthrea]
root         104       2  0 01:11 ?        00:00:00 [kthrotld]
root         105       2  0 01:11 ?        00:00:00 [irq/122-aerdrv]
root         106       2  0 01:11 ?        00:00:00 [acpi_thermal_pm]
root         107       2  0 01:11 ?        00:00:00 [vfio-irqfd-clea]
root         111       2  0 01:11 ?        00:00:00 [ipv6_addrconf]
root         120       2  0 01:11 ?        00:00:00 [kstrp]
root         123       2  0 01:11 ?        00:00:00 [kworker/u9:0-xprtiod]
root         138       2  0 01:11 ?        00:00:00 [charger_manager]
root         197       2  0 01:11 ?        00:00:00 [cryptd]
root         224       2  0 01:11 ?        00:00:00 [scsi_eh_0]
root         225       2  0 01:11 ?        00:00:00 [scsi_tmf_0]
root         226       2  0 01:11 ?        00:00:00 [scsi_eh_1]
root         227       2  0 01:11 ?        00:00:00 [scsi_tmf_1]
root         228       2  0 01:11 ?        00:00:00 [scsi_eh_2]
root         229       2  0 01:11 ?        00:00:00 [scsi_tmf_2]
root         230       2  0 01:11 ?        00:00:00 [scsi_eh_3]
root         231       2  0 01:11 ?        00:00:00 [scsi_tmf_3]
root         232       2  0 01:11 ?        00:00:00 [scsi_eh_4]
root         233       2  0 01:11 ?        00:00:00 [scsi_tmf_4]
root         234       2  0 01:11 ?        00:00:00 [scsi_eh_5]
root         235       2  0 01:11 ?        00:00:00 [scsi_tmf_5]
root         241       2  0 01:11 ?        00:00:00 [kworker/0:1H]
root         245       2  0 01:11 ?        00:00:00 [scsi_eh_6]
root         246       2  0 01:11 ?        00:00:00 [scsi_tmf_6]
root         247       2  0 01:11 ?        00:00:02 [usb-storage]
root         248       2  0 01:11 ?        00:00:00 [scsi_eh_7]
root         249       2  0 01:11 ?        00:00:00 [scsi_tmf_7]
root         250       2  0 01:11 ?        00:00:00 [usb-storage]
root         251       2  0 01:11 ?        00:00:00 [kworker/3:1H-kblockd]
root         252       2  0 01:11 ?        00:00:00 [uas]
root         253       2  0 01:11 ?        00:00:00 [kworker/2:1H-kblockd]
root         254       2  0 01:11 ?        00:00:00 [kworker/1:1H-kblockd]
root         286       2  0 01:11 ?        00:00:00 [raid5wq]
root         287       2  0 01:11 ?        00:00:00 [kdmflush]
root         288       2  0 01:11 ?        00:00:00 [kdmflush]
root         290       2  0 01:11 ?        00:00:00 [kdmflush]
root         292       2  0 01:11 ?        00:00:00 [kdmflush]
root         297       2  0 01:11 ?        00:00:00 [kdmflush]
root         319       2  0 01:11 ?        00:00:00 [mdX_raid1]
root         326       2  0 01:11 ?        00:00:00 [kdmflush]
root         327       2  0 01:11 ?        00:00:00 [kdmflush]
root         328       2  0 01:11 ?        00:00:00 [kdmflush]
root         330       2  0 01:11 ?        00:00:00 [kdmflush]
root         331       2  0 01:11 ?        00:00:00 [kdmflush]
root         363       2  0 01:11 ?        00:00:00 [mdX_raid1]
root         476       2  0 01:11 ?        00:00:00 [jbd2/sda2-8]
root         477       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root         552       2  0 01:11 ?        00:00:00 [rpciod]
root         553       2  0 01:11 ?        00:00:00 [xprtiod]
root         554       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-journald
root         581       1  0 01:11 ?        00:00:01 /sbin/dmeventd -f
root         583       1  0 01:11 ?        00:00:00 /usr/sbin/blkmapd
root         597       1  0 01:11 ?        00:00:01 /lib/systemd/systemd-udevd
root         635       2  0 01:11 ?        00:00:00 [irq/133-mei_me]
root         697       2  0 01:11 ?        00:00:00 [led_workqueue]
root        1102       2  0 01:11 ?        00:00:00 [kaluad]
root        1103       2  0 01:11 ?        00:00:00 [kmpath_rdacd]
root        1104       2  0 01:11 ?        00:00:00 [kmpathd]
root        1105       2  0 01:11 ?        00:00:00 [kmpath_handlerd]
root        1106       1  0 01:11 ?        00:00:04 /sbin/multipathd -d -s
root        1115       2  0 01:11 ?        00:00:00 [jbd2/dm-4-8]
root        1117       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root        1120       2  0 01:11 ?        00:00:00 [loop0]
root        1126       2  0 01:11 ?        00:00:00 [loop1]
root        1129       2  0 01:11 ?        00:00:00 [loop2]
root        1131       2  0 01:11 ?        00:00:00 [jbd2/dm-9-8]
root        1132       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root        1135       2  0 01:11 ?        00:00:00 [loop3]
root        1137       2  0 01:11 ?        00:00:00 [loop4]
root        1138       2  0 01:11 ?        00:00:00 [loop5]
root        1145       2  0 01:11 ?        00:00:00 [jbd2/sde1-8]
root        1146       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
systemd+    1176       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-networkd
root        1177       1  0 01:11 ?        00:00:00 /usr/sbin/rpc.idmapd
_rpc        1181       1  0 01:11 ?        00:00:00 /sbin/rpcbind -f -w
systemd+    1182       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-timesyncd
systemd+    1187       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-resolved
root        1296       1  0 01:11 ?        00:00:00 /usr/lib/accountsservice/accounts-daemon
root        1297       1  0 01:11 ?        00:00:00 /usr/sbin/acpid
avahi       1301       1  0 01:11 ?        00:00:00 avahi-daemon: running [server-1.local]
root        1302       1  0 01:11 ?        00:00:00 /usr/sbin/cupsd -l
message+    1303       1  0 01:11 ?        00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root        1304       1  0 01:11 ?        00:00:01 /usr/sbin/NetworkManager --no-daemon
root        1310       1  0 01:11 ?        00:00:02 /usr/sbin/irqbalance --foreground
root        1312       1  0 01:11 ?        00:00:00 /usr/bin/lxcfs /var/lib/lxcfs
root        1314       1  0 01:11 ?        00:00:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
root        1322       1  0 01:11 ?        00:00:02 /usr/lib/policykit-1/polkitd --no-debug
syslog      1329       1  0 01:11 ?        00:00:00 /usr/sbin/rsyslogd -n -iNONE
root        1335       1  0 01:11 ?        00:00:00 /usr/sbin/smartd -n
root        1340       1  0 01:11 ?        00:00:00 /usr/libexec/switcheroo-control
root        1341       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-logind
root        1342       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-machined
root        1343       1  0 01:11 ?        00:00:09 /usr/lib/udisks2/udisksd
root        1344       1  0 01:11 ?        00:00:00 /sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
avahi       1353    1301  0 01:11 ?        00:00:00 avahi-daemon: chroot helper
root        1383       1  0 01:11 ?        00:00:00 /usr/sbin/cups-browsed
root        1386       1  0 01:11 ?        00:00:00 /usr/sbin/ModemManager --filter-policy=strict
root        1401       1  0 01:11 ?        00:02:22 /usr/bin/containerd
root        1416       1  0 01:11 ?        00:00:00 /usr/sbin/rpc.mountd --manage-gids
root        1445       1  0 01:11 ?        00:00:00 /usr/sbin/cron -f
whoopsie    1454       1  0 01:11 ?        00:00:00 /usr/bin/whoopsie -f
daemon      1456       1  0 01:11 ?        00:00:00 /usr/sbin/atd -f
root        1457       2  0 01:11 ?        00:00:00 [kworker/u9:1-xprtiod]
root        1458       1  0 01:11 ?        00:00:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root        1460       2  0 01:11 ?        00:00:00 [lockd]
kernoops    1463       1  0 01:11 ?        00:00:01 /usr/sbin/kerneloops --test
kernoops    1474       1  0 01:11 ?        00:00:01 /usr/sbin/kerneloops
root        1477       1  0 01:11 ?        00:00:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root        1486       1  0 01:11 ?        00:00:00 /usr/sbin/gdm3
root        1496    1486  0 01:11 ?        00:00:00 gdm-session-worker [pam/gdm-launch-environment]
gdm         1527       1  0 01:11 ?        00:00:00 /lib/systemd/systemd --user
gdm         1528    1527  0 01:11 ?        00:00:00 (sd-pam)
root        1552       2  0 01:11 ?        00:00:00 bpfilter_umh
gdm         1559    1527  0 01:11 ?        00:00:00 /usr/bin/pulseaudio --daemonize=no --log-target=journal
gdm         1561    1527  0 01:11 ?        00:00:00 /usr/libexec/tracker-miner-fs
gdm         1568    1496  0 01:11 tty1     00:00:00 /usr/lib/gdm3/gdm-wayland-session dbus-run-session -- gnome-session --autostart /usr/share/gdm/greeter/autostart
gdm         1577    1527  0 01:11 ?        00:00:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
gdm         1584    1568  0 01:11 tty1     00:00:00 dbus-run-session -- gnome-session --autostart /usr/share/gdm/greeter/autostart
gdm         1585    1584  0 01:11 tty1     00:00:00 dbus-daemon --nofork --print-address 4 --session
rtkit       1586       1  0 01:11 ?        00:00:00 /usr/libexec/rtkit-daemon
gdm         1589    1584  0 01:11 tty1     00:00:00 /usr/libexec/gnome-session-binary --systemd --autostart /usr/share/gdm/greeter/autostart
gdm         1590    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd
gdm         1600    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd-fuse /run/user/126/gvfs -f -o big_writes
gdm         1608    1527  0 01:11 ?        00:00:01 /usr/libexec/gvfs-udisks2-volume-monitor
gdm         1640    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-mtp-volume-monitor
gdm         1648    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-goa-volume-monitor
gdm         1653    1527  0 01:11 ?        00:00:00 /usr/libexec/goa-daemon
gdm         1686       1  0 01:11 tty1     00:00:00 /usr/libexec/dconf-service
gdm         1702    1527  0 01:11 ?        00:00:00 /usr/libexec/goa-identity-service
gdm         1711    1527  0 01:11 ?        00:00:01 /usr/libexec/gvfs-afc-volume-monitor
gdm         1713    1589  0 01:11 tty1     00:00:13 /usr/bin/gnome-shell
gdm         1723    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-gphoto2-volume-monitor
root        1729       1  0 01:11 ?        00:00:00 /usr/lib/upower/upowerd
root        1800       2  0 01:11 ?        00:00:00 [nfsd]
root        1801       2  0 01:11 ?        00:00:00 [nfsd]
root        1802       2  0 01:11 ?        00:00:00 [nfsd]
root        1803       2  0 01:11 ?        00:00:00 [nfsd]
root        1804       2  0 01:11 ?        00:00:00 [nfsd]
root        1805       2  0 01:11 ?        00:00:00 [nfsd]
root        1806       2  0 01:11 ?        00:00:00 [nfsd]
root        1807       2  0 01:11 ?        00:00:00 [nfsd]
gdm         1868       1  0 01:11 tty1     00:00:00 /usr/libexec/at-spi-bus-launcher
gdm         1874    1868  0 01:11 tty1     00:00:00 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
gdm         1880    1713  0 01:11 tty1     00:00:00 /usr/bin/Xwayland :1024 -rootless -noreset -accessx -core -auth /run/user/126/.mutter-Xwaylandauth.XH3U00 -listen 4 -listen 5 -displayfd 6 -listen 7
libvirt+    1916       1  0 01:11 ?        00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
root        1917    1916  0 01:11 ?        00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
gdm         2003       1  0 01:11 tty1     00:00:00 /usr/libexec/xdg-permission-store
gdm         2052       1  0 01:11 tty1     00:00:00 /usr/bin/gjs /usr/share/gnome-shell/org.gnome.Shell.Notifications
gdm         2054       1  0 01:11 tty1     00:00:00 /usr/libexec/at-spi2-registryd --use-gnome-session
gdm         2066    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-sharing
gdm         2069    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-wacom
gdm         2070    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-color
gdm         2075    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-keyboard
gdm         2078    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-print-notifications
gdm         2079    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-rfkill
gdm         2084    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-smartcard
gdm         2090    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-datetime
gdm         2103    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-media-keys
gdm         2110    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-screensaver-proxy
gdm         2111    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-sound
gdm         2112    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-a11y-settings
gdm         2114    1589  0 01:11 tty1     00:00:03 /usr/libexec/gsd-housekeeping
gdm         2116    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-power
gdm         2179    1713  0 01:11 tty1     00:00:00 ibus-daemon --panel disable -r --xim
gdm         2183       1  0 01:11 tty1     00:00:00 /usr/libexec/gsd-printer
gdm         2185    2179  0 01:11 tty1     00:00:00 /usr/libexec/ibus-dconf
gdm         2192       1  0 01:11 tty1     00:00:00 /usr/libexec/ibus-x11 --kill-daemon
gdm         2199    2179  0 01:11 tty1     00:00:00 /usr/libexec/ibus-engine-simple
gdm         2202       1  0 01:11 tty1     00:00:00 /usr/libexec/ibus-portal
colord      2212       1  0 01:11 ?        00:00:00 /usr/libexec/colord
gdm         2268    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd-metadata
root        6057       1  0 01:18 ?        00:00:01 /usr/sbin/libvirtd
root        6143       1  0 01:19 ?        00:00:00 /usr/sbin/virtlogd
root        6562       2  0 01:34 ?        00:00:01 [kworker/2:3-events]
root        7924       2  0 06:06 ?        00:00:00 [loop6]
root        7981       1  0 06:06 ?        00:00:03 /usr/lib/snapd/snapd
root        8320       2  0 08:34 ?        00:00:00 [kworker/0:0-rcu_gp]
root        8891       2  0 09:30 ?        00:00:00 [kworker/1:0-events]
root        8919    1458  0 10:02 ?        00:00:00 sshd: user [priv]
user         8938       1  0 10:02 ?        00:00:00 /lib/systemd/systemd --user
user         8939    8938  0 10:02 ?        00:00:00 (sd-pam)
root        8951       2  0 10:02 ?        00:00:00 [kworker/0:2-events]
user         8954    8938  0 10:02 ?        00:00:00 /usr/bin/pulseaudio --daemonize=no --log-target=journal
user         8956    8938  0 10:02 ?        00:00:00 /usr/libexec/tracker-miner-fs
user         8958    8938  0 10:02 ?        00:00:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
user         8975    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfsd
user         8983    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
user         8995    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-udisks2-volume-monitor
user         9007    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-mtp-volume-monitor
user         9011    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-goa-volume-monitor
user         9015    8938  0 10:02 ?        00:00:00 /usr/libexec/goa-daemon
user         9022    8938  0 10:02 ?        00:00:00 /usr/libexec/goa-identity-service
user         9029    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-afc-volume-monitor
user         9035    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-gphoto2-volume-monitor
user         9185    8919  0 10:02 ?        00:00:00 sshd: user@pts/0
user         9186    9185  0 10:02 pts/0    00:00:00 -bash
root        9258       2  0 10:13 ?        00:00:00 [kworker/3:3-events]
root        9259       2  0 10:13 ?        00:00:00 [kworker/3:4-cgroup_destroy]
root        9294       2  0 10:31 ?        00:00:00 [kworker/1:1]
root        9330       2  0 11:31 ?        00:00:00 [kworker/2:0-events]
root        9334       2  0 11:41 ?        00:00:00 [kworker/u8:2-events_freezable_power_]
root        9348       2  0 11:53 ?        00:00:00 [kworker/u8:0-events_power_efficient]
root        9352       2  0 12:07 ?        00:00:00 [kworker/u8:3-events_unbound]
root        9400       2  0 12:09 ?        00:00:00 [kworker/3:0-events]
root        9403       2  0 12:09 ?        00:00:00 [kworker/0:1-rcu_gp]
root        9413       2  0 12:09 ?        00:00:00 [kworker/3:1-cgroup_destroy]
root        9414       2  0 12:09 ?        00:00:00 [kworker/3:2-events]
root        9415       2  0 12:09 ?        00:00:00 [kworker/3:5-events]
root        9418       2  0 12:09 ?        00:00:00 [kworker/2:1]
root        9419       2  0 12:09 ?        00:00:00 [kworker/3:6]
root        9459       2  0 12:13 ?        00:00:00 [kworker/u8:1-events_unbound]
user         9463    9186  0 12:14 pts/0    00:00:00 ps -ef
user@server-1:~$

Attaching the XML dump of this VM:

user@server-1:~$ virsh dumpxml docker-1
<domain type='kvm'>
  <name>docker-1</name>
  <uuid>dfb49ea5-f6e7-45d1-9422-e3ce97cf6320</uuid>
  <memory unit='KiB'>10485760</memory>
  <currentMemory unit='KiB'>10485760</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
    <boot dev='hd'/>
    <boot dev='network'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>qemu64</model>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='volume' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source pool='default' volume='docker-1-volume-resized.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdc' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdx' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/app/prod/kvm/storage/common-init-docker-1.iso'/>
      <target dev='hdd' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:01:00:00:00:01'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <console type='pty'>
      <target type='virtio' port='1'/>
    </console>
    <channel type='pty'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='vga' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </rng>
  </devices>
</domain>

user@server-1:~$
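
Looking at the XML above, the same image file /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2 is attached three times (as vdb with driver type raw, and as vdc and vdx with driver type qcow2). If QEMU locks each writable attachment of an image, the domain may be conflicting with itself, which would match the "Failed to get write lock" error. A sketch of what could be tried (this is an observation, not a confirmed diagnosis; verify the targets first):

# Detach the duplicate references from the persistent config,
# keeping a single attachment:
virsh detach-disk docker-1 vdc --config
virsh detach-disk docker-1 vdx --config
virsh start docker-1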

kvm-virtualization libvirt qemu
  • 1 Answer
  • 4296 Views
laimison
Asked: 2020-09-14 14:24:40 +0800 CST

Kubernetes Calico networking: calicoctl reports "reset by peer" and "bird: BGP: Unexpected connect from unknown address"

  • 1

This is a new cluster built with Kubespray on bare metal.

calicoctl reports peers that are not in the Established state, StatefulSet members cannot talk to each other, and most Ingress requests take about 10 seconds to open a sample Nginx page.

Everything else looks fine, e.g. etcd, the pods, sudo kubectl get cs and sudo kubectl cluster-info dump.

The calico-node pods on master-1 (192.168.250.111) and node-1 (192.168.250.112) report no errors in their logs.

The calico-node pods on master-2 (192.168.240.111) and node-2 (192.168.240.112) report the error bird: BGP: Unexpected connect from unknown address 192.168.240.240 (port 36597) in their logs; this IP belongs to the VPN router (the gateway for these servers).

The calico-node pods on master-3 (192.168.230.111) and node-3 (192.168.230.112) report the error bird: BGP: Unexpected connect from unknown address 192.168.230.230 (port 35029) in their logs; this IP belongs to the VPN router (the gateway for these servers).

192.168.250.112 (node-1):

era@server-node-1:~$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+--------------------------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |              INFO              |
+-----------------+-------------------+-------+----------+--------------------------------+
| 192.168.250.111 | node-to-node mesh | up    | 19:54:47 | Established                    |
| 192.168.240.111 | node-to-node mesh | start | 19:54:35 | Active Socket: Connection      |
|                 |                   |       |          | reset by peer                  |
| 192.168.230.111 | node-to-node mesh | up    | 20:42:31 | Established                    |
| 192.168.240.112 | node-to-node mesh | start | 19:54:35 | Active Socket: Connection      |
|                 |                   |       |          | reset by peer                  |
| 192.168.230.112 | node-to-node mesh | up    | 20:42:30 | Established                    |
+-----------------+-------------------+-------+----------+--------------------------------+

IPv6 BGP status
No IPv6 peers found.

era@server-node-1:~$

192.168.240.112 (node-2):

era@server-node-2:~$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+--------------------------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |              INFO              |
+-----------------+-------------------+-------+----------+--------------------------------+
| 192.168.250.111 | node-to-node mesh | start | 19:52:09 | Passive                        |
| 192.168.240.111 | node-to-node mesh | up    | 19:54:37 | Established                    |
| 192.168.230.111 | node-to-node mesh | start | 19:52:09 | Active Socket: Connection      |
|                 |                   |       |          | reset by peer                  |
| 192.168.250.112 | node-to-node mesh | start | 19:52:09 | Passive                        |
| 192.168.230.112 | node-to-node mesh | start | 19:52:09 | Active Socket: Connection      |
|                 |                   |       |          | reset by peer                  |
+-----------------+-------------------+-------+----------+--------------------------------+

IPv6 BGP status
No IPv6 peers found.

era@server-node-2:~$

192.168.230.112 (node-3):

era@server-node-3:~$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.250.111 | node-to-node mesh | up    | 20:42:31 | Established |
| 192.168.240.111 | node-to-node mesh | start | 19:51:59 | Passive     |
| 192.168.230.111 | node-to-node mesh | up    | 19:54:25 | Established |
| 192.168.250.112 | node-to-node mesh | up    | 20:42:30 | Established |
| 192.168.240.112 | node-to-node mesh | start | 19:51:59 | Passive     |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

era@server-node-3:~$

I tried pinning the exact network interface to see whether it helps, but it did not:

era@server-master-1:~$ kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=ens3
daemonset.apps/calico-node env updated

I tested port 179 with nc from every node and master to every other node and master, and the connections succeeded.

The OS is Ubuntu 18.04.

Any suggestions on what to debug in Calico to resolve this? Any hint that gets me closer to a solution would be useful.

Update

I found out that the problem is related to missing routes.

Below is the output from 192.168.250.112. It cannot reach the node and master in 192.168.240.x because there is no route:

era@server-node-1:~$ ip route | grep tun
10.233.76.0/24 via 192.168.230.112 dev tunl0 proto bird onlink
10.233.77.0/24 via 192.168.230.111 dev tunl0 proto bird onlink
10.233.79.0/24 via 192.168.250.111 dev tunl0 proto bird onlink
era@server-node-1:~$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+--------------------------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |              INFO              |
+-----------------+-------------------+-------+----------+--------------------------------+
| 192.168.250.111 | node-to-node mesh | up    | 21:39:05 | Established                    |
| 192.168.240.111 | node-to-node mesh | start | 19:54:35 | Connect Socket: Connection     |
|                 |                   |       |          | reset by peer                  |
| 192.168.230.111 | node-to-node mesh | up    | 20:42:31 | Established                    |
| 192.168.240.112 | node-to-node mesh | start | 19:54:35 | Connect Socket: Connection     |
|                 |                   |       |          | reset by peer                  |
| 192.168.230.112 | node-to-node mesh | up    | 20:42:30 | Established                    |
+-----------------+-------------------+-------+----------+--------------------------------+

IPv6 BGP status
No IPv6 peers found.

era@server-node-1:~$

Below is the output from 192.168.240.112. It cannot reach the nodes and masters in 192.168.250.x and 192.168.230.x because there are no routes:

era@server-node-2:~$ ip r | grep tunl
10.233.66.0/24 via 192.168.240.111 dev tunl0 proto bird onlink
era@server-node-2:~$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+--------------------------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |              INFO              |
+-----------------+-------------------+-------+----------+--------------------------------+
| 192.168.250.111 | node-to-node mesh | start | 19:52:10 | Passive                        |
| 192.168.240.111 | node-to-node mesh | up    | 19:54:38 | Established                    |
| 192.168.230.111 | node-to-node mesh | start | 22:05:18 | Active Socket: Connection      |
|                 |                   |       |          | reset by peer                  |
| 192.168.250.112 | node-to-node mesh | start | 19:52:10 | Passive                        |
| 192.168.230.112 | node-to-node mesh | start | 22:05:22 | Active Socket: Connection      |
|                 |                   |       |          | reset by peer                  |
+-----------------+-------------------+-------+----------+--------------------------------+

IPv6 BGP status
No IPv6 peers found.

era@server-node-2:~$

Below is the output from 192.168.230.112. It cannot reach the node and master in 192.168.240.x because there is no route:

era@server-node-3:~$ ip r | grep tunl
10.233.77.0/24 via 192.168.230.111 dev tunl0 proto bird onlink
10.233.79.0/24 via 192.168.250.111 dev tunl0 proto bird onlink
10.233.100.0/24 via 192.168.250.112 dev tunl0 proto bird onlink
era@server-node-3:~$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.250.111 | node-to-node mesh | up    | 21:36:50 | Established |
| 192.168.240.111 | node-to-node mesh | start | 19:51:59 | Passive     |
| 192.168.230.111 | node-to-node mesh | up    | 19:54:25 | Established |
| 192.168.250.112 | node-to-node mesh | up    | 20:42:30 | Established |
| 192.168.240.112 | node-to-node mesh | start | 19:51:59 | Passive     |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

era@server-node-3:~$

So why are these routes missing, and how can I change this behaviour by adding them? If I add a route manually, it is removed automatically.
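
For what it's worth, the tunl0 routes carry proto bird, so they are installed by BIRD only once the corresponding BGP session reaches Established, and Calico reconciles the routing table, which would explain why manually added routes disappear. The "Unexpected connect from unknown address" log lines suggest the VPN routers rewrite the source address of the BGP connections. A sketch of a capture that could confirm this (run on node-1):

# If SYNs for the BGP sessions arrive with the router's address
# (192.168.240.240) as source, the VPN router is NATing the TCP/179
# traffic and bird will reject the session:
sudo tcpdump -ni any tcp port 179 and host 192.168.240.240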

kubernetes calico
  • 1 Answer
  • 1899 Views
laimison
Asked: 2020-02-10 16:19:11 +0800 CST

Debugging DNS resolution issues in Kubernetes

  • 4

I built a Kubernetes cluster with Kubespray on Ubuntu 18.04 and am facing a DNS problem: basically, containers cannot talk to each other via their hostnames.

What works:

  • container-to-container communication by IP address
  • internet access from inside containers
  • resolving kubernetes.default

The Kubernetes master:

root@k8s-1:~# cat /etc/resolv.conf | grep -v ^\\#
nameserver 127.0.0.53
search home
root@k8s-1:~# 

A pod:

root@k8s-1:~# kubectl exec dnsutils cat /etc/resolv.conf
nameserver 169.254.25.10
search default.svc.cluster.local svc.cluster.local cluster.local home
options ndots:5
root@k8s-1:~# 

The CoreDNS pods are healthy:

root@k8s-1:~# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns        
NAME                       READY   STATUS    RESTARTS   AGE
coredns-58687784f9-8rmlw   1/1     Running   0          35m
coredns-58687784f9-hp8hp   1/1     Running   0          35m
root@k8s-1:~#

Logs from the CoreDNS pods:

root@k8s-1:~# kubectl describe pods --namespace=kube-system -l k8s-app=kube-dns | tail -n 2
  Normal   Started           35m                 kubelet, k8s-2     Started container coredns
  Warning  DNSConfigForming  12s (x33 over 35m)  kubelet, k8s-2     Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 4.2.2.1 4.2.2.2 208.67.220.220

root@k8s-1:~# kubectl logs --namespace=kube-system coredns-58687784f9-8rmlw
.:53
2020-02-09T22:56:14.390Z [INFO] plugin/reload: Running configuration MD5 = b9d55fc86b311e1d1a0507440727efd2
2020-02-09T22:56:14.391Z [INFO] CoreDNS-1.6.0
2020-02-09T22:56:14.391Z [INFO] linux/amd64, go1.12.7, 0a218d3
CoreDNS-1.6.0
linux/amd64, go1.12.7, 0a218d3
root@k8s-1:~#

root@k8s-1:~# kubectl logs --namespace=kube-system coredns-58687784f9-hp8hp
.:53
2020-02-09T22:56:20.388Z [INFO] plugin/reload: Running configuration MD5 = b9d55fc86b311e1d1a0507440727efd2
2020-02-09T22:56:20.388Z [INFO] CoreDNS-1.6.0
2020-02-09T22:56:20.388Z [INFO] linux/amd64, go1.12.7, 0a218d3
CoreDNS-1.6.0
linux/amd64, go1.12.7, 0a218d3
root@k8s-1:~#

CoreDNS appears to be exposed:

root@k8s-1:~# kubectl get svc --namespace=kube-system | grep coredns
coredns                ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP   37m
root@k8s-1:~#

root@k8s-1:~# kubectl get ep coredns --namespace=kube-system
NAME      ENDPOINTS                                                  AGE
coredns   10.233.64.2:53,10.233.65.3:53,10.233.64.2:53 + 3 more...   37m
root@k8s-1:~#

These are the pods I have problems with; the whole cluster is affected by this issue:

root@k8s-1:~# kubectl get pods -o wide -n default
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
busybox                  1/1     Running   0          17m   10.233.66.7   k8s-3   <none>           <none>
dnsutils                 1/1     Running   0          50m   10.233.66.5   k8s-3   <none>           <none>
nginx-86c57db685-p8zhc   1/1     Running   0          43m   10.233.64.3   k8s-1   <none>           <none>
nginx-86c57db685-st7rw   1/1     Running   0          47m   10.233.66.6   k8s-3   <none>           <none>
root@k8s-1:~# 

Pods are reachable by IP address, external DNS works, and kubernetes.default resolves:

root@k8s-1:~# kubectl exec -it nginx-86c57db685-st7rw -- sh -c "ping 10.233.64.3"
PING 10.233.64.3 (10.233.64.3) 56(84) bytes of data.
64 bytes from 10.233.64.3: icmp_seq=1 ttl=62 time=0.481 ms
64 bytes from 10.233.64.3: icmp_seq=2 ttl=62 time=0.551 ms
...

root@k8s-1:~# kubectl exec -it nginx-86c57db685-st7rw -- sh -c "ping google.com"
PING google.com (172.217.21.174) 56(84) bytes of data.
64 bytes from fra07s64-in-f174.1e100.net (172.217.21.174): icmp_seq=1 ttl=61 time=77.9 ms
...

root@k8s-1:~# kubectl exec -it nginx-86c57db685-st7rw -- sh -c "ping kubernetes.default"
PING kubernetes.default.svc.cluster.local (10.233.0.1) 56(84) bytes of data.
64 bytes from kubernetes.default.svc.cluster.local (10.233.0.1): icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from kubernetes.default.svc.cluster.local (10.233.0.1): icmp_seq=2 ttl=64 time=0.069 ms
...

The actual problem:

root@k8s-1:~# kubectl exec -it nginx-86c57db685-st7rw -- sh -c "ping nginx-86c57db685-p8zhc"
ping: nginx-86c57db685-p8zhc: Name or service not known
command terminated with exit code 2
root@k8s-1:~#

root@k8s-1:~# kubectl exec -it nginx-86c57db685-st7rw -- sh -c "ping dnsutils"
ping: dnsutils: Name or service not known
command terminated with exit code 2
root@k8s-1:~#

root@k8s-1:~# kubectl exec -ti busybox -- nslookup nginx-86c57db685-p8zhc
Server:     169.254.25.10
Address:    169.254.25.10:53

** server can't find nginx-86c57db685-p8zhc.default.svc.cluster.local: NXDOMAIN

*** Can't find nginx-86c57db685-p8zhc.svc.cluster.local: No answer
*** Can't find nginx-86c57db685-p8zhc.cluster.local: No answer
*** Can't find nginx-86c57db685-p8zhc.home: No answer
*** Can't find nginx-86c57db685-p8zhc.default.svc.cluster.local: No answer
*** Can't find nginx-86c57db685-p8zhc.svc.cluster.local: No answer
*** Can't find nginx-86c57db685-p8zhc.cluster.local: No answer
*** Can't find nginx-86c57db685-p8zhc.home: No answer

command terminated with exit code 1
root@k8s-1:~#

Am I missing something? How can I fix hostname-based communication between containers?

Many thanks

Update

More checks:

root@k8s-1:~# kubectl exec -ti dnsutils -- nslookup kubernetes.default
Server:     169.254.25.10
Address:    169.254.25.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.233.0.1

I created a StatefulSet:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/web/web.yaml

I can resolve the Service "nginx":

root@k8s-1:~/kplay# k exec dnsutils -it nslookup nginx
Server:     169.254.25.10
Address:    169.254.25.10#53

Name:   nginx.default.svc.cluster.local
Address: 10.233.66.8
Name:   nginx.default.svc.cluster.local
Address: 10.233.64.3
Name:   nginx.default.svc.cluster.local
Address: 10.233.65.5
Name:   nginx.default.svc.cluster.local
Address: 10.233.66.6

StatefulSet members can also be reached when using the FQDN:

root@k8s-1:~/kplay# k exec dnsutils -it nslookup web-0.nginx.default.svc.cluster.local
Server:     169.254.25.10
Address:    169.254.25.10#53

Name:   web-0.nginx.default.svc.cluster.local
Address: 10.233.65.5

root@k8s-1:~/kplay# k exec dnsutils -it nslookup web-1.nginx.default.svc.cluster.local
Server:     169.254.25.10
Address:    169.254.25.10#53

Name:   web-1.nginx.default.svc.cluster.local
Address: 10.233.66.8

But not when using just the hostname:

root@k8s-1:~/kplay# k exec dnsutils -it nslookup web-0
Server:     169.254.25.10
Address:    169.254.25.10#53

** server can't find web-0: NXDOMAIN

command terminated with exit code 1
root@k8s-1:~/kplay# k exec dnsutils -it nslookup web-1
Server:     169.254.25.10
Address:    169.254.25.10#53

** server can't find web-1: NXDOMAIN

command terminated with exit code 1
root@k8s-1:~/kplay#

They all live in the same namespace:

root@k8s-1:~/kplay# k get pods -n default
NAME                     READY   STATUS    RESTARTS   AGE
busybox                  1/1     Running   22         22h
dnsutils                 1/1     Running   22         22h
nginx-86c57db685-p8zhc   1/1     Running   0          22h
nginx-86c57db685-st7rw   1/1     Running   0          22h
web-0                    1/1     Running   0          11m
web-1                    1/1     Running   0          10m

Another test confirming that I can resolve Services:

kubectl create deployment --image nginx some-nginx
kubectl scale deployment --replicas 2 some-nginx
kubectl expose deployment some-nginx --port=12345 --type=NodePort

root@k8s-1:~/kplay# k exec dnsutils -it nslookup some-nginx
Server:     169.254.25.10
Address:    169.254.25.10#53

Name:   some-nginx.default.svc.cluster.local
Address: 10.233.63.137

Final thoughts

An interesting discovery, but maybe this is just how Kubernetes is supposed to work? I can reach Service hostnames and StatefulSet members, which covers reaching an individual pod when that matters. At least for my use of k8s (and probably for most people), reaching an individual pod that is not part of a StatefulSet does not seem very important.
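
For completeness: the per-pod records like web-0.nginx.default.svc.cluster.local come from the headless Service in web.yaml (the Service named nginx with clusterIP: None), not from the pods themselves. A minimal sketch of such a Service:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None        # headless: DNS returns the pod IPs directly
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
EOF

From the default namespace, web-0.nginx should also resolve via the search list; a bare web-0 never will, because pod records live under the Service name.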

kubernetes ubuntu-18.04
  • 1 Answer
  • 7252 Views
laimison
Asked: 2020-01-06 16:02:17 +0800 CST

Kubernetes MySQL cluster in a private cloud

  • 1

I am interested in a MySQL cluster consisting of 1 primary and 2 secondary nodes.

Typically, in a public cloud, we:

  • use external storage

  • use a service such as RDS, so that replication and failover are handled behind that service

  • can recreate a failed pod on a different node, because neither the storage nor the database runs on any of the k8s nodes

A solution that works in a private cloud, but not on Kubernetes:

  • use local storage

  • use the mysqlfailover utility, which can promote a new primary

  • change the DNS record of "mysql-0" (the primary) and instruct the application to refresh DNS, so it sees the new primary after a failover event

Exploring Kubernetes solutions:

  • Which would you use, local storage or NFS? (If NFS, how would you build the cluster across different servers?)

  • Using https://github.com/oracle/mysql-operator, Percona, a similar solution, or even the same mysqlfailover: which one do you prefer, and how does it handle failover scenarios? Open-source options preferred.

If I try to carry over the currently working mysqlfailover solution and migrate it to Kubernetes, I would probably need to set up node affinity so that each pod is attached to its local storage correctly.
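
A sketch of what I imagine the storage side would look like: a local PersistentVolume pinned to one node via nodeAffinity (the names, path, and capacity are hypothetical), so the pod that claims it is always scheduled next to its data:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-0-local
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/mysql-0    # pre-provisioned directory on the node
  nodeAffinity:                 # pins the volume (and thus the pod) to node-1
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
EOF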

This mysqlfailover mechanism would also need improvement (a starting point is here: https://medium.com/@zzdjk6/step-by-step-setup-gtid-based-mysql-replica-and-automatic-failover-with-mysqlfailover-using-docker-489489d2922), because it can, for example, promote mysql-1 as the new primary while the original one (mysql-0) is down. As I understand it, this may not be ideal, because in the usual architecture we always want mysql-0 to be the primary of the StatefulSet, whereas mysqlfailover does exactly the opposite.

So, leaving the existing setup's problems aside, which option would you choose? What steps would you take? Which MySQL and Kubernetes components would you use?

Many thanks

mysql
  • 1 Answer
  • 783 Views
laimison
Asked: 2018-12-19 04:25:18 +0800 CST

Which Prometheus RabbitMQ Exporter to choose for which RabbitMQ version? How can this dependency be installed automatically for any version?

  • 0

At the time of writing this question, the latest RabbitMQ version is 3.7.9 and the latest prometheus_rabbitmq_exporter is 3.7.2.4, so the version numbers differ.

How do I know which version of prometheus_rabbitmq_exporter is compatible with which RabbitMQ?

Another issue: I also have an older RabbitMQ at 3.3.5. How do I find the correct version of prometheus_rabbitmq_exporter for it?

Ideally this would be a very simple method, because I want to automate it so that my script can handle prometheus_rabbitmq_exporter for any RabbitMQ version.
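
A sketch of the kind of automation I have in mind, assuming the exporter's tags track RabbitMQ's major.minor (the GitHub repo path and the version-detection command are assumptions and may need adjusting per release):

#!/bin/sh
# Pick the newest prometheus_rabbitmq_exporter release whose tag matches
# the installed RabbitMQ major.minor.
RMQ_VER=$(rabbitmqctl eval 'rabbit_misc:version().' | tr -d '"')
MAJ_MIN=$(echo "$RMQ_VER" | cut -d. -f1,2)
curl -s https://api.github.com/repos/deadtrickster/prometheus_rabbitmq_exporter/releases \
  | grep -o '"tag_name": *"[^"]*"' \
  | cut -d'"' -f4 \
  | grep "^v\{0,1\}${MAJ_MIN}\." \
  | head -n 1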

Many thanks

linux
  • 1 Answer
  • 788 Views
laimison
Asked: 2016-08-09 03:49:22 +0800 CST

Connecting two networks (all machines) with OpenVPN. Is this achievable only via a gateway running the OpenVPN client, or via any machine?

  • 0

An example:

LAN A - NO EXTERNAL IP - OPENVPN CLIENT:
192.168.1.1 gw
192.168.1.2 pc
192.168.1.3 pc

LAN B - EXTERNAL IP - OPENVPN SERVER:
10.0.0.1 gw
10.0.0.2 pc
10.0.0.3 pc

All machines should be able to reach each other. Where do I need to set up the OpenVPN client? Is that possible only on the 192.168.1.1 gateway, or on any machine in LAN A?

I would appreciate a NO or YES answer with some detail.

(I have some restrictions on running OpenVPN on the 192.168.1.1 router, and I also cannot install an OpenVPN client on some of the 192.168.1.x hosts to connect them to the OpenVPN server individually.)
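
From what I have read so far, the client does not have to run on the gateway: any LAN A machine can run it, as long as the server knows LAN A is behind that client and the LAN A hosts can route replies back. A sketch of the usual site-to-site pieces (file names and the client CN are hypothetical):

# server.conf on the LAN B server:
server 10.8.0.0 255.255.255.0          # VPN transfer network
route 192.168.1.0 255.255.255.0        # kernel route for LAN A via the tun
push "route 10.0.0.0 255.255.255.0"    # give the client a route to LAN B
client-config-dir ccd

# ccd/lan-a-client (file named after the client's certificate CN):
iroute 192.168.1.0 255.255.255.0       # LAN A is behind this client

# On the client machine, if it is not 192.168.1.1:
echo 1 > /proc/sys/net/ipv4/ip_forward
# ...and either add a route "10.0.0.0/24 via <client LAN IP>" on 192.168.1.1
# (or on each LAN A PC), or NAT on the client instead:
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
# LAN B hosts likewise need a route to 192.168.1.0/24 via the OpenVPN
# server, unless it is already their gateway.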

linux openvpn iptables route
  • 1 Answer
  • 10503 Views
