AskOverflow.Dev


Questions tagged [libvirt] (server)

Joshua Boniface
Asked: 2022-11-09 00:15:09 +0800 CST

Linux Libvirt/KVM: place all VMs in a cpuset by default, without reconfiguration

  • 5

I'm trying to set up a system where a few processes on a set of hosts live in one (shielded) cpuset while everything else (i.e. all the VMs) lives in another. The goal is a hyperconverged system where the VMs run on an arbitrary set of CPU cores while the storage processes run on the remaining cores, with isolation between the two. I picked cpusets for the job because it seemed like the "simplest" approach.

However, this doesn't seem to work properly. Moving the VMs when first setting up the cset shield works, but starting a VM later fails with an error like:

libvirt: Cgroup error : Invalid value '0-31' for 'cpuset.cpus': Invalid argument

Apparently it tries to place the VM in the root cset, which is no longer allowed. This is an almost entirely default configuration: the VMs only specify a number of cores, with no individual CPU IDs or similar tuning (that is a requirement for this setup; nodes may be asymmetric, with different core counts, topologies, etc., and VMs can be live-migrated between them).

But overall, what I'd like to know is: can Libvirt/KVM be configured to use a specific cpuset by default, without reconfiguring the VMs in any way (no manual CPU-pinning shenanigans) and without any manual per-process tweaking?
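One direction worth exploring (a sketch only, not something this question confirms works): on a cgroup-v2 host with systemd, libvirt places all VM scopes under machine.slice, so constraining that slice caps every VM by default without touching any domain XML. The drop-in path and CPU range below are placeholders:

```ini
# /etc/systemd/system/machine.slice.d/cpuset.conf  (hypothetical path and range)
[Slice]
# Confine everything under machine.slice (i.e. all libvirt-managed VMs) to these cores
AllowedCPUs=16-31
```

After a `systemctl daemon-reload`, newly started VMs would inherit the restriction; the storage processes could be confined to the complementary cores the same way via their own slice.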

kvm-virtualization libvirt
  • 1 Answer
  • 13 Views
Keith
Asked: 2022-04-18 05:11:56 +0800 CST

Automatically assign a public IPv4 to each KVM VM as it is created?

  • 0

I'm wondering how I would go about assigning a public IPv4 to every VM as it is created.

Setup: a host server on CentOS 8 with 3 IPs, using libvirt and KVM, with eth0 as the interface and br0 as the virtualization bridge.

After a lot of trial and error I managed to do it manually, by using the bridge and assigning the IP address to the interface in the guest OS's network file.

I'd like this to be automatic, though, considering that if I reinstall the OS now, it will come back with no IP address and I'd have to connect to the guest and manually edit the IPv4 address in the network file every time. How can I avoid that?

Goal: each IPv4 is hard-locked to a VM and is kept regardless of OS reinstalls.

Optional goal: if any of the host's IPv4s is unused, it should be assigned to the next VM created.

Do I have to write my own software to do this every time, or is there an easier way?
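For comparison (a sketch, not confirmed by this question): if the VMs were attached to a libvirt-managed routed network instead of a plain host bridge, libvirt's built-in dnsmasq could hand each MAC a fixed address that survives guest reinstalls. The addresses and MACs below are placeholders (203.0.113.0/24 is a documentation range):

```xml
<!-- Hypothetical libvirt network definition: virsh net-define public.xml -->
<network>
  <name>public</name>
  <forward mode='route'/>
  <bridge name='virbr1'/>
  <ip address='203.0.113.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='203.0.113.10' end='203.0.113.20'/>
      <!-- one reservation per VM: the IP follows the MAC across reinstalls -->
      <host mac='52:54:00:aa:bb:01' name='vm1' ip='203.0.113.10'/>
      <host mac='52:54:00:aa:bb:02' name='vm2' ip='203.0.113.11'/>
    </dhcp>
  </ip>
</network>
```

Since the reservation is keyed on the guest's MAC address rather than anything inside the guest, reinstalling the OS leaves the mapping intact.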

virtualization networking kvm-virtualization ip-address libvirt
  • 1 Answer
  • 185 Views
zenlord
Asked: 2021-12-23 23:23:25 +0800 CST

QEMU - win10 guest state lost after dist-upgrade of the Debian host

  • 0

I'm running a headless Debian host with a win10 guest that I rarely log into via VNC. Last week I upgraded Debian from Buster to Bullseye, which took QEMU from v3.1 to v5.2 (and libvirt from 5.0 to 7.0). Of course, my due-diligence checklist did not include taking a snapshot of the guest. When I now log into the system, I'm greeted by the Windows installer.

I'm familiar with Debian but quite new to QEMU/libvirt - any pointers on how I could try to restore the state of my guest OS? Reinstalling is no big deal, but we live and learn :).

This is the install command I used:

virt-install \
  --name Win10 \
  --ram 2048 \
  --cpu host \
  --hvm \
  --vcpus 2 \
  --os-type windows \
  --os-variant win10 \
  --disk /var/lib/libvirt/images/win10.qcow2,size=30,bus=virtio \
  --disk /var/lib/libvirt/boot/Win10_2004_English_x64.iso,device=cdrom,bus=sata \
  --disk /var/lib/libvirt/boot/virtio-win-0.1.171.iso,device=cdrom,bus=sata \
  --boot cdrom \
  --network bridge=br0 \
  --graphics vnc,listen=0.0.0.0,port=5901 \
  --noautoconsole \
  --check all=off

/edit: to clarify: I'd like to restore the state of my guest OS to what it was before the dist-upgrade. Maybe I need to fall back on filesystem backups (which I have), or maybe I need to update the qemu/libvirt configuration?
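A first diagnostic step (a sketch, assuming the image path from the install command above) would be to check whether the qcow2 file still holds the old installation or was recreated:

```shell
# If "disk size" is close to zero, the image is fresh and the old state
# will have to come from the filesystem backups instead.
qemu-img info /var/lib/libvirt/images/win10.qcow2
```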

libvirt qemu
  • 1 Answer
  • 44 Views
Norman Pellet
Asked: 2021-08-03 01:53:17 +0800 CST

netplan + libvirt - should I set up the virbr0 virtual bridge?

  • 0

As I understand it, the virbr0 and virbr0-nic interfaces are created and managed by libvirt.


● 4: virbr0
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: ether
           State: no-carrier (unmanaged)
          Driver: bridge
      HW Address: 52:54:00:0f:26:e6
         Address: 192.168.122.1

● 5: virbr0-nic
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: ether
           State: off (unmanaged)
          Driver: tun
      HW Address: 52:54:00:0f:26:e6

But libvirt hasn't added anything to my netplan folder (and nothing shows up in nmcli or /etc/network/interfaces either). I assume these interfaces are created and brought up by libvirt when the daemon starts.

So is it better not to declare them in netplan, or should I add them to my config?

Also, theoretically, what would a virtual bridge configured through netplan look like?
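Since libvirt creates virbr0 itself when the daemon starts, leaving it out of netplan is the usual choice. For the theoretical question, a bridge that netplan does own might look like the sketch below (file name, interface name, and addressing are placeholders):

```yaml
# Hypothetical /etc/netplan/01-br0.yaml
network:
  version: 2
  ethernets:
    enp3s0: {}
  bridges:
    br0:
      interfaces: [enp3s0]
      dhcp4: true
```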

networking kvm-virtualization bridge libvirt linux-networking
  • 1 Answer
  • 226 Views
Romain Deterre
Asked: 2021-08-01 12:05:04 +0800 CST

Redirecting a USB device to a virtual machine with virt-manager doesn't work

  • 4

I have a Fedora workstation running an Ubuntu 16.04 virtual machine (KVM hypervisor). I'd like to redirect a USB device to the VM, but selecting "Virtual Machine | Redirect USB device" in virt-manager gives the following error:

spice-client-error-quark: Could not redirect <USB device name> at 1-4:
Error setting USB device node ACL: 'Not authorized' (0)

The error window has a "Details" section that only shows "USB redirection error".

Here's what I've tried so far, without success:

  1. Following a suggestion found here, I created an /etc/udev/rules.d/50-spice.rules file with the following contents, then created a `spice` group and added my user to it

    SUBSYSTEM=="usb", GROUP="spice", MODE="0660"
    SUBSYSTEM=="usb_device", GROUP="spice", MODE="0660"
    
  2. Downgrading spice-gtk from the latest Fedora 33 version (0.39-1) to 0.38-3.

  3. Disabling SELinux

  4. sudo chmod 4755 /usr/libexec/spice-gtk-x86_64/spice-client-glib-usb-acl-helper

  5. Upgrading to Fedora 34, which ships spice-gtk 0.39-2
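One more step that may be worth ruling out (an assumption, not something the question confirms was done): udev rules added while a device is already plugged in do not apply retroactively, so the 50-spice.rules file from step 1 only takes effect after a reload and re-trigger (or a re-plug of the device):

```shell
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=usb
```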

kvm-virtualization usb libvirt
  • 2 Answers
  • 3532 Views
wtywtykk
Asked: 2021-06-01 10:34:34 +0800 CST

virsh reports unknown feature amd-sev-es

  • 0

I updated my CentOS and now I can't start any VM. It says:

error: failed to get emulator capabilities
error: internal error: unknown feature amd-sev-es

But I'm using an Intel CPU (E5-2678 v3), so it can't have AMD features. How do I disable this feature?

Deleting /var/cache/libvirt/qemu/capabilities/* doesn't work. "virsh domcapabilities" returns the error above.

Versions:

CentOS 8 Stream

libvirt-6.0.0-35.module_el8.5.0+746+bbd5d70c.x86_64

kernel-core-4.18.0-305.el8.x86_64
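One hypothesis worth testing (a sketch, not a confirmed fix): libvirtd only re-probes QEMU's capabilities at daemon startup, so deleting the cache while the daemon is running changes nothing. Clearing it with the daemon stopped may make the removal stick:

```shell
sudo systemctl stop libvirtd
sudo rm -f /var/cache/libvirt/qemu/capabilities/*
sudo systemctl start libvirtd
virsh domcapabilities   # the daemon should now re-probe the emulator
```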

kvm-virtualization libvirt centos8
  • 2 Answers
  • 833 Views
MrCalvin
Asked: 2021-04-26 12:47:42 +0800 CST

Disable libvirt storage pools

  • -1

It seems a new storage pool is created every time I create a VM with virt-install.

But I don't use storage pools at all!

I've looked through the virt-install manual but couldn't find a way to prevent this. I've also looked for any setting in the libvirt/qemu config files that would disable storage pools in any way, but couldn't find anything there either.

If I run virsh pool-capabilities I do get an XML output with some "supported" attributes that it might be interesting to set to "no", but again I can't find anywhere to edit those settings.

Any help would be appreciated.

I mostly edit/create/manage VMs by editing the VM XML files directly, and I only use local storage (qcow2 files and logical devices). I also manage the VMs from the console using virsh only (no GUI). I don't see any benefit in using storage pools.

Edit: my virt-install cmd:

virt-install \
--virt-type kvm \
--name SRV01 \ 
--metadata description="SRV2019" \
--vcpus 2 \
--memory 2048 \
--boot uefi \
--cpu host \
--os-variant win2k19 \
--features acpi=on \
--disk device=disk,path="/mnt/data-r1/vm/w2k16-01/Disk1.qcow2",format=qcow2,bus=virtio,cache=none,boot_order=1 \
--disk device=cdrom,path="/mnt/data-r1/vm/iso/WinSrv2016.iso",boot_order=2,bus=scsi,boot_order=6 \
--disk device=cdrom,path="/mnt/data-r1/vm/iso/virtio-win-0.1.190.iso",bus=sata \
--controller type=virtio-serial \
--controller type=scsi,model=virtio-scsi \
--network bridge=brLAN,model=virtio \
--graphics vnc,password=pass,port=5900,keymap=local,listen=0.0.0.0 \
--noautoconsole \
--video vga \
--memballoon none \
--noreboot
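As a workaround sketch (the pool name below is an assumption: virt-install typically defines a directory pool for the disk's parent directory), the auto-created pool can be removed after installation without touching the disk files themselves:

```shell
virsh pool-list --all          # find the auto-created pool
virsh pool-destroy w2k16-01    # stop it (hypothetical pool name)
virsh pool-undefine w2k16-01   # drop the persistent definition
```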
kvm-virtualization libvirt virsh
  • 1 Answer
  • 336 Views
CH06
Asked: 2021-03-27 00:42:46 +0800 CST

virt-install error creating an instance in qemu-kvm

  • 0

OS: Debian 10.4, libvirtd version: 5.0.0

Hello!

I need to create an instance in qemu-kvm, using this command:

virt-install --connect qemu:///system --virt-type kvm --name test01 --ram=2048 --vcpus=2 --disk /opt/test01/test01.img,bus=virtio,size=10 --pxe --boot uefi --noautoconsole --graphics none  --hvm  --network bridge:eth0  --description "Test VM with w2k16" --os-type=windows --debug

But it returns:

[Fri, 26 Mar 2021 10:26:07 virt-install 1172] DEBUG (cli:253)   File "/usr/share/virt-manager/virt-install", line 955, in <module>
    sys.exit(main())
  File "/usr/share/virt-manager/virt-install", line 949, in main
    start_install(guest, installer, options)
  File "/usr/share/virt-manager/virt-install", line 625, in start_install
    fail(e, do_exit=False)
  File "/usr/share/virt-manager/virtinst/cli.py", line 253, in fail
    logging.debug("".join(traceback.format_stack()))

[Fri, 26 Mar 2021 10:26:07 virt-install 1172] ERROR (cli:254) Unable to add bridge eth0 port vnet0: Operation not supported
[Fri, 26 Mar 2021 10:26:07 virt-install 1172] DEBUG (cli:256) 
Traceback (most recent call last):
  File "/usr/share/virt-manager/virt-install", line 598, in start_install
    transient=options.transient)
  File "/usr/share/virt-manager/virtinst/installer.py", line 419, in start_install
    doboot, transient)
  File "/usr/share/virt-manager/virtinst/installer.py", line 362, in _create_guest
    domain = self.conn.createXML(install_xml or final_xml, 0)
  File "/usr/lib/python3/dist-packages/libvirt.py", line 3732, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirt.libvirtError: Unable to add bridge eth0 port vnet0: Operation not supported
[Fri, 26 Mar 2021 10:26:07 virt-install 1172] DEBUG (cli:267) Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///system start test01
otherwise, please restart your installation.
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///system start test01
otherwise, please restart your installation.
root@ctng-flc-test01:/opt/test01#   virsh --connect qemu:///system start test01
error: failed to get domain 'test01'

br0 is fine in Debian; it can ping other IPs on the physical network. I can make and receive ssh connections on br0's IP.

I don't understand which is the parent error:

Domain installation does not appear to have been successful.

or

ERROR (cli:254) Unable to add bridge eth0 port vnet0: Operation not supported

and how to fix them.

Thanks again!
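One reading of the traceback (a hypothesis, since the question doesn't say which device the guest was meant to use): `--network bridge:eth0` asks libvirt to treat the physical NIC eth0 itself as a bridge, which it is not, hence "Unable to add bridge eth0 port vnet0". Since br0 already exists and works, pointing the guest at it may avoid the error:

```shell
# Same invocation as in the question, with only the network option changed
virt-install --connect qemu:///system --virt-type kvm --name test01 \
  --ram=2048 --vcpus=2 --disk /opt/test01/test01.img,bus=virtio,size=10 \
  --pxe --boot uefi --noautoconsole --graphics none --hvm \
  --network bridge=br0 \
  --description "Test VM with w2k16" --os-type=windows --debug
```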

kvm-virtualization libvirt debian-buster virt-install
  • 1 Answer
  • 1535 Views
laimison
Asked: 2021-03-23 17:29:27 +0800 CST

Can't start a VM/domain in KVM: Failed to get "write" lock

  • 1

After a host reboot, I can't start the VM:

user@server-1:~$ virsh start docker-1
error: Failed to start domain docker-1
error: internal error: process exited while connecting to monitor: 2021-03-23T01:21:58.149079Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Failed to get "write" lock
Is another process using the image [/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2]?

The file is not in use:

user@server-1:~$ sudo fuser -u /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2
user@server-1:~$ sudo lsof | grep qcow
user@server-1:~$ virsh list
 Id   Name   State
--------------------

user@server-1:~$

I tried this on Ubuntu 18.04/qemu 2.11 and then upgraded to Ubuntu 20.04/qemu 4.2.1.

The upgrade did not help solve the problem.

The VM is very large, so I can't easily create a new VM from it; there is no free space.

Any help to recover from this situation and start this domain?

Thanks


UPDATE

Attaching the output of lslocks:

user@server-1:~$ sudo lslocks -u
COMMAND           PID  TYPE SIZE MODE  M      START        END PATH
blkmapd           583 POSIX   4B WRITE 0          0          0 /run/blkmapd.pid
rpcbind          1181 FLOCK      WRITE 0          0          0 /run/rpcbind.lock
lxcfs            1312 POSIX   5B WRITE 0          0          0 /run/lxcfs.pid
atd              1456 POSIX   5B WRITE 0          0          0 /run/atd.pid
whoopsie         1454 FLOCK      WRITE 0          0          0 /run/lock/whoopsie/lock
virtlogd         6143 POSIX   4B WRITE 0          0          0 /run/virtlogd.pid
multipathd       1106 POSIX   4B WRITE 0          0          0 /run/multipathd.pid
containerd       1401 FLOCK 128K WRITE 0          0          0 /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db
tracker-miner-f  1561 POSIX 3.6M READ  0 1073741826 1073742335 /var/lib/gdm3/.cache/tracker/meta.db
tracker-miner-f  1561 POSIX  32K READ  0        128        128 /var/lib/gdm3/.cache/tracker/meta.db-shm
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/network/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/interface/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/secrets/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/storage/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/nodedev/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/nwfilter/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/qemu/driver.pid
tracker-miner-f  8956 POSIX 3.6M READ  0 1073741826 1073742335 /home/user/.cache/tracker/meta.db
tracker-miner-f  8956 POSIX  32K READ  0        128        128 /home/user/.cache/tracker/meta.db-shm
dmeventd          581 POSIX   4B WRITE 0          0          0 /run/dmeventd.pid
cron             1445 FLOCK   5B WRITE 0          0          0 /run/crond.pid
gnome-shell      1713 FLOCK      WRITE 0          0          0 /run/user/126/wayland-0.lock
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirtd.pid

And attaching the process table:

user@server-1:~$ ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 01:11 ?        00:00:03 /sbin/init
root           2       0  0 01:11 ?        00:00:00 [kthreadd]
root           3       2  0 01:11 ?        00:00:00 [rcu_gp]
root           4       2  0 01:11 ?        00:00:00 [rcu_par_gp]
root           6       2  0 01:11 ?        00:00:00 [kworker/0:0H-kblockd]
root           9       2  0 01:11 ?        00:00:00 [mm_percpu_wq]
root          10       2  0 01:11 ?        00:00:00 [ksoftirqd/0]
root          11       2  0 01:11 ?        00:00:01 [rcu_sched]
root          12       2  0 01:11 ?        00:00:00 [migration/0]
root          13       2  0 01:11 ?        00:00:00 [idle_inject/0]
root          14       2  0 01:11 ?        00:00:00 [cpuhp/0]
root          15       2  0 01:11 ?        00:00:00 [cpuhp/1]
root          16       2  0 01:11 ?        00:00:00 [idle_inject/1]
root          17       2  0 01:11 ?        00:00:00 [migration/1]
root          18       2  0 01:11 ?        00:00:00 [ksoftirqd/1]
root          20       2  0 01:11 ?        00:00:00 [kworker/1:0H-kblockd]
root          21       2  0 01:11 ?        00:00:00 [cpuhp/2]
root          22       2  0 01:11 ?        00:00:00 [idle_inject/2]
root          23       2  0 01:11 ?        00:00:00 [migration/2]
root          24       2  0 01:11 ?        00:00:00 [ksoftirqd/2]
root          26       2  0 01:11 ?        00:00:00 [kworker/2:0H-kblockd]
root          27       2  0 01:11 ?        00:00:00 [cpuhp/3]
root          28       2  0 01:11 ?        00:00:00 [idle_inject/3]
root          29       2  0 01:11 ?        00:00:00 [migration/3]
root          30       2  0 01:11 ?        00:00:00 [ksoftirqd/3]
root          32       2  0 01:11 ?        00:00:00 [kworker/3:0H-events_highpri]
root          33       2  0 01:11 ?        00:00:00 [kdevtmpfs]
root          34       2  0 01:11 ?        00:00:00 [netns]
root          35       2  0 01:11 ?        00:00:00 [rcu_tasks_kthre]
root          36       2  0 01:11 ?        00:00:00 [kauditd]
root          38       2  0 01:11 ?        00:00:00 [khungtaskd]
root          39       2  0 01:11 ?        00:00:00 [oom_reaper]
root          40       2  0 01:11 ?        00:00:00 [writeback]
root          41       2  0 01:11 ?        00:00:00 [kcompactd0]
root          42       2  0 01:11 ?        00:00:00 [ksmd]
root          43       2  0 01:11 ?        00:00:00 [khugepaged]
root          89       2  0 01:11 ?        00:00:00 [kintegrityd]
root          90       2  0 01:11 ?        00:00:00 [kblockd]
root          91       2  0 01:11 ?        00:00:00 [blkcg_punt_bio]
root          93       2  0 01:11 ?        00:00:00 [tpm_dev_wq]
root          94       2  0 01:11 ?        00:00:00 [ata_sff]
root          95       2  0 01:11 ?        00:00:00 [md]
root          96       2  0 01:11 ?        00:00:00 [edac-poller]
root          97       2  0 01:11 ?        00:00:00 [devfreq_wq]
root          98       2  0 01:11 ?        00:00:00 [watchdogd]
root         101       2  0 01:11 ?        00:00:00 [kswapd0]
root         102       2  0 01:11 ?        00:00:00 [ecryptfs-kthrea]
root         104       2  0 01:11 ?        00:00:00 [kthrotld]
root         105       2  0 01:11 ?        00:00:00 [irq/122-aerdrv]
root         106       2  0 01:11 ?        00:00:00 [acpi_thermal_pm]
root         107       2  0 01:11 ?        00:00:00 [vfio-irqfd-clea]
root         111       2  0 01:11 ?        00:00:00 [ipv6_addrconf]
root         120       2  0 01:11 ?        00:00:00 [kstrp]
root         123       2  0 01:11 ?        00:00:00 [kworker/u9:0-xprtiod]
root         138       2  0 01:11 ?        00:00:00 [charger_manager]
root         197       2  0 01:11 ?        00:00:00 [cryptd]
root         224       2  0 01:11 ?        00:00:00 [scsi_eh_0]
root         225       2  0 01:11 ?        00:00:00 [scsi_tmf_0]
root         226       2  0 01:11 ?        00:00:00 [scsi_eh_1]
root         227       2  0 01:11 ?        00:00:00 [scsi_tmf_1]
root         228       2  0 01:11 ?        00:00:00 [scsi_eh_2]
root         229       2  0 01:11 ?        00:00:00 [scsi_tmf_2]
root         230       2  0 01:11 ?        00:00:00 [scsi_eh_3]
root         231       2  0 01:11 ?        00:00:00 [scsi_tmf_3]
root         232       2  0 01:11 ?        00:00:00 [scsi_eh_4]
root         233       2  0 01:11 ?        00:00:00 [scsi_tmf_4]
root         234       2  0 01:11 ?        00:00:00 [scsi_eh_5]
root         235       2  0 01:11 ?        00:00:00 [scsi_tmf_5]
root         241       2  0 01:11 ?        00:00:00 [kworker/0:1H]
root         245       2  0 01:11 ?        00:00:00 [scsi_eh_6]
root         246       2  0 01:11 ?        00:00:00 [scsi_tmf_6]
root         247       2  0 01:11 ?        00:00:02 [usb-storage]
root         248       2  0 01:11 ?        00:00:00 [scsi_eh_7]
root         249       2  0 01:11 ?        00:00:00 [scsi_tmf_7]
root         250       2  0 01:11 ?        00:00:00 [usb-storage]
root         251       2  0 01:11 ?        00:00:00 [kworker/3:1H-kblockd]
root         252       2  0 01:11 ?        00:00:00 [uas]
root         253       2  0 01:11 ?        00:00:00 [kworker/2:1H-kblockd]
root         254       2  0 01:11 ?        00:00:00 [kworker/1:1H-kblockd]
root         286       2  0 01:11 ?        00:00:00 [raid5wq]
root         287       2  0 01:11 ?        00:00:00 [kdmflush]
root         288       2  0 01:11 ?        00:00:00 [kdmflush]
root         290       2  0 01:11 ?        00:00:00 [kdmflush]
root         292       2  0 01:11 ?        00:00:00 [kdmflush]
root         297       2  0 01:11 ?        00:00:00 [kdmflush]
root         319       2  0 01:11 ?        00:00:00 [mdX_raid1]
root         326       2  0 01:11 ?        00:00:00 [kdmflush]
root         327       2  0 01:11 ?        00:00:00 [kdmflush]
root         328       2  0 01:11 ?        00:00:00 [kdmflush]
root         330       2  0 01:11 ?        00:00:00 [kdmflush]
root         331       2  0 01:11 ?        00:00:00 [kdmflush]
root         363       2  0 01:11 ?        00:00:00 [mdX_raid1]
root         476       2  0 01:11 ?        00:00:00 [jbd2/sda2-8]
root         477       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root         552       2  0 01:11 ?        00:00:00 [rpciod]
root         553       2  0 01:11 ?        00:00:00 [xprtiod]
root         554       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-journald
root         581       1  0 01:11 ?        00:00:01 /sbin/dmeventd -f
root         583       1  0 01:11 ?        00:00:00 /usr/sbin/blkmapd
root         597       1  0 01:11 ?        00:00:01 /lib/systemd/systemd-udevd
root         635       2  0 01:11 ?        00:00:00 [irq/133-mei_me]
root         697       2  0 01:11 ?        00:00:00 [led_workqueue]
root        1102       2  0 01:11 ?        00:00:00 [kaluad]
root        1103       2  0 01:11 ?        00:00:00 [kmpath_rdacd]
root        1104       2  0 01:11 ?        00:00:00 [kmpathd]
root        1105       2  0 01:11 ?        00:00:00 [kmpath_handlerd]
root        1106       1  0 01:11 ?        00:00:04 /sbin/multipathd -d -s
root        1115       2  0 01:11 ?        00:00:00 [jbd2/dm-4-8]
root        1117       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root        1120       2  0 01:11 ?        00:00:00 [loop0]
root        1126       2  0 01:11 ?        00:00:00 [loop1]
root        1129       2  0 01:11 ?        00:00:00 [loop2]
root        1131       2  0 01:11 ?        00:00:00 [jbd2/dm-9-8]
root        1132       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root        1135       2  0 01:11 ?        00:00:00 [loop3]
root        1137       2  0 01:11 ?        00:00:00 [loop4]
root        1138       2  0 01:11 ?        00:00:00 [loop5]
root        1145       2  0 01:11 ?        00:00:00 [jbd2/sde1-8]
root        1146       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
systemd+    1176       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-networkd
root        1177       1  0 01:11 ?        00:00:00 /usr/sbin/rpc.idmapd
_rpc        1181       1  0 01:11 ?        00:00:00 /sbin/rpcbind -f -w
systemd+    1182       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-timesyncd
systemd+    1187       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-resolved
root        1296       1  0 01:11 ?        00:00:00 /usr/lib/accountsservice/accounts-daemon
root        1297       1  0 01:11 ?        00:00:00 /usr/sbin/acpid
avahi       1301       1  0 01:11 ?        00:00:00 avahi-daemon: running [server-1.local]
root        1302       1  0 01:11 ?        00:00:00 /usr/sbin/cupsd -l
message+    1303       1  0 01:11 ?        00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root        1304       1  0 01:11 ?        00:00:01 /usr/sbin/NetworkManager --no-daemon
root        1310       1  0 01:11 ?        00:00:02 /usr/sbin/irqbalance --foreground
root        1312       1  0 01:11 ?        00:00:00 /usr/bin/lxcfs /var/lib/lxcfs
root        1314       1  0 01:11 ?        00:00:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
root        1322       1  0 01:11 ?        00:00:02 /usr/lib/policykit-1/polkitd --no-debug
syslog      1329       1  0 01:11 ?        00:00:00 /usr/sbin/rsyslogd -n -iNONE
root        1335       1  0 01:11 ?        00:00:00 /usr/sbin/smartd -n
root        1340       1  0 01:11 ?        00:00:00 /usr/libexec/switcheroo-control
root        1341       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-logind
root        1342       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-machined
root        1343       1  0 01:11 ?        00:00:09 /usr/lib/udisks2/udisksd
root        1344       1  0 01:11 ?        00:00:00 /sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
avahi       1353    1301  0 01:11 ?        00:00:00 avahi-daemon: chroot helper
root        1383       1  0 01:11 ?        00:00:00 /usr/sbin/cups-browsed
root        1386       1  0 01:11 ?        00:00:00 /usr/sbin/ModemManager --filter-policy=strict
root        1401       1  0 01:11 ?        00:02:22 /usr/bin/containerd
root        1416       1  0 01:11 ?        00:00:00 /usr/sbin/rpc.mountd --manage-gids
root        1445       1  0 01:11 ?        00:00:00 /usr/sbin/cron -f
whoopsie    1454       1  0 01:11 ?        00:00:00 /usr/bin/whoopsie -f
daemon      1456       1  0 01:11 ?        00:00:00 /usr/sbin/atd -f
root        1457       2  0 01:11 ?        00:00:00 [kworker/u9:1-xprtiod]
root        1458       1  0 01:11 ?        00:00:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root        1460       2  0 01:11 ?        00:00:00 [lockd]
kernoops    1463       1  0 01:11 ?        00:00:01 /usr/sbin/kerneloops --test
kernoops    1474       1  0 01:11 ?        00:00:01 /usr/sbin/kerneloops
root        1477       1  0 01:11 ?        00:00:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root        1486       1  0 01:11 ?        00:00:00 /usr/sbin/gdm3
root        1496    1486  0 01:11 ?        00:00:00 gdm-session-worker [pam/gdm-launch-environment]
gdm         1527       1  0 01:11 ?        00:00:00 /lib/systemd/systemd --user
gdm         1528    1527  0 01:11 ?        00:00:00 (sd-pam)
root        1552       2  0 01:11 ?        00:00:00 bpfilter_umh
gdm         1559    1527  0 01:11 ?        00:00:00 /usr/bin/pulseaudio --daemonize=no --log-target=journal
gdm         1561    1527  0 01:11 ?        00:00:00 /usr/libexec/tracker-miner-fs
gdm         1568    1496  0 01:11 tty1     00:00:00 /usr/lib/gdm3/gdm-wayland-session dbus-run-session -- gnome-session --autostart /usr/share/gdm/greeter/autostart
gdm         1577    1527  0 01:11 ?        00:00:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
gdm         1584    1568  0 01:11 tty1     00:00:00 dbus-run-session -- gnome-session --autostart /usr/share/gdm/greeter/autostart
gdm         1585    1584  0 01:11 tty1     00:00:00 dbus-daemon --nofork --print-address 4 --session
rtkit       1586       1  0 01:11 ?        00:00:00 /usr/libexec/rtkit-daemon
gdm         1589    1584  0 01:11 tty1     00:00:00 /usr/libexec/gnome-session-binary --systemd --autostart /usr/share/gdm/greeter/autostart
gdm         1590    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd
gdm         1600    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd-fuse /run/user/126/gvfs -f -o big_writes
gdm         1608    1527  0 01:11 ?        00:00:01 /usr/libexec/gvfs-udisks2-volume-monitor
gdm         1640    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-mtp-volume-monitor
gdm         1648    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-goa-volume-monitor
gdm         1653    1527  0 01:11 ?        00:00:00 /usr/libexec/goa-daemon
gdm         1686       1  0 01:11 tty1     00:00:00 /usr/libexec/dconf-service
gdm         1702    1527  0 01:11 ?        00:00:00 /usr/libexec/goa-identity-service
gdm         1711    1527  0 01:11 ?        00:00:01 /usr/libexec/gvfs-afc-volume-monitor
gdm         1713    1589  0 01:11 tty1     00:00:13 /usr/bin/gnome-shell
gdm         1723    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-gphoto2-volume-monitor
root        1729       1  0 01:11 ?        00:00:00 /usr/lib/upower/upowerd
root        1800       2  0 01:11 ?        00:00:00 [nfsd]
root        1801       2  0 01:11 ?        00:00:00 [nfsd]
root        1802       2  0 01:11 ?        00:00:00 [nfsd]
root        1803       2  0 01:11 ?        00:00:00 [nfsd]
root        1804       2  0 01:11 ?        00:00:00 [nfsd]
root        1805       2  0 01:11 ?        00:00:00 [nfsd]
root        1806       2  0 01:11 ?        00:00:00 [nfsd]
root        1807       2  0 01:11 ?        00:00:00 [nfsd]
gdm         1868       1  0 01:11 tty1     00:00:00 /usr/libexec/at-spi-bus-launcher
gdm         1874    1868  0 01:11 tty1     00:00:00 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
gdm         1880    1713  0 01:11 tty1     00:00:00 /usr/bin/Xwayland :1024 -rootless -noreset -accessx -core -auth /run/user/126/.mutter-Xwaylandauth.XH3U00 -listen 4 -listen 5 -displayfd 6 -listen 7
libvirt+    1916       1  0 01:11 ?        00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
root        1917    1916  0 01:11 ?        00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
gdm         2003       1  0 01:11 tty1     00:00:00 /usr/libexec/xdg-permission-store
gdm         2052       1  0 01:11 tty1     00:00:00 /usr/bin/gjs /usr/share/gnome-shell/org.gnome.Shell.Notifications
gdm         2054       1  0 01:11 tty1     00:00:00 /usr/libexec/at-spi2-registryd --use-gnome-session
gdm         2066    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-sharing
gdm         2069    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-wacom
gdm         2070    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-color
gdm         2075    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-keyboard
gdm         2078    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-print-notifications
gdm         2079    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-rfkill
gdm         2084    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-smartcard
gdm         2090    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-datetime
gdm         2103    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-media-keys
gdm         2110    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-screensaver-proxy
gdm         2111    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-sound
gdm         2112    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-a11y-settings
gdm         2114    1589  0 01:11 tty1     00:00:03 /usr/libexec/gsd-housekeeping
gdm         2116    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-power
gdm         2179    1713  0 01:11 tty1     00:00:00 ibus-daemon --panel disable -r --xim
gdm         2183       1  0 01:11 tty1     00:00:00 /usr/libexec/gsd-printer
gdm         2185    2179  0 01:11 tty1     00:00:00 /usr/libexec/ibus-dconf
gdm         2192       1  0 01:11 tty1     00:00:00 /usr/libexec/ibus-x11 --kill-daemon
gdm         2199    2179  0 01:11 tty1     00:00:00 /usr/libexec/ibus-engine-simple
gdm         2202       1  0 01:11 tty1     00:00:00 /usr/libexec/ibus-portal
colord      2212       1  0 01:11 ?        00:00:00 /usr/libexec/colord
gdm         2268    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd-metadata
root        6057       1  0 01:18 ?        00:00:01 /usr/sbin/libvirtd
root        6143       1  0 01:19 ?        00:00:00 /usr/sbin/virtlogd
root        6562       2  0 01:34 ?        00:00:01 [kworker/2:3-events]
root        7924       2  0 06:06 ?        00:00:00 [loop6]
root        7981       1  0 06:06 ?        00:00:03 /usr/lib/snapd/snapd
root        8320       2  0 08:34 ?        00:00:00 [kworker/0:0-rcu_gp]
root        8891       2  0 09:30 ?        00:00:00 [kworker/1:0-events]
root        8919    1458  0 10:02 ?        00:00:00 sshd: user [priv]
user         8938       1  0 10:02 ?        00:00:00 /lib/systemd/systemd --user
user         8939    8938  0 10:02 ?        00:00:00 (sd-pam)
root        8951       2  0 10:02 ?        00:00:00 [kworker/0:2-events]
user         8954    8938  0 10:02 ?        00:00:00 /usr/bin/pulseaudio --daemonize=no --log-target=journal
user         8956    8938  0 10:02 ?        00:00:00 /usr/libexec/tracker-miner-fs
user         8958    8938  0 10:02 ?        00:00:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
user         8975    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfsd
user         8983    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
user         8995    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-udisks2-volume-monitor
user         9007    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-mtp-volume-monitor
user         9011    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-goa-volume-monitor
user         9015    8938  0 10:02 ?        00:00:00 /usr/libexec/goa-daemon
user         9022    8938  0 10:02 ?        00:00:00 /usr/libexec/goa-identity-service
user         9029    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-afc-volume-monitor
user         9035    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-gphoto2-volume-monitor
user         9185    8919  0 10:02 ?        00:00:00 sshd: user@pts/0
user         9186    9185  0 10:02 pts/0    00:00:00 -bash
root        9258       2  0 10:13 ?        00:00:00 [kworker/3:3-events]
root        9259       2  0 10:13 ?        00:00:00 [kworker/3:4-cgroup_destroy]
root        9294       2  0 10:31 ?        00:00:00 [kworker/1:1]
root        9330       2  0 11:31 ?        00:00:00 [kworker/2:0-events]
root        9334       2  0 11:41 ?        00:00:00 [kworker/u8:2-events_freezable_power_]
root        9348       2  0 11:53 ?        00:00:00 [kworker/u8:0-events_power_efficient]
root        9352       2  0 12:07 ?        00:00:00 [kworker/u8:3-events_unbound]
root        9400       2  0 12:09 ?        00:00:00 [kworker/3:0-events]
root        9403       2  0 12:09 ?        00:00:00 [kworker/0:1-rcu_gp]
root        9413       2  0 12:09 ?        00:00:00 [kworker/3:1-cgroup_destroy]
root        9414       2  0 12:09 ?        00:00:00 [kworker/3:2-events]
root        9415       2  0 12:09 ?        00:00:00 [kworker/3:5-events]
root        9418       2  0 12:09 ?        00:00:00 [kworker/2:1]
root        9419       2  0 12:09 ?        00:00:00 [kworker/3:6]
root        9459       2  0 12:13 ?        00:00:00 [kworker/u8:1-events_unbound]
user         9463    9186  0 12:14 pts/0    00:00:00 ps -ef
user@server-1:~$

Attaching the XML dump of this VM:

user@server-1:~$ virsh dumpxml docker-1
<domain type='kvm'>
  <name>docker-1</name>
  <uuid>dfb49ea5-f6e7-45d1-9422-e3ce97cf6320</uuid>
  <memory unit='KiB'>10485760</memory>
  <currentMemory unit='KiB'>10485760</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
    <boot dev='hd'/>
    <boot dev='network'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>qemu64</model>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='volume' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source pool='default' volume='docker-1-volume-resized.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdc' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdx' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/app/prod/kvm/storage/common-init-docker-1.iso'/>
      <target dev='hdd' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:01:00:00:00:01'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <console type='pty'>
      <target type='virtio' port='1'/>
    </console>
    <channel type='pty'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='vga' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </rng>
  </devices>
</domain>

user@server-1:~$
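As an aside, the disk attachments in a dump like this can be inspected programmatically, which is handy when verifying which backing files a guest still references. A minimal Python sketch — the XML string below is a hand-trimmed copy of two disk entries from the dump above, not live virsh output:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of two disk entries from the dumpxml above.
domain_xml = """<domain type='kvm'>
  <name>docker-1</name>
  <devices>
    <disk type='volume' device='disk'>
      <source pool='default' volume='docker-1-volume-resized.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>"""

root = ET.fromstring(domain_xml)
disks = {}
for disk in root.iter('disk'):
    target = disk.find('target').get('dev')
    source = disk.find('source')
    # File-backed disks carry a 'file' attribute, storage-pool volumes a 'volume' attribute.
    disks[target] = source.get('file') or source.get('volume')

for dev, path in disks.items():
    print(dev, path)
```

In practice the XML would come from virsh dumpxml docker-1 rather than an inline string.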

kvm-virtualization libvirt qemu
  • 1 answer
  • 4296 Views
Martin Hope
Blake Simmons
Asked: 2021-03-02 13:36:34 +0800 CST

Why does my docker registry refuse connections from VMs on the local network, but not from the host?

  • 1

For context - I'm trying to deploy OKD in an air-gapped environment, which requires mirroring the image registry. During the install, the other machines on the network then pull from this private, secured registry.

Describing the environment - the host machine running the registry container is on CentOS 7.6. The other machines are all VMs running Fedora CoreOS under libvirt. The VMs and the host are connected through a virtual network created with libvirt, which includes DHCP settings for the VMs (configured via virsh net-edit) that give them static IPs. The host also runs the DNS server (bind), which as far as I can tell is configured correctly, since I can ping each machine from the others by its fully qualified domain name and reach specific ports (for example, the port an apache server on the host is listening on). Podman is used in place of Docker for OKD's container management, but as far as I know the commands are identical.

I run the registry in the air-gapped environment with the following command:

sudo podman run --name mirror-registry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z \
-v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e REGISTRY_AUTH=htpasswd \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.pem -e REGISTRY_HTTP_TLS_KEY=/certs/registry-key.pem \
-d docker.io/library/registry:latest

It can be reached with curl -u username:password https://host-machine.example.local:5000/v2/_catalog, which returns {"repositories":[]}. I believe this confirms that my TLS and authorization configuration is correct. However, if I transfer the ca.pem file (used to sign the SSL certificate the registry uses) to one of the VMs on the virtual network and try the same curl command, I get the error:

connect to 192.168.x.x port 5000 failed: Connection refused
Failed to connect to host-machine.example.local port 5000: Connection refused
Closing connection 0

This is strange to me, because in the past I have been able to communicate with the registry from the VMs using this approach, and I'm not sure what has changed.

After further digging, it seems there is some kind of problem with the port itself, but I can't pin down where. For example, if I run sudo netstat -tulpn | grep LISTEN on the host, I get a line indicating that podman (conmon) is listening on the correct port:

tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 48337/conmon

However, if I test whether that port is reachable from a VM (nc -zvw5 192.168.x.x 5000), I get a similar error: Ncat: Connection refused. If I run the same test against any other listening port on the host, it reports a successful connection.
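The same reachability test can also be scripted without nc, which makes it easy to sweep several ports at once. A minimal sketch — it probes a throwaway local listener rather than the 192.168.x.x host, since those addresses are placeholders:

```python
import socket

def port_open(host, port, timeout=5.0):
    """TCP connect test, equivalent in spirit to nc -zvw5 host port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a throwaway local listener instead of the registry host.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('127.0.0.1', 0))              # bind to any free port
listener.listen(1)
port = listener.getsockname()[1]

open_before = port_open('127.0.0.1', port)       # listener present -> connect succeeds
listener.close()
closed_after = not port_open('127.0.0.1', port)  # listener gone -> connection refused

print(open_before, closed_after)
```

Against the real hosts, something like port_open('192.168.2.1', 5000) from a VM would mirror the nc probe above.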

Note that I have completely disabled firewalld, and as far as I can tell all ports are open.

I'm not sure whether the problem lies in my DNS setup, the virtual network, or the registry itself, and I'm not quite sure how to diagnose it further. Any insight would be appreciated.

The network definition:

<network connections='6'>
  <name>okd</name>
  <uuid>2ce10cce-9bb6-4d5d-950f-15427172b196</uuid>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:d9:d6:95'/>
  <domain name='okd'/>
  <ip address='192.168.2.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.2.200' end='192.168.2.254'/>
      <host mac='52:54:00:45:93:07' name='okd-bootstrap' ip='192.168.2.200'/>
      <host mac='52:54:00:f0:0a:1c' name='okd-master1' ip='192.168.2.201'/>
      <host mac='52:54:00:d1:29:9e' name='okd-master2' ip='192.168.2.202'/>
      <host mac='52:54:00:c9:a4:bb' name='okd-master3' ip='192.168.2.203'/>
      <host mac='52:54:00:25:5d:48' name='okd-worker1' ip='192.168.2.204'/>
      <host mac='52:54:00:1e:90:3c' name='okd-worker2' ip='192.168.2.205'/>
    </dhcp>
  </ip>
</network>
ssl libvirt docker-registry containers virtual-network
  • 1 answer
  • 1116 Views
