Questions [qemu] (server)

Oleg Neumyvakin
Asked: 2022-01-27 08:51:17 +0800 CST

How can I set an SELinux boolean with a custom policy?

  • 0

I know SELinux booleans can be set with setsebool like this:

setsebool -P virt_qemu_ga_read_nonsecurity_files 1

But I would like to set the virt_qemu_ga_read_nonsecurity_files boolean with a custom SELinux policy.

Is that even possible? How could I do it?
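
For completeness, a hedged sketch of the policy-store route (semanage writes to the same policy store that custom modules are installed into; these are standard commands, shown as context rather than as the custom-policy answer being asked for):

semanage boolean --modify --on virt_qemu_ga_read_nonsecurity_files
semanage boolean --list | grep virt_qemu_ga_read_nonsecurity_files   # check default/current value
getsebool virt_qemu_ga_read_nonsecurity_files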

kvm-virtualization selinux qemu
  • 1 answer
  • 143 Views
zenlord
Asked: 2021-12-23 23:23:25 +0800 CST

QEMU - win10 guest state lost after dist-upgrade of the Debian host

  • 0

I am running a headless Debian host with a win10 guest that I rarely log into via VNC. Last week I upgraded Debian from Buster to Bullseye, which took QEMU from v3.1 to v5.2 (and libvirt from 5.0 to 7.0). Of course, my due-diligence checklist did not include taking a snapshot of the guest. When I log into the system now, I am greeted by the Windows installer.

I am familiar with Debian but quite new to QEMU/libvirt - any pointers on how to try to restore the state of my guest OS? Reinstalling is no big deal, but we live to learn :).

This is the install command I used:

virt-install \
--name Win10 \
--ram 2048 \
--cpu host \
--hvm \
--vcpus 2 \
--os-type windows \
--os-variant win10 \
--disk /var/lib/libvirt/images/win10.qcow2,size=30,bus=virtio \
--disk /var/lib/libvirt/boot/Win10_2004_English_x64.iso,device=cdrom,bus=sata \
--disk /var/lib/libvirt/boot/virtio-win-0.1.171.iso,device=cdrom,bus=sata \
--boot cdrom \
--network bridge=br0 \
--graphics vnc,listen=0.0.0.0,port=5901 \
--noautoconsole \
--check all=off

/edit: to clarify: I want to restore the state of my guest OS to what it was before the dist-upgrade. Maybe I need to rely on my filesystem backups (which I have), or maybe I need to update the qemu/libvirt configuration?
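
One hedged thing to check first (an assumption drawn from the command above, not a confirmed cause: the domain was created with --boot cdrom, so after the upgrade it may simply be booting the installer ISO again rather than having lost the disk):

qemu-img info /var/lib/libvirt/images/win10.qcow2   # a near-empty image would show only a tiny 'disk size'
virsh edit Win10                                    # if the image looks intact, change <boot dev='cdrom'/> to <boot dev='hd'/>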

libvirt qemu
  • 1 answer
  • 44 Views
freezed
Asked: 2021-09-24 13:40:11 +0800 CST

Booting a Debian Live ISO on QEMU

  • 1

Situation:

The server:

  • reachable only via SSH (no physical access, no KVM)
  • a netbooted OS (Debian/Jessie)
  • 3 x 2T hard drives
  • 16G of RAM

End goal:

Build a ZFS pool with the local hard drives and install Debian on a ZFS root. The netboot OS lacks the packages to install ZFS via apt, which is why I want to boot a Debian Live system.

Problem:

  1. I fetched debian-live-11.0.0-amd64-standard.iso into /tmp with wget
  2. I installed QEMU (via apt); the profusion of options confuses me (I am still discovering it). My most advanced attempt is this:
qemu-system-x86_64 -curses -net nic -net user -m 1024M \
    -drive file=/tmp/11-live-amd64-std.iso,media=cdrom -boot c

The -curses option gives the correct result with the install ISO: when the '640 x 480 Graphic mode' message shows up, I press <esc> to reach the boot: prompt, pass it install vga=normal fb=false, and then it runs (screenshot).

But with the Live ISO it does not work (screenshot).

These are my questions:

  1. Did I miss a QEMU option that would display the output, which should not be graphical with this standard ISO?
  2. Do I need to configure my Live ISO (through GRUB, for example) for console mode?
  3. Could I instead configure port forwarding in QEMU to reach a console via SSH or telnet? (see the sketch below)
  4. Is there another solution (without QEMU)?
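
On question 3, a minimal sketch of QEMU's user-mode port forwarding (assumptions: the guest runs an SSH server on port 22, and the newer -device/-netdev syntax is available):

qemu-system-x86_64 -curses -m 1024M \
    -device virtio-net-pci,netdev=net0 \
    -netdev user,id=net0,hostfwd=tcp:127.0.0.1:2222-:22 \
    -drive file=/tmp/11-live-amd64-std.iso,media=cdrom -boot d

# then, from another SSH session on the server:
ssh -p 2222 user@127.0.0.1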

Thanks in advance

debian netboot qemu openzfs
  • 1 answer
  • 630 Views
tymur999
Asked: 2021-09-12 08:08:58 +0800 CST

Virsh console for a Windows 10 VM (QEMU-KVM)

  • 1

I am trying to access my Windows 10 VM using virsh console.

But when I do, I get the empty console that many others have run into.

virsh console win10
Connected to domain 'win10'
Escape character is ^] (Ctrl + ])

And I cannot type anything at all. Is there anything I can configure in the VM, specifically for Windows, to allow this? Thanks
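
For reference, a hedged sketch: Windows only speaks on a serial port once Emergency Management Services (the SAC console) is enabled, so one commonly suggested step (an assumption, untested here) is to turn it on inside the guest from an elevated prompt and then reboot:

bcdedit /ems on
bcdedit /emssettings EMSPORT:1 EMSBAUDRATE:115200

EMSPORT:1 corresponds to COM1, which is where libvirt's default <serial type='pty'> device appears in the guest.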

windows kvm-virtualization qemu
  • 2 answers
  • 1317 Views
kab00m
Asked: 2021-08-09 10:29:26 +0800 CST

SLES 11 PV VM on KVM

  • 0

I have a SUSE 11 SP4 VM; originally it ran on Xen in PV mode. Now I am moving it to KVM. My usual method is to boot any Linux in the target VM, mount the target OS root, chroot, rebuild the initramfs, and then reboot the VM into the target OS.

SLES 11 SP4 seems to be missing something, because after that the initramfs cannot find any vbd device to mount root. However, I have managed to run it with a direct qemu command on the KVM host:

qemu-kvm -m 32768 -smp 8 -device virtio-net-pci,mac=42:5f:96:48:39:fa,netdev=vmnic -netdev tap,id=vmnic,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -nographic -serial mon:stdio -drive file=/dev/lvm/vm,if=none,id=drive0,format=raw  -device virtio-blk-pci,drive=drive0,scsi=off

It works fine.

The KVM configuration (the disk-related part) looks like this:

<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
  <disk type="block" device="disk">
    <driver name="qemu" type="raw" cache="none" io="native"/>
    <source dev="/dev/lvm/vm"/>
    <target dev="vda" bus="virtio"/>
    <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
  </disk>
  <controller type="pci" index="3" model="pcie-root-port">
    <model name="pcie-root-port"/>
    <target chassis="3" port="0xa"/>
    <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x2"/>
  </controller>

And my virt-manager does not let me make major changes here.

I may be wrong here, but I think the main difference is the PCI device topology, which is why the initramfs works one way and not the other. I compared the PCI devices:

Device tree found on the VM run directly via the qemu command:

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device

Device tree found on any other KVM VM (same host):

00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
00:01.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.4 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.5 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.6 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.7 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
02:00.0 USB controller: Red Hat, Inc. QEMU XHCI Host Controller (rev 01)
03:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)
04:00.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon (rev 01)
05:00.0 Unclassified device [00ff]: Red Hat, Inc. Virtio RNG (rev 01)
08:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)

Here I see the difference: qemu lets the storage attach to the root PCI host bridge, while in KVM it is always attached behind a QEMU PCIe root port.

My questions are:

  1. Could SLES 11 be too old to support QEMU PCIe root ports?
  2. Is it possible to simplify the VM configuration so that the storage attaches directly to the host bridge? (see the sketch after this list)
  3. I rebuild the initramfs in the target environment without adding anything to the config files. Am I missing something (a hook or a driver) when rebuilding the initramfs?
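
On question 2, a hedged sketch: the two lspci listings correspond to QEMU's two machine types (440FX/PIIX for the direct qemu-kvm run, Q35 with its ICH9 and PCIe root ports for the libvirt guest), so switching the domain to an i440fx machine should put virtio-blk directly on the root bus. The machine name below is illustrative; use one that qemu-system-x86_64 -machine help actually lists:

virsh edit vm-name    # hypothetical domain name
# in <os>, change e.g.
#   <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
# to the legacy i440FX machine:
#   <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
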
linux kvm-virtualization qemu sles11
  • 1 answer
  • 41 Views
Christian
Asked: 2021-07-29 23:46:40 +0800 CST

QEMU VM with a tap interface sees all packets coming from the hypervisor instead of the real source IP

  • 0

I have set up a very simple hypervisor using Alpine Linux, and my VM sees all traffic as coming from the hypervisor's IP.

This also means that if fail2ban tries to block an attack, it always ends up blocking the hypervisor IP.

How can I make the VM see the real IP addresses instead of just the hypervisor's IP?

Interface setup

On the HV (192.168.5.5) I have a bridge interface br0 that works fine.

# tun1 setup script on Hypervisor
iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE
iptables -P FORWARD ACCEPT
ip tuntap add dev tap1 mode tap user root
ip link set dev tap1 up
ip link set tap1 master br0

qemu-system-x86_64 [..non related parameters removed ..] \
-device virtio-net-pci,netdev=network0,mac=02:1f:ba:26:d7:56 \
-netdev tap,id=network0,ifname=tap1,script=no,downscript=no

The VM can reach the Internet, but all the traffic it sees comes from the hypervisor's IP.

(screenshot: the VM only sees the HV IP)

Someone even tried to use my server for a DNS amplification attack (outgoing traffic is blocked on my PFSense firewall, though). (screenshot: DNS amplification attack)

Fail2ban blocks the wrong IP as well. (screenshot: fail2ban log showing the blocked HV IP)
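
A hedged diagnostic sketch (interface names taken from the setup above): comparing both sides of the bridge shows where the source address is rewritten. Note that the POSTROUTING MASQUERADE rule in the setup script rewrites the source of every forwarded packet leaving br0 to the hypervisor's own address, which would produce exactly this symptom:

tcpdump -ni tap1 'not arp'         # what the VM actually receives
tcpdump -ni br0 'not arp'          # what arrives on the bridge
iptables -t nat -vnL POSTROUTING   # confirm which NAT rule is matching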

kvm-virtualization tcpdump alpine qemu tun
  • 1 answer
  • 379 Views
os_user
Asked: 2021-06-04 15:25:32 +0800 CST

Can ssh into an Ubuntu 20.04 VM, but domifaddr shows no IP

  • 0

I followed this guide to create an Ubuntu 20.04 VM. I can ssh into the VM using the IP address I wrote into the network-config file, i.e. 192.168.122.101; however, when running virsh domifaddr <domain> and virsh net-dhcp-leases default, neither shows any IP address.

This only happens with the Ubuntu 20.04 cloud image; for earlier Ubuntu releases and all the CentOS cloud images I can see the assigned IP address using the commands above.

Any thoughts on this?
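
One hedged observation: virsh domifaddr reads the libvirt network's DHCP leases by default, and a guest with a statically configured address (as written into network-config here) never takes out a lease, so the default source comes back empty. Other sources may still work:

virsh domifaddr <domain> --source arp     # read the host's ARP table (newer libvirt)
virsh domifaddr <domain> --source agent   # requires qemu-guest-agent in the guest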

virtualization kvm-virtualization virtual-machines ubuntu-20.04 qemu
  • 1 answer
  • 338 Views
laimison
Asked: 2021-03-23 17:29:27 +0800 CST

Cannot start a VM/domain in KVM: Failed to get "write" lock

  • 1

After a host reboot, I cannot start the VM:

user@server-1:~$ virsh start docker-1
error: Failed to start domain docker-1
error: internal error: process exited while connecting to monitor: 2021-03-23T01:21:58.149079Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Failed to get "write" lock
Is another process using the image [/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2]?

The file is not in use:

user@server-1:~$ sudo fuser -u /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2
user@server-1:~$ sudo lsof | grep qcow
user@server-1:~$ virsh list
 Id   Name   State
--------------------

user@server-1:~$

I tried this on Ubuntu 18.04/qemu 2.11 and then upgraded to Ubuntu 20.04/qemu 4.2.1.

The upgrade did not help solve the problem.

This VM is very large, so a new VM cannot easily be created from it; there is no free space left.

Any help on how to recover from this situation and start this domain?

Thanks


UPDATE

Attaching the locks output:

user@server-1:~$ sudo lslocks -u
COMMAND           PID  TYPE SIZE MODE  M      START        END PATH
blkmapd           583 POSIX   4B WRITE 0          0          0 /run/blkmapd.pid
rpcbind          1181 FLOCK      WRITE 0          0          0 /run/rpcbind.lock
lxcfs            1312 POSIX   5B WRITE 0          0          0 /run/lxcfs.pid
atd              1456 POSIX   5B WRITE 0          0          0 /run/atd.pid
whoopsie         1454 FLOCK      WRITE 0          0          0 /run/lock/whoopsie/lock
virtlogd         6143 POSIX   4B WRITE 0          0          0 /run/virtlogd.pid
multipathd       1106 POSIX   4B WRITE 0          0          0 /run/multipathd.pid
containerd       1401 FLOCK 128K WRITE 0          0          0 /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db
tracker-miner-f  1561 POSIX 3.6M READ  0 1073741826 1073742335 /var/lib/gdm3/.cache/tracker/meta.db
tracker-miner-f  1561 POSIX  32K READ  0        128        128 /var/lib/gdm3/.cache/tracker/meta.db-shm
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/network/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/interface/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/secrets/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/storage/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/nodedev/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/nwfilter/driver.pid
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirt/qemu/driver.pid
tracker-miner-f  8956 POSIX 3.6M READ  0 1073741826 1073742335 /home/user/.cache/tracker/meta.db
tracker-miner-f  8956 POSIX  32K READ  0        128        128 /home/user/.cache/tracker/meta.db-shm
dmeventd          581 POSIX   4B WRITE 0          0          0 /run/dmeventd.pid
cron             1445 FLOCK   5B WRITE 0          0          0 /run/crond.pid
gnome-shell      1713 FLOCK      WRITE 0          0          0 /run/user/126/wayland-0.lock
libvirtd         6057 POSIX   4B WRITE 0          0          0 /run/libvirtd.pid

And attaching the process table:

user@server-1:~$ ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 01:11 ?        00:00:03 /sbin/init
root           2       0  0 01:11 ?        00:00:00 [kthreadd]
root           3       2  0 01:11 ?        00:00:00 [rcu_gp]
root           4       2  0 01:11 ?        00:00:00 [rcu_par_gp]
root           6       2  0 01:11 ?        00:00:00 [kworker/0:0H-kblockd]
root           9       2  0 01:11 ?        00:00:00 [mm_percpu_wq]
root          10       2  0 01:11 ?        00:00:00 [ksoftirqd/0]
root          11       2  0 01:11 ?        00:00:01 [rcu_sched]
root          12       2  0 01:11 ?        00:00:00 [migration/0]
root          13       2  0 01:11 ?        00:00:00 [idle_inject/0]
root          14       2  0 01:11 ?        00:00:00 [cpuhp/0]
root          15       2  0 01:11 ?        00:00:00 [cpuhp/1]
root          16       2  0 01:11 ?        00:00:00 [idle_inject/1]
root          17       2  0 01:11 ?        00:00:00 [migration/1]
root          18       2  0 01:11 ?        00:00:00 [ksoftirqd/1]
root          20       2  0 01:11 ?        00:00:00 [kworker/1:0H-kblockd]
root          21       2  0 01:11 ?        00:00:00 [cpuhp/2]
root          22       2  0 01:11 ?        00:00:00 [idle_inject/2]
root          23       2  0 01:11 ?        00:00:00 [migration/2]
root          24       2  0 01:11 ?        00:00:00 [ksoftirqd/2]
root          26       2  0 01:11 ?        00:00:00 [kworker/2:0H-kblockd]
root          27       2  0 01:11 ?        00:00:00 [cpuhp/3]
root          28       2  0 01:11 ?        00:00:00 [idle_inject/3]
root          29       2  0 01:11 ?        00:00:00 [migration/3]
root          30       2  0 01:11 ?        00:00:00 [ksoftirqd/3]
root          32       2  0 01:11 ?        00:00:00 [kworker/3:0H-events_highpri]
root          33       2  0 01:11 ?        00:00:00 [kdevtmpfs]
root          34       2  0 01:11 ?        00:00:00 [netns]
root          35       2  0 01:11 ?        00:00:00 [rcu_tasks_kthre]
root          36       2  0 01:11 ?        00:00:00 [kauditd]
root          38       2  0 01:11 ?        00:00:00 [khungtaskd]
root          39       2  0 01:11 ?        00:00:00 [oom_reaper]
root          40       2  0 01:11 ?        00:00:00 [writeback]
root          41       2  0 01:11 ?        00:00:00 [kcompactd0]
root          42       2  0 01:11 ?        00:00:00 [ksmd]
root          43       2  0 01:11 ?        00:00:00 [khugepaged]
root          89       2  0 01:11 ?        00:00:00 [kintegrityd]
root          90       2  0 01:11 ?        00:00:00 [kblockd]
root          91       2  0 01:11 ?        00:00:00 [blkcg_punt_bio]
root          93       2  0 01:11 ?        00:00:00 [tpm_dev_wq]
root          94       2  0 01:11 ?        00:00:00 [ata_sff]
root          95       2  0 01:11 ?        00:00:00 [md]
root          96       2  0 01:11 ?        00:00:00 [edac-poller]
root          97       2  0 01:11 ?        00:00:00 [devfreq_wq]
root          98       2  0 01:11 ?        00:00:00 [watchdogd]
root         101       2  0 01:11 ?        00:00:00 [kswapd0]
root         102       2  0 01:11 ?        00:00:00 [ecryptfs-kthrea]
root         104       2  0 01:11 ?        00:00:00 [kthrotld]
root         105       2  0 01:11 ?        00:00:00 [irq/122-aerdrv]
root         106       2  0 01:11 ?        00:00:00 [acpi_thermal_pm]
root         107       2  0 01:11 ?        00:00:00 [vfio-irqfd-clea]
root         111       2  0 01:11 ?        00:00:00 [ipv6_addrconf]
root         120       2  0 01:11 ?        00:00:00 [kstrp]
root         123       2  0 01:11 ?        00:00:00 [kworker/u9:0-xprtiod]
root         138       2  0 01:11 ?        00:00:00 [charger_manager]
root         197       2  0 01:11 ?        00:00:00 [cryptd]
root         224       2  0 01:11 ?        00:00:00 [scsi_eh_0]
root         225       2  0 01:11 ?        00:00:00 [scsi_tmf_0]
root         226       2  0 01:11 ?        00:00:00 [scsi_eh_1]
root         227       2  0 01:11 ?        00:00:00 [scsi_tmf_1]
root         228       2  0 01:11 ?        00:00:00 [scsi_eh_2]
root         229       2  0 01:11 ?        00:00:00 [scsi_tmf_2]
root         230       2  0 01:11 ?        00:00:00 [scsi_eh_3]
root         231       2  0 01:11 ?        00:00:00 [scsi_tmf_3]
root         232       2  0 01:11 ?        00:00:00 [scsi_eh_4]
root         233       2  0 01:11 ?        00:00:00 [scsi_tmf_4]
root         234       2  0 01:11 ?        00:00:00 [scsi_eh_5]
root         235       2  0 01:11 ?        00:00:00 [scsi_tmf_5]
root         241       2  0 01:11 ?        00:00:00 [kworker/0:1H]
root         245       2  0 01:11 ?        00:00:00 [scsi_eh_6]
root         246       2  0 01:11 ?        00:00:00 [scsi_tmf_6]
root         247       2  0 01:11 ?        00:00:02 [usb-storage]
root         248       2  0 01:11 ?        00:00:00 [scsi_eh_7]
root         249       2  0 01:11 ?        00:00:00 [scsi_tmf_7]
root         250       2  0 01:11 ?        00:00:00 [usb-storage]
root         251       2  0 01:11 ?        00:00:00 [kworker/3:1H-kblockd]
root         252       2  0 01:11 ?        00:00:00 [uas]
root         253       2  0 01:11 ?        00:00:00 [kworker/2:1H-kblockd]
root         254       2  0 01:11 ?        00:00:00 [kworker/1:1H-kblockd]
root         286       2  0 01:11 ?        00:00:00 [raid5wq]
root         287       2  0 01:11 ?        00:00:00 [kdmflush]
root         288       2  0 01:11 ?        00:00:00 [kdmflush]
root         290       2  0 01:11 ?        00:00:00 [kdmflush]
root         292       2  0 01:11 ?        00:00:00 [kdmflush]
root         297       2  0 01:11 ?        00:00:00 [kdmflush]
root         319       2  0 01:11 ?        00:00:00 [mdX_raid1]
root         326       2  0 01:11 ?        00:00:00 [kdmflush]
root         327       2  0 01:11 ?        00:00:00 [kdmflush]
root         328       2  0 01:11 ?        00:00:00 [kdmflush]
root         330       2  0 01:11 ?        00:00:00 [kdmflush]
root         331       2  0 01:11 ?        00:00:00 [kdmflush]
root         363       2  0 01:11 ?        00:00:00 [mdX_raid1]
root         476       2  0 01:11 ?        00:00:00 [jbd2/sda2-8]
root         477       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root         552       2  0 01:11 ?        00:00:00 [rpciod]
root         553       2  0 01:11 ?        00:00:00 [xprtiod]
root         554       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-journald
root         581       1  0 01:11 ?        00:00:01 /sbin/dmeventd -f
root         583       1  0 01:11 ?        00:00:00 /usr/sbin/blkmapd
root         597       1  0 01:11 ?        00:00:01 /lib/systemd/systemd-udevd
root         635       2  0 01:11 ?        00:00:00 [irq/133-mei_me]
root         697       2  0 01:11 ?        00:00:00 [led_workqueue]
root        1102       2  0 01:11 ?        00:00:00 [kaluad]
root        1103       2  0 01:11 ?        00:00:00 [kmpath_rdacd]
root        1104       2  0 01:11 ?        00:00:00 [kmpathd]
root        1105       2  0 01:11 ?        00:00:00 [kmpath_handlerd]
root        1106       1  0 01:11 ?        00:00:04 /sbin/multipathd -d -s
root        1115       2  0 01:11 ?        00:00:00 [jbd2/dm-4-8]
root        1117       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root        1120       2  0 01:11 ?        00:00:00 [loop0]
root        1126       2  0 01:11 ?        00:00:00 [loop1]
root        1129       2  0 01:11 ?        00:00:00 [loop2]
root        1131       2  0 01:11 ?        00:00:00 [jbd2/dm-9-8]
root        1132       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
root        1135       2  0 01:11 ?        00:00:00 [loop3]
root        1137       2  0 01:11 ?        00:00:00 [loop4]
root        1138       2  0 01:11 ?        00:00:00 [loop5]
root        1145       2  0 01:11 ?        00:00:00 [jbd2/sde1-8]
root        1146       2  0 01:11 ?        00:00:00 [ext4-rsv-conver]
systemd+    1176       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-networkd
root        1177       1  0 01:11 ?        00:00:00 /usr/sbin/rpc.idmapd
_rpc        1181       1  0 01:11 ?        00:00:00 /sbin/rpcbind -f -w
systemd+    1182       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-timesyncd
systemd+    1187       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-resolved
root        1296       1  0 01:11 ?        00:00:00 /usr/lib/accountsservice/accounts-daemon
root        1297       1  0 01:11 ?        00:00:00 /usr/sbin/acpid
avahi       1301       1  0 01:11 ?        00:00:00 avahi-daemon: running [server-1.local]
root        1302       1  0 01:11 ?        00:00:00 /usr/sbin/cupsd -l
message+    1303       1  0 01:11 ?        00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root        1304       1  0 01:11 ?        00:00:01 /usr/sbin/NetworkManager --no-daemon
root        1310       1  0 01:11 ?        00:00:02 /usr/sbin/irqbalance --foreground
root        1312       1  0 01:11 ?        00:00:00 /usr/bin/lxcfs /var/lib/lxcfs
root        1314       1  0 01:11 ?        00:00:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
root        1322       1  0 01:11 ?        00:00:02 /usr/lib/policykit-1/polkitd --no-debug
syslog      1329       1  0 01:11 ?        00:00:00 /usr/sbin/rsyslogd -n -iNONE
root        1335       1  0 01:11 ?        00:00:00 /usr/sbin/smartd -n
root        1340       1  0 01:11 ?        00:00:00 /usr/libexec/switcheroo-control
root        1341       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-logind
root        1342       1  0 01:11 ?        00:00:00 /lib/systemd/systemd-machined
root        1343       1  0 01:11 ?        00:00:09 /usr/lib/udisks2/udisksd
root        1344       1  0 01:11 ?        00:00:00 /sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
avahi       1353    1301  0 01:11 ?        00:00:00 avahi-daemon: chroot helper
root        1383       1  0 01:11 ?        00:00:00 /usr/sbin/cups-browsed
root        1386       1  0 01:11 ?        00:00:00 /usr/sbin/ModemManager --filter-policy=strict
root        1401       1  0 01:11 ?        00:02:22 /usr/bin/containerd
root        1416       1  0 01:11 ?        00:00:00 /usr/sbin/rpc.mountd --manage-gids
root        1445       1  0 01:11 ?        00:00:00 /usr/sbin/cron -f
whoopsie    1454       1  0 01:11 ?        00:00:00 /usr/bin/whoopsie -f
daemon      1456       1  0 01:11 ?        00:00:00 /usr/sbin/atd -f
root        1457       2  0 01:11 ?        00:00:00 [kworker/u9:1-xprtiod]
root        1458       1  0 01:11 ?        00:00:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root        1460       2  0 01:11 ?        00:00:00 [lockd]
kernoops    1463       1  0 01:11 ?        00:00:01 /usr/sbin/kerneloops --test
kernoops    1474       1  0 01:11 ?        00:00:01 /usr/sbin/kerneloops
root        1477       1  0 01:11 ?        00:00:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root        1486       1  0 01:11 ?        00:00:00 /usr/sbin/gdm3
root        1496    1486  0 01:11 ?        00:00:00 gdm-session-worker [pam/gdm-launch-environment]
gdm         1527       1  0 01:11 ?        00:00:00 /lib/systemd/systemd --user
gdm         1528    1527  0 01:11 ?        00:00:00 (sd-pam)
root        1552       2  0 01:11 ?        00:00:00 bpfilter_umh
gdm         1559    1527  0 01:11 ?        00:00:00 /usr/bin/pulseaudio --daemonize=no --log-target=journal
gdm         1561    1527  0 01:11 ?        00:00:00 /usr/libexec/tracker-miner-fs
gdm         1568    1496  0 01:11 tty1     00:00:00 /usr/lib/gdm3/gdm-wayland-session dbus-run-session -- gnome-session --autostart /usr/share/gdm/greeter/autostart
gdm         1577    1527  0 01:11 ?        00:00:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
gdm         1584    1568  0 01:11 tty1     00:00:00 dbus-run-session -- gnome-session --autostart /usr/share/gdm/greeter/autostart
gdm         1585    1584  0 01:11 tty1     00:00:00 dbus-daemon --nofork --print-address 4 --session
rtkit       1586       1  0 01:11 ?        00:00:00 /usr/libexec/rtkit-daemon
gdm         1589    1584  0 01:11 tty1     00:00:00 /usr/libexec/gnome-session-binary --systemd --autostart /usr/share/gdm/greeter/autostart
gdm         1590    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd
gdm         1600    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd-fuse /run/user/126/gvfs -f -o big_writes
gdm         1608    1527  0 01:11 ?        00:00:01 /usr/libexec/gvfs-udisks2-volume-monitor
gdm         1640    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-mtp-volume-monitor
gdm         1648    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-goa-volume-monitor
gdm         1653    1527  0 01:11 ?        00:00:00 /usr/libexec/goa-daemon
gdm         1686       1  0 01:11 tty1     00:00:00 /usr/libexec/dconf-service
gdm         1702    1527  0 01:11 ?        00:00:00 /usr/libexec/goa-identity-service
gdm         1711    1527  0 01:11 ?        00:00:01 /usr/libexec/gvfs-afc-volume-monitor
gdm         1713    1589  0 01:11 tty1     00:00:13 /usr/bin/gnome-shell
gdm         1723    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfs-gphoto2-volume-monitor
root        1729       1  0 01:11 ?        00:00:00 /usr/lib/upower/upowerd
root        1800       2  0 01:11 ?        00:00:00 [nfsd]
root        1801       2  0 01:11 ?        00:00:00 [nfsd]
root        1802       2  0 01:11 ?        00:00:00 [nfsd]
root        1803       2  0 01:11 ?        00:00:00 [nfsd]
root        1804       2  0 01:11 ?        00:00:00 [nfsd]
root        1805       2  0 01:11 ?        00:00:00 [nfsd]
root        1806       2  0 01:11 ?        00:00:00 [nfsd]
root        1807       2  0 01:11 ?        00:00:00 [nfsd]
gdm         1868       1  0 01:11 tty1     00:00:00 /usr/libexec/at-spi-bus-launcher
gdm         1874    1868  0 01:11 tty1     00:00:00 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
gdm         1880    1713  0 01:11 tty1     00:00:00 /usr/bin/Xwayland :1024 -rootless -noreset -accessx -core -auth /run/user/126/.mutter-Xwaylandauth.XH3U00 -listen 4 -listen 5 -displayfd 6 -listen 7
libvirt+    1916       1  0 01:11 ?        00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
root        1917    1916  0 01:11 ?        00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
gdm         2003       1  0 01:11 tty1     00:00:00 /usr/libexec/xdg-permission-store
gdm         2052       1  0 01:11 tty1     00:00:00 /usr/bin/gjs /usr/share/gnome-shell/org.gnome.Shell.Notifications
gdm         2054       1  0 01:11 tty1     00:00:00 /usr/libexec/at-spi2-registryd --use-gnome-session
gdm         2066    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-sharing
gdm         2069    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-wacom
gdm         2070    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-color
gdm         2075    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-keyboard
gdm         2078    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-print-notifications
gdm         2079    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-rfkill
gdm         2084    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-smartcard
gdm         2090    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-datetime
gdm         2103    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-media-keys
gdm         2110    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-screensaver-proxy
gdm         2111    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-sound
gdm         2112    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-a11y-settings
gdm         2114    1589  0 01:11 tty1     00:00:03 /usr/libexec/gsd-housekeeping
gdm         2116    1589  0 01:11 tty1     00:00:00 /usr/libexec/gsd-power
gdm         2179    1713  0 01:11 tty1     00:00:00 ibus-daemon --panel disable -r --xim
gdm         2183       1  0 01:11 tty1     00:00:00 /usr/libexec/gsd-printer
gdm         2185    2179  0 01:11 tty1     00:00:00 /usr/libexec/ibus-dconf
gdm         2192       1  0 01:11 tty1     00:00:00 /usr/libexec/ibus-x11 --kill-daemon
gdm         2199    2179  0 01:11 tty1     00:00:00 /usr/libexec/ibus-engine-simple
gdm         2202       1  0 01:11 tty1     00:00:00 /usr/libexec/ibus-portal
colord      2212       1  0 01:11 ?        00:00:00 /usr/libexec/colord
gdm         2268    1527  0 01:11 ?        00:00:00 /usr/libexec/gvfsd-metadata
root        6057       1  0 01:18 ?        00:00:01 /usr/sbin/libvirtd
root        6143       1  0 01:19 ?        00:00:00 /usr/sbin/virtlogd
root        6562       2  0 01:34 ?        00:00:01 [kworker/2:3-events]
root        7924       2  0 06:06 ?        00:00:00 [loop6]
root        7981       1  0 06:06 ?        00:00:03 /usr/lib/snapd/snapd
root        8320       2  0 08:34 ?        00:00:00 [kworker/0:0-rcu_gp]
root        8891       2  0 09:30 ?        00:00:00 [kworker/1:0-events]
root        8919    1458  0 10:02 ?        00:00:00 sshd: user [priv]
user         8938       1  0 10:02 ?        00:00:00 /lib/systemd/systemd --user
user         8939    8938  0 10:02 ?        00:00:00 (sd-pam)
root        8951       2  0 10:02 ?        00:00:00 [kworker/0:2-events]
user         8954    8938  0 10:02 ?        00:00:00 /usr/bin/pulseaudio --daemonize=no --log-target=journal
user         8956    8938  0 10:02 ?        00:00:00 /usr/libexec/tracker-miner-fs
user         8958    8938  0 10:02 ?        00:00:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
user         8975    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfsd
user         8983    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
user         8995    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-udisks2-volume-monitor
user         9007    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-mtp-volume-monitor
user         9011    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-goa-volume-monitor
user         9015    8938  0 10:02 ?        00:00:00 /usr/libexec/goa-daemon
user         9022    8938  0 10:02 ?        00:00:00 /usr/libexec/goa-identity-service
user         9029    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-afc-volume-monitor
user         9035    8938  0 10:02 ?        00:00:00 /usr/libexec/gvfs-gphoto2-volume-monitor
user         9185    8919  0 10:02 ?        00:00:00 sshd: user@pts/0
user         9186    9185  0 10:02 pts/0    00:00:00 -bash
root        9258       2  0 10:13 ?        00:00:00 [kworker/3:3-events]
root        9259       2  0 10:13 ?        00:00:00 [kworker/3:4-cgroup_destroy]
root        9294       2  0 10:31 ?        00:00:00 [kworker/1:1]
root        9330       2  0 11:31 ?        00:00:00 [kworker/2:0-events]
root        9334       2  0 11:41 ?        00:00:00 [kworker/u8:2-events_freezable_power_]
root        9348       2  0 11:53 ?        00:00:00 [kworker/u8:0-events_power_efficient]
root        9352       2  0 12:07 ?        00:00:00 [kworker/u8:3-events_unbound]
root        9400       2  0 12:09 ?        00:00:00 [kworker/3:0-events]
root        9403       2  0 12:09 ?        00:00:00 [kworker/0:1-rcu_gp]
root        9413       2  0 12:09 ?        00:00:00 [kworker/3:1-cgroup_destroy]
root        9414       2  0 12:09 ?        00:00:00 [kworker/3:2-events]
root        9415       2  0 12:09 ?        00:00:00 [kworker/3:5-events]
root        9418       2  0 12:09 ?        00:00:00 [kworker/2:1]
root        9419       2  0 12:09 ?        00:00:00 [kworker/3:6]
root        9459       2  0 12:13 ?        00:00:00 [kworker/u8:1-events_unbound]
user         9463    9186  0 12:14 pts/0    00:00:00 ps -ef
user@server-1:~$

Attaching the XML dump of this VM:

user@server-1:~$ virsh dumpxml docker-1
<domain type='kvm'>
  <name>docker-1</name>
  <uuid>dfb49ea5-f6e7-45d1-9422-e3ce97cf6320</uuid>
  <memory unit='KiB'>10485760</memory>
  <currentMemory unit='KiB'>10485760</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
    <boot dev='hd'/>
    <boot dev='network'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>qemu64</model>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='volume' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source pool='default' volume='docker-1-volume-resized.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdc' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/apphd/prod/kvm/storage/docker-1-volume-hd.qcow2'/>
      <target dev='vdx' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/app/prod/kvm/storage/common-init-docker-1.iso'/>
      <target dev='hdd' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:01:00:00:00:01'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <console type='pty'>
      <target type='virtio' port='1'/>
    </console>
    <channel type='pty'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='vga' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </rng>
  </devices>
</domain>

user@server-1:~$
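
Two hedged observations on the listings above (assumptions drawn from the dump, not a verified diagnosis): in the XML the same image file /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2 is attached three times (as vdb with type='raw', and as vdc and vdx with type='qcow2'), so when QEMU opens it for the first disk it takes the write lock, and the second writable disk then fails against QEMU's own lock, with no other process involved. Also, qemu-img can inspect a locked image:

qemu-img info -U /apphd/prod/kvm/storage/docker-1-volume-hd.qcow2
# -U / --force-share skips the image-locking check (supported by the qemu 4.2.1 mentioned above)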

kvm-virtualization libvirt qemu
  • 1 answer
  • 4296 Views
Matheus Simon
Asked: 2021-01-26 09:37:01 +0800 CST

Arch Linux - QEMU overriding the System Manufacturer on Windows 10

  • 0

I have fully virtualized a VM with Windows 10 on QEMU and had to use host passthrough to get it working.

Since I cannot set the SMBIOS to reflect the host, I would like to know if there is any way to change the System Manufacturer key in my guest OS.
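
For reference, a hedged sketch: QEMU can populate the SMBIOS type 1 (System Information) table, which is where Windows reads the System Manufacturer from; the values below are placeholders:

qemu-system-x86_64 ... \
    -smbios type=1,manufacturer='Some Vendor',product='Some Product',version='1.0'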

kvm-virtualization arch-linux qemu
  • 1 answer
  • 1599 Views
MrSnrub
Asked: 2021-01-18 19:58:35 +0800 CST

QEMU / KVM - dedicated 802.1q VLAN per VM - communication only via the router

  • 0

I have a Linux firewall router (a dedicated machine) with several ethN interfaces (my "big firewall"). All forwarded traffic is filtered by a set of iptables rules (default policy DROP).

There is another dedicated machine ("vmhost") that will host several virtual machines using KVM / QEMU / libvirt / virsh.

The firewall router (a physical server) and the vmhost (another physical server) are connected directly with a patch cable (router's eth2 <-> vmhost's eth0).

I do not want the VMs on the vmhost to be able to communicate

  • with each other
  • or with the VM host

except through the external firewall router.

So I configured several 802.1q tagged VLANs on both sides (router and vmhost): eth0.10, eth0.11, etc. (eth2.10, eth2.11, ... on the other side), each with a distinct /30 subnet (one host IP = the router, the other host IP = the VM). Each VM therefore gets its own tagged VLAN and its own subnet.

I want to use this to subject the VM traffic to the iptables rules of the central firewall router. A VM should only be able to reach IP addresses and ports that are explicitly allowed.

How do I configure a VM so that it binds to its dedicated VLAN interface (e.g. eth0.10)? I am unsure about net, netdev, nic, ...

I explicitly do not want to bridge between the VMs' networks or between the VMs and the host.

// Added later: both servers run Debian 10 amd64.
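
A hedged sketch of one way to pin a VM to its VLAN subinterface without a shared bridge (an assumption about fit, not the only option): a macvtap device in passthru mode hands eth0.10 to exactly one VM, so guests cannot reach each other or the host over it:

ip link add link eth0.10 name macvtap10 type macvtap mode passthru
ip link set macvtap10 up
# QEMU attaches to the matching /dev/tapN character device (N = the interface index):
#   qemu-system-x86_64 ... \
#     -device virtio-net-pci,netdev=net0,mac="$(cat /sys/class/net/macvtap10/address)" \
#     -netdev tap,id=net0,fd=3 3<>/dev/tap"$(cat /sys/class/net/macvtap10/ifindex)"

With libvirt, the same idea is usually written as <interface type='direct'> with mode='passthrough' pointing at the VLAN subinterface.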

vlan kvm-virtualization libvirt qemu virsh
  • 1 answer
  • 1335 Views
