I started a KVM guest (CentOS 6.5) on an Intel machine (CentOS 6.5) using libvirt. The guest's XML is as follows:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>test-1</name>
<uuid>9377bce1-ae83-e356-ed15-919c8625fb4b</uuid>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static' current='2'>8</vcpu>
<os>
<type arch='x86_64' machine='rhel6.5.0'>hvm</type>
<boot dev='hd'/>
<boot dev='cdrom'/>
<bootmenu enable='yes'/>
<bios useserial='yes' rebootTimeout='0'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/data/vhosts//test-1.disk'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<controller type='usb' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:ea:12:d9'/>
<source bridge='br-ex'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
</devices>
<qemu:commandline>
<qemu:env name='SPICE_DEBUG_ALLOW_MC' value='1'/>
</qemu:commandline>
</domain>
Now I am confused: even though I used "host-passthrough", I still cannot see an L3 cache in the guest, only the L1 and L2 caches, as shown below:
[root@vm-kvm-115 results]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 2266.746
BogoMIPS: 4533.49
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
NUMA node0 CPU(s): 0,1
Here is the information for my physical host:
[root@host-kvm-22 linux]# rpm -qa | grep libvirt
libvirt-client-0.10.2-54.el6_7.2.x86_64
libvirt-0.10.2-54.el6_7.2.x86_64
libvirt-devel-0.10.2-54.el6_7.2.x86_64
libvirt-python-0.10.2-54.el6_7.2.x86_64
[root@host-kvm-22 linux]# rpm -qa | grep qemu
qemu-img-0.12.1.2-2.479.el6_7.2.x86_64
gpxe-roms-qemu-0.9.7-6.14.el6.noarch
qemu-kvm-0.12.1.2-2.479.el6_7.2.x86_64
[root@host-kvm-22 linux]# uname -r
2.6.32-573.8.1.el6.x86_64
[root@host-kvm-22 linux]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 2266.743
BogoMIPS: 4532.68
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-3,8-11
NUMA node1 CPU(s): 4-7,12-15
Can anyone tell me how to enable the L3 cache in the guest?
FWIW, you are misunderstanding the scope of the host-passthrough CPU model. It only controls the identification of the CPU and its feature flags; some aspects of the CPU are still not exposed to the guest. For example, with the XML you have there, all 8 CPUs are exposed to the guest as separate sockets in a single NUMA node. Your host, by contrast, has 2 NUMA nodes, each holding one socket with four cores. That alone is probably enough to prevent the host's notion of an L3 cache from mapping cleanly into the guest.
You can set a virtual CPU topology in the XML, but I still don't think it will make an L3 cache appear. It also doesn't matter much, because you are letting the 8 virtual CPUs float across all 16 host CPUs. Since your host CPUs are spread over 2 NUMA nodes, you will get cross-NUMA-node memory accesses most of the time, and the resulting latency penalty will wipe out any benefit the cache could give you. IOW, you are better off concentrating on more effective VM placement, using CPU pinning to confine the guest to a single host NUMA node, as sketched below.
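For illustration only, a minimal pinning sketch could look like the fragment below. It assumes you want to confine the guest to host NUMA node 0 (host CPUs 0-3,8-11 per your lscpu output); the cpuset/nodeset values and the topology are placeholders to adjust for your own placement policy:

<vcpu placement='static' cpuset='0-3,8-11' current='2'>8</vcpu>
<cputune>
  <!-- pin each virtual CPU to a host CPU inside NUMA node 0; repeat for the remaining vCPUs -->
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
</cputune>
<numatune>
  <!-- keep the guest's memory allocations on the same host NUMA node -->
  <memory mode='strict' nodeset='0'/>
</numatune>
<cpu mode='host-passthrough'>
  <!-- optional: present a topology matching one host socket (4 cores, 2 threads each) -->
  <topology sockets='1' cores='4' threads='2'/>
</cpu>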
L3 cache support was added to QEMU in version 2.8.0. See the Bugzilla report.
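Note that this needs a much newer stack than the versions shown above (qemu-kvm 0.12.1.2 and libvirt 0.10.2 on the host). With a recent enough QEMU (2.8+) and libvirt (the <cache> element is, to my knowledge, available from libvirt 3.3.0; please verify against your distribution), a sketch of exposing a cache to the guest could look like this:

<cpu mode='host-passthrough'>
  <!-- emulate an L3 cache in the virtual CPU (maps to QEMU's l3-cache property) -->
  <cache level='3' mode='emulate'/>
  <!-- or, to copy the host cache topology instead: -->
  <!-- <cache mode='passthrough'/> -->
</cpu>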
Exposing an L3 cache to the guest improves performance, because the guest CPUs can avoid a large number of IPIs (inter-processor interrupts). Read this article for more information.