Kernel: 5.5.8-arch1-1

I'm trying to get a virtual network working using a bridge attached to my physical interface. This is a typical setup; I'm not even trying to do anything odd.

- Bridge: br0
- Physical interface: enp6s0f0

The problem is that Linux is not forwarding any IP traffic out of the physical interface. It forwards ARP traffic in both directions (ARP resolution works), but no IP traffic ever leaves enp6s0f0.
Things I have tried:

- Adding enp6s0f1 to the bridge, giving enp7s0f0 to the VM, and linking enp7s0f0 to enp6s0f1 with a cable
  - Same result (ARP traffic is forwarded, IP traffic is not)
- Stopping docker and flushing all tables
  - No change
- Disabling rp_filter
  - No change
- Using the onboard NIC
  - No change (this was actually the initial setup; I put this quad-port card in to see whether the onboard NIC was causing the problem)
- Pinging the VM from another machine
  - I can see the echo-request arrive on br0, but it is not forwarded to the VM port (the vnet port or enp6s0f1); see the tcpdump sketch after this list
- Enabling STP on the bridge (it was initially disabled)
  - No change
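For reference, the per-port asymmetry above can be observed by capturing the same ping on both bridge ports at once; a minimal sketch, using the interface names from this setup:

# In one terminal: watch the echo-request arrive from the LAN side...
tcpdump -ni br0 icmp or arp
# ...and in a second terminal, check whether it is ever forwarded to the
# VM-facing port (in this failure mode, ARP shows up but ICMP never does):
tcpdump -ni vnet0 icmp or arp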
○ → ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp6s0f0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
link/ether 00:10:18:85:1c:c0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::210:18ff:fe85:1cc0/64 scope link
valid_lft forever preferred_lft forever
3: enp6s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:10:18:85:1c:c2 brd ff:ff:ff:ff:ff:ff
4: enp7s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:10:18:85:1c:c4 brd ff:ff:ff:ff:ff:ff
5: enp7s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:10:18:85:1c:c6 brd ff:ff:ff:ff:ff:ff
6: enp9s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether b4:2e:99:a6:22:f9 brd ff:ff:ff:ff:ff:ff
7: wlp8s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 08:71:90:4e:e9:77 brd ff:ff:ff:ff:ff:ff
8: br-183e1a17d7f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ba:03:e1:9d brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-183e1a17d7f6
valid_lft forever preferred_lft forever
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:02:61:00:66 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
10: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:10:18:85:1c:c0 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.205/24 brd 192.168.1.255 scope global dynamic noprefixroute br0
valid_lft 9730sec preferred_lft 7930sec
inet6 fe80::210:18ff:fe85:1cc0/64 scope link
valid_lft forever preferred_lft forever
11: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:be:eb:3e brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:febe:eb3e/64 scope link
valid_lft forever preferred_lft forever
○ → brctl showstp br0
br0
bridge id 8000.001018851cc0
designated root 1000.44e4d9d88a00
root port 1 path cost 4
max age 19.99 bridge max age 19.99
hello time 1.99 bridge hello time 1.99
forward delay 14.99 bridge forward delay 14.99
ageing time 299.99
hello timer 0.00 tcn timer 0.00
topology change timer 0.00 gc timer 25.78
flags
enp6s0f0 (1)
port id 8001 state forwarding
designated root 1000.44e4d9d88a00 path cost 4
designated bridge 1000.44e4d9d88a00 message age timer 19.21
designated port 800d forward delay timer 0.00
designated cost 0 hold timer 0.00
flags
vnet0 (2)
port id 8002 state forwarding
designated root 1000.44e4d9d88a00 path cost 100
designated bridge 8000.001018851cc0 message age timer 0.00
designated port 8002 forward delay timer 0.00
designated cost 4 hold timer 0.22
flags
○ → bridge -d link show
2: enp6s0f0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 4
hairpin off guard off root_block off fastleave off learning on flood on mcast_flood on mcast_to_unicast off neigh_suppress off vlan_tunnel off isolated off enp6s0f0
8: br-183e1a17d7f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 master br-183e1a17d7f6 br-183e1a17d7f6
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 master docker0 docker0
10: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 br0
11: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 100
hairpin off guard off root_block off fastleave off learning on flood on mcast_flood on mcast_to_unicast off neigh_suppress off vlan_tunnel off isolated off vnet0
○ → sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
○ → sysctl net.ipv4.conf.br0.forwarding
net.ipv4.conf.br0.forwarding = 1
It appears, probably because of Docker's iptables rules, that you have the br_netfilter module loaded and active (i.e. sysctl net.bridge.bridge-nf-call-iptables returns 1). This makes bridged frames (Ethernet, layer 2) subject to iptables filtering (IP, layer 3). Merely using iptables' physdev match, for example, auto-loads this module, even from another network namespace. There is documentation describing the side effects this module has when it is used for bridging transparent firewalls, and noting that without it iptables' physdev match cannot work properly (it simply no longer matches). That documentation also explains how to avoid the module's effects, especially in chapter 7: rather than disabling the module for iptables outright, which would break other, unknown parts of the system, the iptables rules should be adjusted as chapter 7 describes so that the side effects are avoided.
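The exact rule is not quoted above, but as an illustration, the two approaches being contrasted would look roughly like this (the second is the commonly cited chapter-7-style rule; adapt it to your own ruleset):

# Blunt approach: stop feeding bridged frames to iptables at all.
# This reverses the sysctl shown earlier, but Docker's inter-container
# filtering relies on it, so other things may break:
sysctl -w net.bridge.bridge-nf-call-iptables=0

# Targeted approach: accept frames that are merely being bridged,
# before Docker's FORWARD chain (policy DROP) discards them:
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT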
Only recently, in kernel 5.3, did this module become namespace-aware; before that, loading it abruptly enabled it in every network namespace, causing all kinds of trouble when that was unexpected. Since then it can also be enabled per bridge (ip link set dev BRIDGE type bridge nf_call_iptables 1) rather than per namespace. Once the tools (Docker, ...) and the kernel (>= 5.3) catch up with this evolution, enabling it only in selected network namespaces and on selected bridges will be enough, but that is probably not the case today. Note also that kernel 5.3 gained native bridge stateful firewalling for nftables, which may soon make this module obsolete (once direct encapsulation/decapsulation support at the bridge for VLAN and PPPoE is available).
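For completeness, a minimal sketch of what that native nftables approach looks like (assuming a kernel >= 5.3 with bridge conntrack support; the table and chain names here are arbitrary):

# Stateful filtering directly in nftables' bridge family,
# with no br_netfilter involved:
nft add table bridge filter
nft add chain bridge filter forward '{ type filter hook forward priority 0; policy accept; }'
nft add rule bridge filter forward ct state established,related accept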