On my host (Fedora 41) I use Mullvad VPN, which sets up a WireGuard interface, wg0-mullvad. I want traffic to and from the namespace bl to bypass it. The end goal is to connect to my company's AnyConnect VPN from inside bl and then RDP to my office computer, while the rest of my internet traffic keeps going through Mullvad.
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:11:c0:d6 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp1s0
valid_lft 81301sec preferred_lft 81301sec
inet6 fec0::e777:52d6:5436:4997/64 scope site dynamic noprefixroute
valid_lft 86366sec preferred_lft 14366sec
inet6 fe80::bc4e:6885:3d23:b51b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: veth-bl-root@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 62:fb:a5:45:d4:e3 brd ff:ff:ff:ff:ff:ff link-netns bl
inet 192.168.11.2/24 scope global veth-bl-root
valid_lft forever preferred_lft forever
inet6 fe80::60fb:a5ff:fe45:d4e3/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: vpn0: <NO-CARRIER,POINTOPOINT,MULTICAST,NOARP,UP> mtu 1207 qdisc fq_codel state DOWN group default qlen 500
link/none
10: wg0-mullvad: <POINTOPOINT,UP,LOWER_UP> mtu 1380 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 10.136.199.175/32 scope global wg0-mullvad
valid_lft forever preferred_lft forever
veth-bl-root is the virtual Ethernet interface connecting the root namespace to my namespace bl. You can ignore vpn0; that is the company VPN I mentioned, which for now I can only use while Mullvad is disconnected.
I followed this tutorial to set up the namespace and connect it to the Internet, meaning I enabled IP forwarding, packet forwarding, and IP masquerading. This works while the Mullvad VPN is disconnected, but with the VPN up, requests time out. My guess is that masquerading only happens once a packet leaves via enp1s0, and Mullvad's rules prevent it from ever getting there:
$ ip route get 8.8.8.8 from 192.168.11.2
8.8.8.8 from 192.168.11.2 dev wg0-mullvad table 1836018789 uid 1000
cache
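For reference, the namespace setup from the tutorial amounts to roughly the following. This is a reconstruction from the interfaces and rules shown here, not the tutorial's exact commands; in particular the namespace-side interface name (veth-bl) and address (192.168.11.1) are not shown in the output above and are my assumptions.

```shell
# Create the namespace and a veth pair; one end moves into the namespace.
ip netns add bl
ip link add veth-bl-root type veth peer name veth-bl netns bl

# Root side: 192.168.11.2 as shown in `ip addr` above.
ip addr add 192.168.11.2/24 dev veth-bl-root
ip link set veth-bl-root up

# Namespace side (name and address assumed), default route via the root end.
ip netns exec bl ip addr add 192.168.11.1/24 dev veth-bl
ip netns exec bl ip link set veth-bl up
ip netns exec bl ip link set lo up
ip netns exec bl ip route add default via 192.168.11.2

# Forwarding plus masquerade out enp1s0, matching the nft ruleset below.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.11.0/24 -o enp1s0 -j MASQUERADE
iptables -A FORWARD -i veth-bl-root -o enp1s0 -j ACCEPT
iptables -A FORWARD -i enp1s0 -o veth-bl-root -j ACCEPT
```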
Interestingly, although wg0-mullvad is effectively the default route, it is not configured as one:
$ ip route
default via 10.0.2.2 dev enp1s0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev enp1s0 proto kernel scope link src 10.0.2.15 metric 100
10.64.0.1 dev wg0-mullvad proto static
192.168.11.0/24 dev veth-bl-root proto kernel scope link src 192.168.11.2
$ ip rule list
0: from all lookup local
32764: from all lookup main suppress_prefixlength 0
32765: not from all fwmark 0x6d6f6c65 lookup 1836018789
32766: from all lookup main
32767: from all lookup default
$ ip route show table 1836018789
default dev wg0-mullvad proto static
Since Mullvad uses both a packet mark and its own routing table, I tried two approaches, one targeting each mechanism.
Attempted fixes
Separate routing table
First, I created a new routing table whose only route goes out enp1s0, and added a rule directing all traffic from the namespace's address range to that table.
$ ip route show table bl
default via 10.0.2.15 dev enp1s0
$ ip rule
0: from all lookup local
32763: from 192.168.11.2/24 lookup bl
32764: from all lookup main suppress_prefixlength 0
32765: not from all fwmark 0x6d6f6c65 lookup 1836018789
32766: from all lookup main
32767: from all lookup default
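For completeness, the state above was produced with commands along these lines (the table name bl has to be registered in /etc/iproute2/rt_tables first; the table number is my choice and arbitrary):

```shell
# Give the table a name (any unused number works).
echo "100 bl" >> /etc/iproute2/rt_tables

# Single default route out enp1s0, matching `ip route show table bl` above.
ip route add default via 10.0.2.15 dev enp1s0 table bl

# Route traffic from the namespace's range through that table, at a
# priority above Mullvad's rules (32764/32765).
ip rule add from 192.168.11.2/24 lookup bl priority 32763
```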
Since this rule has a higher priority than Mullvad's, I hoped it would take effect, and indeed ip route get looked promising:
$ ip route get 8.8.8.8 from 192.168.11.2
8.8.8.8 from 192.168.11.2 dev enp1s0 table bl uid 1000
cache
Unfortunately, requests still timed out.
$ sudo ip netns exec bl traceroute -n 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 *^C
$ sudo ip netns exec bl ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2076ms
Using ping -I 192.168.11.2 8.8.8.8 from the root namespace fails the same way:
$ sudo ping -I 192.168.11.2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 192.168.11.2 : 56(84) bytes of data.
From 192.168.11.2 icmp_seq=1 Destination Port Unreachable
ping: sendmsg: Operation not permitted
From 192.168.11.2 icmp_seq=2 Destination Port Unreachable
ping: sendmsg: Operation not permitted
From 192.168.11.2 icmp_seq=3 Destination Port Unreachable
ping: sendmsg: Operation not permitted
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2075ms
tcpdump -n -i veth-bl-root icmp captured the packets, but tcpdump -n -i enp1s0 icmp saw nothing. The packets were still going out via wg0-mullvad:
# This is what happened when running `ping -c 3 8.8.8.8`.
$ sudo tcpdump -n -i wg0-mullvad icmp
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wg0-mullvad, link-type RAW (Raw IP), snapshot length 262144 bytes
08:55:53.448220 IP 10.136.199.175 > 10.64.0.1: ICMP echo request, id 13627, seq 601, length 50
08:55:53.532425 IP 10.64.0.1 > 10.136.199.175: ICMP echo reply, id 13627, seq 601, length 50
08:55:59.449654 IP 10.136.199.175 > 10.64.0.1: ICMP echo request, id 13627, seq 602, length 50
08:55:59.534057 IP 10.64.0.1 > 10.136.199.175: ICMP echo reply, id 13627, seq 602, length 50
08:56:05.451552 IP 10.136.199.175 > 10.64.0.1: ICMP echo request, id 13627, seq 603, length 50
08:56:05.535728 IP 10.64.0.1 > 10.136.199.175: ICMP echo reply, id 13627, seq 603, length 50
^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel
Here is the output of nft list ruleset at this stage:
table inet firewalld {
ct helper helper-netbios-ns-udp {
type "netbios-ns" protocol udp
l3proto ip
}
chain mangle_PREROUTING {
type filter hook prerouting priority mangle + 10; policy accept;
jump mangle_PREROUTING_POLICIES
}
chain mangle_PREROUTING_POLICIES {
iifname "enp1s0" jump mangle_PRE_policy_allow-host-ipv6
iifname "enp1s0" jump mangle_PRE_FedoraWorkstation
iifname "enp1s0" return
jump mangle_PRE_policy_allow-host-ipv6
jump mangle_PRE_FedoraWorkstation
return
}
chain nat_PREROUTING {
type nat hook prerouting priority dstnat + 10; policy accept;
jump nat_PREROUTING_POLICIES
}
chain nat_PREROUTING_POLICIES {
iifname "enp1s0" jump nat_PRE_policy_allow-host-ipv6
iifname "enp1s0" jump nat_PRE_FedoraWorkstation
iifname "enp1s0" return
jump nat_PRE_policy_allow-host-ipv6
jump nat_PRE_FedoraWorkstation
return
}
chain nat_POSTROUTING {
type nat hook postrouting priority srcnat + 10; policy accept;
jump nat_POSTROUTING_POLICIES
}
chain nat_POSTROUTING_POLICIES {
iifname "enp1s0" oifname "enp1s0" jump nat_POST_FedoraWorkstation
iifname "enp1s0" oifname "enp1s0" return
oifname "enp1s0" jump nat_POST_FedoraWorkstation
oifname "enp1s0" return
iifname "enp1s0" jump nat_POST_FedoraWorkstation
iifname "enp1s0" return
jump nat_POST_FedoraWorkstation
return
}
chain nat_OUTPUT {
type nat hook output priority dstnat + 10; policy accept;
jump nat_OUTPUT_POLICIES
}
chain nat_OUTPUT_POLICIES {
oifname "enp1s0" jump nat_OUT_FedoraWorkstation
oifname "enp1s0" return
jump nat_OUT_FedoraWorkstation
return
}
chain filter_PREROUTING {
type filter hook prerouting priority filter + 10; policy accept;
icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
meta nfproto ipv6 fib saddr . mark . iif oif missing drop
}
chain filter_INPUT {
type filter hook input priority filter + 10; policy accept;
ct state { established, related } accept
ct status dnat accept
iifname "lo" accept
ct state invalid drop
jump filter_INPUT_POLICIES
reject with icmpx admin-prohibited
}
chain filter_FORWARD {
type filter hook forward priority filter + 10; policy accept;
ct state { established, related } accept
ct status dnat accept
iifname "lo" accept
ct state invalid drop
ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
jump filter_FORWARD_POLICIES
reject with icmpx admin-prohibited
}
chain filter_OUTPUT {
type filter hook output priority filter + 10; policy accept;
ct state { established, related } accept
oifname "lo" accept
ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
jump filter_OUTPUT_POLICIES
}
chain filter_INPUT_POLICIES {
iifname "enp1s0" jump filter_IN_policy_allow-host-ipv6
iifname "enp1s0" jump filter_IN_FedoraWorkstation
iifname "enp1s0" reject with icmpx admin-prohibited
jump filter_IN_policy_allow-host-ipv6
jump filter_IN_FedoraWorkstation
reject with icmpx admin-prohibited
}
chain filter_FORWARD_POLICIES {
iifname "enp1s0" oifname "enp1s0" jump filter_FWD_FedoraWorkstation
iifname "enp1s0" oifname "enp1s0" reject with icmpx admin-prohibited
iifname "enp1s0" jump filter_FWD_FedoraWorkstation
iifname "enp1s0" reject with icmpx admin-prohibited
oifname "enp1s0" jump filter_FWD_FedoraWorkstation
oifname "enp1s0" reject with icmpx admin-prohibited
jump filter_FWD_FedoraWorkstation
reject with icmpx admin-prohibited
}
chain filter_OUTPUT_POLICIES {
oifname "enp1s0" jump filter_OUT_FedoraWorkstation
oifname "enp1s0" return
jump filter_OUT_FedoraWorkstation
return
}
chain filter_IN_FedoraWorkstation {
jump filter_IN_FedoraWorkstation_pre
jump filter_IN_FedoraWorkstation_log
jump filter_IN_FedoraWorkstation_deny
jump filter_IN_FedoraWorkstation_allow
jump filter_IN_FedoraWorkstation_post
meta l4proto { icmp, ipv6-icmp } accept
}
chain filter_IN_FedoraWorkstation_pre {
}
chain filter_IN_FedoraWorkstation_log {
}
chain filter_IN_FedoraWorkstation_deny {
}
chain filter_IN_FedoraWorkstation_allow {
ip6 daddr fe80::/64 udp dport 546 accept
tcp dport 22 accept
udp dport 137 ct helper set "helper-netbios-ns-udp"
udp dport 137 accept
udp dport 138 accept
ip daddr 224.0.0.251 udp dport 5353 accept
ip6 daddr ff02::fb udp dport 5353 accept
udp dport 1025-65535 accept
tcp dport 1025-65535 accept
}
chain filter_IN_FedoraWorkstation_post {
}
chain filter_OUT_FedoraWorkstation {
jump filter_OUT_FedoraWorkstation_pre
jump filter_OUT_FedoraWorkstation_log
jump filter_OUT_FedoraWorkstation_deny
jump filter_OUT_FedoraWorkstation_allow
jump filter_OUT_FedoraWorkstation_post
}
chain filter_OUT_FedoraWorkstation_pre {
}
chain filter_OUT_FedoraWorkstation_log {
}
chain filter_OUT_FedoraWorkstation_deny {
}
chain filter_OUT_FedoraWorkstation_allow {
}
chain filter_OUT_FedoraWorkstation_post {
}
chain nat_OUT_FedoraWorkstation {
jump nat_OUT_FedoraWorkstation_pre
jump nat_OUT_FedoraWorkstation_log
jump nat_OUT_FedoraWorkstation_deny
jump nat_OUT_FedoraWorkstation_allow
jump nat_OUT_FedoraWorkstation_post
}
chain nat_OUT_FedoraWorkstation_pre {
}
chain nat_OUT_FedoraWorkstation_log {
}
chain nat_OUT_FedoraWorkstation_deny {
}
chain nat_OUT_FedoraWorkstation_allow {
}
chain nat_OUT_FedoraWorkstation_post {
}
chain nat_POST_FedoraWorkstation {
jump nat_POST_FedoraWorkstation_pre
jump nat_POST_FedoraWorkstation_log
jump nat_POST_FedoraWorkstation_deny
jump nat_POST_FedoraWorkstation_allow
jump nat_POST_FedoraWorkstation_post
}
chain nat_POST_FedoraWorkstation_pre {
}
chain nat_POST_FedoraWorkstation_log {
}
chain nat_POST_FedoraWorkstation_deny {
}
chain nat_POST_FedoraWorkstation_allow {
}
chain nat_POST_FedoraWorkstation_post {
}
chain filter_FWD_FedoraWorkstation {
jump filter_FWD_FedoraWorkstation_pre
jump filter_FWD_FedoraWorkstation_log
jump filter_FWD_FedoraWorkstation_deny
jump filter_FWD_FedoraWorkstation_allow
jump filter_FWD_FedoraWorkstation_post
}
chain filter_FWD_FedoraWorkstation_pre {
}
chain filter_FWD_FedoraWorkstation_log {
}
chain filter_FWD_FedoraWorkstation_deny {
}
chain filter_FWD_FedoraWorkstation_allow {
oifname "enp1s0" accept
}
chain filter_FWD_FedoraWorkstation_post {
}
chain nat_PRE_FedoraWorkstation {
jump nat_PRE_FedoraWorkstation_pre
jump nat_PRE_FedoraWorkstation_log
jump nat_PRE_FedoraWorkstation_deny
jump nat_PRE_FedoraWorkstation_allow
jump nat_PRE_FedoraWorkstation_post
}
chain nat_PRE_FedoraWorkstation_pre {
}
chain nat_PRE_FedoraWorkstation_log {
}
chain nat_PRE_FedoraWorkstation_deny {
}
chain nat_PRE_FedoraWorkstation_allow {
}
chain nat_PRE_FedoraWorkstation_post {
}
chain mangle_PRE_FedoraWorkstation {
jump mangle_PRE_FedoraWorkstation_pre
jump mangle_PRE_FedoraWorkstation_log
jump mangle_PRE_FedoraWorkstation_deny
jump mangle_PRE_FedoraWorkstation_allow
jump mangle_PRE_FedoraWorkstation_post
}
chain mangle_PRE_FedoraWorkstation_pre {
}
chain mangle_PRE_FedoraWorkstation_log {
}
chain mangle_PRE_FedoraWorkstation_deny {
}
chain mangle_PRE_FedoraWorkstation_allow {
}
chain mangle_PRE_FedoraWorkstation_post {
}
chain filter_IN_policy_allow-host-ipv6 {
jump filter_IN_policy_allow-host-ipv6_pre
jump filter_IN_policy_allow-host-ipv6_log
jump filter_IN_policy_allow-host-ipv6_deny
jump filter_IN_policy_allow-host-ipv6_allow
jump filter_IN_policy_allow-host-ipv6_post
}
chain filter_IN_policy_allow-host-ipv6_pre {
}
chain filter_IN_policy_allow-host-ipv6_log {
}
chain filter_IN_policy_allow-host-ipv6_deny {
}
chain filter_IN_policy_allow-host-ipv6_allow {
icmpv6 type nd-neighbor-advert accept
icmpv6 type nd-neighbor-solicit accept
icmpv6 type nd-router-advert accept
icmpv6 type nd-redirect accept
}
chain filter_IN_policy_allow-host-ipv6_post {
}
chain nat_PRE_policy_allow-host-ipv6 {
jump nat_PRE_policy_allow-host-ipv6_pre
jump nat_PRE_policy_allow-host-ipv6_log
jump nat_PRE_policy_allow-host-ipv6_deny
jump nat_PRE_policy_allow-host-ipv6_allow
jump nat_PRE_policy_allow-host-ipv6_post
}
chain nat_PRE_policy_allow-host-ipv6_pre {
}
chain nat_PRE_policy_allow-host-ipv6_log {
}
chain nat_PRE_policy_allow-host-ipv6_deny {
}
chain nat_PRE_policy_allow-host-ipv6_allow {
}
chain nat_PRE_policy_allow-host-ipv6_post {
}
chain mangle_PRE_policy_allow-host-ipv6 {
jump mangle_PRE_policy_allow-host-ipv6_pre
jump mangle_PRE_policy_allow-host-ipv6_log
jump mangle_PRE_policy_allow-host-ipv6_deny
jump mangle_PRE_policy_allow-host-ipv6_allow
jump mangle_PRE_policy_allow-host-ipv6_post
}
chain mangle_PRE_policy_allow-host-ipv6_pre {
}
chain mangle_PRE_policy_allow-host-ipv6_log {
}
chain mangle_PRE_policy_allow-host-ipv6_deny {
}
chain mangle_PRE_policy_allow-host-ipv6_allow {
}
chain mangle_PRE_policy_allow-host-ipv6_post {
}
}
table ip filter {
chain FORWARD {
type filter hook forward priority filter; policy accept;
iifname "veth-bl-root" oifname "enp1s0" counter packets 3 bytes 252 accept
iifname "enp1s0" oifname "veth-bl-root" counter packets 3 bytes 252 accept
}
}
# Warning: table ip nat is managed by iptables-nft, do not touch!
table ip nat {
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
ip saddr 192.168.11.0/24 oifname "enp1s0" counter packets 1 bytes 84 masquerade
}
}
table inet mullvad {
chain prerouting {
type filter hook prerouting priority -199; policy accept;
iif != "wg0-mullvad" ct mark 0x00000f41 meta mark set 0x6d6f6c65
ip saddr 193.138.218.220 udp sport 16734 meta mark set 0x6d6f6c65
}
chain output {
type filter hook output priority filter; policy drop;
oif "lo" accept
ct mark 0x00000f41 accept
udp sport 68 ip daddr 255.255.255.255 udp dport 67 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff02::1:2 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff05::1:3 udp dport 547 accept
ip6 daddr ff02::2 icmpv6 type nd-router-solicit icmpv6 code no-route accept
ip6 daddr ff02::1:ff00:0/104 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
ip daddr 193.138.218.220 udp dport 16734 meta mark 0x6d6f6c65 accept
oif "wg0-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg0-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
udp dport 53 reject
tcp dport 53 reject with tcp reset
oif "wg0-mullvad" accept
reject
}
chain input {
type filter hook input priority filter; policy drop;
iif "lo" accept
ct mark 0x00000f41 accept
udp sport 67 udp dport 68 accept
ip6 saddr fe80::/10 udp sport 547 ip6 daddr fe80::/10 udp dport 546 accept
ip6 saddr fe80::/10 icmpv6 type nd-router-advert icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-redirect icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
ip saddr 193.138.218.220 udp sport 16734 ct state established accept
iif "wg0-mullvad" accept
}
chain forward {
type filter hook forward priority filter; policy drop;
ct mark 0x00000f41 accept
udp sport 68 ip daddr 255.255.255.255 udp dport 67 accept
udp sport 67 udp dport 68 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff02::1:2 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff05::1:3 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 547 ip6 daddr fe80::/10 udp dport 546 accept
ip6 daddr ff02::2 icmpv6 type nd-router-solicit icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-router-advert icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-redirect icmpv6 code no-route accept
ip6 daddr ff02::1:ff00:0/104 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
oif "wg0-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg0-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
udp dport 53 reject
tcp dport 53 reject with tcp reset
oif "wg0-mullvad" accept
iif "wg0-mullvad" ct state established accept
reject
}
chain mangle {
type route hook output priority mangle; policy accept;
oif "wg0-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg0-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
meta cgroup 5087041 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
}
chain nat {
type nat hook postrouting priority srcnat; policy accept;
oif "wg0-mullvad" ct mark 0x00000f41 drop
oif != "lo" ct mark 0x00000f41 masquerade
}
}
Marking packets from bl
Since Mullvad appears to ignore packets marked with 0x6d6f6c65, I tried adding a rule that stamps that mark on packets coming from bl.
$ sudo nft add rule inet mullvad prerouting ip saddr 192.168.11.2 meta mark set 0x6d6f6c65
$ sudo nft list ruleset | grep mullvad
table inet mullvad {
chain prerouting {
type filter hook prerouting priority -199; policy accept;
iif != "wg0-mullvad" ct mark 0x00000f41 meta mark set 0x6d6f6c65
ip saddr 170.62.100.66 udp sport 10501 meta mark set 0x6d6f6c65
ip saddr 192.168.11.2 meta mark set 0x6d6f6c65
}
...
}
But this did not seem to do anything.
$ ip route get 8.8.8.8 from 192.168.11.2
8.8.8.8 from 192.168.11.2 dev wg0-mullvad table 1836018789 uid 1000
cache
$ sudo ip netns exec bl traceroute -n 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 *^C
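One caveat I noticed: plain ip route get does not take the fwmark into account, so the output above may not reflect what a marked packet actually does. The mark-aware lookup can be checked like this (diagnostic only):

```shell
# Ask for the route as seen by a packet already carrying Mullvad's fwmark.
ip route get 8.8.8.8 from 192.168.11.2 mark 0x6d6f6c65
```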
Am I making a mistake somewhere, or is my whole approach flawed?
It looks like Mullvad implements conntrack marks for split tunneling, as described here. You can use this by marking your traffic with the conntrack mark 0x00000f41; try ct mark set 0x00000f41 meta mark set 0x6d6f6c65.
That said, your "separate routing table" approach is working; you just need to get through the firewall now. Still, I think you should use the mechanism Mullvad itself ships, so that you will find more documentation if you need to debug anything.
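Adapted to the namespace's address, the suggested rule would look something like this. This is an untested sketch; the source address 192.168.11.2 comes from the setup in the question, and the mark values from Mullvad's own ruleset shown there.

```shell
# Mark connections from the namespace with Mullvad's split-tunnel conntrack
# mark 0x00000f41, so its existing rules (ct mark 0x00000f41 accept /
# masquerade) route them outside the tunnel, and also set the fwmark
# 0x6d6f6c65 so the packet skips Mullvad's routing table.
nft add rule inet mullvad prerouting ip saddr 192.168.11.2 \
    ct mark set 0x00000f41 meta mark set 0x6d6f6c65
```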