I use Mullvad VPN on my host machine (Fedora 41). It sets up a WireGuard interface, wg0-mullvad, and I want traffic to and from my network namespace bl to bypass it. The end goal is to connect to my company's AnyConnect VPN from inside bl and then RDP into my office computer, while the rest of my Internet traffic keeps going through Mullvad.
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:11:c0:d6 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp1s0
valid_lft 81301sec preferred_lft 81301sec
inet6 fec0::e777:52d6:5436:4997/64 scope site dynamic noprefixroute
valid_lft 86366sec preferred_lft 14366sec
inet6 fe80::bc4e:6885:3d23:b51b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: veth-bl-root@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 62:fb:a5:45:d4:e3 brd ff:ff:ff:ff:ff:ff link-netns bl
inet 192.168.11.2/24 scope global veth-bl-root
valid_lft forever preferred_lft forever
inet6 fe80::60fb:a5ff:fe45:d4e3/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
4: vpn0: <NO-CARRIER,POINTOPOINT,MULTICAST,NOARP,UP> mtu 1207 qdisc fq_codel state DOWN group default qlen 500
link/none
10: wg0-mullvad: <POINTOPOINT,UP,LOWER_UP> mtu 1380 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 10.136.199.175/32 scope global wg0-mullvad
valid_lft forever preferred_lft forever
veth-bl-root is the virtual Ethernet interface connecting the root namespace to my bl namespace. You can ignore vpn0; it is the company VPN I mentioned, and for now I can only use it while Mullvad is disconnected.
I followed this tutorial to set up the namespace and connect it to the Internet, which means I enabled IP forwarding, packet forwarding, and IP masquerading. With Mullvad VPN disconnected this works, but as soon as I connect the VPN, requests time out. I believe that is because masquerading only happens once packets are routed out through enp1s0, and Mullvad VPN prevents that:
$ ip route get 8.8.8.8 from 192.168.11.2
8.8.8.8 from 192.168.11.2 dev wg0-mullvad table 1836018789 uid 1000
cache
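For reference, the forwarding and masquerading setup from the tutorial amounts to roughly the following (a sketch, not the tutorial's exact commands; the interface and subnet names match my setup above, and the rules correspond to the "table ip filter" and "table ip nat" entries visible in the full ruleset below):

```shell
# Sketch of the namespace connectivity setup (assumed commands, run as root).

# Let the kernel forward IPv4 packets between interfaces.
sysctl -w net.ipv4.ip_forward=1

# Allow forwarding between the veth pair and the uplink.
iptables -A FORWARD -i veth-bl-root -o enp1s0 -j ACCEPT
iptables -A FORWARD -i enp1s0 -o veth-bl-root -j ACCEPT

# Masquerade namespace traffic leaving through enp1s0.
iptables -t nat -A POSTROUTING -s 192.168.11.0/24 -o enp1s0 -j MASQUERADE
```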
Although wg0-mullvad is in practice the default route, it is not set as such in the main table.
$ ip route
default via 10.0.2.2 dev enp1s0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev enp1s0 proto kernel scope link src 10.0.2.15 metric 100
10.64.0.1 dev wg0-mullvad proto static
192.168.11.0/24 dev veth-bl-root proto kernel scope link src 192.168.11.2
$ ip rule list
0: from all lookup local
32764: from all lookup main suppress_prefixlength 0
32765: not from all fwmark 0x6d6f6c65 lookup 1836018789
32766: from all lookup main
32767: from all lookup default
$ ip route show table 1836018789
default dev wg0-mullvad proto static
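As far as I understand, packets carrying the 0x6d6f6c65 fwmark skip that table because of the "not from all fwmark" rule at priority 32765. ip route get accepts a mark argument, which would let me probe this directly (a hypothetical probe; 8.8.8.8 is just an example destination):

```shell
# Ask the kernel which route a packet carrying Mullvad's fwmark would take.
# Marked packets should fall through past table 1836018789 to the main
# table (my reading of the `ip rule list` output above).
ip route get 8.8.8.8 mark 0x6d6f6c65
```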
Since Mullvad uses packet marking and its own routing table, I tried two approaches, one based on each of those mechanisms.
Attempted fixes
Separate routing table
First, I tried creating a new routing table whose only route went through enp1s0, and added a rule specifying that all traffic originating from the namespace's IP address should use that table.
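Concretely, the setup looked roughly like this (reconstructed; the table name comes from an /etc/iproute2/rt_tables entry, and the table number is my own choice):

```shell
# Reconstruction of how the bl table and rule were created (run as root).
# The name/number mapping "100 bl" is an assumption.
echo "100 bl" >> /etc/iproute2/rt_tables

# Send all of the table's traffic out through enp1s0.
ip route add default via 10.0.2.15 dev enp1s0 table bl

# Prefer this table for anything sourced from the namespace subnet,
# at a priority above Mullvad's rules (32764 and up).
ip rule add from 192.168.11.2/24 lookup bl priority 32763
```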
$ ip route show table bl
default via 10.0.2.15 dev enp1s0
$ ip rule
0: from all lookup local
32763: from 192.168.11.2/24 lookup bl
32764: from all lookup main suppress_prefixlength 0
32765: not from all fwmark 0x6d6f6c65 lookup 1836018789
32766: from all lookup main
32767: from all lookup default
Since this took precedence over Mullvad's rules, I expected it to work. Indeed, ip route get looked promising:
$ ip route get 8.8.8.8 from 192.168.11.2
8.8.8.8 from 192.168.11.2 dev enp1s0 table bl uid 1000
cache
Unfortunately, requests still timed out.
$ sudo ip netns exec bl traceroute -n 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 *^C
$ sudo ip netns exec bl ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2076ms
Using ping -I 192.168.11.2 8.8.8.8
was no more successful:
$ sudo ping -I 192.168.11.2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 192.168.11.2 : 56(84) bytes of data.
From 192.168.11.2 icmp_seq=1 Destination Port Unreachable
ping: sendmsg: Operation not permitted
From 192.168.11.2 icmp_seq=2 Destination Port Unreachable
ping: sendmsg: Operation not permitted
From 192.168.11.2 icmp_seq=3 Destination Port Unreachable
ping: sendmsg: Operation not permitted
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2075ms
tcpdump -n -i veth-bl-root icmp
captured the packets, but tcpdump -n -i enp1s0 icmp
saw nothing. The packets were still going out through wg0-mullvad:
# This is what happened when running `ping -c 3 8.8.8.8`.
$ sudo tcpdump -n -i wg0-mullvad icmp
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wg0-mullvad, link-type RAW (Raw IP), snapshot length 262144 bytes
08:55:53.448220 IP 10.136.199.175 > 10.64.0.1: ICMP echo request, id 13627, seq 601, length 50
08:55:53.532425 IP 10.64.0.1 > 10.136.199.175: ICMP echo reply, id 13627, seq 601, length 50
08:55:59.449654 IP 10.136.199.175 > 10.64.0.1: ICMP echo request, id 13627, seq 602, length 50
08:55:59.534057 IP 10.64.0.1 > 10.136.199.175: ICMP echo reply, id 13627, seq 602, length 50
08:56:05.451552 IP 10.136.199.175 > 10.64.0.1: ICMP echo request, id 13627, seq 603, length 50
08:56:05.535728 IP 10.64.0.1 > 10.136.199.175: ICMP echo reply, id 13627, seq 603, length 50
^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel
Here is the nft list ruleset
output at this stage:
table inet firewalld {
ct helper helper-netbios-ns-udp {
type "netbios-ns" protocol udp
l3proto ip
}
chain mangle_PREROUTING {
type filter hook prerouting priority mangle + 10; policy accept;
jump mangle_PREROUTING_POLICIES
}
chain mangle_PREROUTING_POLICIES {
iifname "enp1s0" jump mangle_PRE_policy_allow-host-ipv6
iifname "enp1s0" jump mangle_PRE_FedoraWorkstation
iifname "enp1s0" return
jump mangle_PRE_policy_allow-host-ipv6
jump mangle_PRE_FedoraWorkstation
return
}
chain nat_PREROUTING {
type nat hook prerouting priority dstnat + 10; policy accept;
jump nat_PREROUTING_POLICIES
}
chain nat_PREROUTING_POLICIES {
iifname "enp1s0" jump nat_PRE_policy_allow-host-ipv6
iifname "enp1s0" jump nat_PRE_FedoraWorkstation
iifname "enp1s0" return
jump nat_PRE_policy_allow-host-ipv6
jump nat_PRE_FedoraWorkstation
return
}
chain nat_POSTROUTING {
type nat hook postrouting priority srcnat + 10; policy accept;
jump nat_POSTROUTING_POLICIES
}
chain nat_POSTROUTING_POLICIES {
iifname "enp1s0" oifname "enp1s0" jump nat_POST_FedoraWorkstation
iifname "enp1s0" oifname "enp1s0" return
oifname "enp1s0" jump nat_POST_FedoraWorkstation
oifname "enp1s0" return
iifname "enp1s0" jump nat_POST_FedoraWorkstation
iifname "enp1s0" return
jump nat_POST_FedoraWorkstation
return
}
chain nat_OUTPUT {
type nat hook output priority dstnat + 10; policy accept;
jump nat_OUTPUT_POLICIES
}
chain nat_OUTPUT_POLICIES {
oifname "enp1s0" jump nat_OUT_FedoraWorkstation
oifname "enp1s0" return
jump nat_OUT_FedoraWorkstation
return
}
chain filter_PREROUTING {
type filter hook prerouting priority filter + 10; policy accept;
icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
meta nfproto ipv6 fib saddr . mark . iif oif missing drop
}
chain filter_INPUT {
type filter hook input priority filter + 10; policy accept;
ct state { established, related } accept
ct status dnat accept
iifname "lo" accept
ct state invalid drop
jump filter_INPUT_POLICIES
reject with icmpx admin-prohibited
}
chain filter_FORWARD {
type filter hook forward priority filter + 10; policy accept;
ct state { established, related } accept
ct status dnat accept
iifname "lo" accept
ct state invalid drop
ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
jump filter_FORWARD_POLICIES
reject with icmpx admin-prohibited
}
chain filter_OUTPUT {
type filter hook output priority filter + 10; policy accept;
ct state { established, related } accept
oifname "lo" accept
ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
jump filter_OUTPUT_POLICIES
}
chain filter_INPUT_POLICIES {
iifname "enp1s0" jump filter_IN_policy_allow-host-ipv6
iifname "enp1s0" jump filter_IN_FedoraWorkstation
iifname "enp1s0" reject with icmpx admin-prohibited
jump filter_IN_policy_allow-host-ipv6
jump filter_IN_FedoraWorkstation
reject with icmpx admin-prohibited
}
chain filter_FORWARD_POLICIES {
iifname "enp1s0" oifname "enp1s0" jump filter_FWD_FedoraWorkstation
iifname "enp1s0" oifname "enp1s0" reject with icmpx admin-prohibited
iifname "enp1s0" jump filter_FWD_FedoraWorkstation
iifname "enp1s0" reject with icmpx admin-prohibited
oifname "enp1s0" jump filter_FWD_FedoraWorkstation
oifname "enp1s0" reject with icmpx admin-prohibited
jump filter_FWD_FedoraWorkstation
reject with icmpx admin-prohibited
}
chain filter_OUTPUT_POLICIES {
oifname "enp1s0" jump filter_OUT_FedoraWorkstation
oifname "enp1s0" return
jump filter_OUT_FedoraWorkstation
return
}
chain filter_IN_FedoraWorkstation {
jump filter_IN_FedoraWorkstation_pre
jump filter_IN_FedoraWorkstation_log
jump filter_IN_FedoraWorkstation_deny
jump filter_IN_FedoraWorkstation_allow
jump filter_IN_FedoraWorkstation_post
meta l4proto { icmp, ipv6-icmp } accept
}
chain filter_IN_FedoraWorkstation_pre {
}
chain filter_IN_FedoraWorkstation_log {
}
chain filter_IN_FedoraWorkstation_deny {
}
chain filter_IN_FedoraWorkstation_allow {
ip6 daddr fe80::/64 udp dport 546 accept
tcp dport 22 accept
udp dport 137 ct helper set "helper-netbios-ns-udp"
udp dport 137 accept
udp dport 138 accept
ip daddr 224.0.0.251 udp dport 5353 accept
ip6 daddr ff02::fb udp dport 5353 accept
udp dport 1025-65535 accept
tcp dport 1025-65535 accept
}
chain filter_IN_FedoraWorkstation_post {
}
chain filter_OUT_FedoraWorkstation {
jump filter_OUT_FedoraWorkstation_pre
jump filter_OUT_FedoraWorkstation_log
jump filter_OUT_FedoraWorkstation_deny
jump filter_OUT_FedoraWorkstation_allow
jump filter_OUT_FedoraWorkstation_post
}
chain filter_OUT_FedoraWorkstation_pre {
}
chain filter_OUT_FedoraWorkstation_log {
}
chain filter_OUT_FedoraWorkstation_deny {
}
chain filter_OUT_FedoraWorkstation_allow {
}
chain filter_OUT_FedoraWorkstation_post {
}
chain nat_OUT_FedoraWorkstation {
jump nat_OUT_FedoraWorkstation_pre
jump nat_OUT_FedoraWorkstation_log
jump nat_OUT_FedoraWorkstation_deny
jump nat_OUT_FedoraWorkstation_allow
jump nat_OUT_FedoraWorkstation_post
}
chain nat_OUT_FedoraWorkstation_pre {
}
chain nat_OUT_FedoraWorkstation_log {
}
chain nat_OUT_FedoraWorkstation_deny {
}
chain nat_OUT_FedoraWorkstation_allow {
}
chain nat_OUT_FedoraWorkstation_post {
}
chain nat_POST_FedoraWorkstation {
jump nat_POST_FedoraWorkstation_pre
jump nat_POST_FedoraWorkstation_log
jump nat_POST_FedoraWorkstation_deny
jump nat_POST_FedoraWorkstation_allow
jump nat_POST_FedoraWorkstation_post
}
chain nat_POST_FedoraWorkstation_pre {
}
chain nat_POST_FedoraWorkstation_log {
}
chain nat_POST_FedoraWorkstation_deny {
}
chain nat_POST_FedoraWorkstation_allow {
}
chain nat_POST_FedoraWorkstation_post {
}
chain filter_FWD_FedoraWorkstation {
jump filter_FWD_FedoraWorkstation_pre
jump filter_FWD_FedoraWorkstation_log
jump filter_FWD_FedoraWorkstation_deny
jump filter_FWD_FedoraWorkstation_allow
jump filter_FWD_FedoraWorkstation_post
}
chain filter_FWD_FedoraWorkstation_pre {
}
chain filter_FWD_FedoraWorkstation_log {
}
chain filter_FWD_FedoraWorkstation_deny {
}
chain filter_FWD_FedoraWorkstation_allow {
oifname "enp1s0" accept
}
chain filter_FWD_FedoraWorkstation_post {
}
chain nat_PRE_FedoraWorkstation {
jump nat_PRE_FedoraWorkstation_pre
jump nat_PRE_FedoraWorkstation_log
jump nat_PRE_FedoraWorkstation_deny
jump nat_PRE_FedoraWorkstation_allow
jump nat_PRE_FedoraWorkstation_post
}
chain nat_PRE_FedoraWorkstation_pre {
}
chain nat_PRE_FedoraWorkstation_log {
}
chain nat_PRE_FedoraWorkstation_deny {
}
chain nat_PRE_FedoraWorkstation_allow {
}
chain nat_PRE_FedoraWorkstation_post {
}
chain mangle_PRE_FedoraWorkstation {
jump mangle_PRE_FedoraWorkstation_pre
jump mangle_PRE_FedoraWorkstation_log
jump mangle_PRE_FedoraWorkstation_deny
jump mangle_PRE_FedoraWorkstation_allow
jump mangle_PRE_FedoraWorkstation_post
}
chain mangle_PRE_FedoraWorkstation_pre {
}
chain mangle_PRE_FedoraWorkstation_log {
}
chain mangle_PRE_FedoraWorkstation_deny {
}
chain mangle_PRE_FedoraWorkstation_allow {
}
chain mangle_PRE_FedoraWorkstation_post {
}
chain filter_IN_policy_allow-host-ipv6 {
jump filter_IN_policy_allow-host-ipv6_pre
jump filter_IN_policy_allow-host-ipv6_log
jump filter_IN_policy_allow-host-ipv6_deny
jump filter_IN_policy_allow-host-ipv6_allow
jump filter_IN_policy_allow-host-ipv6_post
}
chain filter_IN_policy_allow-host-ipv6_pre {
}
chain filter_IN_policy_allow-host-ipv6_log {
}
chain filter_IN_policy_allow-host-ipv6_deny {
}
chain filter_IN_policy_allow-host-ipv6_allow {
icmpv6 type nd-neighbor-advert accept
icmpv6 type nd-neighbor-solicit accept
icmpv6 type nd-router-advert accept
icmpv6 type nd-redirect accept
}
chain filter_IN_policy_allow-host-ipv6_post {
}
chain nat_PRE_policy_allow-host-ipv6 {
jump nat_PRE_policy_allow-host-ipv6_pre
jump nat_PRE_policy_allow-host-ipv6_log
jump nat_PRE_policy_allow-host-ipv6_deny
jump nat_PRE_policy_allow-host-ipv6_allow
jump nat_PRE_policy_allow-host-ipv6_post
}
chain nat_PRE_policy_allow-host-ipv6_pre {
}
chain nat_PRE_policy_allow-host-ipv6_log {
}
chain nat_PRE_policy_allow-host-ipv6_deny {
}
chain nat_PRE_policy_allow-host-ipv6_allow {
}
chain nat_PRE_policy_allow-host-ipv6_post {
}
chain mangle_PRE_policy_allow-host-ipv6 {
jump mangle_PRE_policy_allow-host-ipv6_pre
jump mangle_PRE_policy_allow-host-ipv6_log
jump mangle_PRE_policy_allow-host-ipv6_deny
jump mangle_PRE_policy_allow-host-ipv6_allow
jump mangle_PRE_policy_allow-host-ipv6_post
}
chain mangle_PRE_policy_allow-host-ipv6_pre {
}
chain mangle_PRE_policy_allow-host-ipv6_log {
}
chain mangle_PRE_policy_allow-host-ipv6_deny {
}
chain mangle_PRE_policy_allow-host-ipv6_allow {
}
chain mangle_PRE_policy_allow-host-ipv6_post {
}
}
table ip filter {
chain FORWARD {
type filter hook forward priority filter; policy accept;
iifname "veth-bl-root" oifname "enp1s0" counter packets 3 bytes 252 accept
iifname "enp1s0" oifname "veth-bl-root" counter packets 3 bytes 252 accept
}
}
# Warning: table ip nat is managed by iptables-nft, do not touch!
table ip nat {
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
ip saddr 192.168.11.0/24 oifname "enp1s0" counter packets 1 bytes 84 masquerade
}
}
table inet mullvad {
chain prerouting {
type filter hook prerouting priority -199; policy accept;
iif != "wg0-mullvad" ct mark 0x00000f41 meta mark set 0x6d6f6c65
ip saddr 193.138.218.220 udp sport 16734 meta mark set 0x6d6f6c65
}
chain output {
type filter hook output priority filter; policy drop;
oif "lo" accept
ct mark 0x00000f41 accept
udp sport 68 ip daddr 255.255.255.255 udp dport 67 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff02::1:2 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff05::1:3 udp dport 547 accept
ip6 daddr ff02::2 icmpv6 type nd-router-solicit icmpv6 code no-route accept
ip6 daddr ff02::1:ff00:0/104 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
ip daddr 193.138.218.220 udp dport 16734 meta mark 0x6d6f6c65 accept
oif "wg0-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg0-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
udp dport 53 reject
tcp dport 53 reject with tcp reset
oif "wg0-mullvad" accept
reject
}
chain input {
type filter hook input priority filter; policy drop;
iif "lo" accept
ct mark 0x00000f41 accept
udp sport 67 udp dport 68 accept
ip6 saddr fe80::/10 udp sport 547 ip6 daddr fe80::/10 udp dport 546 accept
ip6 saddr fe80::/10 icmpv6 type nd-router-advert icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-redirect icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
ip saddr 193.138.218.220 udp sport 16734 ct state established accept
iif "wg0-mullvad" accept
}
chain forward {
type filter hook forward priority filter; policy drop;
ct mark 0x00000f41 accept
udp sport 68 ip daddr 255.255.255.255 udp dport 67 accept
udp sport 67 udp dport 68 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff02::1:2 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 546 ip6 daddr ff05::1:3 udp dport 547 accept
ip6 saddr fe80::/10 udp sport 547 ip6 daddr fe80::/10 udp dport 546 accept
ip6 daddr ff02::2 icmpv6 type nd-router-solicit icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-router-advert icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-redirect icmpv6 code no-route accept
ip6 daddr ff02::1:ff00:0/104 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 saddr fe80::/10 icmpv6 type nd-neighbor-solicit icmpv6 code no-route accept
ip6 daddr fe80::/10 icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
icmpv6 type nd-neighbor-advert icmpv6 code no-route accept
oif "wg0-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg0-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
udp dport 53 reject
tcp dport 53 reject with tcp reset
oif "wg0-mullvad" accept
iif "wg0-mullvad" ct state established accept
reject
}
chain mangle {
type route hook output priority mangle; policy accept;
oif "wg0-mullvad" udp dport 53 ip daddr 10.64.0.1 accept
oif "wg0-mullvad" tcp dport 53 ip daddr 10.64.0.1 accept
meta cgroup 5087041 ct mark set 0x00000f41 meta mark set 0x6d6f6c65
}
chain nat {
type nat hook postrouting priority srcnat; policy accept;
oif "wg0-mullvad" ct mark 0x00000f41 drop
oif != "lo" ct mark 0x00000f41 masquerade
}
}
Marking packets from bl
Since Mullvad's routing rules appear to skip packets marked with 0x6d6f6c65,
I tried adding a rule that stamps that mark onto packets coming from bl.
$ sudo nft add rule inet mullvad prerouting ip saddr 192.168.11.2 meta mark set 0x6d6f6c65
$ sudo nft list ruleset | grep mullvad
table inet mullvad {
chain prerouting {
type filter hook prerouting priority -199; policy accept;
iif != "wg0-mullvad" ct mark 0x00000f41 meta mark set 0x6d6f6c65
ip saddr 170.62.100.66 udp sport 10501 meta mark set 0x6d6f6c65
ip saddr 192.168.11.2 meta mark set 0x6d6f6c65
}
...
}
But that made no difference.
$ ip route get 8.8.8.8 from 192.168.11.2
8.8.8.8 from 192.168.11.2 dev wg0-mullvad table 1836018789 uid 1000
cache
$ sudo ip netns exec bl traceroute -n 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 *^C
Did I make a mistake somewhere, or is my whole approach flawed?