Hope you're doing well.
For two days now I've been struggling with a connectivity issue between the host system and a container. It seems I'm randomly losing packets in "transit" to the container (47 out of 100).
I can see the packets "leaving" the host interface docker0, but sometimes they never arrive inside the container. This is reproducible regardless of the image used (different software, not just different versions).
I'd really appreciate any pointers, as I'm out of ideas at this point. Thanks for your time!
Container
tcpdump host 172.17.0.6 and icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:34:22.406364 IP 172.17.0.1 > files.bit-in: ICMP echo request, id 19794, seq 1, length 64
13:34:22.406397 IP files.bit-in > 172.17.0.1: ICMP echo reply, id 19794, seq 1, length 64
>> missing seq# 2 <<
13:34:24.451683 IP 172.17.0.1 > files.bit-in: ICMP echo request, id 19794, seq 3, length 64
13:34:24.451721 IP files.bit-in > 172.17.0.1: ICMP echo reply, id 19794, seq 3, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
Host
Debian GNU/Linux 10 (buster)
Kernel 4.19.0-10-amd64 (Debian 4.19.132-1)
Docker version 19.03.12, build 48a66213fe
ping 172.17.0.6 -c3
PING 172.17.0.6 (172.17.0.6) 56(84) bytes of data.
64 bytes from 172.17.0.6: icmp_seq=1 ttl=64 time=0.111 ms
>> missing seq# 2 <<
64 bytes from 172.17.0.6: icmp_seq=3 ttl=64 time=0.097 ms
--- 172.17.0.6 ping statistics ---
3 packets transmitted, 2 received, 33.3333% packet loss, time 47ms
rtt min/avg/max/mdev = 0.097/0.104/0.111/0.007 ms
tcpdump -i docker0 host 172.17.0.6 and icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:34:22.406332 IP 172.17.0.1 > 172.17.0.6: ICMP echo request, id 19794, seq 1, length 64
15:34:22.406401 IP 172.17.0.6 > 172.17.0.1: ICMP echo reply, id 19794, seq 1, length 64
15:34:23.427549 IP 172.17.0.1 > 172.17.0.6: ICMP echo request, id 19794, seq 2, length 64
15:34:24.451657 IP 172.17.0.1 > 172.17.0.6: ICMP echo request, id 19794, seq 3, length 64
15:34:24.451725 IP 172.17.0.6 > 172.17.0.1: ICMP echo reply, id 19794, seq 3, length 64
^C
5 packets captured
5 packets received by filter
0 packets dropped by kernel
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "fbd2aea6a1c634c95ea3e0ac628daf0c266f77cdda63edc573d978d142c57ed8",
        "Created": "2020-08-21T22:13:41.905418474+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "90f16aa06342932ae2bb258c0f1b67a833db3de8142336dbec1cda4468ddf76e": {
                "Name": "container-test",
                "EndpointID": "6e3b17d55b1836d6a1a12219011ceb31950c0fc421fb17923a983eea3c2d559a",
                "MacAddress": "02:42:ac:11:00:07",
                "IPv4Address": "172.17.0.6/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Update 1
I created a new bridge network with minimal options, and for containers running in the new network the problem seems to be gone.
docker network create --subnet=172.20.0.0/24 --gateway=172.20.0.1 docker20
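To test against the new network, a container can be started directly on it; a minimal sketch (the container name and image below are placeholders, not from the original setup):

```shell
# Start a throwaway container attached to the new minimal bridge network
# (name "ping-test" and image "alpine" are hypothetical examples)
docker run -d --name ping-test --network docker20 alpine sleep infinity

# Confirm it received an address from 172.20.0.0/24, then repeat the ping test
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ping-test
```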
The only difference that seems relevant is the options. The new network has none, while the old one has:
    "Options": {
        "com.docker.network.bridge.default_bridge": "true",
        "com.docker.network.bridge.enable_icc": "true",
        "com.docker.network.bridge.enable_ip_masquerade": "true",
        "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
        "com.docker.network.bridge.name": "docker0",
        "com.docker.network.driver.mtu": "1500"
    },
Will do more investigation.