Questions [docker] (unix)

Basil Bourque
Asked: 2025-04-30 05:49:53 +0800 CST

Podman error: Docker socket not correctly disguised

  • 6

I installed a fresh copy of the Podman Desktop app, version 1.18.0, on macOS Sequoia. After launching the Podman Desktop app, a floating notification window appeared saying:

Docker socket is not disguised correctly

Podman is not properly disguising the Docker socket (/var/run/docker.sock). This may cause Docker-compatible tools to fail. Please disable any conflicting tools and re-enable Docker compatibility.

Searching Google and Ecosia for this message turns up nothing. Am I the only person hitting this error?

👉🏽 What is this error, and how do I fix it? I have no idea how to properly disguise a socket.

I have no legacy Docker work to bring over to this Podman. So do I need to care about Docker compatibility at all?
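
A quick way to see what is actually answering at the Docker socket path (a diagnostic sketch; none of these commands come from the notification itself):

# does the compatibility socket exist, and is it a symlink?
ls -l /var/run/docker.sock

# is a Podman machine actually running?
podman machine list

# ask the Docker-compatible endpoint to identify itself; when Podman is
# disguising the socket, this ping succeeds
curl -s --unix-socket /var/run/docker.sock http://localhost/_ping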

docker
  • 1 answer
  • 56 Views
Hauke Laging
Asked: 2025-03-23 07:15:02 +0800 CST

Strange Docker veth interface (peer) names

  • 6

On a Docker host (which I did not set up; I'm also not very familiar with Docker) I noticed interface names that I don't understand:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000

    link/ether 5e:44:5a:26:82:e7 brd ff:ff:ff:ff:ff:ff

8: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default

    link/ether ae:b3:52:68:1d:5b brd ff:ff:ff:ff:ff:ff

12: br-7fef86ec14bd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default

    link/ether 76:d3:a0:d7:73:0a brd ff:ff:ff:ff:ff:ff

33: vethc35030f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7fef86ec14bd state UP mode DEFAULT group default

    link/ether 6e:b1:3e:85:88:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
ip -d link show dev vethc35030f

33: vethc35030f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7fef86ec14bd state UP mode DEFAULT group default
    link/ether 6e:b1:3e:85:88:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1 minmtu 68 maxmtu 65535
    veth
    bridge_slave [...]

So vethc35030f: this not only sounds like a veth, it actually is one.

But what about the @if2 part? The documentation says that veth interfaces are always created in pairs, and that the part after the @ is the peer interface's name or (if it lives in a different namespace) its index. I don't know whether it is possible to later change a veth's peer, especially to a different type of interface.

somename@if2 is what I would expect for a macvlan (or similar) interface, but that is not the case here.
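
For what it's worth, the number after the @ can be cross-checked through sysfs (a sketch; the container name mycontainer and its eth0 are illustrative assumptions, since the question doesn't say which container owns the peer):

# on the host: a veth's iflink is the ifindex of its peer,
# numbered inside the peer's own namespace
cat /sys/class/net/vethc35030f/iflink
# -> 2, matching the @if2 suffix

# inside the container, the matching interface carries that index
docker exec mycontainer cat /sys/class/net/eth0/ifindex
# -> 2; `ip link` in there would show it as eth0@if33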

docker
  • 1 answer
  • 63 Views
Jean
Asked: 2025-02-21 21:04:37 +0800 CST

Fail2ban problem with my Docker Xibo

  • 6

I'm currently trying to get Fail2ban to work with my Xibo container, but even while I'm listed as banned, I can still attempt to log in.

debian@vps-ec7a07fd:~/xibo$ sudo fail2ban-client status xibo
Status for the jail: xibo
|- Filter
|  |- Currently failed: 1
|  |- Total failed:     7
|  `- File list:        /var/log/xilog/container.log
`- Actions
   |- Currently banned: 1
   |- Total banned:     1
   `- Banned IP list:   172.18.0.1

I went and checked iptables but didn't find anything unusual.

 pkts bytes target     prot opt in     out     source               destination
 32  6227 f2b-xibo   6    --  *      *       0.0.0.0/0            0.0.0.0/0            
 multiport dports 80,443

Finally, I don't see any errors in fail2ban.log.

2025-02-21 12:35:55,883 fail2ban.filter         [133082]: INFO    [xibo] Found 
172.18.0.1 - 2025-02-21 12:35:55
2025-02-21 12:35:57,813 fail2ban.filter         [133082]: INFO    [xibo] Found 
172.18.0.1 - 2025-02-21 12:35:57
2025-02-21 12:36:00,516 fail2ban.filter         [133082]: INFO    [xibo] Found 
172.18.0.1 - 2025-02-21 12:35:59
2025-02-21 12:36:00,674 fail2ban.actions        [133082]: NOTICE  [xibo] Ban 172.18.0.1
2025-02-21 12:36:02,119 fail2ban.filter         [133082]: INFO    [xibo] Found 
172.18.0.1 - 2025-02-21 12:36:01
2025-02-21 12:36:03,743 fail2ban.filter         [133082]: INFO    [xibo] Found 
172.18.0.1 - 2025-02-21 12:36:03
2025-02-21 12:36:05,501 fail2ban.filter         [133082]: INFO    [xibo] Found 
172.18.0.1 - 2025-02-21 12:36:05
2025-02-21 12:36:05,904 fail2ban.actions        [133082]: NOTICE  [xibo] 172.18.0.1 a 
lready banned
2025-02-21 12:36:07,244 fail2ban.filter         [133082]: INFO    [xibo] Found 
172.18.0.1 - 2025-02-21 12:36:07
2025-02-21 12:39:05,184 fail2ban.actions        [133082]: NOTICE  [xibo] Unban 
172.18.0.1

If you have any idea where the problem might come from, I'm open to any suggestions.
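
One thing that may be worth checking (an assumption based on the setup, not something stated in the question): traffic that docker forwards into a container traverses the FORWARD/DOCKER-USER chains rather than INPUT, where a jail's default ban rules often land, and 172.18.0.1 is typically the bridge gateway rather than the real client address. A quick manual test of a ban placed where docker-forwarded traffic actually passes:

# insert a drop in DOCKER-USER (a hand test, not a fail2ban action config)
sudo iptables -I DOCKER-USER -s 172.18.0.1 -j DROP
# ...retry the login; if it is blocked now, point the jail's banaction at
# this chain, then clean up:
sudo iptables -D DOCKER-USER -s 172.18.0.1 -j DROP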

docker
  • 1 answer
  • 24 Views
CarloC
Asked: 2024-11-13 16:43:07 +0800 CST

Attaching gdb from a docker container to a process running in a different PID namespace

  • 5

I built a docker image with gcc, binutils, and the gdb debugger installed.

I want to attach gdb from that docker container to a process running inside an lxc container on the same Linux host. The lxc container uses its own PID namespace, so gdb running in the docker container complains that the target process and the debugger are not in the same PID namespace.

[SR-PCE-251:~]$ docker run -it --pid host --rm --cap-add=SYS_PTRACE --security-opt seccomp=unconfined carlo/ubuntu
root@e7b2db23af34:/#
root@e7b2db23af34:/# id
uid=0(root) gid=0(root) groups=0(root)
root@e7b2db23af34:/# 
root@e7b2db23af34:/# gdb -q attach 11365
attach: No such file or directory.
Attaching to process 11365
[New LWP 24283]
[New LWP 20025]
[New LWP 20024]
[New LWP 19992]
[New LWP 19991]
[New LWP 13974]
[New LWP 13970]
[New LWP 13969]
[New LWP 13968]
[New LWP 13967]
[New LWP 13962]
[New LWP 13958]
[New LWP 13957]
[New LWP 13954]
[New LWP 13952]
[New LWP 13944]
[New LWP 12078]
[New LWP 11822]
[New LWP 11543]
[New LWP 11515]
[New LWP 11489]
[New LWP 11483]
[New LWP 11482]
[New LWP 11477]
[New LWP 11476]

warning: "target:/proc/11365/exe": could not open as an executable file: Operation not permitted.

warning: `target:/proc/11365/exe': can't open to read symbols: Operation not permitted.

warning: Could not load vsyscall page because no executable was specified

warning: Target and debugger are in different PID namespaces; thread lists and other data are likely unreliable.  Connect to gdbserver inside the container.
0x00007f0bf997ac73 in ?? ()
(gdb)

How can I get rid of this?
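
The last warning already points at one route: run gdbserver inside the target's namespaces and attach to it remotely. A sketch (the port, the address, and the PID as seen inside the lxc container are assumptions; the process has a different PID there than the 11365 seen from the host):

# inside the lxc container
gdbserver --attach :2345 <pid-inside-container>

# from the debugging docker container
gdb -q -ex 'target remote <lxc-container-ip>:2345'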

docker
  • 1 answer
  • 20 Views
jamshid
Asked: 2024-10-12 01:05:33 +0800 CST

Why can't "dig" on rockylinux 9 find a container/host named "https" on a docker compose network?

  • 6

Sorry, I don't know whether this is a docker problem or a dig problem on rockylinux 9. On rockylinux 8 everything works as expected.

I have a docker-compose.yml file containing a service named https. This lets the container be referenced by the hostname https. While ping https works, for some reason dig https (DiG 9.16.23-RH) does not work on rockylinux 9. It does work on rockylinux 8 (DiG 9.11.36-RedHat-9.11.36-16.el8_10.2). If I change the service name to httpsx, then dig httpsx works.

services:
  https:
    image: "rockylinux:${RL_VERSION}"
    command: bash -c "yum install -y iputils bind-utils && echo '=====dig version output====' && dig -v && echo '=====ping https output====' && ping -c 3 https && echo '=====dig https output====' && dig +short https"
    environment:
       - RL_VERSION

Working on 8:

% RL_VERSION=8 docker-compose up
Attaching to https-1
https-1  | Rocky Linux 8 - AppStream                       5.7 MB/s |  11 MB     00:01    
...
https-1  | Complete!
https-1  | =====dig version output====
https-1  | DiG 9.11.36-RedHat-9.11.36-16.el8_10.2
https-1  | =====ping https output====
https-1  | PING https (172.21.0.2) 56(84) bytes of data.
https-1  | 64 bytes from c3f0c7a6613c (172.21.0.2): icmp_seq=1 ttl=64 time=0.558 ms
https-1  | 64 bytes from c3f0c7a6613c (172.21.0.2): icmp_seq=2 ttl=64 time=0.051 ms
https-1  | 64 bytes from c3f0c7a6613c (172.21.0.2): icmp_seq=3 ttl=64 time=0.040 ms
https-1  | 
https-1  | --- https ping statistics ---
https-1  | 3 packets transmitted, 3 received, 0% packet loss, time 2025ms
https-1  | rtt min/avg/max/mdev = 0.040/0.216/0.558/0.241 ms
https-1  | =====dig https output====
https-1  | 172.21.0.2

Failing on 9:

% RL_VERSION=9 docker-compose up
[+] Running 1/1
 ✔ Container testhttps-https-1  Recreated                                                                                                    0.2s 
Attaching to https-1
https-1  | Rocky Linux 9 - BaseOS                          2.4 MB/s | 2.4 MB     00:00    
...
https-1  | Complete!
https-1  | =====dig version output====
https-1  | DiG 9.16.23-RH
https-1  | =====ping https output====
https-1  | PING https (172.21.0.2) 56(84) bytes of data.
https-1  | 64 bytes from 4a2841b5dac9 (172.21.0.2): icmp_seq=1 ttl=64 time=0.404 ms
https-1  | 64 bytes from 4a2841b5dac9 (172.21.0.2): icmp_seq=2 ttl=64 time=0.117 ms
https-1  | 64 bytes from 4a2841b5dac9 (172.21.0.2): icmp_seq=3 ttl=64 time=0.088 ms
https-1  | 
https-1  | --- https ping statistics ---
https-1  | 3 packets transmitted, 3 received, 0% packet loss, time 2009ms
https-1  | rtt min/avg/max/mdev = 0.088/0.203/0.404/0.142 ms
https-1  | =====dig https output====
https-1  | c.root-servers.net.
https-1  | l.root-servers.net.
https-1  | e.root-servers.net.
https-1  | d.root-servers.net.
https-1  | i.root-servers.net.
https-1  | b.root-servers.net.
https-1  | g.root-servers.net.
https-1  | m.root-servers.net.
https-1  | a.root-servers.net.
https-1  | f.root-servers.net.
https-1  | h.root-servers.net.
https-1  | j.root-servers.net.
https-1  | k.root-servers.net.
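
A plausible explanation (an assumption, not something confirmed in the question): newer BIND knows HTTPS as a resource-record type, so the bare argument in dig https is parsed as a query type rather than a name, and dig falls back to its default query, producing the root NS records seen above. dig's -q flag forces the word to be treated as the query name:

dig -q https +short
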
docker
  • 1 answer
  • 27 Views
atl123
Asked: 2024-09-21 04:50:24 +0800 CST

Syslog logging driver gives "protocol wrong type for socket" error

  • 4

I have a service defined via docker compose (see the definition below). When I try to start this service via docker-compose -f up --wait -d my_service, I get the error

Error response from daemon: failed to create task for container: failed to initialize logging driver: dial unix /dev/log: connect: protocol wrong type for socket

On the host server where the docker compose cmd is executed, I can see that the socket exists and my user has write permission:

srw-rw-rw-. 1 root root 0 Aug 29  2023 /dev/log

The service definition:

  my_service:
    command: <omitted>
    image: <omitted>
    volumes:
      - "/dev/log:/dev/log"
    logging:
      driver: "syslog"
      options:
        syslog-address: "unix:///dev/log"
        tag: "my_service"

Does anyone know what is causing this error?
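
One diagnostic worth trying (a sketch based on what this errno usually means, not on anything stated in the question): "protocol wrong type for socket" typically appears when a client connects with SOCK_STREAM to a SOCK_DGRAM unix socket, or vice versa, so it helps to check which type the host's /dev/log listener is bound with:

# Netid u_dgr = datagram, u_str = stream; /dev/log is often a symlink to
# /run/systemd/journal/dev-log, so match both spellings
ss -xl | grep -E 'dev-log|/dev/log'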

docker
  • 1 answer
  • 14 Views
Franz Wong
Asked: 2024-09-06 21:40:19 +0800 CST

cgroup memory.max gets overwritten by docker

  • 5

I created a cgroup mygroup.slice with the following script, setting memory.max to 200M. I can see that memory.max then contains 209715200.

sudo mkdir -p /sys/fs/cgroup/mygroup.slice
echo "200M" | sudo tee "/sys/fs/cgroup/mygroup.slice/memory.max"
echo "+memory" | sudo tee "/sys/fs/cgroup/mygroup.slice/cgroup.subtree_control"
cat "/sys/fs/cgroup/mygroup.slice/memory.max"

Then I started a docker container with --cgroup-parent:

docker run -d --cgroup-parent mygroup.slice --env CONTAINER_NAME=container1 --name container1 simple/container1

After this, memory.max contains max instead of 209715200.

How can I stop docker from overwriting it? Thanks.

Here are the details of my environment.

cgroup v2 is in use:

stat -fc %T /sys/fs/cgroup/
cgroup2fs

Here is the output of docker version:

docker version
Client: Docker Engine - Community
 Version:           27.2.0
 API version:       1.47
 Go version:        go1.21.13
 Git commit:        3ab4256
 Built:             Tue Aug 27 14:15:45 2024
 OS/Arch:           linux/arm64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          27.2.0
  API version:      1.47 (minimum version 1.24)
  Go version:       go1.21.13
  Git commit:       3ab5c7d
  Built:            Tue Aug 27 14:15:45 2024
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.7.21
  GitCommit:        472731909fa34bd7bc9c087e4c27943f9835f111
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa920
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
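
A workaround sketch rather than a confirmed fix: pass the limit to the container itself with -m, so it is written into the container's own child cgroup below the slice and no longer depends on the value the daemon writes to the parent's memory.max:

docker run -d --cgroup-parent mygroup.slice -m 200m \
  --env CONTAINER_NAME=container1 --name container1 simple/container1

# the container's cgroup sits below the slice; verify the limit there
cat /sys/fs/cgroup/mygroup.slice/*/memory.max
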
docker
  • 1 answer
  • 9 Views
Daniel Walker
Asked: 2024-09-05 22:39:30 +0800 CST

Inspecting procfs for a scratch-based container

  • 5

I'm developing an application and distributing it as a Docker image. My application consists of a single executable that links only against libc and libcrypto. I'm considering shrinking the image by statically linking the executable and putting it in a scratch image.

My problem comes from my testing. During tests I want to confirm that my application (which listens on several ports, TCP and UDP) is behaving correctly. I have been doing this by running cat /proc/net/<file> via docker exec. With a scratch image, however, I won't have access to cat or even sh.

I know I could drop BusyBox into the container during testing. But is there a way to inspect the proc filesystem using only docker?

I tried

docker cp the_container:/proc/net/tcp .

but got the error

Error response from daemon: Could not find the file /proc/net/tcp in container the_container
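
A common workaround (a sketch; using busybox as the helper image is an assumption) keeps the application image tool-free and instead joins a throwaway container to the target's network namespace, where /proc/net reflects the target's sockets:

docker run --rm --net container:the_container busybox cat /proc/net/tcp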

docker
  • 1 answer
  • 30 Views
KronwarsCZ
Asked: 2024-07-26 00:42:55 +0800 CST

Routing traffic from specific source IPs to Docker containers

  • 5

I'm currently facing a problem routing traffic from outside to docker containers.

Here is my setup:

  • Rocky Linux 9.4 host
  • A Docker network (bridge) with the IP range 172.20.0.0/16
  • 3 docker containers (running Rapid7 scan engines, but that shouldn't matter), each with a service available on port 40814; these ports are not published, and each container has a static IP on that docker network (172.20.0.2-4)
  • firewalld configuration on the host:
public (active)
 target: default
 icmp-block-inversion: no
 interfaces: ens192
 sources:
 services: cockpit dhcpv6-client ssh
 ports: 10050/tcp 40814/tcp
 protocols:
 forward: yes
 masquerade: yes
 forward-ports:
   port=40814:proto=tcp:toport=40814:toaddr=172.20.0.2
 source-ports:
 icmp-blocks:
 rich rules:

What I want to achieve: based on a given source IP (other servers on my network), route the traffic to one of the 3 docker containers. The other server only knows the IP of my rocky linux server and port 40814, and the rocky linux server then decides which docker container to route the traffic to. This is not an attempt at load balancing.

I'm able to check that the docker containers work via telnet 172.20.0.2 40814 (from the host/rocky linux server); the connection attempt then shows up in the docker container logs. But when I try telnet 10.0.20.123 40814 (the rocky linux server's IP) from another server on the network, I just get Trying 10.0.20.123.... Trying any other port on that IP ends immediately with Connection refused. The logs also report no connection attempt.

I've tried different firewall settings, for example:

One:
firewall-cmd --add-rich-rule='rule 
family="ipv4" \
source address="10.0.20.120/32" \
port protocol="tcp" port="40814" accept'
firewall-cmd --add-forward-port=port=40814:proto=tcp:toport=40814:toaddr=172.20.0.2
firewall-cmd --zone=public --add-forward-port=port=41814:proto=tcp:toaddr=172.20.0.2:toport=40814 --permanent 

Two:
firewall-cmd --add-rich-rule='rule 
family="ipv4" \
source address="10.0.20.120/32" \
forward-port protocol="tcp" port="41814" toport=40814 toaddr=172.20.0.2'

SELinux is enforcing, but I'm not sure whether that makes any difference.

Can you help? Thanks a lot!

Edit: adding more information

Network-related docker info:

"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "06e60f163d002b1ef377542172f4007dcfa33749bf104315047921ac8af0d8c0",
            "SandboxKey": "/var/run/docker/netns/06e60f163d00",
            "Ports": {},
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "se-net": {
                    "IPAMConfig": {
                        "IPv4Address": "172.20.0.2"
                    },
                    "Links": null,
                    "Aliases": [
                        "nse-1",
                        "nse-1"
                    ],
                    "MacAddress": "02:42:ac:14:00:02",
                    "DriverOpts": null,
                    "NetworkID": "ae57f90864d9171ee342803f1ce2d336db530482f000e8a7c2c4ef44fb9f09b9",
                    "EndpointID": "9f12ea7fca0ce272d3cfcd4797a4d68d88c9542d2b2ce7616581a0f2aff32f90",
                    "Gateway": "172.20.0.1",
                    "IPAddress": "172.20.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "DNSNames": [
                        "nse-1",
                        "7ee892f16d9f"
                    ]
                }
            }
        }

iptables-save

# Generated by iptables-save v1.8.10 (nf_tables) on Fri Jul 26 10:56:54 2024
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [822:49320]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-ae57f90864d9 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-ae57f90864d9 -j DOCKER
-A FORWARD -i br-ae57f90864d9 ! -o br-ae57f90864d9 -j ACCEPT
-A FORWARD -i br-ae57f90864d9 -o br-ae57f90864d9 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-ae57f90864d9 ! -o br-ae57f90864d9 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-ae57f90864d9 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Fri Jul 26 10:56:54 2024
# Generated by iptables-save v1.8.10 (nf_tables) on Fri Jul 26 10:56:54 2024
*nat
:PREROUTING ACCEPT [434998:26094074]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [62817:4380590]
:POSTROUTING ACCEPT [62817:4380590]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.20.0.0/16 ! -o br-ae57f90864d9 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i br-ae57f90864d9 -j RETURN
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Fri Jul 26 10:56:54 2024

nft list ruleset

# Warning: table ip nat is managed by iptables-nft, do not touch!
table ip nat {
    chain DOCKER {
        iifname "br-ae57f90864d9" counter packets 0 bytes 0 return
        iifname "docker0" counter packets 0 bytes 0 return
    }

    chain POSTROUTING {
        type nat hook postrouting priority srcnat; policy accept;
        ip saddr 172.20.0.0/16 oifname != "br-ae57f90864d9" counter packets 15 bytes 900 masquerade
        ip saddr 172.17.0.0/16 oifname != "docker0" counter packets 0 bytes 0 masquerade
    }

    chain PREROUTING {
        type nat hook prerouting priority dstnat; policy accept;
        fib daddr type local counter packets 433464 bytes 26008040 jump DOCKER
    }

    chain OUTPUT {
        type nat hook output priority dstnat; policy accept;
        ip daddr != 127.0.0.0/8 fib daddr type local counter packets 0 bytes 0 jump DOCKER
    }
}
# Warning: table ip filter is managed by iptables-nft, do not touch!
table ip filter {
    chain DOCKER {
    }

    chain DOCKER-ISOLATION-STAGE-1 {
        iifname "br-ae57f90864d9" oifname != "br-ae57f90864d9" counter packets 15 bytes 900 jump DOCKER-ISOLATION-STAGE-2
        iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
        counter packets 20043 bytes 21105956 return
    }

    chain DOCKER-ISOLATION-STAGE-2 {
        oifname "br-ae57f90864d9" counter packets 0 bytes 0 drop
        oifname "docker0" counter packets 0 bytes 0 drop
        counter packets 6609 bytes 387973 return
    }

    chain FORWARD {
        type filter hook forward priority filter; policy drop;
        counter packets 821 bytes 49260 jump DOCKER-USER
        counter packets 821 bytes 49260 jump DOCKER-ISOLATION-STAGE-1
        oifname "br-ae57f90864d9" ct state related,established counter packets 0 bytes 0 accept
        oifname "br-ae57f90864d9" counter packets 0 bytes 0 jump DOCKER
        iifname "br-ae57f90864d9" oifname != "br-ae57f90864d9" counter packets 15 bytes 900 accept
        iifname "br-ae57f90864d9" oifname "br-ae57f90864d9" counter packets 0 bytes 0 accept
        oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
        oifname "docker0" counter packets 870 bytes 52200 jump DOCKER
        iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
        iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
    }

    chain DOCKER-USER {
        counter packets 20043 bytes 21105956 return
    }
}
table ip6 nat {
    chain DOCKER {
    }
}
table ip6 filter {
    chain DOCKER {
    }

    chain DOCKER-ISOLATION-STAGE-1 {
        iifname "br-ae57f90864d9" oifname != "br-ae57f90864d9" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
        iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
        counter packets 0 bytes 0 return
    }

    chain DOCKER-ISOLATION-STAGE-2 {
        oifname "br-ae57f90864d9" counter packets 0 bytes 0 drop
        oifname "docker0" counter packets 0 bytes 0 drop
        counter packets 0 bytes 0 return
    }

    chain FORWARD {
        type filter hook forward priority filter; policy drop;
        counter packets 0 bytes 0 jump DOCKER-USER
    }

    chain DOCKER-USER {
        counter packets 0 bytes 0 return
    }
}
table inet firewalld {
    chain mangle_PREROUTING {
        type filter hook prerouting priority mangle + 10; policy accept;
        jump mangle_PREROUTING_ZONES
    }

    chain mangle_PREROUTING_POLICIES_pre {
        jump mangle_PRE_policy_allow-host-ipv6
    }

    chain mangle_PREROUTING_ZONES {
        iifname "br-ae57f90864d9" goto mangle_PRE_docker
        iifname "docker0" goto mangle_PRE_docker
        iifname "ens192" goto mangle_PRE_public
        goto mangle_PRE_public
    }

    chain mangle_PREROUTING_POLICIES_post {
    }

    chain nat_PREROUTING {
        type nat hook prerouting priority dstnat + 10; policy accept;
        jump nat_PREROUTING_ZONES
    }

    chain nat_PREROUTING_POLICIES_pre {
        jump nat_PRE_policy_allow-host-ipv6
    }

    chain nat_PREROUTING_ZONES {
        iifname "br-ae57f90864d9" goto nat_PRE_docker
        iifname "docker0" goto nat_PRE_docker
        iifname "ens192" goto nat_PRE_public
        goto nat_PRE_public
    }

    chain nat_PREROUTING_POLICIES_post {
    }

    chain nat_POSTROUTING {
        type nat hook postrouting priority srcnat + 10; policy accept;
        jump nat_POSTROUTING_ZONES
    }

    chain nat_POSTROUTING_POLICIES_pre {
        oifname { "docker0", "br-ae57f90864d9" } jump nat_POST_policy_docker-forwarding
    }

    chain nat_POSTROUTING_ZONES {
        oifname "br-ae57f90864d9" goto nat_POST_docker
        oifname "docker0" goto nat_POST_docker
        oifname "ens192" goto nat_POST_public
        goto nat_POST_public
    }

    chain nat_POSTROUTING_POLICIES_post {
    }

    chain nat_OUTPUT {
        type nat hook output priority dstnat + 10; policy accept;
        jump nat_OUTPUT_POLICIES_pre
        jump nat_OUTPUT_POLICIES_post
    }

    chain nat_OUTPUT_POLICIES_pre {
    }

    chain nat_OUTPUT_POLICIES_post {
    }

    chain filter_PREROUTING {
        type filter hook prerouting priority filter + 10; policy accept;
        icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
        meta nfproto ipv6 fib saddr . mark . iif oif missing drop
    }

    chain filter_INPUT {
        type filter hook input priority filter + 10; policy accept;
        ct state { established, related } accept
        ct status dnat accept
        iifname "lo" accept
        ct state invalid drop
        jump filter_INPUT_ZONES
        reject with icmpx admin-prohibited
    }

    chain filter_FORWARD {
        type filter hook forward priority filter + 10; policy accept;
        ct state { established, related } accept
        ct status dnat accept
        iifname "lo" accept
        ct state invalid drop
        ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
        jump filter_FORWARD_ZONES
        reject with icmpx admin-prohibited
    }

    chain filter_OUTPUT {
        type filter hook output priority filter + 10; policy accept;
        ct state { established, related } accept
        oifname "lo" accept
        ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
        jump filter_OUTPUT_POLICIES_pre
        jump filter_OUTPUT_POLICIES_post
    }

    chain filter_INPUT_POLICIES_pre {
        jump filter_IN_policy_allow-host-ipv6
    }

    chain filter_INPUT_ZONES {
        iifname "br-ae57f90864d9" goto filter_IN_docker
        iifname "docker0" goto filter_IN_docker
        iifname "ens192" goto filter_IN_public
        goto filter_IN_public
    }

    chain filter_INPUT_POLICIES_post {
    }

    chain filter_FORWARD_POLICIES_pre {
        oifname { "docker0", "br-ae57f90864d9" } jump filter_FWD_policy_docker-forwarding
    }

    chain filter_FORWARD_ZONES {
        iifname "br-ae57f90864d9" goto filter_FWD_docker
        iifname "docker0" goto filter_FWD_docker
        iifname "ens192" goto filter_FWD_public
        goto filter_FWD_public
    }

    chain filter_FORWARD_POLICIES_post {
    }

    chain filter_OUTPUT_POLICIES_pre {
    }

    chain filter_OUTPUT_POLICIES_post {
    }

    chain filter_IN_public {
        jump filter_INPUT_POLICIES_pre
        jump filter_IN_public_pre
        jump filter_IN_public_log
        jump filter_IN_public_deny
        jump filter_IN_public_allow
        jump filter_IN_public_post
        jump filter_INPUT_POLICIES_post
        meta l4proto { icmp, ipv6-icmp } accept
        reject with icmpx admin-prohibited
    }

    chain filter_IN_public_pre {
    }

    chain filter_IN_public_log {
    }

    chain filter_IN_public_deny {
    }

    chain filter_IN_public_allow {
        tcp dport 22 accept
        ip6 daddr fe80::/64 udp dport 546 accept
        tcp dport 9090 accept
        tcp dport 10050 accept
        tcp dport 40814 accept
    }

    chain filter_IN_public_post {
    }

    chain nat_POST_public {
        jump nat_POSTROUTING_POLICIES_pre
        jump nat_POST_public_pre
        jump nat_POST_public_log
        jump nat_POST_public_deny
        jump nat_POST_public_allow
        jump nat_POST_public_post
        jump nat_POSTROUTING_POLICIES_post
    }

    chain nat_POST_public_pre {
    }

    chain nat_POST_public_log {
    }

    chain nat_POST_public_deny {
    }

    chain nat_POST_public_allow {
        meta nfproto ipv4 oifname != "lo" masquerade
    }

    chain nat_POST_public_post {
    }

    chain filter_FWD_public {
        jump filter_FORWARD_POLICIES_pre
        jump filter_FWD_public_pre
        jump filter_FWD_public_log
        jump filter_FWD_public_deny
        jump filter_FWD_public_allow
        jump filter_FWD_public_post
        jump filter_FORWARD_POLICIES_post
        reject with icmpx admin-prohibited
    }

    chain filter_FWD_public_pre {
    }

    chain filter_FWD_public_log {
    }

    chain filter_FWD_public_deny {
    }

    chain filter_FWD_public_allow {
        oifname "ens192" accept
    }

    chain filter_FWD_public_post {
    }

    chain nat_PRE_public {
        jump nat_PREROUTING_POLICIES_pre
        jump nat_PRE_public_pre
        jump nat_PRE_public_log
        jump nat_PRE_public_deny
        jump nat_PRE_public_allow
        jump nat_PRE_public_post
        jump nat_PREROUTING_POLICIES_post
    }

    chain nat_PRE_public_pre {
    }

    chain nat_PRE_public_log {
    }

    chain nat_PRE_public_deny {
    }

    chain nat_PRE_public_allow {
        ip saddr 10.0.20.120 tcp dport 40814 dnat ip to 172.17.0.2:40814
        ip saddr 10.0.20.120 tcp dport 40814 dnat ip to 172.20.0.2:40814
    }

    chain nat_PRE_public_post {
    }

    chain mangle_PRE_public {
        jump mangle_PREROUTING_POLICIES_pre
        jump mangle_PRE_public_pre
        jump mangle_PRE_public_log
        jump mangle_PRE_public_deny
        jump mangle_PRE_public_allow
        jump mangle_PRE_public_post
        jump mangle_PREROUTING_POLICIES_post
    }

    chain mangle_PRE_public_pre {
    }

    chain mangle_PRE_public_log {
    }

    chain mangle_PRE_public_deny {
    }

    chain mangle_PRE_public_allow {
    }

    chain mangle_PRE_public_post {
    }

    chain filter_IN_policy_allow-host-ipv6 {
        jump filter_IN_policy_allow-host-ipv6_pre
        jump filter_IN_policy_allow-host-ipv6_log
        jump filter_IN_policy_allow-host-ipv6_deny
        jump filter_IN_policy_allow-host-ipv6_allow
        jump filter_IN_policy_allow-host-ipv6_post
    }

    chain filter_IN_policy_allow-host-ipv6_pre {
    }

    chain filter_IN_policy_allow-host-ipv6_log {
    }

    chain filter_IN_policy_allow-host-ipv6_deny {
    }

    chain filter_IN_policy_allow-host-ipv6_allow {
        icmpv6 type nd-neighbor-advert accept
        icmpv6 type nd-neighbor-solicit accept
        icmpv6 type nd-router-advert accept
        icmpv6 type nd-redirect accept
    }

    chain filter_IN_policy_allow-host-ipv6_post {
    }

    chain nat_PRE_policy_allow-host-ipv6 {
        jump nat_PRE_policy_allow-host-ipv6_pre
        jump nat_PRE_policy_allow-host-ipv6_log
        jump nat_PRE_policy_allow-host-ipv6_deny
        jump nat_PRE_policy_allow-host-ipv6_allow
        jump nat_PRE_policy_allow-host-ipv6_post
    }

    chain nat_PRE_policy_allow-host-ipv6_pre {
    }

    chain nat_PRE_policy_allow-host-ipv6_log {
    }

    chain nat_PRE_policy_allow-host-ipv6_deny {
    }

    chain nat_PRE_policy_allow-host-ipv6_allow {
    }

    chain nat_PRE_policy_allow-host-ipv6_post {
    }

    chain mangle_PRE_policy_allow-host-ipv6 {
        jump mangle_PRE_policy_allow-host-ipv6_pre
        jump mangle_PRE_policy_allow-host-ipv6_log
        jump mangle_PRE_policy_allow-host-ipv6_deny
        jump mangle_PRE_policy_allow-host-ipv6_allow
        jump mangle_PRE_policy_allow-host-ipv6_post
    }

    chain mangle_PRE_policy_allow-host-ipv6_pre {
    }

    chain mangle_PRE_policy_allow-host-ipv6_log {
    }

    chain mangle_PRE_policy_allow-host-ipv6_deny {
    }

    chain mangle_PRE_policy_allow-host-ipv6_allow {
    }

    chain mangle_PRE_policy_allow-host-ipv6_post {
    }

    chain filter_IN_docker {
        jump filter_INPUT_POLICIES_pre
        jump filter_IN_docker_pre
        jump filter_IN_docker_log
        jump filter_IN_docker_deny
        jump filter_IN_docker_allow
        jump filter_IN_docker_post
        jump filter_INPUT_POLICIES_post
        accept
    }

    chain filter_IN_docker_pre {
    }

    chain filter_IN_docker_log {
    }

    chain filter_IN_docker_deny {
    }

    chain filter_IN_docker_allow {
    }

    chain filter_IN_docker_post {
    }

    chain nat_POST_docker {
        jump nat_POSTROUTING_POLICIES_pre
        jump nat_POST_docker_pre
        jump nat_POST_docker_log
        jump nat_POST_docker_deny
        jump nat_POST_docker_allow
        jump nat_POST_docker_post
        jump nat_POSTROUTING_POLICIES_post
    }

    chain nat_POST_docker_pre {
    }

    chain nat_POST_docker_log {
    }

    chain nat_POST_docker_deny {
    }

    chain nat_POST_docker_allow {
        meta nfproto ipv4 oifname != "lo" masquerade
    }

    chain nat_POST_docker_post {
    }

    chain filter_FWD_docker {
        jump filter_FORWARD_POLICIES_pre
        jump filter_FWD_docker_pre
        jump filter_FWD_docker_log
        jump filter_FWD_docker_deny
        jump filter_FWD_docker_allow
        jump filter_FWD_docker_post
        jump filter_FORWARD_POLICIES_post
        accept
    }

    chain filter_FWD_docker_pre {
    }

    chain filter_FWD_docker_log {
    }

    chain filter_FWD_docker_deny {
    }

    chain filter_FWD_docker_allow {
        oifname "docker0" accept
        oifname "br-ae57f90864d9" accept
    }

    chain filter_FWD_docker_post {
    }

    chain nat_PRE_docker {
        jump nat_PREROUTING_POLICIES_pre
        jump nat_PRE_docker_pre
        jump nat_PRE_docker_log
        jump nat_PRE_docker_deny
        jump nat_PRE_docker_allow
        jump nat_PRE_docker_post
        jump nat_PREROUTING_POLICIES_post
    }

    chain nat_PRE_docker_pre {
    }

    chain nat_PRE_docker_log {
    }

    chain nat_PRE_docker_deny {
    }

    chain nat_PRE_docker_allow {
        ip saddr 10.0.20.120 tcp dport 40814 dnat ip to 172.20.0.2:40814
    }

    chain nat_PRE_docker_post {
    }

    chain mangle_PRE_docker {
        jump mangle_PREROUTING_POLICIES_pre
        jump mangle_PRE_docker_pre
        jump mangle_PRE_docker_log
        jump mangle_PRE_docker_deny
        jump mangle_PRE_docker_allow
        jump mangle_PRE_docker_post
        jump mangle_PREROUTING_POLICIES_post
    }

    chain mangle_PRE_docker_pre {
    }

    chain mangle_PRE_docker_log {
    }

    chain mangle_PRE_docker_deny {
    }

    chain mangle_PRE_docker_allow {
    }

    chain mangle_PRE_docker_post {
    }

    chain filter_FWD_policy_docker-forwarding {
        jump filter_FWD_policy_docker-forwarding_pre
        jump filter_FWD_policy_docker-forwarding_log
        jump filter_FWD_policy_docker-forwarding_deny
        jump filter_FWD_policy_docker-forwarding_allow
        jump filter_FWD_policy_docker-forwarding_post
        accept
    }

    chain filter_FWD_policy_docker-forwarding_pre {
    }

    chain filter_FWD_policy_docker-forwarding_log {
    }

    chain filter_FWD_policy_docker-forwarding_deny {
    }

    chain filter_FWD_policy_docker-forwarding_allow {
    }

    chain filter_FWD_policy_docker-forwarding_post {
    }

    chain nat_POST_policy_docker-forwarding {
        jump nat_POST_policy_docker-forwarding_pre
        jump nat_POST_policy_docker-forwarding_log
        jump nat_POST_policy_docker-forwarding_deny
        jump nat_POST_policy_docker-forwarding_allow
        jump nat_POST_policy_docker-forwarding_post
    }

    chain nat_POST_policy_docker-forwarding_pre {
    }

    chain nat_POST_policy_docker-forwarding_log {
    }

    chain nat_POST_policy_docker-forwarding_deny {
    }

    chain nat_POST_policy_docker-forwarding_allow {
    }

    chain nat_POST_policy_docker-forwarding_post {
    }
}

firewall-cmd --list-all-zones

block
  target: %%REJECT%%
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

dmz
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

docker (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br-ae57f90864d9 docker0
  sources:
  services:
  ports:
  protocols:
  forward: yes
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
    rule family="ipv4" source address="10.0.20.120/32" forward-port port="40814" protocol="tcp" to-port="40814" to-addr="172.20.0.2"

drop
  target: DROP
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

external
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: ssh
  ports:
  protocols:
  forward: yes
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

home
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

internal
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

nm-shared
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources:
  services: dhcp dns ssh
  ports:
  protocols: icmp ipv6-icmp
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
    rule priority="32767" reject

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 10050/tcp 40814/tcp
  protocols:
  forward: yes
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
    rule family="ipv4" source address="10.0.20.120" forward-port port="40814" protocol="tcp" to-port="40814" to-addr="172.20.0.2"
    rule family="ipv4" source address="10.0.20.120" forward-port port="40814" protocol="tcp" to-port="40814" to-addr="172.17.0.2"

trusted
  target: ACCEPT
  icmp-block-inversion: no
  interfaces:
  sources:
  services:
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

work
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
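
A way to narrow down where the SYNs stop (a diagnostic sketch; the interface names are taken from the output above): capture on the external interface and on the docker bridge while running the telnet test from the other server. SYNs on ens192 but not on the bridge point at the DNAT/forward step; SYNs on the bridge point at the container side.

sudo tcpdump -ni ens192 'tcp port 40814'
# and in a second terminal:
sudo tcpdump -ni br-ae57f90864d9 'tcp port 40814'
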
docker
  • 1 answer
  • 23 Views
WolfiG
Asked: 2024-07-18 15:54:43 +0800 CST

Cannot connect to the Docker daemon on Debian

  • 5

I'm trying to run docker run hello-world on a debian bullseye machine where I don't have root privileges but can use sudo.

I installed docker (v19.03.13) with sudo, following the docker documentation. docker can be started with sudo systemctl start docker, and sudo systemctl status docker shows the service running as expected. sudo systemctl status docker.socket also looks fine.

There is a docker.sock file in /var/run/.

Now, when trying to run docker run hello-world (without sudo!), I get an error message:

docker: Cannot connect to the Docker daemon at unix:///home/<myuser>/.docker/run/docker.sock. Is the docker daemon running?.

It looks like docker is looking for docker.sock in the wrong directory.

I have already tried the following, without success:

  • sudo systemctl stop/start/restart docker (the standard answer I found in discussions of this problem)
  • Adding my user to the docker group: sudo usermod -aG docker $USER. The command less /etc/group | grep docker yields docker:x:999:myUserName
  • ls -la /var/run/docker.sock → srw-rw---- 1 root docker 0 Jul 17 17:48 /var/run/docker.sock (as suggested by https://unix.stackexchange.com/a/279785/335075)
  • unset DOCKER_HOST (e.g. https://stackoverflow.com/a/69674630/1552080)
  • sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/system (this worked once, but stopped working after I logged out and logged back in)

All I can do is run sudo docker run hello-world.
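
The path in the error, unix:///home/<myuser>/.docker/run/docker.sock, suggests the CLI has been pointed at a per-user endpoint instead of /var/run/docker.sock (a guess from the message, not something confirmed in the question). Besides DOCKER_HOST, a docker context can set this:

echo "$DOCKER_HOST"          # should normally be empty
docker context ls            # the active context is marked with *
docker context use default   # default talks to /var/run/docker.sock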

docker
  • 2 answers
  • 21 Views
