AskOverflow.Dev

Questions [docker] (computer)

user3532232
Asked: 2025-04-07 17:57:21 +0800 CST

Docker: prevent remote images from being used

  • 8

Suppose I create a Dockerfile that uses another image as its base image via the FROM instruction. If that base image cannot be found locally, docker build will pull it from the registry.

Now suppose I build my own base image and want it to serve as the base for other images, so I give it a generic name such as aCommonImageName. If I forget to build that base image locally, any image that uses it as its base will instead download some random image of the same name from the registry, possibly even a hijacked one.

How can I make the build abort if the base image is not found among the local images?
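A minimal sketch of one way to enforce this from a wrapper script, since docker build will pull a missing FROM image by default (the image and tag names below are placeholders):

    # Refuse to build unless the base image already exists locally
    if ! docker image inspect aCommonImageName >/dev/null 2>&1; then
        echo "Base image 'aCommonImageName' not found locally, aborting build." >&2
        exit 1
    fi
    docker build -t my-child-image .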

docker
  • 2 Answers
  • 183 Views
Elliott B
Asked: 2025-03-31 07:50:58 +0800 CST

How do I bind-mount AWS credentials into a Dev Container using a path with an environment variable?

  • 6

I want to bind-mount my AWS credentials file into a Dev Container in a portable way for development in VS Code, so that colleagues can develop on their machines too; that means I need a reference relative to my home directory. I tried this devcontainer.json:

{
    "image": "ubuntu:latest",
    "mounts": [{
        "source": "$HOME/.aws",
        "target": "/root/.aws",
        "type": "bind"
    }]
}

But it fails with this error:

[2025-03-30T23:22:08.292Z] docker: Error response from daemon: invalid mount config for type "bind": invalid mount path: '$HOME/.aws' mount path must be absolute

On the command line this works fine, because bash expands the variable:

docker run --mount type=bind,src=$HOME/.aws,dst=/root/.aws ubuntu:latest

PS: My .aws/credentials file is generated and refreshed dynamically by an external process.
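A hedged sketch using the Dev Containers variable substitution syntax, where ${localEnv:VAR} is resolved on the host before the mount is created; concatenating HOME and USERPROFILE (only one of which is set per platform) is the common trick for covering Linux/macOS and Windows machines alike:

    {
        "image": "ubuntu:latest",
        "mounts": [{
            "source": "${localEnv:HOME}${localEnv:USERPROFILE}/.aws",
            "target": "/root/.aws",
            "type": "bind"
        }]
    }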

docker
  • 1 Answer
  • 16 Views
huhzz
Asked: 2025-03-04 03:48:06 +0800 CST

Why can I only reach a local web server when Chrome is run with "sudo" on macOS?

  • 5
This question was migrated from Server Fault because it can be answered on Super User. Migrated 5 days ago.

Description

I have a Windows desktop and a MacBook, both connected to the same local network.

  • Desktop (Windows): connected via Ethernet
  • MacBook: connected via Wi-Fi

On my desktop (Windows) I set up a local web server with Docker (running on port 9000, bound to 0.0.0.0).
The container is started with -p 9000:9000, so it should be reachable from other devices on the same network.

From the desktop's own browser I can access its web interface without any problem.

However, when I try to access the web interface from the MacBook using Google Chrome, I get a "page not found" error.
Strangely, if I run Chrome with sudo, the page loads correctly.

More details

  • The desktop (Windows) and the MacBook are on the same subnet.
  • The web server is listening on 0.0.0.0:9000, so it should be reachable from other devices.
  • Running nc -zv <desktop-ip> 9000 on the MacBook shows the port is open.
  • A Python script using requests.get("http://<desktop-ip>:9000") fails when run normally, but works when executed with sudo.
  • My MacBook has a single user account with administrator rights.
  • The macOS firewall is disabled (/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate confirms this).
  • Running sudo pfctl -d (disabling the pf firewall) does not solve the problem (a consolidated check is sketched right after this list).
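The consolidated check referenced above, as a hedged sketch run from the MacBook (<desktop-ip> is a placeholder):

    nc -zv <desktop-ip> 9000                  # TCP port reachable at all?
    curl -v "http://<desktop-ip>:9000/"       # the request as the normal user
    sudo curl -v "http://<desktop-ip>:9000/"  # the same request via sudo
    # If only the sudo request succeeds, the restriction is per-process on the Mac
    # (for example a content filter or a local-network permission), not on the Docker host.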

What I already know

  • Running anything with sudo allows access, so I do have a workaround.
  • However, I'm really curious about the root cause of this problem.
  • I never had this problem when using only the Windows desktop.
    It seems to be macOS-specific, possibly due to its security model or network behaviour.

My questions

  1. Why does the web server only load when Chrome is run with sudo?
  2. Why does a simple Python requests.get() call fail unless I run it with sudo?
  3. What could be restricting network access for normal user processes on macOS?

This is not a critical problem for me since I have a workaround, but I would really like to know why it happens. Any insight would be appreciated. Thanks!

docker
  • 1 Answer
  • 42 Views
Fallen Soul
Asked: 2025-01-12 15:03:09 +0800 CST

Docker Compose: qBittorrent and NordVPN configuration problem

  • 5

I'm fairly new to Docker and Docker Compose and have run into a problem I don't know how to solve. Any help would be greatly appreciated!

Here is my setup:

Container 1: hosts NordVPN

Container 2: runs qBittorrent

I've configured the docker-compose file so that container 2 depends on container 1. When I start the containers, everything seems to run fine. However, when I try to download a torrent, its status is immediately marked as "stalled".

Here is what I've confirmed so far:

Both containers are getting the same public IP address.

Both containers can communicate with Google (tested with ping/curl).

qBittorrent works fine without going through the VPN.

Other torrent programs such as Transmission and Deluge run with no issue.

As soon as I route qBittorrent through the VPN container, it stops working again.

Has anyone run into this before, or does anyone have suggestions about what I might be doing wrong?

services:
  # NordVPN service
  nordvpn:
    image: ghcr.io/bubuntux/nordlynx
    container_name: nordvpn
    cap_add:
      - NET_ADMIN
    environment:
      - PRIVATE_KEY= "insert key"
      - CONNECT=au
      - TECHNOLOGY=NordLynx
      - NET_LOCAL=192.168.0.0/24
    ports:
      - 6881:6881/tcp
      - 6881:6881/udp
      - 8080:8080/tcp
    sysctls:
      net.ipv6.conf.all.disable_ipv6: 1
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      - PUID=1000
      - PGID=1000
      - TZ='Australia/Brisbane'
      - WEBUI_PORT=8080
      - TORRENTING_PORT=6881
    volumes:
      - ./config:/config
      - /home/andrew/downloads:/downloads
    depends_on:
      - nordvpn
    network_mode: container:nordvpn  # Use the VPN container's network

Thanks in advance for your help!
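A few hedged checks against the compose file above (the container names are the ones defined there; curl/wget availability inside the images is an assumption):

    # Is qbittorrent really sharing the VPN container's network namespace?
    docker inspect -f '{{.HostConfig.NetworkMode}}' qbittorrent   # expect something like "container:nordvpn"

    # Compare the public IP each container sees (needs curl or wget inside the image)
    docker exec nordvpn sh -c 'curl -s https://ifconfig.me || wget -qO- https://ifconfig.me'
    docker exec qbittorrent sh -c 'curl -s https://ifconfig.me || wget -qO- https://ifconfig.me'

If the namespaces and exit IPs check out, the next things to compare are the qBittorrent listening port (6881 here) and whether the VPN provider actually passes inbound traffic for it.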

docker
  • 1 Answer
  • 51 Views
Johannes
Asked: 2024-12-06 23:59:52 +0800 CST

Why do I get a gateway timeout from Coolify's Docker setup when proxying to PHP?

  • 6

I have a Coolify environment containing a Docker Compose stack made up of a PHP web server and an nginx proxy. The nginx proxy serves static files and acts as a proxy for the PHP server.

I set up two domains in Coolify as follows: https://app.example.org:80, https://static.example.org:80. Here is my Docker Compose file:

version: '3.8'

services:
  app:
    image: php:8.3-fpm-alpine
    container_name: php-app
    working_dir: /var/www/html
    volumes:
#      - ./:/var/www/html
      - ./config:/var/www/html/config
      - ./backup:/var/www/html/backup
      - ./userdata:/var/www/html/userdata
      - ./.logs:/var/www/html/.logs
    ports:
      - "9000:9000"
    environment:
      PHP_OPCACHE_ENABLE: "1"
      PHP_OPCACHE_MEMORY_CONSUMPTION: "128"
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - app-network

  webserver:
    image: nginx:alpine
    container_name: nginx-web
    volumes:
      - ./nginx/sites:/etc/nginx/sites
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./shared_assets:/var/www/html/shared_assets
      - ./instances:/var/www/html/instances
    environment:
      - ENVIRONMENT=production
    ports:
      - "89:80"
    depends_on:
      - app
    networks:
      - app-network

networks:
  app-network:

For nginx I configured two sites:

server {
    listen 80;
    server_name static.example.org;

    root /var/www/html/shared_assets;
    index index.html index.htm;

    # Serve static files from the shared_assets folder
     location /assets/ {
            alias /var/www/html/shared_assets/assets/;  # Fix the alias path to match the actual filesystem location
            try_files $uri $uri/ =404;
        }
}
server {
    listen 80;
    server_name app.example.org;

    root /var/www/html/instances/app/public;
    index index.php;
    add_header Access-Control-Allow-Origin "*";

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass app:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~ /\.ht {
        deny all;
    }
}

Static files are served fine, but the app gives a gateway timeout. This setup works fine on my local machine.

Any idea what the problem could be?
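A couple of hedged checks using the container names from the compose file above (busybox nslookup shipping in nginx:alpine is an assumption):

    # Can the nginx container resolve the service name used in fastcgi_pass ("app")?
    docker exec nginx-web nslookup app

    # Which Docker networks is each container actually attached to?
    docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' nginx-web php-app

A gateway timeout from nginx while static files work usually means the fastcgi_pass target cannot be reached at all; Coolify generates its own networks and service names on deploy, so the two containers may simply not share a network the way they do locally.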

docker
  • 1 Answer
  • 28 Views
SneakyShrike
Asked: 2024-11-17 00:04:32 +0800 CST

How do I set up Caddy with Let's Encrypt SSL and Duck DNS to serve multiple services running as Docker containers?

  • 6

I have successfully set up VaultWarden so that it is reachable only on my local LAN through Caddy, with a Let's Encrypt SSL certificate. Caddy, VaultWarden and the other services run as Docker containers on a Raspberry Pi host.

I set up a Duck DNS domain, test111.duckdns.org, which points to my Raspberry Pi's private LAN IP address.

I added a host override in the pfSense DNS Resolver settings (this was the key step to make it work), like this:

Host: test111
Domain: duckdns.org
IP Address: <raspberry pi IP address>

My Docker Compose file:

networks:
  docker-mongoose:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: "172.16.117.0/27"

services:
  caddy:
    image: caddy:2
    networks:
      docker-mongoose:
        ipv4_address: 172.16.117.10
    container_name: caddy
    restart: always
    ports:
      - 80:80
      - 443:443
      - 443:443/udp # Needed for HTTP/3.
    volumes:
      - ./caddy:/usr/bin/caddy  
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-config:/config
      - ./caddy-data:/data
    environment:
      DOMAIN: "test111.duckdns.org" 
      #EMAIL: ""
      DUCKDNS_TOKEN: "<duckdns token>"
      LOG_FILE: "/data/access.logs"

  unifi-network-application:
    container_name: unifi-network-application
    image: lscr.io/linuxserver/unifi-network-application:latest
    networks:
      docker-mongoose:
        ipv4_address: 172.16.117.9
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    ports:
      - 8443:8443
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
      - 1900:1900/udp #optional
      #- 8843:8843 #optional
      #- 8880:8880 #optional
      #- 6789:6789 #optional
      #- 5514:5514/udp #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - MONGO_USER=user
      - MONGO_PASS=password
      - MONGO_HOST=unifi-db
      - MONGO_PORT=27017
      - MONGO_DBNAME=unifi-db
      - MEM_LIMIT=1024 #optional
      - MEM_STARTUP=1024 #optional
      #- MONGO_TLS= #optional
      #- MONGO_AUTHSOURCE= #optional
    volumes:
      - /home/user/docker/unifi-network-application/config:/config
    restart: unless-stopped

  unifi-db:
    etc....

  vaultwarden:
    image: vaultwarden/server:latest
    networks:
      docker-mongoose:
        ipv4_address: 172.16.117.8
    container_name: vaultwarden
    restart: always
    environment:
      DOMAIN: "https://test111.duckdns.org"
      SIGNUPS_ALLOWED: "false"
      INVITATIONS_ALLOWED: "false"
      SHOW_PASSWORD_HINT: "false"
      LOG_FILE: "/data/vaultwarden.log"
      LOG_LEVEL: "warn"
    volumes:
      - ./vw-data:/data # the path before the : can be changed
    #ports:
      #- 8888:80 # you can replace the 11001 with your preferred port

My Caddyfile:

{$DOMAIN} {
    log {
        level INFO
        output file {$LOG_FILE} {
            roll_size 10MB
            roll_keep 10
        }
    }

    # Use the ACME DNS-01 challenge to get a cert for the configured domain.
    tls {
        dns duckdns {$DUCKDNS_TOKEN}
    }

    # This setting may have compatibility issues with some browsers
    # (e.g., attachment downloading on Firefox). Try disabling this
    # if you encounter issues.
    encode zstd gzip

    # Proxy everything to Rocket
    reverse_proxy vaultwarden:80
}

This setup works well: I can reach my VaultWarden over SSL at https://test111.duckdns.org, and it uses a Let's Encrypt certificate. I used this guide to achieve it.

However, I would like Caddy to serve multiple Docker services. For example, to reach VaultWarden I would go to https://vaultwarden.test111.duckdns.org, https://service.test111.duckdns.org for another service, and so on.

I tried changing the Caddyfile to use a wildcard:

# Wildcard SSL for all subdomains under the domain defined in the {$DOMAIN} variable
*.{$DOMAIN} {
    tls {
        dns duckdns {$DUCKDNS_TOKEN}
    }

    # Logs configuration (optional, adjust as necessary)
    log {
        level INFO
        output file {$LOG_FILE} {
            roll_size 10MB
            roll_keep 10
        }
    }

    # Default reverse proxy to a generic service if no specific service matches
    reverse_proxy service_default:80
}

# Vaultwarden Service
vaultwarden.{$DOMAIN} {
    reverse_proxy vaultwarden:80
    log {
        level INFO
        output file {$LOG_FILE} {
            roll_size 10MB
            roll_keep 10
        }
    }
}


unifi.{$DOMAIN} {
    reverse_proxy unifi-network-application:8443
    log {
        level INFO
        output file {$LOG_FILE} {
            roll_size 10MB
            roll_keep 10
        }
    }
}

I also added host overrides in pfSense's DNS Resolver settings so that the different services point to my Docker IP addresses:

Host: unifi     
Domain: test111.duckdns.org     
IP Address: 172.16.117.9
Host: vaultwarden   
Domain: test111.duckdns.org     
IP Address: 172.16.117.8

It can resolve these with nslookup:

nslookup vaultwarden.test111.duckdns.org
Server:     127.0.0.53
Address:    127.0.0.53#53

Non-authoritative answer:
Name:   vaultwarden.test111.duckdns.org
Address: 172.16.117.8

However, this doesn't work: I can't reach my Docker services, and I get the following errors in my Caddy container:

{"level":"info","ts":1731770427.407683,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}

{"level":"info","ts":1731770427.4159002,"msg":"adapted config to JSON","adapter":"caddyfile"}

{"level":"info","ts":1731770427.4204524,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}

{"level":"info","ts":1731770427.4216182,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x383a900"}

{"level":"info","ts":1731770427.4221516,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}

{"level":"info","ts":1731770427.4224873,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}

{"level":"info","ts":1731770427.4248602,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}

{"level":"info","ts":1731770427.4254677,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}

{"level":"info","ts":1731770427.4263346,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}

{"level":"info","ts":1731770427.4268074,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}

{"level":"info","ts":1731770427.4269671,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["vaultwarden.test111.duckdns.org","unifi.test111.duckdns.org","*.test111.duckdns.org"]}

{"level":"info","ts":1731770427.4284034,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}

{"level":"info","ts":1731770427.4289424,"msg":"serving initial configuration"}

{"level":"info","ts":1731770427.4288747,"logger":"tls.obtain","msg":"acquiring lock","identifier":"vaultwarden.test111.duckdns.org"}

{"level":"info","ts":1731770427.4296653,"logger":"tls.obtain","msg":"acquiring lock","identifier":"unifi.test111.duckdns.org"}

{"level":"info","ts":1731770427.429877,"logger":"tls.obtain","msg":"acquiring lock","identifier":"*.test111.duckdns.org"}

{"level":"info","ts":1731770427.4420304,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"a16163dc-5a65-4977-a1d2-99f3861efde9","try_again":1731856827.4420183,"try_again_in":86399.999995129}

{"level":"info","ts":1731770427.4445798,"logger":"tls","msg":"finished cleaning storage units"}

{"level":"info","ts":1731770427.44627,"logger":"tls.obtain","msg":"lock acquired","identifier":"*.test111.duckdns.org"}

{"level":"info","ts":1731770427.4462702,"logger":"tls.obtain","msg":"lock acquired","identifier":"vaultwarden.test111.duckdns.org"}

{"level":"info","ts":1731770427.446822,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"*.test111.duckdns.org"}

{"level":"info","ts":1731770427.4474423,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"vaultwarden.test111.duckdns.org"}

{"level":"info","ts":1731770427.4468448,"logger":"tls.obtain","msg":"lock acquired","identifier":"unifi.test111.duckdns.org"}

{"level":"info","ts":1731770427.4486356,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"unifi.test111.duckdns.org"}

{"level":"info","ts":1731770427.4698937,"logger":"tls","msg":"waiting on internal rate limiter","identifiers":["unifi.test111.duckdns.org"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}

{"level":"info","ts":1731770427.4699652,"logger":"tls","msg":"done waiting on internal rate limiter","identifiers":["unifi.test111.duckdns.org"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}

{"level":"info","ts":1731770427.4700146,"logger":"tls","msg":"using ACME account","account_id":"https://acme-v02.api.letsencrypt.org/acme/acct/1972895377","account_contact":[]}

{"level":"info","ts":1731770427.4704852,"logger":"tls","msg":"waiting on internal rate limiter","identifiers":["vaultwarden.test111.duckdns.org"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}

{"level":"info","ts":1731770427.4709487,"logger":"tls","msg":"done waiting on internal rate limiter","identifiers":["vaultwarden.test111.duckdns.org"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}

{"level":"info","ts":1731770427.472356,"logger":"tls","msg":"using ACME account","account_id":"https://acme-v02.api.letsencrypt.org/acme/acct/1972895377","account_contact":[]}

{"level":"info","ts":1731770427.4715934,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["*.test111.duckdns.org"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}

{"level":"info","ts":1731770427.4725082,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["*.test111.duckdns.org"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":""}

{"level":"info","ts":1731770427.4725654,"logger":"tls.issuance.acme","msg":"using ACME account","account_id":"https://acme-v02.api.letsencrypt.org/acme/acct/1972895377","account_contact":[]}

{"level":"info","ts":1731770428.6145887,"logger":"tls.acme_client","msg":"trying to solve challenge","identifier":"vaultwarden.test111.duckdns.org","challenge_type":"tls-alpn-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}

{"level":"info","ts":1731770428.686017,"logger":"tls.acme_client","msg":"trying to solve challenge","identifier":"unifi.test111.duckdns.org","challenge_type":"tls-alpn-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}

{"level":"info","ts":1731770428.8439467,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"*.test111.duckdns.org","challenge_type":"dns-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}

{"level":"error","ts":1731770429.2492373,"logger":"tls.acme_client","msg":"challenge failed","identifier":"unifi.test111.duckdns.org","challenge_type":"tls-alpn-01","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"no valid A records found for unifi.test111.duckdns.org; DNS problem: SERVFAIL looking up AAAA for unifi.test111.duckdns.org - the domain's nameservers may be malfunctioning","instance":"","subproblems":[]}}

{"level":"error","ts":1731770429.2495832,"logger":"tls.acme_client","msg":"validating authorization","identifier":"unifi.test111.duckdns.org","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"no valid A records found for unifi.test111.duckdns.org; DNS problem: SERVFAIL looking up AAAA for unifi.test111.duckdns.org - the domain's nameservers may be malfunctioning","instance":"","subproblems":[]},"order":"https://acme-v02.api.letsencrypt.org/acme/order/1972895377/323713330197","attempt":1,"max_attempts":3}

{"level":"info","ts":1731770430.672126,"logger":"tls.acme_client","msg":"trying to solve challenge","identifier":"unifi.test111.duckdns.org","challenge_type":"http-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}

{"level":"error","ts":1731770431.3256845,"logger":"tls.issuance.acme.acme_client","msg":"cleaning up solver","identifier":"*.test111.duckdns.org","challenge_type":"dns-01","error":"no memory of presenting a DNS record for \"_acme-challenge.test111.duckdns.org\" (usually OK if presenting also failed)"}

{"level":"error","ts":1731770431.5020833,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"*.test111.duckdns.org","issuer":"acme-v02.api.letsencrypt.org-directory","error":"[*.test111.duckdns.org] solving challenges: presenting for challenge: could not determine zone for domain \"_acme-challenge.test111.duckdns.org\": unexpected response code 'SERVFAIL' for _acme-challenge.test111.duckdns.org. (order=https://acme-v02.api.letsencrypt.org/acme/order/1972895377/323713330907) (ca=https://acme-v02.api.letsencrypt.org/directory)"}

{"level":"error","ts":1731770431.5025475,"logger":"tls.obtain","msg":"will retry","error":"[*.test111.duckdns.org] Obtain: [*.test111.duckdns.org] solving challenges: presenting for challenge: could not determine zone for domain \"_acme-challenge.test111.duckdns.org\": unexpected response code 'SERVFAIL' for _acme-challenge.test111.duckdns.org. (order=https://acme-v02.api.letsencrypt.org/acme/order/1972895377/323713330907) (ca=https://acme-v02.api.letsencrypt.org/directory)","attempt":1,"retrying_in":60,"elapsed":4.05621756,"max_duration":2592000}

{"level":"error","ts":1731770438.8788044,"logger":"tls.acme_client","msg":"challenge failed","identifier":"vaultwarden.test111.duckdns.org","challenge_type":"tls-alpn-01","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: SERVFAIL looking up A for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning; no valid AAAA records found for vaultwarden.test111.duckdns.org","instance":"","subproblems":[]}}

{"level":"error","ts":1731770438.8789387,"logger":"tls.acme_client","msg":"validating authorization","identifier":"vaultwarden.test111.duckdns.org","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: SERVFAIL looking up A for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning; no valid AAAA records found for vaultwarden.test111.duckdns.org","instance":"","subproblems":[]},"order":"https://acme-v02.api.letsencrypt.org/acme/order/1972895377/323713330097","attempt":1,"max_attempts":3}

{"level":"info","ts":1731770440.2944498,"logger":"tls.acme_client","msg":"trying to solve challenge","identifier":"vaultwarden.test111.duckdns.org","challenge_type":"http-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}

{"level":"error","ts":1731770450.1866465,"logger":"tls.acme_client","msg":"challenge failed","identifier":"unifi.test111.duckdns.org","challenge_type":"http-01","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"no valid A records found for unifi.test111.duckdns.org; no valid AAAA records found for unifi.test111.duckdns.org","instance":"","subproblems":[]}}

{"level":"error","ts":1731770450.1867352,"logger":"tls.acme_client","msg":"validating authorization","identifier":"unifi.test111.duckdns.org","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"no valid A records found for unifi.test111.duckdns.org; no valid AAAA records found for unifi.test111.duckdns.org","instance":"","subproblems":[]},"order":"https://acme-v02.api.letsencrypt.org/acme/order/1972895377/323713337107","attempt":2,"max_attempts":3}

{"level":"error","ts":1731770450.1868649,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"unifi.test111.duckdns.org","issuer":"acme-v02.api.letsencrypt.org-directory","error":"HTTP 400 urn:ietf:params:acme:error:dns - no valid A records found for unifi.test111.duckdns.org; no valid AAAA records found for unifi.test111.duckdns.org"}

{"level":"error","ts":1731770450.1870203,"logger":"tls.obtain","msg":"will retry","error":"[unifi.test111.duckdns.org] Obtain: [unifi.test111.duckdns.org] solving challenge: unifi.test111.duckdns.org: [unifi.test111.duckdns.org] authorization failed: HTTP 400 urn:ietf:params:acme:error:dns - no valid A records found for unifi.test111.duckdns.org; no valid AAAA records found for unifi.test111.duckdns.org (ca=https://acme-v02.api.letsencrypt.org/directory)","attempt":1,"retrying_in":60,"elapsed":22.738644345,"max_duration":2592000}

{"level":"error","ts":1731770460.5871239,"logger":"tls.acme_client","msg":"challenge failed","identifier":"vaultwarden.test111.duckdns.org","challenge_type":"http-01","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: SERVFAIL looking up A for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning; DNS problem: SERVFAIL looking up AAAA for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning","instance":"","subproblems":[]}}

{"level":"error","ts":1731770460.5872557,"logger":"tls.acme_client","msg":"validating authorization","identifier":"vaultwarden.test111.duckdns.org","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: SERVFAIL looking up A for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning; DNS problem: SERVFAIL looking up AAAA for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning","instance":"","subproblems":[]},"order":"https://acme-v02.api.letsencrypt.org/acme/order/1972895377/323713378127","attempt":2,"max_attempts":3}

{"level":"error","ts":1731770460.5873518,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"vaultwarden.test111.duckdns.org","issuer":"acme-v02.api.letsencrypt.org-directory","error":"HTTP 400 urn:ietf:params:acme:error:dns - DNS problem: SERVFAIL looking up A for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning; DNS problem: SERVFAIL looking up AAAA for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning"}

{"level":"error","ts":1731770460.5875442,"logger":"tls.obtain","msg":"will retry","error":"[vaultwarden.test111.duckdns.org] Obtain: [vaultwarden.test111.duckdns.org] solving challenge: vaultwarden.test111.duckdns.org: [vaultwarden.test111.duckdns.org] authorization failed: HTTP 400 urn:ietf:params:acme:error:dns - DNS problem: SERVFAIL looking up A for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning; DNS problem: SERVFAIL looking up AAAA for vaultwarden.test111.duckdns.org - the domain's nameservers may be malfunctioning (ca=https://acme-v02.api.letsencrypt.org/directory)","attempt":1,"retrying_in":60,"elapsed":33.140588664,"max_duration":2592000}

Essentially I'm trying to achieve this with Caddy and Duck DNS, but I'm not sure whether it's even possible or whether I've configured Caddy incorrectly.
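A hedged Caddyfile sketch of one way to keep everything under the single DNS-01 wildcard certificate, using host matchers inside the *.{$DOMAIN} block so Caddy does not request separate certificates for subdomains that have no public A records:

    *.{$DOMAIN} {
        tls {
            dns duckdns {$DUCKDNS_TOKEN}
        }

        @vaultwarden host vaultwarden.{$DOMAIN}
        handle @vaultwarden {
            reverse_proxy vaultwarden:80
        }

        @unifi host unifi.{$DOMAIN}
        handle @unifi {
            # UniFi answers HTTPS with a self-signed certificate on 8443
            reverse_proxy https://unifi-network-application:8443 {
                transport http {
                    tls_insecure_skip_verify
                }
            }
        }

        # Anything else under the wildcard
        handle {
            abort
        }
    }

The pfSense host overrides would then presumably need to resolve the subdomains to the Raspberry Pi's LAN address, where Caddy publishes ports 80/443, rather than to the internal 172.16.117.x container addresses, which are not reachable from the LAN.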

docker
  • 1 Answer
  • 86 Views
antman1p
Asked: 2024-11-04 21:47:48 +0800 CST

How do I fix GitLab's Web IDE after updating from 17.2.9 to 17.5.1?

  • 7

After upgrading GitLab EE (Docker, Omnibus) from 17.2.9 to 17.3.6 and finally to 17.5.1, when I click a file in a repository in GitLab and then choose "Edit" -> "Open in Web IDE" (vscode_web_ide), I am forwarded to a 500 error page.

This behaviour did not exist in 17.2.9 or earlier versions of my installation. I posted this as a GitLab issue but got no response.

How can I fix this?

Logs:

==> /var/log/gitlab/gitlab-exporter/current <==
2024-10-29_05:50:26.69914 ::1 - - [29/Oct/2024:05:50:26 UTC] "GET /sidekiq HTTP/1.1" 200 579
2024-10-29_05:50:26.69917 - -> /sidekiq

==> /var/log/gitlab/gitlab-rails/production.log <==
OpenSSL::Cipher::CipherError ():
encryptor (3.0.0) lib/encryptor.rb:98:in final' encryptor (3.0.0) lib/encryptor.rb:98:in crypt'
encryptor (3.0.0) lib/encryptor.rb:49:in decrypt' vendor/gems/attr_encrypted/lib/attr_encrypted.rb:244:in attr_decrypt'
vendor/gems/attr_encrypted/lib/attr_encrypted.rb:333:in attr_decrypt' vendor/gems/attr_encrypted/lib/attr_encrypted.rb:163:in block (2 levels) in attr_encrypted'
activemodel (7.0.8.4) lib/active_model/validator.rb:150:in block in validate' activemodel (7.0.8.4) lib/active_model/validator.rb:149:in each'
activemodel (7.0.8.4) lib/active_model/validator.rb:149:in validate' activesupport (7.0.8.4) lib/active_support/callbacks.rb:423:in block in make_lambda'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:199:in block (2 levels) in halting' activesupport (7.0.8.4) lib/active_support/callbacks.rb:687:in block (2 levels) in default_terminator'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:686:in catch' activesupport (7.0.8.4) lib/active_support/callbacks.rb:686:in block in default_terminator'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:200:in block in halting' activesupport (7.0.8.4) lib/active_support/callbacks.rb:595:in block in invoke_before'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:595:in each' activesupport (7.0.8.4) lib/active_support/callbacks.rb:595:in invoke_before'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:106:in run_callbacks' activesupport (7.0.8.4) lib/active_support/callbacks.rb:929:in _run_validate_callbacks'
activemodel (7.0.8.4) lib/active_model/validations.rb:406:in run_validations!' activemodel (7.0.8.4) lib/active_model/validations/callbacks.rb:115:in block in run_validations!'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:107:in run_callbacks' activesupport (7.0.8.4) lib/active_support/callbacks.rb:929:in _run_validation_callbacks'
activemodel (7.0.8.4) lib/active_model/validations/callbacks.rb:115:in run_validations!' activemodel (7.0.8.4) lib/active_model/validations.rb:337:in valid?'
activerecord (7.0.8.4) lib/active_record/validations.rb:68:in valid?' activerecord (7.0.8.4) lib/active_record/validations.rb:84:in perform_validations'
activerecord (7.0.8.4) lib/active_record/validations.rb:53:in save!' activerecord (7.0.8.4) lib/active_record/transactions.rb:302:in block in save!'
activerecord (7.0.8.4) lib/active_record/transactions.rb:354:in block in with_transaction_returning_status' activerecord (7.0.8.4) lib/active_record/connection_adapters/abstract/database_statements.rb:314:in transaction'
lib/gitlab/database/load_balancing/connection_proxy.rb:127:in public_send' lib/gitlab/database/load_balancing/connection_proxy.rb:127:in block in write_using_load_balancer'
lib/gitlab/database/load_balancing/load_balancer.rb:141:in block in read_write' lib/gitlab/database/load_balancing/load_balancer.rb:228:in retry_with_backoff'
lib/gitlab/database/load_balancing/load_balancer.rb:130:in read_write' lib/gitlab/database/load_balancing/connection_proxy.rb:126:in write_using_load_balancer'
lib/gitlab/database/load_balancing/connection_proxy.rb:78:in transaction' activerecord (7.0.8.4) lib/active_record/transactions.rb:350:in with_transaction_returning_status'
activerecord (7.0.8.4) lib/active_record/transactions.rb:302:in save!' activerecord (7.0.8.4) lib/active_record/suppressor.rb:54:in save!'
activerecord (7.0.8.4) lib/active_record/persistence.rb:782:in block in update!' activerecord (7.0.8.4) lib/active_record/transactions.rb:354:in block in with_transaction_returning_status'
activerecord (7.0.8.4) lib/active_record/connection_adapters/abstract/database_statements.rb:314:in transaction' lib/gitlab/database/load_balancing/connection_proxy.rb:127:in public_send'
lib/gitlab/database/load_balancing/connection_proxy.rb:127:in block in write_using_load_balancer' lib/gitlab/database/load_balancing/load_balancer.rb:141:in block in read_write'
lib/gitlab/database/load_balancing/load_balancer.rb:228:in retry_with_backoff' lib/gitlab/database/load_balancing/load_balancer.rb:130:in read_write'
lib/gitlab/database/load_balancing/connection_proxy.rb:126:in write_using_load_balancer' lib/gitlab/database/load_balancing/connection_proxy.rb:78:in transaction'
activerecord (7.0.8.4) lib/active_record/transactions.rb:350:in with_transaction_returning_status' activerecord (7.0.8.4) lib/active_record/persistence.rb:780:in update!'
lib/web_ide/default_oauth_application.rb:51:in block in ensure_oauth_application!' app/models/concerns/cross_database_modification.rb:91:in block in transaction'
activerecord (7.0.8.4) lib/active_record/connection_adapters/abstract/transaction.rb:319:in block in within_new_transaction' activesupport (7.0.8.4) lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in handle_interrupt'
activesupport (7.0.8.4) lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in block in synchronize' activesupport (7.0.8.4) lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in handle_interrupt'
activesupport (7.0.8.4) lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in synchronize' activerecord (7.0.8.4) lib/active_record/connection_adapters/abstract/transaction.rb:317:in within_new_transaction'
activerecord (7.0.8.4) lib/active_record/connection_adapters/abstract/database_statements.rb:316:in transaction' lib/gitlab/database/load_balancing/connection_proxy.rb:127:in public_send'
lib/gitlab/database/load_balancing/connection_proxy.rb:127:in block in write_using_load_balancer' lib/gitlab/database/load_balancing/load_balancer.rb:141:in block in read_write'
lib/gitlab/database/load_balancing/load_balancer.rb:228:in retry_with_backoff' lib/gitlab/database/load_balancing/load_balancer.rb:130:in read_write'
lib/gitlab/database/load_balancing/connection_proxy.rb:126:in write_using_load_balancer' lib/gitlab/database/load_balancing/connection_proxy.rb:78:in transaction'
activerecord (7.0.8.4) lib/active_record/transactions.rb:209:in transaction' lib/gitlab/database.rb:383:in block in transaction'
activesupport (7.0.8.4) lib/active_support/notifications.rb:206:in block in instrument' activesupport (7.0.8.4) lib/active_support/notifications/instrumenter.rb:24:in instrument'
activesupport (7.0.8.4) lib/active_support/notifications.rb:206:in instrument' lib/gitlab/database.rb:382:in transaction'
app/models/concerns/cross_database_modification.rb:82:in transaction' activerecord (7.0.8.4) lib/active_record/transactions.rb:290:in transaction'
lib/web_ide/default_oauth_application.rb:39:in ensure_oauth_application!' app/controllers/ide_controller.rb:48:in ensure_web_ide_oauth_application!'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:400:in block in make_lambda' activesupport (7.0.8.4) lib/active_support/callbacks.rb:180:in block (2 levels) in halting_and_conditional'
actionpack (7.0.8.4) lib/abstract_controller/callbacks.rb:34:in block (2 levels) in <module:Callbacks>' activesupport (7.0.8.4) lib/active_support/callbacks.rb:181:in block in halting_and_conditional'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:595:in block in invoke_before' activesupport (7.0.8.4) lib/active_support/callbacks.rb:595:in each'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:595:in invoke_before' activesupport (7.0.8.4) lib/active_support/callbacks.rb:116:in block in run_callbacks'
lib/gitlab/ip_address_state.rb:11:in with' ee/app/controllers/ee/application_controller.rb:45:in set_current_ip_address'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:127:in block in run_callbacks' app/controllers/application_controller.rb:484:in set_current_admin'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:127:in block in run_callbacks' lib/gitlab/session.rb:11:in with_session'
app/controllers/application_controller.rb:475:in set_session_storage' activesupport (7.0.8.4) lib/active_support/callbacks.rb:127:in block in run_callbacks'
lib/gitlab/i18n.rb:114:in with_locale' lib/gitlab/i18n.rb:120:in with_user_locale'
app/controllers/application_controller.rb:466:in set_locale' activesupport (7.0.8.4) lib/active_support/callbacks.rb:127:in block in run_callbacks'
marginalia (1.11.1) lib/marginalia.rb:109:in record_query_comment' activesupport (7.0.8.4) lib/active_support/callbacks.rb:127:in block in run_callbacks'
app/controllers/application_controller.rb:459:in set_current_context' activesupport (7.0.8.4) lib/active_support/callbacks.rb:127:in block in run_callbacks'
sentry-rails (5.19.0) lib/sentry/rails/controller_transaction.rb:30:in block in sentry_around_action' sentry-ruby (5.19.0) lib/sentry/hub.rb:102:in with_child_span'
sentry-ruby (5.19.0) lib/sentry-ruby.rb:498:in with_child_span' sentry-rails (5.19.0) lib/sentry/rails/controller_transaction.rb:16:in sentry_around_action'
activesupport (7.0.8.4) lib/active_support/callbacks.rb:127:in block in run_callbacks' activesupport (7.0.8.4) lib/active_support/callbacks.rb:138:in run_callbacks'
actionpack (7.0.8.4) lib/abstract_controller/callbacks.rb:233:in process_action' actionpack (7.0.8.4) lib/action_controller/metal/rescue.rb:23:in process_action'
actionpack (7.0.8.4) lib/action_controller/metal/instrumentation.rb:67:in block in process_action' activesupport (7.0.8.4) lib/active_support/notifications.rb:206:in block in instrument'
activesupport (7.0.8.4) lib/active_support/notifications/instrumenter.rb:24:in instrument' activesupport (7.0.8.4) lib/active_support/notifications.rb:206:in instrument'
actionpack (7.0.8.4) lib/action_controller/metal/instrumentation.rb:66:in process_action' actionpack (7.0.8.4) lib/action_controller/metal/params_wrapper.rb:259:in process_action'
activerecord (7.0.8.4) lib/active_record/railties/controller_runtime.rb:27:in process_action' actionpack (7.0.8.4) lib/abstract_controller/base.rb:151:in process'
actionview (7.0.8.4) lib/action_view/rendering.rb:39:in process' actionpack (7.0.8.4) lib/action_controller/metal.rb:188:in dispatch'
actionpack (7.0.8.4) lib/action_controller/metal.rb:249:in block in dispatch' lib/gitlab/middleware/action_controller_static_context.rb:23:in call'
actionpack (7.0.8.4) lib/action_controller/metal.rb:249:in dispatch' actionpack (7.0.8.4) lib/action_dispatch/routing/route_set.rb:49:in dispatch'
actionpack (7.0.8.4) lib/action_dispatch/routing/route_set.rb:32:in serve' actionpack (7.0.8.4) lib/action_dispatch/journey/router.rb:50:in block in serve'
actionpack (7.0.8.4) lib/action_dispatch/journey/router.rb:32:in each' actionpack (7.0.8.4) lib/action_dispatch/journey/router.rb:32:in serve'
actionpack (7.0.8.4) lib/action_dispatch/routing/route_set.rb:852:in call' gitlab-experiment (0.9.1) lib/gitlab/experiment/middleware.rb:19:in call'
flipper (0.26.2) lib/flipper/middleware/memoizer.rb:72:in memoized_call' flipper (0.26.2) lib/flipper/middleware/memoizer.rb:37:in call'
lib/gitlab/metrics/elasticsearch_rack_middleware.rb:16:in call' lib/gitlab/middleware/sidekiq_shard_awareness_validation.rb:20:in block in call'
lib/gitlab/sidekiq_sharding/validator.rb:42:in enabled' lib/gitlab/middleware/sidekiq_shard_awareness_validation.rb:20:in call'
lib/gitlab/middleware/memory_report.rb:13:in call' lib/gitlab/middleware/speedscope.rb:13:in call'
lib/gitlab/database/load_balancing/rack_middleware.rb:23:in call' lib/gitlab/middleware/rails_queue_duration.rb:33:in call'
lib/gitlab/etag_caching/middleware.rb:21:in call' lib/gitlab/metrics/rack_middleware.rb:16:in block in call'
lib/gitlab/metrics/web_transaction.rb:46:in run' lib/gitlab/metrics/rack_middleware.rb:16:in call'
lib/gitlab/middleware/go.rb:21:in call' lib/gitlab/middleware/query_analyzer.rb:11:in block in call'
lib/gitlab/database/query_analyzer.rb:83:in within' lib/gitlab/middleware/query_analyzer.rb:11:in call'
lib/ci/job_token/middleware.rb:11:in call' batch-loader (2.0.5) lib/batch_loader/middleware.rb:11:in call'
rack-attack (6.7.0) lib/rack/attack.rb:103:in call' apollo_upload_server (2.1.6) lib/apollo_upload_server/middleware.rb:19:in call'
lib/gitlab/middleware/multipart.rb:173:in call' rack-attack (6.7.0) lib/rack/attack.rb:127:in call'
warden (1.2.9) lib/warden/manager.rb:36:in block in call' warden (1.2.9) lib/warden/manager.rb:34:in catch'
warden (1.2.9) lib/warden/manager.rb:34:in call' rack-cors (2.0.2) lib/rack/cors.rb:102:in call'
rack (2.2.9) lib/rack/tempfile_reaper.rb:15:in call' rack (2.2.9) lib/rack/etag.rb:27:in call'
rack (2.2.9) lib/rack/conditional_get.rb:27:in call' rack (2.2.9) lib/rack/head.rb:12:in call'
actionpack (7.0.8.4) lib/action_dispatch/http/permissions_policy.rb:38:in call' actionpack (7.0.8.4) lib/action_dispatch/http/content_security_policy.rb:36:in call'
lib/gitlab/middleware/read_only/controller.rb:50:in call' lib/gitlab/middleware/read_only.rb:18:in call'
lib/gitlab/middleware/unauthenticated_session_expiry.rb:18:in call' rack (2.2.9) lib/rack/session/abstract/id.rb:266:in context'
rack (2.2.9) lib/rack/session/abstract/id.rb:260:in call' actionpack (7.0.8.4) lib/action_dispatch/middleware/cookies.rb:704:in call'
lib/gitlab/middleware/strip_cookies.rb:29:in call' lib/gitlab/middleware/same_site_cookies.rb:27:in call'
actionpack (7.0.8.4) lib/action_dispatch/middleware/callbacks.rb:27:in block in call' activesupport (7.0.8.4) lib/active_support/callbacks.rb:99:in run_callbacks'
actionpack (7.0.8.4) lib/action_dispatch/middleware/callbacks.rb:26:in call' sentry-rails (5.19.0) lib/sentry/rails/rescued_exception_interceptor.rb:12:in call'
actionpack (7.0.8.4) lib/action_dispatch/middleware/debug_exceptions.rb:28:in call' lib/gitlab/middleware/path_traversal_check.rb:35:in call'
lib/gitlab/middleware/handle_malformed_strings.rb:21:in call' sentry-ruby (5.19.0) lib/sentry/rack/capture_exceptions.rb:30:in block (2 levels) in call'
sentry-ruby (5.19.0) lib/sentry/hub.rb:258:in with_session_tracking' sentry-ruby (5.19.0) lib/sentry-ruby.rb:411:in with_session_tracking'
sentry-ruby (5.19.0) lib/sentry/rack/capture_exceptions.rb:21:in block in call' sentry-ruby (5.19.0) lib/sentry/hub.rb:59:in with_scope'
sentry-ruby (5.19.0) lib/sentry-ruby.rb:391:in with_scope' sentry-ruby (5.19.0) lib/sentry/rack/capture_exceptions.rb:20:in call'
actionpack (7.0.8.4) lib/action_dispatch/middleware/show_exceptions.rb:29:in call' lib/gitlab/middleware/basic_health_check.rb:25:in call'
lograge (0.11.2) lib/lograge/rails_ext/rack/logger.rb:15:in call_app' railties (7.0.8.4) lib/rails/rack/logger.rb:25:in block in call'
activesupport (7.0.8.4) lib/active_support/tagged_logging.rb:99:in block in tagged' activesupport (7.0.8.4) lib/active_support/tagged_logging.rb:37:in tagged'
activesupport (7.0.8.4) lib/active_support/tagged_logging.rb:99:in tagged' railties (7.0.8.4) lib/rails/rack/logger.rb:25:in call'
actionpack (7.0.8.4) lib/action_dispatch/middleware/remote_ip.rb:93:in call' lib/gitlab/middleware/handle_ip_spoof_attack_error.rb:25:in call'
lib/gitlab/middleware/request_context.rb:15:in call' lib/gitlab/middleware/webhook_recursion_detection.rb:15:in call'
request_store (1.5.1) lib/request_store/middleware.rb:19:in call' rack (2.2.9) lib/rack/method_override.rb:24:in call'
rack (2.2.9) lib/rack/runtime.rb:22:in call' rack-timeout (0.7.0) lib/rack/timeout/core.rb:154:in block in call'
rack-timeout (0.7.0) lib/rack/timeout/support/timeout.rb:19:in timeout' rack-timeout (0.7.0) lib/rack/timeout/core.rb:153:in call'
config/initializers/fix_local_cache_middleware.rb:11:in call' lib/gitlab/middleware/compressed_json.rb:44:in call'
actionpack (7.0.8.4) lib/action_dispatch/middleware/executor.rb:14:in call' lib/gitlab/middleware/rack_multipart_tempfile_factory.rb:19:in call'
rack (2.2.9) lib/rack/sendfile.rb:110:in call' lib/gitlab/middleware/sidekiq_web_static.rb:20:in call'
lib/gitlab/metrics/requests_rack_middleware.rb:79:in call' gitlab-labkit (0.36.1) lib/labkit/middleware/rack.rb:22:in block in call'
gitlab-labkit (0.36.1) lib/labkit/context.rb:35:in with_context' gitlab-labkit (0.36.1) lib/labkit/middleware/rack.rb:21:in call'
actionpack (7.0.8.4) lib/action_dispatch/middleware/request_id.rb:26:in call' actionpack (7.0.8.4) lib/action_dispatch/middleware/host_authorization.rb:131:in call'
railties (7.0.8.4) lib/rails/engine.rb:530:in call' railties (7.0.8.4) lib/rails/railtie.rb:226:in public_send'
railties (7.0.8.4) lib/rails/railtie.rb:226:in method_missing' lib/gitlab/middleware/release_env.rb:13:in call'
rack (2.2.9) lib/rack/urlmap.rb:74:in block in call' rack (2.2.9) lib/rack/urlmap.rb:58:in each'
rack (2.2.9) lib/rack/urlmap.rb:58:in call' puma (6.4.3) lib/puma/configuration.rb:272:in call'
puma (6.4.3) lib/puma/request.rb:100:in block in handle_request' puma (6.4.3) lib/puma/thread_pool.rb:378:in with_force_shutdown'
puma (6.4.3) lib/puma/request.rb:99:in handle_request' puma (6.4.3) lib/puma/server.rb:464:in process_client'
puma (6.4.3) lib/puma/server.rb:245:in block in run' puma (6.4.3) lib/puma/thread_pool.rb:155:in block in spawn_thread'

==> /var/log/gitlab/gitlab-workhorse/current <==
{"backend_id":"rails","content_type":"text/html; charset=utf-8","correlation_id":"01JBBD6MCDVN5E5M60A619DCB9","duration_ms":649,"host":"gitlab.REDACTED.net:8443","level":"info","method":"GET","msg":"access","proto":"HTTP/1.1","referrer":"https://gitlab.REDACTED.net:8443/REDACTED/documentation/-/blob/main/.gitlab-ci.yml?ref_type=heads","remote_addr":"REDACTED_IP:0","remote_ip":"REDACTED_IP","route":"^/-/","route_id":"dash","status":500,"system":"http","time":"2024-10-29T05:50:26Z","ttfb_ms":649,"uri":"/-/ide/project/REDACTED/documentation/edit/main/-/.gitlab-ci.yml","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0","written_bytes":1624}
==> /var/log/gitlab/nginx/gitlab_access.log <==
REDACTED_IP - - [29/Oct/2024:05:50:26 +0000] "GET /-/ide/project/REDACTED/documentation/edit/main/-/.gitlab-ci.yml HTTP/2.0" 500 1624 "https://gitlab.REDACTED.net:8443/REDACTED/documentation/-/blob/main/.gitlab-ci.yml" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0" -

==> /var/log/gitlab/gitlab-workhorse/current <==
{"correlation_id":"01JBBD6N1KZTCQV207EN7SJB92","encoding":"","file":"/opt/gitlab/embedded/service/gitlab-rails/public/-/error-illustrations/error-500-lg.svg","level":"info","method":"GET","msg":"Send static file","time":"2024-10-29T05:50:26Z","uri":"/-/error-illustrations/error-500-lg.svg"}
{"backend_id":"rails","content_type":"image/svg+xml","correlation_id":"01JBBD6N1KZTCQV207EN7SJB92","duration_ms":0,"host":"gitlab.REDACTED.net:8443","level":"info","method":"GET","msg":"access","proto":"HTTP/1.1","referrer":"https://gitlab.REDACTED.net:8443/-/ide/project/REDACTED/documentation/edit/main/-/.gitlab-ci.yml","remote_addr":"REDACTED_IP:0","remote_ip":"REDACTED_IP","route":"^/-/","route_id":"dash","status":200,"system":"http","time":"2024-10-29T05:50:26Z","ttfb_ms":0,"uri":"/-/error-illustrations/error-500-lg.svg","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0","written_bytes":6506}

==> /var/log/gitlab/nginx/gitlab_access.log <==
REDACTED_IP - - [29/Oct/2024:05:50:26 +0000] "GET /-/error-illustrations/error-500-lg.svg HTTP/2.0" 200 6506 "https://gitlab.REDACTED.net:8443/-/ide/project/REDACTED/documentation/edit/main/-/.gitlab-ci.yml" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0" -

==> /var/log/gitlab/gitlab-exporter/current <==
2024-10-29_05:50:26.97736 ::1 - - [29/Oct/2024:05:50:26 UTC] "GET /database HTTP/1.1" 200 2230
2024-10-29_05:50:26.97739 - -> /database

==> /var/log/gitlab/gitlab-workhorse/current <==
{"backend_id":"rails","content_type":"text/html","correlation_id":"01JBBD6N2BDGFBZE6KE7H99K00","duration_ms":135,"host":"gitlab.REDACTED.net:8443","level":"info","method":"GET","msg":"access","proto":"HTTP/1.1","referrer":"https://gitlab.REDACTED.net:8443/-/ide/project/REDACTED/documentation/edit/main/-/.gitlab-ci.yml","remote_addr":"REDACTED_IP:0","remote_ip":"REDACTED_IP","route":"","route_id":"default","status":301,"system":"http","time":"2024-10-29T05:50:27Z","ttfb_ms":135,"uri":"/favicon.ico","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0","written_bytes":192}

==> /var/log/gitlab/nginx/gitlab_access.log <==
REDACTED_IP - - [29/Oct/2024:05:50:27 +0000] "GET /favicon.ico HTTP/2.0" 301 192 "https://gitlab.REDACTED.net:8443/-/ide/project/REDACTED/documentation/edit/main/-/.gitlab-ci.yml" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0" -
Rails Console:
irb(main):001:0> Feature.all.each { |feature| puts "#{feature.name}: #{feature.enabled?}" }
vscode_web_ide: true
ci_job_artifacts_backlog_work: true

=>
[#<Flipper::Feature:792080 name="vscode_web_ide", state=:on, enabled_gate_names=[:boolean], adapter=:memoizable>,
#<Flipper::Feature:792100 name="ci_job_artifacts_backlog_work", state=:on, enabled_gate_names=[:boolean], adapter=:memoizable>,
irb(main):002:0> Feature.enabled?(:vscode_web_ide)
=> true
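The OpenSSL::Cipher::CipherError raised from ensure_oauth_application! in the trace above is typically the symptom of encrypted database values that no longer match the secrets in gitlab-secrets.json. A hedged first check (the container name "gitlab" is an assumption; adjust it to your Omnibus container):

    # Report encrypted attributes that can no longer be decrypted
    docker exec -it gitlab gitlab-rake gitlab:doctor:secrets VERBOSE=1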
docker
  • 2 Answers
  • 160 Views
Arkan29
Asked: 2024-10-03 21:06:27 +0800 CST

How can I use containers in WSL 2 without installing the Hyper-V feature?

  • 9
  • OS: Windows 10 Enterprise 22H2 build 19045

  • Hypervisor: VMware® Workstation 17 Pro

Is it possible to install WSL on a Windows client without installing the Hyper-V feature?

My problem is that I use VMware Workstation Pro in my lab, and now I would like to use Docker Desktop on my computer to host my containers.

Unfortunately, Docker Desktop complains that I don't have Hyper-V installed, which is normal, because it isn't possible to install two type 1 hypervisors on the same host.

Do you have any ideas for how I could tinker with local containers? Maybe without Docker Desktop, but I'd need something lightweight :-)
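A hedged sketch of the usual route (Windows commands from an elevated prompt): WSL 2 needs the "Virtual Machine Platform" optional feature rather than the full Hyper-V role, and recent VMware Workstation releases are said to coexist with it via the Windows Hypervisor Platform:

    :: Enable the Virtual Machine Platform feature (not the Hyper-V role), then install WSL 2
    dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
    wsl --install
    wsl --set-default-version 2

With WSL 2 in place, containers can run either through Docker Desktop's WSL 2 backend or by installing the Docker engine directly inside the WSL distribution.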

docker
  • 2 Answers
  • 958 Views
nycynik
Asked: 2024-09-17 22:09:50 +0800 CST

How do I get a docker healthcheck to run outside the container?

  • 5

I may be a bit confused, but what I want is for the healthcheck to run outside the container, for example on the host. I'm not sure why Docker itself can't curl the container to tell whether it's healthy.

  healthcheck:
    test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep '\"status\":\"green\"'"]
    interval: 10s
    retries: 10
    start_period: 30s
    timeout: 5s

But curl isn't on the box, and I've read that adding curl is never a good idea; so how can I run that check on the box without curl? Can I run it from the Docker application, or from another container that can set this box's health status in Docker?
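A hedged sketch of running the same check from the host instead, assuming port 9200 is published to the host and the container is named elasticsearch (both assumptions):

    # Poll the published port from the host until the cluster reports green
    until curl -s http://localhost:9200/_cluster/health | grep -q '"status":"green"'; do
        sleep 10
    done

    # Or, once any healthcheck is defined on the container, read the state Docker tracks
    docker inspect -f '{{.State.Health.Status}}' elasticsearch

Inside the container itself, images that lack curl often still ship busybox wget, which can run the same probe in the compose healthcheck.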

docker
  • 1 Answer
  • 103 Views
chronos
Asked: 2024-04-27 23:52:35 +0800 CST

Docker 用户命名空间到底是如何工作的?

  • 6

I enabled user namespaces in Docker, trying (I thought) to map whatever user any container uses to one specific user.

That user was created by docker, and the entries in subuid and subgid were created:

dockremap:362144:65536

although dockremap itself has ID 116.

I expected that I could now bind any file from the host into a container and that, as long as the file is owned by dockremap on the host or its permissions are open enough, the container would be able to read it. Same for directories.

Instead, I find that I have to make the owner of the file/folder a user with ID 362144 (which means nothing on the host, so ls, ps, etc. only show the numeric ID).

Is it expected to work like this? Because either I'm doing something wrong, or from a management point of view this is a nightmare.
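A short sketch of the arithmetic userns-remap applies, using the subordinate range from the question (dockremap:362144:65536): a container UID N maps to host UID 362144 + N, so container root corresponds to host UID 362144, and dockremap's own UID (116) plays no part in the mapping. The paths below are examples:

    # Make a host file readable by root inside a remapped container
    sudo chown 362144:362144 /srv/shared/file.txt      # 362144 + 0 (container root)
    docker run --rm -v /srv/shared/file.txt:/data/file.txt:ro alpine cat /data/file.txt

    # A process running as UID 1000 inside the container would need host UID 363144 (362144 + 1000)
    sudo chown 363144:363144 /srv/shared/other.txt

So what you are seeing is the documented behaviour; the usual mitigations are opening group/other permissions on the shared paths or chowning them into the remapped range.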

docker
  • 1 Answer
  • 40 Views
