Tamino Elgert's questions

Tamino Elgert
Asked: 2023-07-24 09:42:32 +0800 CST

Calico Node and Kube Proxy fail permanently on a new node

  • Score: 5

I have a Kubernetes cluster on version 1.25.0 with a few nodes (Ubuntu server machines). I use Calico from https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml. I am now adding a new node. The node is completely identical; the only exception is that it has a 2.5 Gbit network port instead of a 1 Gbit one. On this node, both calico-node and kube-proxy fail permanently, while on all other nodes everything works fine. Calico Node reports the following as the reason for the failure:

Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused W0724 00:54:46.157624 73 feature_gate.go:241] Setting GA feature gate ServiceInternalTrafficPolicy=true. It will be removed in a future release.
Back-off restarting failed container

Kube-proxy simply crashes with back-off restarting failed container.
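
For completeness, I observed the crash state above with the usual kubectl inspection commands; a minimal sketch of what I run (the node name and pod name suffixes are placeholders, not real names from my cluster):

# List the pods scheduled on the new node
kubectl -n kube-system get pods -o wide | grep <new-node-name>
# Show probe failures, last state and restart count of the failing calico-node pod
kubectl -n kube-system describe pod calico-node-xxxxx
# Logs of the previously crashed kube-proxy container
kubectl -n kube-system logs kube-proxy-xxxxx --previous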

The logs of all of them look fine, with no errors - not even a warning. Here is part of the calico-node container log:

2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1893: Received interface update msg=&intdataplane.ifaceStateUpdate{Name:"calico_tmp_B", State:"", Index:76}
2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1913: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_B", Addrs:set.Set[string](nil)}
2023-07-24 01:35:34.609 [INFO][115] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_B", Addrs:set.Set[string](nil)}
2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1893: Received interface update msg=&intdataplane.ifaceStateUpdate{Name:"calico_tmp_A", State:"", Index:77}
2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1913: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_A", Addrs:set.Set[string](nil)}
2023-07-24 01:35:34.609 [INFO][115] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_A", Addrs:set.Set[string](nil)}
2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1803: Dataplane updates throttled
2023-07-24 01:35:35.603 [INFO][115] felix/int_dataplane.go 1770: Dataplane updates no longer throttled
bird: device1: Initializing
bird: direct1: Initializing
bird: device1: Starting
bird: device1: Initializing
bird: direct1: Initializing
bird: Mesh_192_168_178_58: Initializing
bird: Mesh_192_168_178_25: Initializing
bird: Mesh_192_168_178_70: Initializing
bird: Mesh_192_168_178_38: Initializing
bird: Mesh_192_168_178_72: Initializing
bird: device1: Starting
bird: device1: Connected to table master
bird: bird: device1: Connected to table masterdevice1: State changed to feed
bird: device1: State changed to feed
bird: direct1: Starting
bird: direct1: Connected to table master
bird: direct1: State changed to feed
bird: direct1: Startingbird: 
Graceful restart started
bird: bird: direct1: Connected to table masterGraceful restart done
bird: direct1: State changed to feedbird: 
Startedbird: 
Mesh_192_168_178_58: Starting
bird: bird: Mesh_192_168_178_58: State changed to startdevice1: State changed to up
bird: Mesh_192_168_178_25: Starting
bird: Mesh_192_168_178_25: State changed to start
bird: Mesh_192_168_178_70: Starting
bird: Mesh_192_168_178_70: State changed to start
bird: Mesh_192_168_178_38: Starting
bird: bird: direct1: State changed to upMesh_192_168_178_38: State changed to start
bird: Mesh_192_168_178_72: Starting
bird: Mesh_192_168_178_72: State changed to start
bird: Graceful restart started
bird: Started
bird: device1: State changed to up
bird: direct1: State changed to up
bird: Mesh_192_168_178_58: Connected to table master
bird: Mesh_192_168_178_58: State changed to wait
bird: Mesh_192_168_178_25: Connected to table master
bird: Mesh_192_168_178_25: State changed to wait
bird: Mesh_192_168_178_72: Connected to table master
bird: Mesh_192_168_178_72: State changed to wait
bird: Mesh_192_168_178_70: Connected to table master
bird: Mesh_192_168_178_70: State changed to wait
bird: Mesh_192_168_178_38: Connected to table master
bird: Mesh_192_168_178_38: State changed to wait
bird: Graceful restart done
bird: Mesh_192_168_178_58: State changed to feed
bird: Mesh_192_168_178_25: State changed to feed
bird: Mesh_192_168_178_70: State changed to feed
bird: Mesh_192_168_178_38: State changed to feed
bird: Mesh_192_168_178_72: State changed to feed
bird: Mesh_192_168_178_58: State changed to up
bird: Mesh_192_168_178_25: State changed to up
bird: Mesh_192_168_178_70: State changed to up
bird: Mesh_192_168_178_38: State changed to up
bird: Mesh_192_168_178_72: State changed to up
2023-07-24 01:35:41.982 [INFO][115] felix/health.go 336: Overall health status changed: live=true ready=true
+---------------------------+---------+----------------+-----------------+--------+
|         COMPONENT         | TIMEOUT |    LIVENESS    |    READINESS    | DETAIL |
+---------------------------+---------+----------------+-----------------+--------+
| CalculationGraph          | 30s     | reporting live | reporting ready |        |
| FelixStartup              | -       | reporting live | reporting ready |        |
| InternalDataplaneMainLoop | 1m30s   | reporting live | reporting ready |        |
+---------------------------+---------+----------------+-----------------+--------+
2023-07-24 01:36:27.256 [INFO][115] felix/int_dataplane.go 1836: Received *proto.HostMetadataV4V6Update update from calculation graph msg=hostname:"storage-controller" ipv4_addr:"192.168.178.72/24" labels:<key:"beta.kubernetes.io/arch" value:"amd64" > labels:<key:"beta.kubernetes.io/os" value:"linux" > labels:<key:"kubernetes.io/arch" value:"amd64" > labels:<key:"kubernetes.io/hostname" value:"storage-controller" > labels:<key:"kubernetes.io/os" value:"linux" > labels:<key:"specialServerType" value:"storage" > 
2023-07-24 01:36:29.551 [INFO][115] felix/int_dataplane.go 1836: Received *proto.HostMetadataV4V6Update update from calculation graph msg=hostname:"node1" ipv4_addr:"192.168.178.25/24" labels:<key:"beta.kubernetes.io/arch" value:"amd64" > labels:<key:"beta.kubernetes.io/os" value:"linux" > labels:<key:"kubernetes.io/arch" value:"amd64" > labels:<key:"kubernetes.io/hostname" value:"node1" > labels:<key:"kubernetes.io/os" value:"linux" > labels:<key:"node-role.kubernetes.io/control-plane" value:"" > labels:<key:"node.kubernetes.io/exclude-from-external-load-balancers" value:"" > 
2023-07-24 01:36:34.389 [INFO][117] monitor-addresses/autodetection_methods.go 103: Using autodetected IPv4 address on interface enp11s0: 192.168.178.88/24
2023-07-24 01:36:37.850 [INFO][115] felix/summary.go 100: Summarising 20 dataplane reconciliation loops over 1m3.5s: avg=13ms longest=180ms (resync-filter-v4,resync-ipsets-v4,resync-mangle-v4,resync-nat-v4,resync-raw-v4,resync-routes-v4,resync-routes-v4,resync-rules-v4,update-filter-v4,update-ipsets-4,update-mangle-v4,update-nat-v4,update-raw-v4)

I absolutely do not understand this. I have already rebuilt the entire node, updated calico-node, and also tried other Kubernetes versions (1.25.11). There is no firewall installed. Can anyone help me here? Thanks.

PS: I have already tried all the autodetection methods. I am now using IP_AUTODETECTION_METHOD=can-reach=8.8.8.8
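
For reference, a minimal sketch of how I set it, assuming the default manifest install (calico-node runs as a DaemonSet in kube-system; the command triggers a rolling restart):

# Set the autodetection method on the calico-node DaemonSet and wait for the rollout
kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=can-reach=8.8.8.8
kubectl -n kube-system rollout status daemonset/calico-node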

Output of ifconfig on the node:

 ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:50:db:52:31  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp11s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.178.88  netmask 255.255.255.0  broadcast 192.168.178.255
        inet6 fe80::67c:16ff:fec8:53e6  prefixlen 64  scopeid 0x20<link>
        inet6 2a02:908:523:bd80:67c:16ff:fec8:53e6  prefixlen 64  scopeid 0x0<global>
        ether 04:7c:16:c8:53:e6  txqueuelen 1000  (Ethernet)
        RX packets 8748  bytes 6634723 (6.6 MB)
        RX errors 0  dropped 242  overruns 0  frame 0
        TX packets 6342  bytes 944277 (944.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 390  bytes 47600 (47.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 390  bytes 47600 (47.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
        inet 192.168.53.192  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 39  bytes 13183 (13.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlp12s0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 60:e9:aa:5e:01:95  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
ubuntu
  • 2 answers
  • 50 Views
Tamino Elgert
Asked: 2022-02-28 06:24:15 +0800 CST

Kubernetes Nginx Ingress with cert-manager and Let's Encrypt does not allow wildcards in domain names

  • Score: 0

I have a self-hosted Kubernetes cluster with an Nginx Ingress. cert-manager is also running in the cluster, and I use it to obtain valid SSL certificates from Let's Encrypt. Everything works and I get a valid certificate for example.com, www.example.com, or app1.example.com, but not for a general wildcard *.example.com. As soon as I put a wildcard anywhere in my Ingress under spec.tls.hosts, no certificate is generated for me. This is the output I get:

kubectl get certificate

NAME              READY   SECRET            AGE
tls-test-cert     False   tls-electi-cert   20h

kubectl get CertificateRequest

NAME                    APPROVED   DENIED   READY   ISSUER                REQUESTOR                                         AGE
tls-test-cert-8jw75     True                False   letsencrypt-staging   system:serviceaccount:cert-manager:cert-manager   18m

kubectl describe CertificateRequest

[...]
Status:
  Conditions:
    Last Transition Time:  2022-02-27T13:54:38Z
    Message:               Certificate request has been approved by cert-manager.io
    Reason:                cert-manager.io
    Status:                True
    Type:                  Approved
    Last Transition Time:  2022-02-27T13:54:38Z
    Message:               Waiting on certificate issuance from order gateway/tls-test-cert-8jw75-1425588341: "pending"
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:
  Type    Reason           Age   From          Message
  ----    ------           ----  ----          -------
  Normal  cert-manager.io  18m   cert-manager  Certificate request has been approved by cert-manager.io
  Normal  OrderCreated     18m   cert-manager  Created Order resource gateway/tls-test-cert-8jw75-1425588341
  Normal  OrderPending     18m   cert-manager  Waiting on certificate issuance from order gateway/tls-test-cert-8jw75-1425588341: ""

My Nginx Ingress (I swapped my domain for example.com for this post):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-management
  namespace: gateway
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  tls:
  - secretName: tls-test-cert
    hosts:
      - example.com
      - '*.example.com'
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-gateway
                port:
                  number: 80
    - host: '*.example.com'
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-gateway
                port:
                  number: 80

Issuer (I redacted my email here):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: *******
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
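
Note: Let's Encrypt only issues wildcard certificates through the DNS-01 challenge, so an http01 solver like the one above cannot validate *.example.com. For comparison, a minimal sketch of what a dns01 solver would look like (the Cloudflare provider and the cloudflare-api-token secret are illustrative assumptions, not something from my cluster):

# Sketch only: dns01 solver for wildcard names; the provider and secret are assumed
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging-dns
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: *******
    privateKeySecretRef:
      name: letsencrypt-staging-dns
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
        selector:
          dnsNames:
            - '*.example.com'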

My reverse proxy (test-gateway) definitely works and forwards all subdomains to my website. Thanks in advance for any ideas about what could be causing this.

nginx kubernetes kubeadm lets-encrypt cert-manager
  • 1 answer
  • 1061 Views
