Raphael10's questions

Raphael10
Asked: 2024-09-13 02:11:58 +0800 CST

Which server address should I use in the vault issuer configuration file?

  • 6

I defined and applied a "service-account-token" ServiceAccount: Vault-Config/service-account-token.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account-token
automountServiceAccountToken: false

root@k8s-eu-1-control-plane-node-1:~# kubectl apply -f Vault-Config/service-account-token.yaml 
serviceaccount/service-account-token created

root@k8s-eu-1-control-plane-node-1:~# kubectl get ServiceAccount
NAME                       SECRETS   AGE
default                    0         10d
issuer                     0         20h
secrets-store-csi-driver   0         2d9h
service-account-token      0         22s   // <----------------------
webapp-sa                  0         2d1h

I defined and applied a vault issuer secret:

root@k8s-eu-1-control-plane-node-1:~# nano Vault-Config/cert-manager-vault-issuer-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: issuer-token-abcde
  #namespace: nats
  annotations:
    kubernetes.io/service-account.name: issuer
type: kubernetes.io/service-account-token # https://developer.hashicorp.com/vault/docs/auth/kubernetes#continue-using-long-lived-tokens

->

root@k8s-eu-1-control-plane-node-1:~# kubectl apply -f Vault-Config/cert-manager-vault-issuer-secret.yaml 
secret/issuer-token-abcde created

->

root@k8s-eu-1-control-plane-node-1:~# kubectl get secrets
NAME                         TYPE                                  DATA   AGE
issuer-token-abcde           kubernetes.io/service-account-token   3      8s  // <------------
nats-box-contexts            Opaque                                1      6d
sh.helm.release.v1.csi.v1    helm.sh/release.v1                    1      2d9h
sh.helm.release.v1.nats.v1   helm.sh/release.v1                    1      6d
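
Before wiring this secret into the Issuer it can be worth confirming that Kubernetes actually populated it with a token and CA bundle; a quick check with standard kubectl (the secret name is the one created above):

# Show the secret's data keys (ca.crt, namespace and token should all be present)
kubectl describe secret issuer-token-abcde

# Print the first characters of the token to confirm it is non-empty
kubectl get secret issuer-token-abcde -o jsonpath='{.data.token}' | base64 -d | head -c 40; echo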

When I apply this vault-issuer: Vault-Config/vault-issuer-cert-manager.yaml:

# https://developer.hashicorp.com/vault/tutorials/archive/kubernetes-cert-manager#configure-an-issuer-and-generate-a-certificate

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: vault-issuer
      #namespace: nats
    spec:
      vault:
        server: http://vault.default // <---- as suggested here: https://cert-manager.io/docs/configuration/vault/#deployment
        path: pki_int/sign/nats
        auth:
          kubernetes:
            mountPath: /v1/auth/kubernetes
            role: issuer
            secretRef:
              name: issuer-token-abcde
              #key: token

-> :

root@k8s-eu-1-control-plane-node-1:~# kubectl apply -f Vault-Config/vault-issuer-cert-manager.yaml 
issuer.cert-manager.io/vault-issuer created

I get this error:

root@k8s-eu-1-control-plane-node-1:~# kubectl describe issuer vault-issue
Failed to initialize Vault client: while requesting a Vault token using the Kubernetes auth:
error calling Vault server: Post "https://vault.default/v1/auth/kubernetes/login": dial tcp: 
lookup vault.default on 10.96.0.10:53: no such host
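
The "no such host" from the cluster DNS (10.96.0.10) suggests there is no Service named vault in the default namespace, or that Vault was installed into a different namespace; listing the Services the Helm release created shows which name is actually resolvable (a diagnostic sketch, not specific to this setup):

# Which Vault Services exist, and in which namespace?
# The Issuer's server URL must use one of these names (e.g. https://<service>.<namespace>:8200).
kubectl get svc --all-namespaces | grep vault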

For the Vault configuration I applied these values via Helm:

root@k8s-eu-1-control-plane-node-1:~# nano Vault-Config/overrides.yaml :

global:
   enabled: true
   tlsDisable: false
injector:
   enabled: true
server:
   extraEnvironmentVars:
      VAULT_CACERT: /vault/userconfig/vault-ha-tls/vault.ca
      VAULT_TLSCERT: /vault/userconfig/vault-ha-tls/vault.crt
      VAULT_TLSKEY: /vault/userconfig/vault-ha-tls/vault.key
   dataStorage:
       enabled: true
   volumes:
      - name: userconfig-vault-ha-tls
        secret:
         defaultMode: 420
         secretName: vault-ha-tls
   volumeMounts:
      - mountPath: /vault/userconfig/vault-ha-tls
        name: userconfig-vault-ha-tls
        readOnly: true
   standalone:
      enabled: false
   affinity: ""
   readinessProbe:
     enabled: true
     path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
   ha:
      enabled: true
      replicas: 3
      raft:
         enabled: true
         setNodeId: true
         config: |
            cluster_name = "vault-integrated-storage"
            ui = true
            listener "tcp" {
               tls_disable = 0
               address = "[::]:8200"
               cluster_address = "[::]:8201"
               tls_cert_file = "/vault/userconfig/vault-ha-tls/vault.crt"
               tls_key_file  = "/vault/userconfig/vault-ha-tls/vault.key"
               tls_client_ca_file = "/vault/userconfig/vault-ha-tls/vault.ca"
            }

            # https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-raft-deployment-guide#vault-storage-configuration

            storage "raft" {
               path = "/vault/data"

               retry_join {
                 leader_api_addr = "https://vault-0.vault-internal:8200"
                 leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
                 leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
                 leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
               }

               retry_join {
                 leader_api_addr = "https://vault-1.vault-internal:8200"
                 leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
                 leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
                 leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
               }

               retry_join {
                 leader_api_addr = "https://vault-2.vault-internal:8200"
                 leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
                 leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
                 leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
               }

               retry_join {
                 leader_api_addr = "https://vault-3.vault-internal:8200"
                 leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
                 leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
                 leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
               }

               retry_join {
                 leader_api_addr = "https://vault-4.vault-internal:8200"
                 leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
                 leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
                 leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
               }

               autopilot {
                 server_stabilization_time = "10s"
                 last_contact_threshold = "10s"
                 min_quorum = 5
                 cleanup_dead_servers = false
                 dead_server_last_contact_threshold = "10m"
                 max_trailing_logs = 1000
                 disable_upgrade_migration = false
               }


            }
            disable_mlock = true
            service_registration "kubernetes" {}

Which server address should I put in the vault-issuer configuration file, Vault-Config/vault-issuer-cert-manager.yaml:

# https://developer.hashicorp.com/vault/tutorials/archive/kubernetes-cert-manager#configure-an-issuer-and-generate-a-certificate

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  #namespace: nats
spec:
  vault:
    server: https://vault-0.vault-internal:8200/    // <----------------- ????????
    path: pki_int/sign/nats
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: issuer
        secretRef:
          name: issuer-token-abcde
          key: token

--> :

root@k8s-eu-1-control-plane-node-1:~# kubectl describe issuer     vault-issue

Message:               Failed to initialize Vault client: while  
requesting a Vault token using the Kubernetes auth: error calling 
Vault server: Post "http://vault.default:8200/v1/auth/kubernetes
/login": dial tcp: lookup vault.default on 10.96.0.10:53: no such 
host

?
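
For reference, the Vault Helm chart normally creates a ClusterIP Service named after the release (here vault, plus vault-active and the headless vault-internal used by raft), so a sketch of an Issuer pointed at that Service, under the assumption that the release is called vault and lives in the default namespace, could look roughly like this; the caBundle placeholder must carry the CA that signed the listener certificate, since a plain http:// URL cannot work against a TLS-only listener:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
spec:
  vault:
    # in-cluster DNS name of the ClusterIP Service created by the Helm chart (assumed names)
    server: https://vault.default.svc.cluster.local:8200
    caBundle: <base64-encoded vault.ca>   # CA that signed the listener certificate
    path: pki_int/sign/nats
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: issuer
        secretRef:
          name: issuer-token-abcde
          key: token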

kubernetes
  • 1 answer
  • 45 Views
Raphael10
Asked: 2023-12-01 19:48:28 +0800 CST

haproxy.cfg errors in the configuration file. Help needed

  • 7

Trying to follow these instructions:

https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#haproxy-configuration

and these instructions:

Does HAProxy use URLs in the server configuration?

I am trying to set up haproxy.cfg correctly and properly, but I am getting errors.

This is the content of /run/systemd/resolve/resolv.conf:

root@k8s-eu-1-control-plane-node-1:~# sudo cat /run/systemd/resolve/resolv.conf
# This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver kkk.kk.kkk.kk
nameserver qqq.qq.qqq.qq
search invalid

This is the port range:

root@k8s-eu-1-control-plane-node-1:~# cat /proc/sys/net/ipv4/ip_local_port_range
32768   60999

So I tried to set haproxy.cfg to the following: /etc/haproxy/haproxy.cfg
# https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#haproxy-configuration

# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    #log /dev/log local0
    #log /dev/log local1 notice

    #log /var/log local0
    #log /var/log local1 notice

    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxys to the control plane nodes
#---------------------------------------------------------------------

# https://www.digitalocean.com/community/tutorials/haproxy-network-error-cannot-bind-socket

frontend apiserver
    bind *:45000
    mode tcp
    option tcplog
    default_backend apiserverbackend


resolvers mydns
    nameserver dns1 161.97.189.51:53
    nameserver dns2 161.97.189.52:53
    parse-resolv-conf
    resolve_retries       3
    timeout resolve       1s
    timeout retry         1s
    hold other           30s
    hold refused         30s
    hold nx              30s
    hold timeout         30s
    hold valid           10s
    hold obsolete        30s


#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserverbackend
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk

    balance     roundrobin
        #server ${HOST1_ID} ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check

        server k8s-eu-1-control-plane-node-1:6443 resolvers mydns resolve-prefer ipv4

But it returns the error unknown keyword 'mydns':

root@k8s-eu-1-control-plane-node-1:~# sudo haproxy -c -f /etc/haproxy/haproxy.cfg 
[NOTICE]   (39412) : haproxy version is 2.6.15-1ppa1~jammy
[NOTICE]   (39412) : path to executable is /usr/sbin/haproxy
[ALERT]    (39412) : config : [/etc/haproxy/haproxy.cfg:92] : 'server apiserverbackend/k8s-eu-1-control-plane-node-1:6443' : unknown keyword 'mydns'.
[ALERT]    (39412) : config : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT]    (39412) : config : Fatal errors found in configuration.
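
The alert points at the server line itself: HAProxy expects server <name> <address>[:port] [options], so with the address missing it reads resolvers as the address and then trips over mydns as an unknown keyword. A hedged correction of that one line, keeping the mydns resolvers section and using an arbitrary label cp1 for the backend server:

    # server <name> <address>[:port] [options]
    server cp1 k8s-eu-1-control-plane-node-1:6443 check resolvers mydns resolve-prefer ipv4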
kubernetes
  • 1 answer
  • 107 Views
Raphael10
Asked: 2023-11-09 03:45:21 +0800 CST

Ubuntu 22.04 Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: connection refused

  • 5

On Ubuntu 22.04

root@k8s-eu-1-master:~# sudo systemctl daemon-reload
root@k8s-eu-1-master:~# sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
root@k8s-eu-1-master:~# 
root@k8s-eu-1-master:~# sudo systemctl restart docker
root@k8s-eu-1-master:~# 
root@k8s-eu-1-master:~# sudo docker pull cassandra:latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: read udp 127.0.0.1:48086->127.0.0.53:53: read: connection refused

root@k8s-eu-1-master:~# sudo systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2023-11-08 20:22:55 CET; 11min ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 272072 (dockerd)
      Tasks: 15
     Memory: 37.7M
        CPU: 638ms
     CGroup: /system.slice/docker.service
             └─272072 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Nov 08 20:22:55 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:22:55.320001583+01:00" level=info msg="Daemon has completed initialization"
Nov 08 20:22:55 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:22:55.353839818+01:00" level=info msg="API listen on /run/docker.sock"
Nov 08 20:22:55 k8s-eu-1-master systemd[1]: Started Docker Application Container Engine.
Nov 08 20:23:07 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:23:07.456049976+01:00" level=warning msg="Error getting v2 registry: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: read udp 127.0.0.1:48086->127.0.0.53:53: read: connection refused"
Nov 08 20:23:07 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:23:07.456096778+01:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: read udp 127.0.0.1:48086->127.0.0.53:53: read: connection refused"
Nov 08 20:23:07 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:23:07.458039838+01:00" level=error msg="Handler for POST /v1.43/images/create returned error: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: read udp 127.0.0.1:48086->127.0.0.53:53: read: connection refused"
Nov 08 20:32:06 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:32:06.231824815+01:00" level=warning msg="Error getting v2 registry: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: read udp 127.0.0.1:43104->127.0.0.53:53: read: connection refused"
Nov 08 20:32:06 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:32:06.231929019+01:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: read udp 127.0.0.1:43104->127.0.0.53:53: read: connection refused"
Nov 08 20:32:06 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:32:06.233897769+01:00" level=error msg="Handler for POST /v1.43/images/create returned error: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 127.0.0.53:53: read udp 127.0.0.1:43104->127.0.0.53:53: read: connection refused"
Nov 08 20:32:10 k8s-eu-1-master dockerd[272072]: time="2023-11-08T20:32:10.259620826+01:00" level=info msg="ignoring event" container=544ab9c3758d8293d505aead2429bd2a6fff065672bc83c171d4834393814dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

But the docker hello-world example works fine:

root@k8s-eu-1-master:~# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

This is the output of docker info:

root@k8s-eu-1-master:~# docker info
    Client: Docker Engine - Community
     Version:    24.0.7
     Context:    default
     Debug Mode: false
     Plugins:
      buildx: Docker Buildx (Docker Inc.)
        Version:  v0.11.2
        Path:     /usr/libexec/docker/cli-plugins/docker-buildx
      compose: Docker Compose (Docker Inc.)
        Version:  v2.21.0
        Path:     /usr/libexec/docker/cli-plugins/docker-compose
    
    Server:
     Containers: 4
      Running: 0
      Paused: 0
      Stopped: 4
     Images: 1
     Server Version: 24.0.7
     Storage Driver: overlay2
      Backing Filesystem: extfs
      Supports d_type: true
      Using metacopy: false
      Native Overlay Diff: true
      userxattr: false
     Logging Driver: json-file
     Cgroup Driver: systemd
     Cgroup Version: 2
     Plugins:
      Volume: local
      Network: bridge host ipvlan macvlan null overlay
      Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
     Swarm: inactive
     Runtimes: io.containerd.runc.v2 runc
     Default Runtime: runc
     Init Binary: docker-init
     containerd version: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
     runc version: v1.1.9-0-gccaecfc
     init version: de40ad0
     Security Options:
      apparmor
      seccomp
       Profile: builtin
      cgroupns
     Kernel Version: 5.15.0-88-generic
     Operating System: Ubuntu 22.04.3 LTS
     OSType: linux
     Architecture: x86_64
     CPUs: 10
     Total Memory: 58.86GiB
     Name: k8s-eu-1-master
     ID: ca48bc0b-922b-47b6-bae4-85d5d911436c
     Docker Root Dir: /var/lib/docker
     Debug Mode: false
     Experimental: false
     Insecure Registries:
      127.0.0.0/8
     Live Restore Enabled: false

How do I make this work?
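
The "connection refused" on 127.0.0.53:53 means dockerd is asking the local systemd-resolved stub resolver and nothing is answering there, so the first checks belong on the host's resolver rather than on Docker itself; a diagnostic sketch:

# Is the systemd-resolved stub listener running and answering?
systemctl status systemd-resolved
resolvectl status

# Can the host resolve the registry at all?
getent hosts registry-1.docker.io

# If systemd-resolved was stopped or wedged, restarting it (and then Docker) is often enough
sudo systemctl restart systemd-resolved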

docker
  • 2 answers
  • 568 Views
Raphael10
Asked: 2023-11-08 19:46:04 +0800 CST

StatefulSet Pods: problem with multiple persistent volume claims

  • 5

Since I have 5 shared NFS folders:

   root@k8s-eu-1-master:~# df -h | grep /srv/
   aa.aaa.aaa.aaa:/srv/shared-k8s-eu-1-worker-1 391G 6.1G 365G 2% /mnt/data
   bb.bbb.bbb.bbb:/srv/shared-k8s-eu-1-worker-2 391G 6.1G 365G 2% /mnt/data
   cc.ccc.ccc.cc:/srv/shared-k8s-eu-1-worker-3 391G 6.1G 365G 2% /mnt/data
   dd.ddd.ddd.dd:/srv/shared-k8s-eu-1-worker-4 391G 6.1G 365G 2% /mnt/data
   ee.eee.eee.eee:/srv/shared-k8s-eu-1-worker-5 391G 6.1G 365G 2% /mnt/data

I added the second volumeMount with its volumeClaimTemplate to cassandra-statefulset.yaml:

  # These volume mounts are persistent. They are like inline claims,
  # but not exactly because the names need to match exactly one of
  # the stateful pod volumes.
  volumeMounts:
  - name: k8s-eu-1-worker-1
    mountPath: /srv/shared-k8s-eu-1-worker-1
  - name: k8s-eu-1-worker-2
    mountPath: /srv/shared-k8s-eu-1-worker-2

  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: k8s-eu-1-worker-1
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-1
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: k8s-eu-1-worker-2
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-2
      resources:
        requests:
          storage: 1Gi

  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: k8s-eu-1-worker-1
  provisioner: k8s-sigs.io/k8s-eu-1-worker-1
  parameters:
    type: pd-ssd
  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: k8s-eu-1-worker-2
  provisioner: k8s-sigs.io/k8s-eu-1-worker-2
  parameters:
    type: pd-ssd

It seems to work fine at first:

   root@k8s-eu-1-master:~# kubectl apply -f ./cassandraStatefulApp/cassandra-statefulset.yaml 
   statefulset.apps/cassandra created

But the statefulset remains in a "not ready" state:

   root@k8s-eu-1-master:~# kubectl get sts
   NAME    READY AGE
   cassandra 0/3  17m

root@k8s-eu-1-master:~# kubectl describe sts cassandra
  Name:       cassandra
  Namespace:     default
  CreationTimestamp: Wed, 08 Nov 2023 12:02:10 +0100
  Selector:     app=cassandra
  Labels:      app=cassandra
  Annotations:    <none>
  Replicas:     3 desired | 1 total
  Update Strategy:  RollingUpdate
   Partition:    0
  Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
  Pod Template:
   Labels: app=cassandra
   Containers:
    cassandra:
    Image:   gcr.io/google-samples/cassandra:v13
    Ports:   7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Limits:
     cpu:  500m
     memory: 1Gi
    Requests:
     cpu:   500m
     memory: 1Gi
    Readiness: exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1  
#failure=3
    Environment:
     MAX_HEAP_SIZE:     512M
     HEAP_NEWSIZE:      100M
     CASSANDRA_SEEDS:    cassandra-0.cassandra.default.svc.cluster.local
     CASSANDRA_CLUSTER_NAME: K8Demo
     CASSANDRA_DC:      DC1-K8Demo
     CASSANDRA_RACK:     Rack1-K8Demo
     POD_IP:         (v1:status.podIP)
    Mounts:
     /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
     /srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2 (rw)
   Volumes: <none>
  Volume Claims:
   Name:     k8s-eu-1-worker-1
   StorageClass: k8s-eu-1-worker-1
   Labels:    <none>
   Annotations: <none>
   Capacity:   1Gi
   Access Modes: [ReadWriteOnce]
   Name:     k8s-eu-1-worker-2
   StorageClass: k8s-eu-1-worker-2
   Labels:    <none>
   Annotations: <none>
   Capacity:   1Gi
   Access Modes: [ReadWriteOnce]
  Events:
   Type  Reason      Age From          Message
   ----  ------      ---- ----          -------
   Normal SuccessfulCreate 18m statefulset-controller create Claim k8s-eu-1-worker-1-cassandra-0   
    Pod cassandra-0 in StatefulSet cassandra success
   Normal SuccessfulCreate 18m statefulset-controller create Claim k8s-eu-1-worker-2-cassandra-0 
    Pod cassandra-0 in StatefulSet cassandra success
   Normal SuccessfulCreate 18m statefulset-controller create Pod cassandra-0 in StatefulSet 
 cassandra successful

The corresponding pod remains in "Pending" status:

   root@k8s-eu-1-master:~# kubectl get pods
   NAME                               READY STATUS  RESTARTS AGE
   cassandra-0                           0/1  Pending 0     19m
   k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff2qx7k 1/1  Running 0     19h

  root@k8s-eu-1-master:~# kubetl describe pod cassandra-0
  kubetl: command not found
  root@k8s-eu-1-master:~# kubectl describe pod cassandra-0
  Name:      cassandra-0
  Namespace:    default
  Priority:    0
  Service Account: default
  Node:      <none>
  Labels:     app=cassandra
           apps.kubernetes.io/pod-index=0
           controller-revision-hash=cassandra-79d64cd8b
           statefulset.kubernetes.io/pod-name=cassandra-0
  Annotations:   <none>
  Status:     Pending
  IP:       
  IPs:       <none>
  Controlled By:  StatefulSet/cassandra
  Containers:
   cassandra:
    Image:   gcr.io/google-samples/cassandra:v13
    Ports:   7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Limits:
     cpu:  500m
     memory: 1Gi
    Requests:
     cpu:   500m
     memory: 1Gi
    Readiness: exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1  
#failure=3
    Environment:
     MAX_HEAP_SIZE:     512M
     HEAP_NEWSIZE:      100M
     CASSANDRA_SEEDS:    cassandra-0.cassandra.default.svc.cluster.local
     CASSANDRA_CLUSTER_NAME: K8Demo
     CASSANDRA_DC:      DC1-K8Demo
     CASSANDRA_RACK:     Rack1-K8Demo
     POD_IP:         (v1:status.podIP)
    Mounts:
     /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
     /srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2 (rw)
     /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxx58 (ro)
  Conditions:
   Type     Status
   PodScheduled False 
  Volumes:
   k8s-eu-1-worker-1:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: k8s-eu-1-worker-1-cassandra-0
    ReadOnly: false
   k8s-eu-1-worker-2:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: k8s-eu-1-worker-2-cassandra-0
    ReadOnly: false
   kube-api-access-wxx58:
    Type:          Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName:     kube-root-ca.crt
    ConfigMapOptional:   <nil>
    DownwardAPI:      true
  QoS Class:         Guaranteed
  Node-Selectors:       <none>
  Tolerations:        node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Events:
   Type  Reason      Age        From       Message
   ----  ------      ----       ----       -------
   Warning FailedScheduling 20m        default-scheduler 0/6 nodes are available: pod has unbound 
immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful  
for scheduling..
   Warning FailedScheduling 10m (x3 over 20m) default-scheduler 0/6 nodes are available: pod has 
unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not 
helpful for scheduling..

With only one of the two persistent volume claims in "Bound" status and the other still "Pending":

  root@k8s-eu-1-master:~# kubectl get pvc
  NAME                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
  k8s-eu-1-worker-1-cassandra-0   Bound     pvc-4f1d877b-8e01-4b76-b4e1-25bc226fd1a5   1Gi        RWO            k8s-eu-1-worker-1   21m
  k8s-eu-1-worker-2-cassandra-0   Pending                                                                        k8s-eu-1-worker-2   21m
What is wrong with my cassandra-statefulset.yaml configuration above?
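
From the events above, the only thing blocking scheduling is the unbound claim: the worker-1 claim binds because its nfs-subdir-external-provisioner pod is running, while nothing in the pod listing appears to be serving the k8s-eu-1-worker-2 StorageClass (whose provisioner field must match the provisioner name that deployment was installed with). A diagnostic sketch to confirm that reading:

# Which StorageClasses exist, and which provisioner name each one references
kubectl get storageclass

# Is there a provisioner actually running for k8s-eu-1-worker-2?
kubectl get pods --all-namespaces | grep provisioner

# The claim's events usually state why it is still Pending
kubectl describe pvc k8s-eu-1-worker-2-cassandra-0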

kubernetes
  • 1 answer
  • 25 Views
Raphael10
Asked: 2023-11-08 02:39:30 +0800 CST

Pod in Running state, but in the pod's events: Warning Unhealthy kubelet Readiness probe failed: command "/bin/bash -c /ready-probe.sh" timed out

  • 5

What does it mean when a pod is in Running state but has this event:

Warning Unhealthy kubelet Readiness probe failed: command "/bin/bash -c /ready-probe.sh" timed out? :

root@k8s-eu-1-master:~# kubectl describe pod cassandra-0
Name:             cassandra-0
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-eu-1-worker-1/xx.xxx.xxx.xxx
Start Time:       Tue, 07 Nov 2023 19:18:49 +0100
Labels:           app=cassandra
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=cassandra-58c99f489d
                  statefulset.kubernetes.io/pod-name=cassandra-0
Annotations:      cni.projectcalico.org/containerID: ee11d6b9b5dfade09500ccf53d2d1e4e04aaf479c4502d76f6ce0044c6683ac4
                  cni.projectcalico.org/podIP: 192.168.200.12/32
                  cni.projectcalico.org/podIPs: 192.168.200.12/32
Status:           Running
IP:               192.168.200.12
IPs:
  IP:           192.168.200.12
Controlled By:  StatefulSet/cassandra
Containers:
  cassandra:
    Container ID:   containerd://1386bc65f0f9c11eb9351435578c37efb7081fbbf0acd7a9b2ab6d3507576e0f
    Image:          gcr.io/google-samples/cassandra:v13
    Image ID:       gcr.io/google-samples/cassandra@sha256:7a3d20afa0a46ed073a5c587b4f37e21fa860e83c60b9c42fec1e1e739d64007
    Ports:          7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 07 Nov 2023 19:18:51 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:      500m
      memory:   1Gi
    Readiness:  exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      MAX_HEAP_SIZE:           512M
      HEAP_NEWSIZE:            100M
      CASSANDRA_SEEDS:         cassandra-0.cassandra.default.svc.cluster.local
      CASSANDRA_CLUSTER_NAME:  K8Demo
      CASSANDRA_DC:            DC1-K8Demo
      CASSANDRA_RACK:          Rack1-K8Demo
      POD_IP:                   (v1:status.podIP)
    Mounts:
      /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nzb6p (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  k8s-eu-1-worker-1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  k8s-eu-1-worker-1-cassandra-0
    ReadOnly:   false
  kube-api-access-nzb6p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  7m28s  default-scheduler  Successfully assigned default/cassandra-0 to k8s-eu-1-worker-1
  Normal   Pulling    7m28s  kubelet            Pulling image "gcr.io/google-samples/cassandra:v13"
  Normal   Pulled     7m28s  kubelet            Successfully pulled image "gcr.io/google-samples/cassandra:v13" in 383ms (383ms including waiting)
  Normal   Created    7m28s  kubelet            Created container cassandra
  Normal   Started    7m27s  kubelet            Started container cassandra
  Warning  Unhealthy  7m     kubelet            Readiness probe failed: command "/bin/bash -c /ready-probe.sh" timed out // <-------------------
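
Running only describes the container process; the warning records that one execution of /ready-probe.sh exceeded the probe's 5-second timeout, and only failureThreshold consecutive failures take the pod out of the Service's endpoints. If the script is merely slow while Cassandra starts up, one hedged tweak is to give the probe more time in the container spec, for example:

readinessProbe:
  exec:
    command: ["/bin/bash", "-c", "/ready-probe.sh"]
  initialDelaySeconds: 15
  timeoutSeconds: 10      # was 5s; ready-probe.sh can be slow while Cassandra is still joining
  periodSeconds: 10
  failureThreshold: 3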
kubernetes
  • 1 answer
  • 20 Views
