Dolphin's questions

Dolphin
Asked: 2024-07-12 21:58:11 +0800 CST

Why is the traefik pod's DNS address different from other pods' DNS in Kubernetes

  • 5

When I access the same domain in the same Kubernetes cluster, the DNS of the traefik v2.10.1 pod is different from the other pods'. This is the DNS lookup from the traefik pod:

/ $ nslookup kubernetes.default
Server:     100.100.2.136
Address:    100.100.2.136:53

** server can't find kubernetes.default: NXDOMAIN

** server can't find kubernetes.default: NXDOMAIN

and this is what the same lookup looks like from another pod:

root@y-websocket-service-9654986bc-cf65m:/home/node/app# nslookup kubernetes.default
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

this is the traefik Deployment definition on Kubernetes v1.29.x:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: default
status:
  observedGeneration: 6
  replicas: 1
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2024-07-06T08:40:50Z'
      lastTransitionTime: '2024-07-06T08:40:50Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2024-07-09T14:04:24Z'
      lastTransitionTime: '2024-07-08T12:57:09Z'
      reason: NewReplicaSetAvailable
      message: ReplicaSet "traefik-5f6fd6d8f5" has successfully progressed.
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: traefik
      app.kubernetes.io/name: traefik
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: traefik
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: traefik
        helm.sh/chart: traefik-10.1.1
      annotations:
        kubectl.kubernetes.io/restartedAt: '2024-07-09T14:04:02Z'
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: traefik-config
            defaultMode: 420
      containers:
        - name: traefik
          image: registry.cn-qingdao.aliyuncs.com/reddwarf-public/traefik:v2.10.1
          args:
            - '--global.checknewversion'
            - '--global.sendanonymoususage'
            - '--entryPoints.metrics.address=:9300/tcp'
            - '--entryPoints.traefik.address=:9000/tcp'
            - '--entryPoints.web.address=:8000/tcp'
            - '--entryPoints.websecure.address=:8443/tcp'
            - '--api.dashboard=true'
            - '--ping=true'
            - '--accesslog=true'
            - '--tracing=true'
            - '--log.level=DEBUG'
            - '--log.filePath=/opt/traefik.log'
            - '--metrics.prometheus=true'
            - '--metrics.prometheus.entrypoint=metrics'
            - '--providers.kubernetescrd'
            - '--providers.kubernetesingress'
          ports:
            - name: metrics
              hostPort: 9300
              containerPort: 9300
              protocol: TCP
            - name: traefik
              hostPort: 9000
              containerPort: 9000
              protocol: TCP
            - name: web
              hostPort: 8000
              containerPort: 8000
              protocol: TCP
            - name: websecure
              hostPort: 8443
              containerPort: 8443
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: data
              mountPath: /data
            - name: tmp
              mountPath: /tmp
            - name: config-volume
              mountPath: /etc/traefik
          livenessProbe:
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 2
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 2
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - ALL
            runAsUser: 65532
            runAsGroup: 65532
            runAsNonRoot: true
            readOnlyRootFilesystem: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      serviceAccountName: traefik
      serviceAccount: traefik
      hostNetwork: true
      securityContext:
        fsGroup: 65532
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Am I missing something?
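One detail in the spec above likely explains the difference: it combines hostNetwork: true with dnsPolicy: ClusterFirst, and for a host-network pod that policy makes the kubelet hand the pod the node's /etc/resolv.conf instead of the cluster DNS service. A minimal sketch of the commonly used alternative, assuming cluster DNS is actually wanted inside the traefik pod:

# hypothetical patch to the traefik Deployment pod spec; with
# ClusterFirstWithHostNet the cluster DNS config is injected even
# though the pod shares the node's network namespace
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet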

kubernetes
  • 1 answer
  • 24 Views
Dolphin
Asked: 2023-11-14 14:15:19 +0800 CST

How to change the kube API server address of the cilium DaemonSet init container

  • 5

I am using cilium 1.14.3 as the CNI component on Kubernetes v1.28.3. This is how I install cilium:

helm install cilium cilium/cilium --version 1.14.3 \
   --namespace kube-system \
   --set global.nodeinit.enabled=true \
   --set global.kubeProxyReplacement=partial \
   --set global.hostServices.enabled=false \
   --set global.externalIPs.enabled=true \
   --set global.nodePort.enabled=true \
   --set global.hostPort.enabled=true \
   --set global.pullPolicy=IfNotPresent \
   --set config.ipam=kubernetes \
   --set global.hubble.enabled=true \
   --set global.hubble.relay.enabled=true \
   --set global.hubble.ui.enabled=true \
   --set global.hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"

now I want to change the Kubernetes API server address. I edited the cilium-config ConfigMap and added this:

  k8s-service-host: '172.29.217.209'
  k8s-service-port: '6443'
  k8s-api-server: 'https://172.29.217.209:6443'

this configuration works for the cilium operator, but I found that the cilium DaemonSet still did not use the new API server address. So I added the configuration to the cilium DaemonSet like this:

initContainers:
  - name: config
    image: >-
      quay.io/cilium/cilium:v1.14.3@sha256:e5ca22526e01469f8d10c14e2339a82a13ad70d9a359b879024715540eef4ace
    command:
      - cilium
      - build-config
    env:
      - name: K8S_API_SERVER
        value: 'https://172.29.217.209:6443'
      - name: K8S_NODE_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: spec.nodeName
      - name: CILIUM_K8S_NAMESPACE
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
    resources: {}
    volumeMounts:
      - name: tmp
        mountPath: /tmp
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    imagePullPolicy: IfNotPresent

it seems the init container did not read the API server address. Am I missing something? What should I do to change the kube API server address for the cilium DaemonSet init container? I found this configuration in https://docs.cilium.io/en/v1.12/gettingstarted/kubeproxy-free/#kubeproxy-free and I think it should work for 1.14.3.

this is the init container's error log, which shows why I need to change the API server address:

level=info msg=Invoked duration="810.146µs" function="cmd.glob..func36 (build-config.go:32)" subsys=hive
level=info msg=Starting subsys=hive
level=info msg="Establishing connection to apiserver" host="https://10.96.0.1:443" subsys=k8s-client
level=info msg="Establishing connection to apiserver" host="https://10.96.0.1:443" subsys=k8s-client
level=error msg="Unable to contact k8s api-server" error="Get \"https://10.96.0.1:443/api/v1/namespaces/kube-system\": dial tcp 10.96.0.1:443: i/o timeout" ipAddr="https://10.96.0.1:443" subsys=k8s-client
level=error msg="Start hook failed" error="Get \"https://10.96.0.1:443/api/v1/namespaces/kube-system\": dial tcp 10.96.0.1:443: i/o timeout" function="client.(*compositeClientset).onStart" subsys=hive
level=info msg=Stopping subsys=hive
Error: failed to start: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system": dial tcp 10.96.0.1:443: i/o timeout
Usage:
  cilium build-config --node-name $K8S_NODE_NAME [flags]

Flags:
      --allow-config-keys strings        List of configuration keys that are allowed to be overridden (e.g. set from not the first source. Takes precedence over deny-config-keys
      --deny-config-keys strings         List of configuration keys that are not allowed to be overridden (e.g. set from not the first source. If allow-config-keys is set, this field is ignored
      --dest string                      Destination directory to write the fully-resolved configuration. (default "/tmp/cilium/config-map")
      --enable-k8s                       Enable the k8s clientset (default true)
      --enable-k8s-api-discovery         Enable discovery of Kubernetes API groups and resources with the discovery API
  -h, --help                             help for build-config
      --k8s-api-server string            Kubernetes API server URL
      --k8s-client-burst int             Burst value allowed for the K8s client
      --k8s-client-qps float32           Queries per second limit for the K8s client
      --k8s-heartbeat-timeout duration   Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s)
      --k8s-kubeconfig-path string       Absolute path of the kubernetes kubeconfig file
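The usage output above shows that build-config takes the API server as the --k8s-api-server flag; a K8S_API_SERVER environment variable is not among the documented knobs, which would explain why the env entry is ignored. Since the chart is managed by Helm, the kubeproxy-free guide linked above sets the address at render time instead of hand-editing the DaemonSet; a sketch, assuming the same release name and installed chart version:

helm upgrade cilium cilium/cilium --version 1.14.3 \
   --namespace kube-system \
   --reuse-values \
   --set k8sServiceHost=172.29.217.209 \
   --set k8sServicePort=6443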
kubernetes
  • 1 answer
  • 55 Views
Dolphin
Asked: 2023-11-11 22:49:03 +0800 CST

how to change the download address of the default Kubernetes pause container

  • 5

Due to network problems, I want to change the default Google pause container from the official address to a mirror. I am trying to change the default pause container address in Kubernetes v1.28.3 like this:

root@k8sslave01:/var/lib/kubelet# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9"

when I restart the kubelet service, the address does not seem to take effect. What should I do to change the default Google pause container address? I also tried adding --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6, but that still did not solve the problem. The kubelet log shows the error:

root@k8sslave01:/etc/containerd# systemctl status kubelet -l --no-pager
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Sun 2023-11-12 00:48:39 CST; 1min 21s ago
       Docs: https://kubernetes.io/docs/
   Main PID: 2436 (kubelet)
      Tasks: 10 (limit: 2025)
     Memory: 35.7M
        CPU: 1.871s
     CGroup: /system.slice/kubelet.service
             └─2436 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6

Nov 12 00:49:55 k8sslave01 kubelet[2436]: E1112 00:49:55.679287    2436 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 64.233.188.82:443: i/o timeout" pod="reddwarf-monitor/prometheus-prometheus-node-exporter-j78z6"
Nov 12 00:49:55 k8sslave01 kubelet[2436]: E1112 00:49:55.679310    2436 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 64.233.188.82:443: i/o timeout" pod="reddwarf-monitor/prometheus-prometheus-node-exporter-j78z6"
Nov 12 00:49:55 k8sslave01 kubelet[2436]: E1112 00:49:55.679358    2436 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"prometheus-prometheus-node-exporter-j78z6_reddwarf-monitor(786d8b9f-483f-4868-a7e9-42c43997a204)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"prometheus-prometheus-node-exporter-j78z6_reddwarf-monitor(786d8b9f-483f-4868-a7e9-42c43997a204)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\\\": dial tcp 64.233.188.82:443: i/o timeout\"" pod="reddwarf-monitor/prometheus-prometheus-node-exporter-j78z6" podUID="786d8b9f-483f-4868-a7e9-42c43997a204"
Nov 12 00:49:55 k8sslave01 kubelet[2436]: E1112 00:49:55.747517    2436 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 64.233.188.82:443: i/o timeout"
Nov 12 00:49:55 k8sslave01 kubelet[2436]: E1112 00:49:55.747582    2436 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 64.233.188.82:443: i/o timeout" pod="kube-system/kube-proxy-cvrtf"
Nov 12 00:49:55 k8sslave01 kubelet[2436]: E1112 00:49:55.747610    2436 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to resolve reference \"registry.k8s.io/pause:3.6\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\": dial tcp 64.233.188.82:443: i/o timeout" pod="kube-system/kube-proxy-cvrtf"
Nov 12 00:49:55 k8sslave01 kubelet[2436]: E1112 00:49:55.747691    2436 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-cvrtf_kube-system(175f3730-2bf2-4b56-8bbb-992b603edc93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-cvrtf_kube-system(175f3730-2bf2-4b56-8bbb-992b603edc93)\\\": rpc error: code = DeadlineExceeded desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.6\\\": failed to do request: Head \\\"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.6\\\": dial tcp 64.233.188.82:443: i/o timeout\"" pod="kube-system/kube-proxy-cvrtf" podUID="175f3730-2bf2-4b56-8bbb-992b603edc93"
Nov 12 00:49:57 k8sslave01 kubelet[2436]: E1112 00:49:57.053066    2436 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgdpk" podUID="cf30fa63-9367-44fc-92da-9abaaec31115"
Nov 12 00:49:59 k8sslave01 kubelet[2436]: E1112 00:49:59.053229    2436 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgdpk" podUID="cf30fa63-9367-44fc-92da-9abaaec31115"
Nov 12 00:50:01 k8sslave01 kubelet[2436]: E1112 00:50:01.052851    2436 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mgdpk" podUID="cf30fa63-9367-44fc-92da-9abaaec31115"
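The log shows the sandbox image being resolved by containerd (registry.k8s.io/pause:3.6), not by the kubelet: with a CRI runtime, --pod-infra-container-image only tells the kubelet not to garbage-collect that image, while the image actually pulled comes from containerd's own configuration. A sketch of the containerd side, assuming the stock config layout of containerd 1.6/1.7:

# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

followed by systemctl restart containerd on the node.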
kubernetes
  • 1 answer
  • 33 Views
Dolphin
Asked: 2020-07-20 04:08:08 +0800 CST

how to mount an NFS filesystem as a non-root user in Kubernetes pods

  • 0

I am mounting an NFS filesystem path into pods of a Kubernetes cluster (v1.18) on CentOS 8 (the NFS server runs on Fedora 32). This is my PV YAML definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-jenkins-pv
  namespace: infrastrcuture
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - vers=4.0
    - noresvport
  nfs:
    server: "192.168.31.2"
    path: "/home/dolphin/data/k8s/monitoring/infrastructure/jenkins"
  persistentVolumeReclaimPolicy: Retain

and when I start the pod, it shows this error:

MountVolume.SetUp failed for volume "nfs-jenkins-pv" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/656dacd8-fcc9-44f1-a0c8-baa7eb5fa82e/volumes/kubernetes.io~nfs/nfs-jenkins-pv --scope -- mount -t nfs -o noresvport,vers=4.0 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /var/lib/kubelet/pods/656dacd8-fcc9-44f1-a0c8-baa7eb5fa82e/volumes/kubernetes.io~nfs/nfs-jenkins-pv Output: Running scope as unit: run-r5dc1ce59823746ffbbb18381cbec71cc.scope mount.nfs: Operation not permitted

I tried changing the permissions of the jenkins folder like this:

chmod 777 jenkins

but it still does not work. I can mount the NFS filesystem from my local machine on the command line as root, like this:

sudo mount -t nfs -o v3 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt

but in the Kubernetes cluster the mount always runs as root, and relying on root is not good practice and can cause security problems. I adjusted the /etc/exports file like this:

[dolphin@MiWiFi-R4CM-srv infrastructure]$ cat /etc/exports
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,no_root_squash)

what should I do so that the NFS filesystem can be mounted without depending on the root user?
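One commonly used export style avoids trusting client-side root entirely: all_squash maps every remote user to a fixed local account, so files end up owned by a known unprivileged uid/gid (the 1000:1000 below is an assumption; it should match the user the Jenkins pod runs as):

# /etc/exports - squash all remote users to one unprivileged account
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,sync,all_squash,anonuid=1000,anongid=1000)

On the pod side, setting fsGroup in the pod securityContext is the usual way to let a non-root container user write to the mounted volume; the mount operation itself is always performed by the kubelet, which runs as root.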

kubernetes
  • 1 answer
  • 2664 Views
Dolphin
Asked: 2020-07-20 02:56:25 +0800 CST

mount.nfs: failed to apply fstab options when mounting an NFS filesystem on Fedora 32

  • 5

This is the status of my NFS service on Fedora 32:

[dolphin@MiWiFi-R4CM-srv infrastructure]$ sudo systemctl status nfs-server.service
● nfs-server.service - NFS server and services
     Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
    Drop-In: /run/systemd/generator/nfs-server.service.d
             └─order-with-mounts.conf
     Active: active (exited) since Sun 2020-07-19 04:16:50 EDT; 2h 34min ago
    Process: 599370 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Process: 599371 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)
    Process: 599381 ExecStart=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
   Main PID: 599381 (code=exited, status=0/SUCCESS)
        CPU: 37ms

Jul 19 04:16:50 MiWiFi-R4CM-srv systemd[1]: Starting NFS server and services...
Jul 19 04:16:50 MiWiFi-R4CM-srv systemd[1]: Finished NFS server and services.

and this is my export configuration in /etc/exports:

[dolphin@MiWiFi-R4CM-srv infrastructure]$ cat /etc/exports
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,all_squash)

when I try to test my NFS service using this command:

[dolphin@MiWiFi-R4CM-srv infrastructure]$ mount -t nfs -o v4 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt
mount.nfs: failed to apply fstab options

I have searched the internet but cannot find anyone with this situation. Where is this going wrong, and what should I do to fix it? showmount does see the export:

[dolphin@MiWiFi-R4CM-srv infrastructure]$ showmount -e 192.168.31.2
Export list for 192.168.31.2:
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *
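Two details of the failing command stand out: it is run without root, and v4 is not a version option mount.nfs documents (the protocol version is normally spelled vers=4 or nfsvers=4). A sketch of the same test with both corrected:

sudo mount -t nfs -o vers=4 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt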
linux nfs
  • 1 answer
  • 29134 Views
Dolphin
Asked: 2020-07-06 05:20:31 +0800 CST

failed to load the custom repository configuration /etc/yum.repos.d/kubernetes.repo on CentOS 8

  • 3

I am adding a custom repository configuration on CentOS 8, inside my KVM virtual machine, like this:

[root@localhost ~]# cat /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
   name=Kubernetes
   baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
   enabled=1
   gpgcheck=0
   repo_gpgcheck=0
   gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

but when I use this command to install the Kubernetes components, it shows me this:

[root@localhost ~]# yum -y install kubelet kubeadm kubectl
Warning: failed loading '/etc/yum.repos.d/kubernetes.repo', skipping.
Last metadata expiration check: 0:37:38 ago on Sun 05 Jul 2020 08:38:19 AM EDT.
No match for argument: kubelet
No match for argument: kubeadm
No match for argument: kubectl

am I missing something? What should I do to fix this problem?
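dnf on CentOS 8 is noticeably stricter about .repo syntax than yum on CentOS 7, and the leading whitespace in front of the name=/baseurl=/... lines is a known trigger for the "failed loading ... skipping" warning. A sketch of the same file with the indentation stripped and both gpgkey URLs on one line:

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

followed by yum clean all and retrying the install.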

centos yum kubernetes
  • 1 answer
  • 7433 Views
Dolphin
Asked: 2020-07-06 02:35:43 +0800 CST

what does the docker version number docker-ce-3:19.03.12-3.el7.x86_64 mean

  • 0

I get this docker version number: docker-ce-3:19.03.12-3.el7.x86_64. The ce means Community Edition, but what does the 3 mean? 19.03.12 is the release version, which I can find here. But what is the second 3? Does el7 mean support for Enterprise Linux 7? Why is the version number so complex?
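This is the standard RPM NEVRA layout (name-epoch:version-release.architecture) rather than anything Docker-specific; an annotated breakdown:

docker-ce-3:19.03.12-3.el7.x86_64
|         | |        | |   |
|         | |        | |   +-- x86_64: CPU architecture
|         | |        | +-- el7: built against Enterprise Linux 7
|         | |        +-- 3: release (third packaging of this upstream version)
|         | +-- 19.03.12: upstream Docker version
|         +-- 3: RPM epoch (forces upgrade ordering across versioning-scheme changes)
+-- docker-ce: package name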

docker
  • 1 answer
  • 99 Views
Dolphin
Asked: 2020-07-05 00:37:17 +0800 CST

KVM virtual machine cannot find the bridge network on Fedora 32

  • 0

I am setting up a bridge network on my Fedora 32 machine. This is my ifconfig output:

[root@MiWiFi-R4CM-srv network-scripts]# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.31.2  netmask 255.255.255.0  broadcast 192.168.31.255
        inet6 fe80::4b2:78ff:fe35:2c73  prefixlen 64  scopeid 0x20<link>
        ether 06:b2:78:35:2c:73  txqueuelen 1000  (Ethernet)
        RX packets 104  bytes 13776 (13.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 412  bytes 76686 (74.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

bridge0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.31.178  netmask 255.255.255.0  broadcast 192.168.31.255
        inet6 fe80::b18b:62f0:b07f:1be6  prefixlen 64  scopeid 0x20<link>
        ether 06:0c:7d:51:ea:71  txqueuelen 1000  (Ethernet)
        RX packets 31618  bytes 2354837 (2.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13625  bytes 100610583 (95.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eno2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.31.72  netmask 255.255.255.0  broadcast 192.168.31.255
        inet6 fe80::7903:4d64:5ea0:3339  prefixlen 64  scopeid 0x20<link>
        ether 2c:f0:5d:2c:6e:d5  txqueuelen 1000  (Ethernet)
        RX packets 61756  bytes 26903282 (25.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87934  bytes 108103869 (103.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xa1200000-a1220000  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 5913  bytes 1277029 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5913  bytes 1277029 (1.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:39:c6:9f  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc54:ff:fe36:a8ef  prefixlen 64  scopeid 0x20<link>
        ether fe:54:00:36:a8:ef  txqueuelen 1000  (Ethernet)
        RX packets 28  bytes 3449 (3.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 103  bytes 16899 (16.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

this is my network connections configuration:

(screenshot: network connections configuration)

but when I configure the virtual machine's network bridge in the KVM virtual machine manager, I cannot find the br0 bridge (see the screenshot below). What should I do to make br0 available to the KVM virtual machine?

(screenshot: virt-manager network source selection)
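When a bridge does not show up in virt-manager's network list, the device can still be attached by name; a sketch of the guest NIC in libvirt domain XML (reachable via virsh edit <domain> or virt-manager's XML view):

<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>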

kvm-virtualization
  • 1 answer
  • 124 Views
Dolphin
Asked: 2020-05-03 06:11:59 +0800 CST

wireguard does not listen on the port after being started

  • 2

I start wireguard using this command:

wg-quick up wg0

this is the wireguard status:

(screenshot: wireguard status output)

and I use this command to check the listening port:

lsof -i:7456

why is wireguard not listening on the port? Was the wireguard setup successful?
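lsof -i only lists sockets that belong to user-space processes; WireGuard's UDP socket is created inside the kernel and has no owning process, so an empty lsof result does not by itself mean the tunnel is down. A sketch of checks that do reflect its state:

# confirm the interface, its listen port, and handshake activity
sudo wg show wg0

# then generate traffic from a peer (the address is a placeholder)
# and re-check the latest-handshake line in wg show
ping -c 1 <peer-tunnel-address>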

wireguard
  • 1 answer
  • 3606 Views
