AskOverflow.Dev


Questions tagged [helm] (server)

RogerFC
Asked: 2022-12-02 09:34:52 +0800 CST

How to edit the patch entry in a Kustomization file to do GitOps with Helm (and avoid patches piling up)

  • 5

I'm looking for a better way to update, through GitOps, the docker image defined in a HelmRelease, because my current approach generates noise.

After introducing Helm into the cluster I manage with GitOps, I ran into some difficulty declaring a newly built docker image for use in the cluster.

For a Deployment, I can use a simple Kustomization resource to replace the image element, for example:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace

resources:
- namespace.yaml
- my-deployment.yaml

images:
- name: my/image
  newName: my/image
  newTag: updated-tag

For each new release, I just modify the file with:

kustomize edit set image my/image=my/image:updated-tag

Now with Helm I can't use the same trick, because I need to update the spec.values.image tag in the HelmRelease, and Kustomize doesn't seem to have a shorthand for that. So the option is to create a patch:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace

resources:
- namespace.yaml
- my-helm-release.yaml

patches:
- patch: '[{"op": "replace", "path": "/spec/values/image", "value": "my/image:updated-tag"}]'
  target:
    kind: HelmRelease
    name: my-helm-release
    namespace: my-namespace

by using a command like:

kustomize edit add patch \
    --kind HelmRelease \
    --name my-helm-release \
    --namespace my-namespace --patch "[{\"op\": \"replace\", \"path\": \"/spec/values/image\", \"value\": \"my/image:updated-tag\"}]"

(Never mind the quoting, bear with me.)

The problem appears when this command is run more than once. While kustomize edit set image overwrites the previous value, in this latter case a new patch is appended to the list, now with more-updated-tag:

patches:
- patch: '[{"op": "replace", "path": "/spec/values/image", "value": "my/image:updated-tag"}]'
  target:
    kind: HelmRelease
    name: my-helm-release
    namespace: my-namespace
- patch: '[{"op": "replace", "path": "/spec/values/image", "value": "my/image:more-updated-tag"}]'
  target:
    kind: HelmRelease
    name: my-helm-release
    namespace: my-namespace

How can I avoid this duplication, which keeps adding more and more noise to my file?

Thanks!
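One workaround is to make the update step itself idempotent: rewrite the tag inside the already-existing patch entry instead of calling `kustomize edit add patch` again. A minimal sketch; the function name and the in-memory document are illustrative assumptions, not part of kustomize:

```python
import re

def set_helm_image_patch(kustomization_text: str, image: str, new_tag: str) -> str:
    """Rewrite the tag inside an existing /spec/values/image JSON patch,
    so repeated releases overwrite the value instead of stacking patches."""
    # Match `"value": "<image>:<anything>"` and swap only the tag part.
    pattern = re.compile(r'("value":\s*"' + re.escape(image) + r'):[^"]*"')
    return pattern.sub(lambda m: m.group(1) + ":" + new_tag + '"', kustomization_text)

doc = """patches:
- patch: '[{"op": "replace", "path": "/spec/values/image", "value": "my/image:updated-tag"}]'
  target:
    kind: HelmRelease
    name: my-helm-release
    namespace: my-namespace
"""
updated = set_helm_image_patch(doc, "my/image", "more-updated-tag")
print(updated.count("- patch:"))  # still 1: the entry was edited in place, not appended
```

If your kustomize version ships `kustomize edit remove patch` (it takes the same flags as `add patch`), running it before `add patch` achieves the same overwrite semantics without scripting.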

helm
  • 1 Answer
  • 40 Views
Felipe
Asked: 2022-04-20 06:37:30 +0800 CST

RabbitMQ Helm chart installation in a Kubernetes cluster fails to distribute the Erlang cookie to the nodes

  • 0

I'm trying to install a RabbitMQ cluster via the Bitnami Helm chart (https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq) in an EKS cluster, and when I run the helm install I get the following error in the first pod created:

rabbitmq 13:41:15.99
rabbitmq 13:41:15.99 Welcome to the Bitnami rabbitmq container
rabbitmq 13:41:15.99 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-rabbitmq
rabbitmq 13:41:15.99 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues
rabbitmq 13:41:15.99
rabbitmq 13:41:15.99 INFO  ==> ** Starting RabbitMQ setup **
rabbitmq 13:41:16.01 INFO  ==> Validating settings in RABBITMQ_* env vars..
rabbitmq 13:41:16.03 INFO  ==> Initializing RabbitMQ...
rabbitmq 13:41:16.03 DEBUG ==> Creating environment file...
rabbitmq 13:41:16.03 DEBUG ==> Creating enabled_plugins file...
rabbitmq 13:41:16.04 DEBUG ==> Creating Erlang cookie...
rabbitmq 13:41:16.04 DEBUG ==> Ensuring expected directories/files exist...
rabbitmq 13:41:16.05 INFO  ==> Starting RabbitMQ in background...
Waiting for erlang distribution on node '[email protected]' while OS process '51' is running
2022-04-19 13:41:19.198340+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2022-04-19 13:41:19.212884+00:00 [info] <0.222.0> Feature flags:   [ ] implicit_default_bindings
2022-04-19 13:41:19.212941+00:00 [info] <0.222.0> Feature flags:   [ ] maintenance_mode_status
2022-04-19 13:41:19.212965+00:00 [info] <0.222.0> Feature flags:   [ ] quorum_queue
2022-04-19 13:41:19.212985+00:00 [info] <0.222.0> Feature flags:   [ ] stream_queue
2022-04-19 13:41:19.213077+00:00 [info] <0.222.0> Feature flags:   [ ] user_limits
2022-04-19 13:41:19.213104+00:00 [info] <0.222.0> Feature flags:   [ ] virtual_host_metadata
2022-04-19 13:41:19.213124+00:00 [info] <0.222.0> Feature flags: feature flag states written to disk: yes
2022-04-19 13:41:19.637051+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2022-04-19 13:41:19.637148+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2022-04-19 13:41:19.656264+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2022-04-19 13:41:19.904087+00:00 [info] <0.222.0> ra: starting system quorum_queues
2022-04-19 13:41:19.904200+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0/quorum/rabbit@rabbitmq-0
2022-04-19 13:41:19.995094+00:00 [info] <0.263.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2022-04-19 13:41:20.013384+00:00 [noti] <0.268.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
2022-04-19 13:41:20.022921+00:00 [info] <0.222.0> ra: starting system coordination
2022-04-19 13:41:20.022987+00:00 [info] <0.222.0> starting Ra system: coordination in directory: /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0/coordination/rabbit@rabbitmq-0
2022-04-19 13:41:20.026371+00:00 [info] <0.276.0> ra: meta data store initialised for system coordination. 0 record(s) recovered
2022-04-19 13:41:20.026628+00:00 [noti] <0.281.0> WAL: ra_coordination_log_wal init, open tbls: ra_coordination_log_open_mem_tables, closed tbls: ra_coordination_log_closed_mem_tables
2022-04-19 13:41:20.032159+00:00 [info] <0.222.0>
2022-04-19 13:41:20.032159+00:00 [info] <0.222.0>  Starting RabbitMQ 3.9.8 on Erlang 24.1.2 [jit]
2022-04-19 13:41:20.032159+00:00 [info] <0.222.0>  Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2022-04-19 13:41:20.032159+00:00 [info] <0.222.0>  Licensed under the MPL 2.0. Website: https://rabbitmq.com

  ##  ##      RabbitMQ 3.9.8
  ##  ##
  ##########  Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
  ######  ##
  ##########  Licensed under the MPL 2.0. Website: https://rabbitmq.com

  Erlang:      24.1.2 [jit]
  TLS Library: OpenSSL - OpenSSL 1.1.1d  10 Sep 2019

  Doc guides:  https://rabbitmq.com/documentation.html
  Support:     https://rabbitmq.com/contact.html
  Tutorials:   https://rabbitmq.com/getstarted.html
  Monitoring:  https://rabbitmq.com/monitoring.html

  Logs: /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@rabbitmq-0_upgrade.log
        <stdout>

  Config file(s): /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf

  Starting broker...2022-04-19 13:41:20.033907+00:00 [info] <0.222.0>
2022-04-19 13:41:20.033907+00:00 [info] <0.222.0>  node           : rabbit@rabbitmq-0
2022-04-19 13:41:20.033907+00:00 [info] <0.222.0>  home dir       : /opt/bitnami/rabbitmq/.rabbitmq
2022-04-19 13:41:20.033907+00:00 [info] <0.222.0>  config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
2022-04-19 13:41:20.033907+00:00 [info] <0.222.0>  cookie hash    : d3Nfp8t690Ln1h811Tuxzw==
2022-04-19 13:41:20.033907+00:00 [info] <0.222.0>  log(s)         : /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@rabbitmq-0_upgrade.log
2022-04-19 13:41:20.033907+00:00 [info] <0.222.0>                 : <stdout>
2022-04-19 13:41:20.033907+00:00 [info] <0.222.0>  database dir   : /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0
2022-04-19 13:41:20.307590+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2022-04-19 13:41:20.307654+00:00 [info] <0.222.0> Feature flags:   [ ] drop_unroutable_metric
2022-04-19 13:41:20.307681+00:00 [info] <0.222.0> Feature flags:   [ ] empty_basic_get_metric
2022-04-19 13:41:20.307705+00:00 [info] <0.222.0> Feature flags:   [ ] implicit_default_bindings
2022-04-19 13:41:20.307792+00:00 [info] <0.222.0> Feature flags:   [ ] maintenance_mode_status
2022-04-19 13:41:20.307818+00:00 [info] <0.222.0> Feature flags:   [ ] quorum_queue
2022-04-19 13:41:20.307838+00:00 [info] <0.222.0> Feature flags:   [ ] stream_queue
2022-04-19 13:41:20.307908+00:00 [info] <0.222.0> Feature flags:   [ ] user_limits
2022-04-19 13:41:20.307947+00:00 [info] <0.222.0> Feature flags:   [ ] virtual_host_metadata
2022-04-19 13:41:20.307968+00:00 [info] <0.222.0> Feature flags: feature flag states written to disk: yes
Error: operation wait on node [email protected] timed out. Timeout value used: 5000
2022-04-19 13:41:23.299211+00:00 [info] <0.222.0> Running boot step pre_boot defined by app rabbit
2022-04-19 13:41:23.299295+00:00 [info] <0.222.0> Running boot step rabbit_global_counters defined by app rabbit
2022-04-19 13:41:23.299545+00:00 [info] <0.222.0> Running boot step rabbit_osiris_metrics defined by app rabbit
2022-04-19 13:41:23.299746+00:00 [info] <0.222.0> Running boot step rabbit_core_metrics defined by app rabbit
2022-04-19 13:41:23.300299+00:00 [info] <0.222.0> Running boot step rabbit_alarm defined by app rabbit
2022-04-19 13:41:23.304497+00:00 [info] <0.297.0> Memory high watermark set to 12695 MiB (13312088473 bytes) of 31738 MiB (33280221184 bytes) total
2022-04-19 13:41:23.308954+00:00 [info] <0.299.0> Enabling free disk space monitoring
2022-04-19 13:41:23.309007+00:00 [info] <0.299.0> Disk free limit set to 50MB
2022-04-19 13:41:23.312489+00:00 [info] <0.222.0> Running boot step code_server_cache defined by app rabbit
2022-04-19 13:41:23.312650+00:00 [info] <0.222.0> Running boot step file_handle_cache defined by app rabbit
2022-04-19 13:41:23.312958+00:00 [info] <0.302.0> Limiting to approx 65439 file handles (58893 sockets)
2022-04-19 13:41:23.313163+00:00 [info] <0.303.0> FHC read buffering: OFF
2022-04-19 13:41:23.313217+00:00 [info] <0.303.0> FHC write buffering: ON
2022-04-19 13:41:23.313829+00:00 [info] <0.222.0> Running boot step worker_pool defined by app rabbit
2022-04-19 13:41:23.313932+00:00 [info] <0.283.0> Will use 4 processes for default worker pool
2022-04-19 13:41:23.313982+00:00 [info] <0.283.0> Starting worker pool 'worker_pool' with 4 processes in it
2022-04-19 13:41:23.314583+00:00 [info] <0.222.0> Running boot step database defined by app rabbit
2022-04-19 13:41:23.314894+00:00 [info] <0.222.0> Node database directory at /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0 is empty. Assuming we need to join an existing cluster or initialise from scratch...
2022-04-19 13:41:23.314963+00:00 [info] <0.222.0> Configured peer discovery backend: rabbit_peer_discovery_k8s
2022-04-19 13:41:23.315110+00:00 [info] <0.222.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s
2022-04-19 13:41:23.316998+00:00 [noti] <0.44.0> Application mnesia exited with reason: stopped

BOOT FAILED
===========
Exception during startup:

2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0> BOOT FAILED
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0> ===========
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0> Exception during startup:
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0> error:{badmatch,{error,enoent}}
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>     rabbit_peer_discovery_k8s:make_request/0, line 121
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>     rabbit_peer_discovery_k8s:list_nodes/0, line 41
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>     rabbit_peer_discovery_k8s:lock/1, line 76
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>     rabbit_peer_discovery:lock/0, line 190
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>     rabbit_mnesia:init_with_lock/3, line 104
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>     rabbit_mnesia:init/0, line 76
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>     rabbit_boot_steps:-run_step/2-lc$^0/1-0-/2, line 41
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>     rabbit_boot_steps:run_step/2, line 46
2022-04-19 13:41:23.317269+00:00 [erro] <0.222.0>
error:{badmatch,{error,enoent}}

    rabbit_peer_discovery_k8s:make_request/0, line 121
    rabbit_peer_discovery_k8s:list_nodes/0, line 41
    rabbit_peer_discovery_k8s:lock/1, line 76
    rabbit_peer_discovery:lock/0, line 190
    rabbit_mnesia:init_with_lock/3, line 104
    rabbit_mnesia:init/0, line 76
    rabbit_boot_steps:-run_step/2-lc$^0/1-0-/2, line 41
    rabbit_boot_steps:run_step/2, line 46

2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>   crasher:
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     initial call: application_master:init/4
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     pid: <0.221.0>
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     registered_name: []
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     exception exit: {{badmatch,{error,enoent}},{rabbit,start,[normal,[]]}}
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>       in function  application_master:init/4 (application_master.erl, line 142)
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     ancestors: [<0.220.0>]
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     message_queue_len: 1
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     messages: [{'EXIT',<0.222.0>,normal}]
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     links: [<0.220.0>,<0.44.0>]
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     dictionary: []
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     trap_exit: true
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     status: running
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     heap_size: 2586
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     stack_size: 29
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>     reductions: 186
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>   neighbours:
2022-04-19 13:41:24.318598+00:00 [erro] <0.221.0>
2022-04-19 13:41:24.319087+00:00 [noti] <0.44.0> Application rabbit exited with reason: {{badmatch,{error,enoent}},{rabbit,start,[normal,[]]}}
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{badmatch,{error,enoent}},{rabbit,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{badmatch,{error,enoent}},{rabbit,start,[normal,[]]}}})

Crash dump is being written to: /opt/bitnami/rabbitmq/var/log/rabbitmq/erl_crash.dump...done
Waiting for erlang distribution on node '[email protected]' while OS process '51' is running
Error:
process_not_running
Waiting for erlang distribution on node '[email protected]' while OS process '51' is running
Error:
process_not_running

It seems the Erlang cookie is not being distributed correctly, but after checking some posts I haven't reached any conclusion.

If you have any information that might help, I'd be grateful if you shared it with me.

Edit 1: I went into the first and only pod of the three replicas that should be created, ran rabbitmq-diagnostics erlang_cookie_sources to find out where the Erlang cookie file is stored (/opt/bitnami/rabbitmq/.rabbitmq/.erlang.cookie), and checked whether it matches the one I set in the chart's values.yaml. It is exactly the same, so in the end I don't think there is a problem distributing the key, yet I still have the same issue. Looking at the logs again, I can see some processes are not running; I don't know whether the problem lies there.
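For what it's worth, the `error:{badmatch,{error,enoent}}` raised in `rabbit_peer_discovery_k8s:make_request/0` usually means the pod could not read a file it needs to call the Kubernetes API (the auto-mounted service-account token/CA), rather than anything about the Erlang cookie. A hedged values.yaml fragment to check against; the value names match the Bitnami chart's values.yaml around that time, so verify them for your chart version:

```yaml
# Peer discovery calls the Kubernetes API from inside the pod, so the
# service account, its RBAC role, and the auto-mounted token must exist:
serviceAccount:
  create: true
rbac:
  create: true
```

Also check that nothing in the pod spec sets `automountServiceAccountToken: false`, and that `/var/run/secrets/kubernetes.io/serviceaccount/` is populated inside the pod.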

kubernetes rabbitmq bitnami helm
  • 1 Answer
  • 770 Views
dominik-devops
Asked: 2021-11-17 04:25:23 +0800 CST

Access forbidden when downloading from a private registry via a helm chart, but not via a simple pod

  • 0

I'm trying to deploy the bitnami moodle chart with a custom image hosted on gitlab. When I use the registry in a pod, the image downloads fine. However, when it is used in the chart, it gives the following error and access is forbidden. Tested on minikube and on a private cluster.

*Failed to pull image "registry.gitlab.com/<repo>/01976966/container/external/moodle:3.11.4-debian-10-r0": rpc error: code = Unknown desc = Error response from daemon: Head "https://registry.gitlab.com/v2/<repo>/01976966/container/external/moodle/manifests/3.11.4-debian-10-r0": denied: access forbidden*

This setting is used in the parent chart's values.yaml:

image:
      registry: registry.gitlab.com
      repository: <repo>/01976966/container/external/moodle
      tag: 3.11.4-debian-10-r0
      pullPolicy: Always
      pullSecrets:
        - name: <secret-name>

The base chart in question: https://github.com/bitnami/charts/tree/master/bitnami/moodle/
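Two things commonly produce exactly this split (a plain pod pulls fine, the chart gets `denied`): in a parent chart, the subchart's values must be nested under the subchart's name, and Bitnami-style charts of that era expect `image.pullSecrets` as a list of plain secret names rather than `name:` maps. A sketch under those assumptions:

```yaml
moodle:                      # subchart key in the parent chart's values.yaml
  image:
    registry: registry.gitlab.com
    repository: <repo>/01976966/container/external/moodle
    tag: 3.11.4-debian-10-r0
    pullPolicy: Always
    pullSecrets:
      - <secret-name>        # plain names, not "- name: <secret-name>"
```

The secret itself must also exist in the release's namespace on every cluster you test.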

kubernetes helm
  • 1 Answer
  • 79 Views
Stefan
Asked: 2021-09-27 14:37:16 +0800 CST

Grafana deployed in kubernetes with a Let's Encrypt certificate in the ingress

  • 0

I want to deploy grafana in my AKS kubernetes cluster. For the deployment I use helm:

helm install grafana grafana/grafana --namespace=grafana --set "service.type=ClusterIP,persistence.enabled=true,replicaCount=1,persistence.size=10Gi,persistence.accessModes[0]=ReadWriteOnce,plugins=grafana-azure-monitor-datasource\,grafana-kubernetes-app,ingress.enabled=true,ingress.tls[0]=enabeld,ingress.tls[0].hosts[0]=mydomain.de,ingress.tls[0].secretName=tls-grafana-ingress,ingress.hosts[0]=mydomain.de,ingress.annotations.kubernetes.io/ingress.class=nginx,ingress.cert-manager.io/issuer=letsencrypt-prod" 

It can create grafana (when I remove "ingress.annotations.kubernetes.io/ingress.class=nginx,ingress.cert-manager.io/issuer=letsencrypt-prod"), but there is a problem with the tls certificate. The certificate never gets generated.

What do I need to change so that the certificate gets created as well?

Regards, Stefan
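For reference, a values-file sketch of the intended ingress settings: annotation keys contain dots, which `--set` treats as nesting unless every dot is escaped, and `ingress.tls[0]=enabeld` is not a valid entry, either of which could keep cert-manager from ever seeing the request. The layout below follows the Grafana chart's values.yaml and is an assumption to verify against your chart version; `helm install ... -f grafana-values.yaml` avoids the escaping entirely:

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-prod   # must be an annotation, not a top-level key
  hosts:
    - mydomain.de
  tls:
    - secretName: tls-grafana-ingress
      hosts:
        - mydomain.de
```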

nginx kubernetes grafana helm
  • 1 Answer
  • 312 Views
thxmike
Asked: 2021-09-21 05:43:43 +0800 CST

Kubernetes NGINX ingress controller using Helm on AKS fails

  • 0

While setting up the K8s ingress controller documented here,

I cannot get past the "Create an ingress controller" step. During the Helm command step, with the command in debug mode, I see one of the steps time out:

failed pre-install: timed out waiting for the condition

After looking at the K8s pod logs, I found that K8s cannot connect to the registry because of an auth error. The output below has been redacted for security reasons, but shows the error:

Failed to pull image "myregistry.azurecr.io/jettech/kube-webhook-certgen:v1.5.1@sha256:...90bd8068": [rpc error: code = NotFound desc = failed to pull and unpack image "....azurecr.io/jettech/kube-webhook-certgen@sha256:....9b9e90bd8068": failed to resolve reference "myregistry.azurecr.io/jettech/kube-webhook-certgen@sha256:...190b1dcbcb9b9e90bd8068": ....azurecr.io/jettech/kube-webhook-certgen@sha256:...9b9e90bd8068: not found, rpc error: code = Unknown desc = failed to pull and unpack image "myregistry.azurecr.io/jettech/kube-webhook-certgen@sha256:...dcbcb9b9e90bd8068": failed to resolve reference "myregistry.azurecr.io/jettech/kube-webhook-certgen@sha256:...b9b9e90bd8068": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]

I have verified via the "az acr import" command that the image is in the container registry, and k8s is able to connect to the acr if I do a standard K8s deployment with "kubectl". I also verified connectivity between the cluster and the registry with the following command, and it works as expected:

az aks check-acr -n <cluster> -g <rg>  --acr <acr>

This failure only happens when using the helm command.

Edit

After researching some more, I found the following article:

https://stackoverflow.com/questions/68949434/installing-nginx-ingress-controller-into-aks-cluster-cant-pull-image-from-azu

It appears something is wrong with the digests. I added/replaced the following in the helm command:

--set controller.image.digest="sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899" \
--set controller.admissionWebhooks.patch.image.digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" \

However, when running the modified helm command, the pod ends up in an error state with the following error:

unknown flag: --controller-class

I tried setting the environment variable CONTROLLER_TAG=v1.0.0 as described in the article, but that didn't help.

Another workaround was to pin the version number in the command: 3.36.0. That succeeded, but required a downgraded version.
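The `unknown flag: --controller-class` error is a version-mismatch symptom: a v4.x chart passes flags that only a v1.x controller image understands, so the digest override evidently pointed the new chart at an older image. One hedged approach is to pin registry, tag, and digest together so the chart and the image it pulls stay matched; the digests below are the ones from the question and must be replaced with the digests of the copies actually present in your ACR:

```yaml
controller:
  image:
    registry: myregistry.azurecr.io
    tag: "v1.0.0"   # must match the installed chart's expected appVersion
    digest: "sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899"
  admissionWebhooks:
    patch:
      image:
        digest: "sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
```

Pinning the older chart version 3.36.0, as noted above, works for the same reason: it restores the chart/image pairing.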

kubernetes azure aks helm
  • 1 Answer
  • 594 Views
Don Don Don
Asked: 2021-09-09 23:37:21 +0800 CST

Kubernetes error "Unable to attach or mount volumes"

  • 0

I deployed the bitnami/wordpress helm chart with nginx ingress as the load balancer, just like here. Everything works fine, but the problem appears when some pods are created, either manually or automatically by autoscaling. Some of them (not all) stay in the "ContainerCreating" state forever, and the logs look like this:

  Normal   Scheduled    33m                  default-scheduler  Successfully assigned default/wordpress-69c8f65d96-wnkfv to main-node-d29388
  Warning  FailedMount  4m28s (x6 over 29m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[wordpress-data], unattached volumes=[default-token-s4gdj wordpress-data]: timed out waiting for the condition
  Warning  FailedMount  0s (x9 over 31m)     kubelet            Unable to attach or mount volumes: unmounted volumes=[wordpress-data], unattached volumes=[wordpress-data default-token-s4gdj]: timed out waiting for the condition

I deployed bitnami/wordpress and then upgraded it with the following settings:

helm install wordpress bitnami/wordpress --set service.type=ClusterIP --set ingress.enabled=true --set ingress.certManager=true --set ingress.annotations."kubernetes\.io/ingress\.class"=nginx --set ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-prod --set ingress.hostname=DOMAIN.com --set ingress.extraTls[0].hosts[0]=DOMAIN.com --set ingress.extraTls[0].secretName=wordpress.local-tls --set wordpressPassword=PASSWORD --set autoscaling.enabled=true --set autoscaling.minReplicas=1 autoscaling.maxReplicas=30

kubectl get pods looks like this:

ingress-nginx-ingress-controller-84bff86888-f4tpb                 1/1     Running             0          2d3h
ingress-nginx-ingress-controller-default-backend-c5b786dbbqw5xz   1/1     Running             0          2d3h
load-generator                                                    1/1     Running             0          71s
wordpress-69c8f65d96-48jd9                                        0/1     ContainerCreating   0          18m
wordpress-69c8f65d96-66ftt                                        0/1     ContainerCreating   0          56m
wordpress-69c8f65d96-dq7xq                                        1/1     Running             0          100m
wordpress-69c8f65d96-fbnt6                                        1/1     Running             0          101m
wordpress-69c8f65d96-wnkfv                                        0/1     ContainerCreating   0          56m
wordpress-mariadb-0                                               1/1     Running             0          8h

What can be done so that new pods don't have this problem and manage to start?
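The events above point at a classic cause: the chart's data PVC defaults to ReadWriteOnce, which can only be attached to one node at a time, so autoscaled replicas scheduled onto other nodes time out waiting to mount `wordpress-data`. A hedged values sketch; it requires a storage class that actually supports RWX (NFS, a cloud file share, etc.), and the exact value name may vary across chart versions:

```yaml
persistence:
  storageClass: <rwx-capable-storage-class>
  accessMode: ReadWriteMany   # let the same PVC mount on every node
```

Replicas that happen to land on the node already holding the RWO volume start fine, which would explain why only some pods hang.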

kubernetes nginx-ingress bitnami helm kubectl
  • 1 Answer
  • 4051 Views
silviud
Asked: 2021-08-31 09:40:43 +0800 CST

Automatically create a database/user/password for the Bitnami helm chart postgresql on K8s

  • 0

I'm deploying https://github.com/bitnami/charts/tree/master/bitnami/postgresql into k8s and would like to know how to automate the following:

  • Create a database
  • Create a role with a password as the owner of the above database

I've seen the extraDeploy parameter https://github.com/bitnami/charts/blob/master/bitnami/postgresql/values.yaml#L43, but that seems to create a k8s-specific resource (nothing involving pg).

The only idea I have for using extraDeploy is to create a Job that deploys a custom pod, which would connect to pg and create the database, role, and password...

Thanks!
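For the two bullet points above, the chart can do this at first initialization without an extraDeploy Job. The value names below match the chart's values.yaml around that time (newer chart majors moved them under `auth.*`), so treat them as assumptions to check:

```yaml
postgresqlUsername: myapp       # role created at init, with privileges on the database below
postgresqlPassword: s3cret
postgresqlDatabase: myapp_db
# arbitrary extra SQL can also run once at first init:
initdbScripts:
  extra.sql: |
    ALTER DATABASE myapp_db OWNER TO myapp;
```

`initdbScripts` only runs when the data volume is empty, so it fires on first install but not on upgrades.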

postgresql kubernetes bitnami helm
  • 1 Answer
  • 2151 Views
cclloyd
Asked: 2021-07-10 07:58:05 +0800 CST

GitLab Runner cannot register after migrating to a new cluster

  • 1

I have GitLab installed in Kubernetes with their Helm chart.

I migrated the old GitLab deployment from one cluster to another with the following steps:

  • Scale down all pods in the old cluster
  • Apply the values.yml with helm to the new cluster (to create the PVCs)
  • Scale down all pods in the new cluster
  • Change DNS records, HAProxy, etc.
  • Manually sync the data from the old PVCs to the new ones (minio, gitaly, redis, postgres, prometheus)
  • Run helm upgrade to bring the deployment back online in the new cluster

After all that, most of the deployment works fine. I'm able to log in and use git.

But the runner won't register, so I can't run any CI. Looking at the gitlab-gitlab-runner pod, I see the message below repeated over and over:

Registration attempt 30 of 30
Runtime platform                                    arch=amd64 os=linux pid=691 revision=3b6f852e version=14.0.0
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
 
ERROR: Registering runner... failed                 runner=y6ixJoR1 status=500 Internal Server Error
PANIC: Failed to register the runner. You may be having network problems.

As you can see, it fails to register the runner. Trying to go to /admin/runners gives me a 500 error.

Where can I see more information about why I'm getting this 500 error?

kubernetes gitlab helm
  • 2 Answers
  • 1330 Views
Oyabi
Asked: 2021-05-05 11:06:30 +0800 CST

CreateContainerError: context deadline exceeded

  • 1

For a project I have to use large containers (500Mb to 60Gb).

I don't have precise measurements, but when I run containers larger than 3-5Gb with gitlab-runner, I get an error in rancher: CreateContainerError: context deadline exceeded

Our kubernetes cluster was built with rke, with rancher as the web UI, and sits in our data center.

The error only appears through gitlab-runner; if I start docker run ... on a kubernetes node, everything works fine.

Maybe there's a timeout somewhere?

Has any of you run into this problem?

Thank you.
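If the runner uses the Kubernetes executor, there is indeed a timeout in play: `poll_timeout` in the runner's config.toml bounds how long the runner waits for the build pod to reach Running (the default is 180 seconds), and pulling a multi-GB image can easily exceed it, which would explain why a manual `docker run` on the node works. A config sketch (when deployed via the runner Helm chart, this fragment would typically be injected through the chart's runner-config value):

```toml
[[runners]]
  [runners.kubernetes]
    # give large images time to pull before the runner gives up on the pod
    poll_timeout = 3600
```

The container runtime on the node may impose its own pull deadline as well, so check there too if raising `poll_timeout` alone doesn't help.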

docker kubernetes gitlab helm rancher-2
  • 1 Answer
  • 4247 Views
bmy4415
Asked: 2021-01-08 23:17:21 +0800 CST

Deploying a mysql release with helm

  • 1

Hi, I'm new to the k8s and helm ecosystem.

I built my own k8s cluster using kubespray and EC2 (I could have used EKS, but this is for practice), and the next step is to use helm.

I'm trying to deploy the mysql chart to my k8s cluster.

My environment

  • The k8s cluster has 1 master and 3 nodes (all ec2 t2.small instances)
  • Using the mysql chart from https://github.com/helm/charts/tree/master/stable/mysql
# storage class manifest
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
#volumeBindingMode: WaitForFirstConsumer
# values.yaml from mysql chart
## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "local-storage"  # <-- Changed this to use my own storage class
  accessMode: ReadWriteOnce
  size: 1Gi                      # <-- Changed this since only 2.5GB is available on each node
  annotations: {}
...

The problem

The pvc shows an error. The relevant logs are below.

ubuntu@nodec1:~/charts/stable/mysql$ kubectl describe pvc/mysqlserver
Name:          mysqlserver
Namespace:     default
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app=mysqlserver
               app.kubernetes.io/managed-by=Helm
               chart=mysql-1.6.9
               heritage=Helm
               release=mysqlserver
Annotations:   meta.helm.sh/release-name: mysqlserver
               meta.helm.sh/release-namespace: default
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    mysqlserver-5d5cfcd5f8-922k4
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  6s (x3 over 21s)  persistentvolume-controller  no volume plugin matched name: kubernetes.io/no-provisioner

I don't understand why the pvc can't use the kubernetes.io/no-provisioner plugin from my own local-storage storage class. Could someone help me figure this out?
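`kubernetes.io/no-provisioner` is the documented way of declaring "no dynamic provisioning at all": the ProvisioningFailed event is the controller reporting that nothing can create volumes for that class, so a matching PersistentVolume has to be created by hand (and the commented-out `volumeBindingMode: WaitForFirstConsumer` is normally required for local volumes). A sketch, assuming a directory has been prepared on node nodec1:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # must match the PVC's storage class
  local:
    path: /mnt/disks/mysql          # must already exist on the node
  nodeAffinity:                     # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - nodec1
```

Once a PV with the right class, size, and access mode exists, the pending PVC should bind to it.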

kubernetes helm
  • 1 Answer
  • 650 Views
