At my company, I built a Kubernetes cluster with Terraform and set up logging with the EFK stack (Elasticsearch, Fluent-bit, Kibana).
K8s and Elasticsearch run on AWS EKS and OpenSearch Service (ES 7.10), and Fluent-bit is installed separately as a DaemonSet via Helm.
https://artifacthub.io/packages/helm/fluent/fluent-bit
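For reference, the chart was installed roughly like this (a sketch, not the exact commands; the release name, namespace, and values file name are illustrative):

    # Add the official Fluent Bit chart repo and install it as a DaemonSet.
    helm repo add fluent https://fluent.github.io/helm-charts
    helm repo update
    helm upgrade --install fluent-bit fluent/fluent-bit \
      --namespace logging --create-namespace \
      -f fluent-bit-values.yaml   # holds the config shown below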
The Fluent-bit config (Helm values) is structured as follows:
config:
  inputs: |
    [INPUT]
        Name tail
        Exclude_Path /var/log/containers/cloudwatch-agent*, /var/log/containers/fluent-bit*, /var/log/containers/aws-node*, /var/log/containers/kube-proxy*, /var/log/containers/aws-load-balancer*, /var/log/containers/cert-manager*, /var/log/containers/coredns*, /var/log/containers/ebs-csi*, /var/log/containers/grafana*, /var/log/containers/karpenter*, /var/log/containers/metrics-server*, /var/log/containers/prometheus*
        Path /var/log/containers/*.log
        Tag kube.*
        Mem_Buf_Limit 50MB
        multiline.parser python-custom
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host ${es_domain_name}
        Port 443
        TLS On
        AWS_Auth On
        AWS_Region ${region}
        Retry_Limit 6
        Replace_Dots On
        Trace_Error On
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Merge_Log_Key log_processed
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On
  customParsers: |
    [MULTILINE_PARSER]
        name python-custom
        type regex
        flush_timeout 1000
        rule "start_state" "/stderr F(.*)/" "cont"
        rule "cont" "/stderr F(.*)/" "cont"
Overall this worked well, but one day, after a certain point, I noticed that every log Fluent-bit collected was being indexed with the same timestamp.
Logs the Pod emitted before were recorded with accurate timestamps, but after the Pod was reloaded, logs kept being recorded with a single fixed @timestamp. Since they all accumulate at that point in the past, Kibana shows the following.
The logs are shipped to ES in real time, but because they are all indexed under one fixed past timestamp, time-range queries no longer return correct results.
// Unlike the timestamp printed by the actual application, the @timestamp recorded by Fluent-bit stays the same.
Oct 16, 2023 @ 16:13:36.142 2023-10-16T09:48:55...[2023-10-16 09:48:55]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T09:48:55...[2023-10-16 09:48:55]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T09:53:55...[2023-10-16 09:53:55]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T09:58:03...[2023-10-16 09:58:03]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T09:58:03...[2023-10-16 09:58:03]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T09:58:03...[2023-10-16 09:58:03]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T09:58:55...[2023-10-16 09:58:55]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T09:58:55...[2023-10-16 09:58:55]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T10:03:03...[2023-10-16 10:03:03]...
Oct 16, 2023 @ 16:13:36.142 2023-10-16T10:03:55...[2023-10-16 10:03:55]...
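One way to verify that documents keep arriving but all share one @timestamp (a sketch, not my exact command; awscurl handles the SigV4 signing the domain requires, and fluent-bit is the es output's default index since neither Index nor Logstash_Format is set):

    # Fetch the newest documents by @timestamp; while the problem is active,
    # the top hits all report the identical stuck @timestamp value.
    awscurl --service es --region "${region}" \
      "https://${es_domain_name}/fluent-bit/_search?size=5&sort=@timestamp:desc" \
      | jq -r '.hits.hits[]._source["@timestamp"]'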
This problem appears whenever the Pods running each service are updated and reloaded. I confirmed that Fluent-bit prints an inotify_fs_add log for the newly created Pod, and I could not find any other error logs.
Pod reload
│ backend-api-deployment-5d9fc55496-8mppx ● 1/1 0 Running
│ backend-api-deployment-57d68dd4f5-djn8x ● 1/1 0 Terminating
Fluent-bit logs
│ Fluent Bit v2.1.8
│ * Copyright (C) 2015-2022 The Fluent Bit Authors
│ * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
│ * https://fluentbit.io
│
│ [2023/10/16 16:05:00] [ info] [fluent bit] version=2.1.8, commit=1d83649441, pid=1
│ [2023/10/16 16:05:00] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
│ [2023/10/16 16:05:00] [ info] [cmetrics] version=0.6.3
│ [2023/10/16 16:05:00] [ info] [ctraces ] version=0.3.1
│ [2023/10/16 16:05:00] [ info] [input:tail:tail.0] initializing
│ [2023/10/16 16:05:00] [ info] [input:tail:tail.0] storage_strategy='memory' (memory only)
│ [2023/10/16 16:05:00] [ info] [input:tail:tail.0] multiline core started
│ [2023/10/16 16:05:00] [ info] [filter:kubernetes:kubernetes.0] https=1 host=kubernetes.default.svc port=443
│ [2023/10/16 16:05:00] [ info] [filter:kubernetes:kubernetes.0] token updated
│ [2023/10/16 16:05:00] [ info] [filter:kubernetes:kubernetes.0] local POD info OK
│ [2023/10/16 16:05:00] [ info] [filter:kubernetes:kubernetes.0] testing connectivity with API server...
│ [2023/10/16 16:05:00] [ info] [filter:kubernetes:kubernetes.0] connectivity OK
│ [2023/10/16 16:05:00] [ info] [output:es:es.0] worker #0 started
│ [2023/10/16 16:05:00] [ info] [output:es:es.0] worker #1 started
│ [2023/10/16 16:05:00] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
│ [2023/10/16 16:05:00] [ info] [sp] stream processor started
│ [2023/10/16 16:05:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=41916664 watch_fd=1 name=/var/log/containers/backend-api-deployment-57d68dd4f5-djn8x_backend-dev_backend-api-f950104ab9f300586803316ad7beef5c9506db9fceeb4af0dc0575b3b9bacfd7.log
│ [2023/10/16 16:05:00] [ info] [input:tail:tail.0] inotify_fs_add(): inode=17488499 watch_fd=2 name=/var/log/containers/backend-api-deployment-6dc5b97645-tqwxh_backend-staging_backend-api-23f14f74af6cc163f233499ba92a2746cb2f263c9379ba53fc47cd3d76e00bbb.log
...
│ [2023/10/17 04:49:01] [error] [output:es:es.0] HTTP status=429 URI=/_bulk, response:
│ 429 Too Many Requests /_bulk
│
│ [2023/10/17 04:49:01] [ warn] [engine] failed to flush chunk '1-1697518140.870529031.flb', retry in 6 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
│ [2023/10/17 04:49:01] [error] [output:es:es.0] HTTP status=429 URI=/_bulk, response:
│ 429 Too Many Requests /_bulk
│
│ [2023/10/17 04:49:01] [ warn] [engine] failed to flush chunk '1-1697518140.276173730.flb', retry in 7 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
│ [2023/10/17 04:49:07] [ info] [engine] flush chunk '1-1697518140.870529031.flb' succeeded at retry 1: task_id=1, input=tail.0 > output=es.0 (out_id=0)
│ [2023/10/17 04:49:08] [ info] [engine] flush chunk '1-1697518140.276173730.flb' succeeded at retry 1: task_id=0, input=tail.0 > output=es.0 (out_id=0)
│ [2023/10/17 05:08:50] [ info] [input:tail:tail.0] inotify_fs_add(): inode=37547666 watch_fd=8 name=/var/log/containers/backend-api-deployment-5d9fc55496-8mppx_backend-dev_backend-api-839c128d6227f797722496a992ca9149b95c366997c2b1ec28ceb65490582e3b.log
│ [2023/10/17 05:08:52] [ info] [filter:kubernetes:kubernetes.0] token updated
│ [2023/10/17 05:09:25] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=41916664 watch_fd=1
│ [2023/10/17 05:09:37] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=17488499 watch_fd=2
Interestingly, if I kill all the Fluent-bit DaemonSet pods manually and let them come back through self-healing, they all temporarily collect logs correctly again.
After that, once a service Pod is updated and reloaded, the same problem occurs again.
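(The manual restart is nothing more than deleting the DaemonSet pods and letting Kubernetes recreate them; a sketch, assuming the chart's default labels and a logging namespace:)

    # Delete every Fluent-bit pod; the DaemonSet controller recreates them,
    # and afterwards logs are collected with correct timestamps for a while.
    kubectl -n logging delete pod -l app.kubernetes.io/name=fluent-bit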
What could be causing this problem, and how can I fix it? If there are any recommended Fluent-bit settings, additional suggestions are welcome.
Thank you!