AskOverflow.Dev

Questions tagged [kubernetes]

pf12345678910
Asked: 2025-03-20 21:38:50 +0800 CST

K8 - Pod listens on port 8080, but the Node resets connections on the external port


I have deployed a C# API on a Kubernetes cluster.

As far as I know, we should have: GET HTTP request -> Node(30000) -> Pod(80) -> C# API(8080)

My Docker image exposes port 8080:

FROM our-registry/dotnet/sdk:8.0 AS build
WORKDIR /app

# Copy the project file and restore any dependencies (use .csproj for the project name)
COPY MyApi/MyApi/*.csproj ./
RUN dotnet restore

# Copy the rest of the application code
COPY MyApi/MyApi/. ./

# Publish the application
ARG BUILD_CONFIG=Release
RUN echo "Building with configuration: ${BUILD_CONFIG}"
RUN dotnet publish -c ${BUILD_CONFIG} -o out

# Build the runtime image
FROM our-registry/dotnet/aspnet:8.0 AS runtime
WORKDIR /app
COPY --from=build /app/out ./

# Expose the port your application will run on
EXPOSE 8080

# Start the application
ENTRYPOINT ["dotnet", "MyApi.dll"]

My K8 api-service.yaml is set up as follows:

apiVersion: v1
kind: Service
metadata:
  name: my-api-service
  namespace: somenamespace
spec:
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30000
  type: NodePort 

My C# API launch settings configure port 8080, and it runs fine on that port when debugging/publishing locally:

{
  "profiles": {
    "http": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "swagger",
        "environmentVariables": {
            ...
        },
      "dotnetRunMessages": true,
      "applicationUrl": "http://localhost:8080"
    },
    "https": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "swagger",
        "environmentVariables": {
            ...
        },
      "dotnetRunMessages": true,
      "applicationUrl": "https://localhost:7084;http://localhost:8080"
    },
    ...

On the cluster, the service is running:

kubectl get svc -n somenamespace

NAME                  TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
my-api-service        NodePort   10.*.*.*   <none>        80:30000/TCP   3h56m

And the pod:

kubectl get pods -n somenamespace -o wide
NAME                           READY   STATUS    RESTARTS   AGE    IP             NODE                                          NOMINATED NODE   READINESS GATES
my-apipod-*********-*****   1/1     Running   0          138m   10.*.*.*   somenode   <none>           <none>

I checked inside the pod:

kubectl exec -it my-apipod-*********-***** -n somenamespace -- bash
...
root@my-apipod-*********-*****:/app# netstat -tulnp | grep LISTEN
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      1/dotnet    

Getting the nodes:

kubectl get nodes -o wide
NAME                                          STATUS   ROLES                       AGE    VERSION           INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
somenode   Ready    worker                      108d   v1.28.11+rke2r1   192.168.1.23   192.168.1.23  Ubuntu 20.04.6 LTS               5.4.0-200-generic   containerd://1.7.17-k3s1

I tried connecting to that node's IP on nodePort 30000:

curl -X GET http://192.168.1.23:30000/swagger/index.html -v
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying 192.168.1.23:30000...
* Connected to 192.168.1.23 (192.168.1.23) port 30000 (#0)
> GET /swagger/index.html HTTP/1.1
> Host: 192.168.1.23:30000
> User-Agent: curl/7.81.0
> Accept: */*
> 
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

I'm not sure what else to check. I ran my API code (.NET) locally, and Swagger works and is reachable there.

Thanks for your help.

[Edit]

When I make requests like the following in a browser:

http://192.168.1.23:30000
http://192.168.1.23:30000/swagger/index.html

nothing shows up in the logs:

kubectl logs -f -n mynamespace my-apipod-*********-*****
warn: Microsoft.AspNetCore.Hosting.Diagnostics[15]
      Overriding HTTP_PORTS '8080' and HTTPS_PORTS ''. Binding to values defined by URLS instead 'http://0.0.0.0:8080'.
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://0.0.0.0:8080
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

And this shows up if I try HTTPS:

warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3]
      Failed to determine the https port for redirect.

So the API itself seems to be running fine.
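
The Deployment itself isn't shown above; since NodePort traffic only reaches the pod if the Service's spec.selector matches the pod labels and targetPort matches the container port, that is worth verifying first. A minimal Deployment sketch consistent with the Service above (the image reference is a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  namespace: somenamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api                             # must match the Service's spec.selector
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: our-registry/my-api:latest   # placeholder image reference
          ports:
            - containerPort: 8080             # matches the Service's targetPort

If kubectl get endpoints my-api-service -n somenamespace shows no addresses, the Service isn't selecting the pod and the NodePort has nothing valid to forward to, which is worth ruling out before digging into the network path.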

kubernetes
  • 2 Answers
  • 123 Views
jamzsabb
Asked: 2025-02-17 23:32:00 +0800 CST

Metallb is not binding the IP address to the node interface


I've been running metallb for over a year, but after a botched upgrade I reinstalled the whole system. After reinstalling metallb, it is able to assign external IPs to services, but those services are unreachable. On closer inspection I found that the speaker wasn't responding to ARP requests at all, and digging further showed that the IP address was never bound to the node's network interface. When I bound the IP manually by running ip addr add 192.168.1.29/24 dev enp10s0, the address started working immediately and I was able to reach the service.

I'm not sure why it isn't being bound automatically, though; I assume the speaker is supposed to do that? My environment is a 2-node Talos 1.9.4 cluster running Kubernetes 1.32.2, with a fresh install of metallb 0.14.9 via Helm with all defaults, plus the following l2advertisement and ipaddresspool added:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
spec:
  addresses:
  - 192.168.1.20-192.168.1.99
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb
spec:
  ipAddressPools:
  - default

I'm on fairly bleeding-edge Kubernetes and metallb versions, so maybe it's a bug? I looked at the speaker code on GitHub, but I don't speak Golang.
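
One thing worth ruling out before calling it a bug: MetalLB only watches its resources in the namespace it is deployed into, and above the IPAddressPool carries no namespace while the L2Advertisement sits in metallb. If the pool is visible to MetalLB but the L2Advertisement is not (because it landed in a namespace MetalLB doesn't watch), addresses get assigned yet never advertised, which matches the symptoms. As an aside, MetalLB's layer-2 mode never adds the address to an interface itself; the speaker answers ARP for it directly, so the manual ip addr add working mostly shows the network path is fine. A sketch with both resources pinned to one namespace, assuming a metallb-system install namespace:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system   # use the namespace MetalLB was installed into
spec:
  addresses:
  - 192.168.1.20-192.168.1.99
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system   # must live in the same namespace MetalLB watches
spec:
  ipAddressPools:
  - default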

kubernetes
  • 1 Answer
  • 41 Views
Nilcouv
Asked: 2025-01-24 02:31:20 +0800 CST

Wikijs - Ingress "All backend services are in UNHEALTHY state"


Problem: I'm running Wiki.js on a GKE regional cluster and hitting an Ingress configuration issue. The Ingress controller reports "All backend services are in UNHEALTHY state", and the zonal network endpoint group shows 0/1 endpoints operational. This started after editing settings in the Wiki.js admin panel.

Environment:

  • GKE regional cluster
  • Wiki.js version: 2.5

Current state:

  • All pods are running
  • Still able to reach Wiki.js via the service IP
  • Zonal network endpoint group: 0/1 operational

Current configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-wikijs
  namespace: test
  labels:
    app: test-wikijs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-wikijs
  template:
    metadata:
      labels:
        app: test-wikijs
    spec:
      initContainers:
      - name: init-permissions
        image: busybox
        command: ["sh", "-c", "chmod -R 777 /wiki/data && chown -R 1000:1000 /wiki/data"]
        volumeMounts:
        - name: wikijs-data
          mountPath: /wiki/data
      containers:
      - name: test-wikijs
        image: requarks/wiki:2.5
        ports:
        - containerPort: 3000
        env:
          - name: DB_TYPE
            valueFrom:
              configMapKeyRef:
                name: test-wikjs-config
                key: DB_TYPE
          - name: DB_HOST
            valueFrom:
              configMapKeyRef:
                name: test-wikjs-config
                key: DB_HOST
          - name: DB_PORT
            valueFrom:
              configMapKeyRef:
                name: test-wikjs-config
                key: DB_PORT
          - name: DB_NAME
            valueFrom:
              secretKeyRef:
                name: sql-secret
                key: POSTGRES_DB
          - name: DB_USER
            valueFrom:
              secretKeyRef:
                name: sql-secret
                key: POSTGRES_USER
          - name: DB_PASS
            valueFrom:
              secretKeyRef:
                name: sql-secret
                key: POSTGRES_PASSWORD
        volumeMounts:
        - name: wikijs-data
          mountPath: /wiki/data
      volumes:
      - name: wikijs-data
        persistentVolumeClaim:
          claimName: wikijs-data-pvc
---
# Service WikiJS
apiVersion: v1
kind: Service
metadata:
name: test-wikijs         # Service name
  namespace: test
  labels:
    app: test-wikijs
spec:
  selector:
app: test-wikijs         # Selects pods with the label app=test-wikijs
  ports:
  - port: 80            # Port exposed by the service
    targetPort: 3000    # Port of the WikiJS application
  type: LoadBalancer    # Service type that exposes the application externally

Logs:

PS C:\Users\nicol\Git\test> # List the ingresses
>> kubectl get ingress -n test
>>
>> # Detailed description of the ingress
>> kubectl describe ingress wikijs-ingress-multi -n test
NAME                   CLASS    HOSTS   ADDRESS         PORTS   AGE
wikijs-ingress-multi   <none>   *       **.**.***.***   80      51m
Name:             wikijs-ingress-multi
Labels:           <none>
Namespace:        test
Address:          **.**.***.***
Ingress Class:    <none>
Default backend:  test-wikijs:80 (10.112.1.14:3000)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           *     test-wikijs:80 (10.112.1.14:3000)
Annotations:  ingress.gcp.kubernetes.io/pre-shared-cert:
                mcrt-5eff98e5-8917-4807-9148-79c10698f34a,mcrt-76d117a0-71b0-422f-8947-1e39df945093,mcrt-ac4dc442-6a28-4d0f-b8bf-767f1ee1511a
              ingress.kubernetes.io/backends: {"k8s1-16d8895a-test-test-wikijs-80-1cd44efb":"UNHEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s2-fr-3t143vku-test-wikijs-ingress-multi-xlcxo7jh
              ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-3t143vku-test-wikijs-ingress-multi-xlcxo7jh
              ingress.kubernetes.io/https-target-proxy: k8s2-ts-3t143vku-test-wikijs-ingress-multi-xlcxo7jh
              ingress.kubernetes.io/ssl-cert:
                mcrt-5eff98e5-8917-4807-9148-79c10698f34a,mcrt-76d117a0-71b0-422f-8947-1e39df945093,mcrt-ac4dc442-6a28-4d0f-b8bf-767f1ee1511a
              ingress.kubernetes.io/static-ip: k8s2-fr-3t143vku-test-wikijs-ingress-multi-xlcxo7jh
              ingress.kubernetes.io/target-proxy: k8s2-tp-3t143vku-test-wikijs-ingress-multi-xlcxo7jh
              ingress.kubernetes.io/url-map: k8s2-um-3t143vku-test-wikijs-ingress-multi-xlcxo7jh
              networking.gke.io/managed-certificates: shortwikitestbe,shortwikitesteu,shortwikitestcom
Events:
  Type    Reason     Age                  From                     Message
  ----    ------     ----                 ----                     -------
  Normal  Sync       50m                  loadbalancer-controller  UrlMap "k8s2-um-3t143vku-test-wikijs-ingress-multi-xlcxo7jh" created
  Normal  Sync       50m                  loadbalancer-controller  TargetProxy "k8s2-tp-3t143vku-test-wikijs-ingress-multi-xlcxo7jh" created
  Normal  Sync       49m                  loadbalancer-controller  ForwardingRule "k8s2-fr-3t143vku-test-wikijs-ingress-multi-xlcxo7jh" created
  Normal  IPChanged  49m                  loadbalancer-controller  IP is now **.**.***.***
  Normal  Sync       49m                  loadbalancer-controller  TargetProxy "k8s2-ts-3t143vku-test-wikijs-ingress-multi-xlcxo7jh" created
  Normal  Sync       49m                  loadbalancer-controller  ForwardingRule "k8s2-fs-3t143vku-test-wikijs-ingress-multi-xlcxo7jh" created
  Normal  Sync       6m7s (x11 over 51m)  loadbalancer-controller  Scheduled for sync

# Service status
>> kubectl get service test-wikijs -n test
>>
>> # Endpoints
>> kubectl get endpoints test-wikijs -n test
>>
>> # Detailed description of the service
>> kubectl describe service test-wikijs -n test
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
test-wikijs   LoadBalancer   34.118.239.183   **.**.**.***   80:32537/TCP   28h
NAME               ENDPOINTS          AGE
test-wikijs   10.112.1.14:3000   28h
Name:                     test-wikijs
Namespace:                test
Labels:                   app=test-wikijs
Annotations:              cloud.google.com/neg: {"ingress":true}
                          cloud.google.com/neg-status:
                            {"network_endpoint_groups":{"80":"k8s1-16d8895a-test-test-wikijs-80-1cd44efb"},"zones":["europe-west1-b"]}
Selector:                 app=test-wikijs
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       34.118.239.183
IPs:                      34.118.239.183
LoadBalancer Ingress:     **.**.**.***
Port:                     <unset>  80/TCP
TargetPort:               3000/TCP
NodePort:                 <unset>  32537/TCP
Endpoints:                10.112.1.14:3000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age                 From                Message
  ----    ------                ----                ----                -------
  Normal  EnsuringLoadBalancer  57m (x3 over 112m)  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   57m (x3 over 112m)  service-controller  Ensured load balancer
  Normal  Create                57m                 neg-controller      Created NEG "k8s1-16d8895a-test-test-wikijs-80-1cd44efb" for test/test-wikijs-k8s1-16d8895a-test-test-wikijs-80-1cd44efb-/80-3000-GCE_VM_IP_PORT-L7 in "europe-west1-b".
  Normal  Attach                57m (x2 over 83m)   neg-controller      Attach 1 network endpoint(s) (NEG "k8s1-16d8895a-test-test-wikijs-80-1cd44efb" in zone "europe-west1-b")

Questions:

  1. What is causing the Ingress backends to be in an "unhealthy state"?
  2. Why are 0/1 network endpoints shown as operational?
  3. Which specific GKE configuration should I check?
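
On questions 1 and 2: since the pod is Running and reachable through the Service IP, a usual suspect with GKE container-native load balancing is the backend health check, which by default probes GET / on the serving port and requires HTTP 200. If Wiki.js started answering / with a redirect after the admin-panel change (for example a site URL or forced-HTTPS setting), a 301/302 would flip the NEG endpoint to UNHEALTHY. A hedged sketch of a BackendConfig pointing the health check at a path expected to return 200 (the /healthz path is an assumption; check what your Wiki.js build serves unauthenticated):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: test-wikijs-hc
  namespace: test
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz   # assumption: an endpoint that returns 200 without auth
    port: 3000              # with NEGs this refers to the container port

The Service then opts in through an annotation, alongside the NEG annotation already present:

apiVersion: v1
kind: Service
metadata:
  name: test-wikijs
  namespace: test
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"default": "test-wikijs-hc"}'
spec:
  selector:
    app: test-wikijs
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
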
kubernetes
  • 1 Answer
  • 49 Views
Nathan Fallet
Asked: 2025-01-22 03:02:22 +0800 CST

New node on a k3s cluster cannot start pods


We have an on-prem k3s cluster for our staging environment, meant to reproduce something close to our production setup. Today our single node reached its limits, so we decided to add a new one.

I bought a new physical server and did a fresh install of Ubuntu Server 24.04.1 LTS. The next step was to install the k3s agent so it would join the existing cluster. I followed the online documentation:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.1:6443 K3S_TOKEN=<my master token> sh -

Then I checked that everything was ready with kubectl get nodes:

NAME    STATUS   ROLES                  AGE    VERSION
serv1   Ready    control-plane,master   382d   v1.28.5+k3s1
serv2   Ready    <none>                 117s   v1.31.4+k3s1

But when the first pod was scheduled onto this new node, it went into CreateContainerConfigError. Describing the pod with kubectl describe pod, I can see this error:

Warning  Failed     12s (x2 over 13s)  kubelet            Error: services have not yet been read at least once, cannot construct envvars

I found some information about this error online. It looks like something is wrong between our two servers and for some reason they can't communicate properly. But since the new node is marked Ready, I don't understand where the problem is...

I also found the exact same situation described here, but no real solution seems to have been shared.

Does anyone know what causes this?
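
One thing stands out in the kubectl get nodes output: the server is v1.28.5+k3s1 while the new agent is v1.31.4+k3s1. A kubelet three minor versions newer than the API server is outside Kubernetes' supported version skew (the kubelet must never be newer than the apiserver), which makes the mismatch a plausible cause here. A sketch of reinstalling the agent pinned to the server's version via the installer's INSTALL_K3S_VERSION variable:

curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION="v1.28.5+k3s1" \
  K3S_URL=https://192.168.1.1:6443 \
  K3S_TOKEN=<my master token> \
  sh -

Alternatively, upgrading the control-plane node first and then joining the agent at a matching version keeps the skew within the supported range.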

kubernetes
  • 1 Answer
  • 34 Views
Nilcouv
Asked: 2025-01-21 16:34:55 +0800 CST

WikiJS "EACCES: permission denied, mkdir '/wiki/data/cache'" even though the volume is mounted


Problem: I'm running WikiJS on Kubernetes (GKE) and hitting a permission issue. The application cannot create its cache directory and throws: "EACCES: permission denied, mkdir '/wiki/data/cache'"

Environment:

  • Kubernetes: GKE
  • WikiJS version: 2.5
  • Volume: PVC

Current configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-wikijs
  namespace: test
  labels:
    app: test-wikijs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-wikijs
  template:
    metadata:
      labels:
        app: test-wikijs
    spec:
      containers:
      - name: test-wikijs
        image: requarks/wiki:2.5
        ports:
        - containerPort: 3000
        env:
          - name: DB_TYPE
            valueFrom:
              configMapKeyRef:
                name: test-wikjs-config
                key: DB_TYPE
          - name: DB_HOST
            valueFrom:
              configMapKeyRef:
                name: test-wikjs-config
                key: DB_HOST
          - name: DB_PORT
            valueFrom:
              configMapKeyRef:
                name: test-wikjs-config
                key: DB_PORT
          - name: DB_NAME
            valueFrom:
              secretKeyRef:
                name: sql-secret
                key: POSTGRES_DB
          - name: DB_USER
            valueFrom:
              secretKeyRef:
                name: sql-secret
                key: POSTGRES_USER
          - name: DB_PASS
            valueFrom:
              secretKeyRef:
                name: sql-secret
                key: POSTGRES_PASSWORD
        volumeMounts:
        - name: wikijs-data
          mountPath: /wiki/data
      volumes:
      - name: wikijs-data
        persistentVolumeClaim:
          claimName: wikijs-data-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wikijs-data-pvc
  namespace: test
  labels:
    app: test-wikijs
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 50Gi

The console returns:

# PVC detail list
kubectl get pvc -n $env:NAMESPACE -o wide

# PVC description
kubectl describe pvc wikijs-data-pvc -n $env:NAMESPACE
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE     VOLUMEMODE
postgres-data-pvc   Bound    pvc-9906e095-5341-451f-a2ff-ffbd8d8991e3   20Gi       RWO            standard       <unset>                 3h29m   Filesystem
wikijs-data-pvc     Bound    pvc-30553c23-75aa-429e-9464-7a567103b320   50Gi       RWO            standard       <unset>                 3h29m   Filesystem
Name:          wikijs-data-pvc
Namespace:     test
StorageClass:  standard
Status:        Bound
Volume:        pvc-30553c23-75aa-429e-9464-7a567103b320
Labels:        app=test-wikijs
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
               volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      50Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       test-wikijs-5458d966c9-h97w8
Events:        <none>

# Detailed PV list
kubectl get pv -o wide

# Description of a specific PV
kubectl describe pv wikijs-data-pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                      STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE     VOLUMEMODE
pvc-30553c23-75aa-429e-9464-7a567103b320   50Gi       RWO            Delete           Bound      test/wikijs-data-pvc              standard       <unset>                          3h32m   Filesystem
pvc-9906e095-5341-451f-a2ff-ffbd8d8991e3   20Gi       RWO            Delete           Bound      test/postgres-data-pvc            standard       <unset>                          3h32m   Filesystem

kubectl describe pv pvc-30553c23-75aa-429e-9464-7a567103b320
Name:              pvc-30553c23-75aa-429e-9464-7a567103b320
Labels:            topology.kubernetes.io/region=europe-west1
                   topology.kubernetes.io/zone=europe-west1-b
Annotations:       pv.kubernetes.io/migrated-to: pd.csi.storage.gke.io
                   pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
                   volume.kubernetes.io/provisioner-deletion-secret-name:
                   volume.kubernetes.io/provisioner-deletion-secret-namespace:
Finalizers:        [kubernetes.io/pv-protection external-attacher/pd-csi-storage-gke-io]
StorageClass:      standard
Status:            Bound
Claim:             test/wikijs-data-pvc
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          50Gi
Node Affinity:
  Required Terms:
    Term 0:        topology.kubernetes.io/zone in [europe-west1-b]
                   topology.kubernetes.io/region in [europe-west1]
Message:
Source:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     pvc-30553c23-75aa-429e-9464-7a567103b320
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:         <none>

kubectl exec -it $env:POD_NAME -n $env:NAMESPACE -- ls -la /wiki/data
total 28
drwxr-xr-x    4 root     root          4096 Jan 20 13:19 .
drwxr-xr-x    1 node     node          4096 Oct 12 09:00 ..
drwxr-xr-x    2 node     node          4096 Oct 12 08:55 content
drwx------    2 root     root         16384 Jan 20 13:05 lost+found

Question: How do I correctly set permissions so that the WikiJS pod can write to its data directory? The volume is mounted, but the application cannot create the directories it needs.
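
The ls -la output above shows the mount point /wiki/data owned by root:root, while the requarks/wiki image runs as the unprivileged node user (uid/gid 1000, which the ownership of the content directory suggests). The idiomatic fix is a pod-level securityContext with fsGroup, which has the kubelet set group ownership of the volume at mount time, so no chmod workaround is needed. A sketch of the relevant part of the pod template (the 1000 gid is an assumption based on the image's node user):

spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000                         # group the volume is chowned to on mount
        fsGroupChangePolicy: OnRootMismatch   # skip the recursive chown when ownership already matches
      containers:
      - name: test-wikijs
        image: requarks/wiki:2.5
        volumeMounts:
        - name: wikijs-data
          mountPath: /wiki/data
      volumes:
      - name: wikijs-data
        persistentVolumeClaim:
          claimName: wikijs-data-pvc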

kubernetes
  • 1 Answer
  • 58 Views
鱼鱼鱼三条鱼
Asked: 2024-12-24 08:37:49 +0800 CST

How do I enable FS-Cache for CephFS in Kubernetes containers using Ceph-CSI?


I'm using Kubernetes with Ceph-CSI to mount CephFS volumes in pods. I'd like to enable FS-Cache so that files read from CephFS are cached locally on the node for faster access.

I installed cachefilesd on the host, and when mounting CephFS directly on the host I enabled caching with the -o fsc option. The cache gets created under /var/cache/fscache/, but when I inspect the mounts inside the pod, the fsc option is not enabled.

How can I enable FS-Cache for CephFS mounts in Kubernetes pods using Ceph-CSI?
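
With Ceph-CSI, mount options for the CephFS kernel client come from the storage configuration rather than from the pod, so the -o fsc flag has to reach the driver through the StorageClass: the CephFS provisioner accepts a kernelMountOptions parameter that is passed through to the mount. A hedged sketch (cluster ID, filesystem and secret names are placeholders for whatever your existing StorageClass uses; the node also needs cachefilesd running and a kernel with CephFS FS-Cache support, and already-provisioned volumes won't pick up the change without re-provisioning or editing the PV):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs-fscache
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>              # placeholder
  fsName: <cephfs-name>                # placeholder CephFS filesystem name
  kernelMountOptions: fsc              # equivalent of mount -o fsc on the host
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete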

kubernetes
  • 1 Answer
  • 48 Views
Malkavian
Asked: 2024-12-13 17:37:50 +0800 CST

In Kubernetes: securing an application running on port 3000 with Traefik, Cert Manager, and an HTTP challenge


Hello, and thanks for your time. I'll try to explain my experiment. I deployed an application in Kubernetes. I can reach it through a load balancer. With Traefik, I can reach it over HTTP. I'd like to reach it over HTTPS. To get there, I tried following YouTube videos and the Traefik documentation, using cert-manager. I like working with YAML files, but if there's a better way, please tell me, since I'm learning by doing. I'll post all the YAML files involved and hope ServerFault gives me enough room to post them all.

#001-role.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role

rules:
  - apiGroups:
      - ""
    resources:
      - services
      - secrets
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.io
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
      - serverstransporttcps
    verbs:
      - get
      - list
      - watch

#002-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-account

#003-role-binding.yml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-role
subjects:
  - kind: ServiceAccount
    name: traefik-account
    namespace: default 

#004-traefik.yml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik

spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
        - name: traefik
          image: traefik:v3.2
          args:
            - --api.insecure
            - --providers.kubernetesingress
          ports:
            - name: web
              containerPort: 80
            - name: dashboard
              containerPort: 8080

#005-traefik-service.yml
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard-service

spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: dashboard
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service

spec:
  type: LoadBalancer
  ports:
    - targetPort: web
      port: 80
  selector:
    app: traefik

#006-program-frontend-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f compose.yml
    kompose.version: 1.34.0 (HEAD)
  labels:
    io.kompose.service: program-frontend
  name: program-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: program-frontend
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f compose.yml
        kompose.version: 1.34.0 (HEAD)
      labels:
        io.kompose.service: program-frontend
    spec:
      containers:
        - env:
            - name: API_GATEWAY_BASE_URL
              value: http://edge-thinghy:9000
          image: program-image
          name: program-frontend
          ports:
            -  name: program-frontend
               containerPort: 3000
               protocol: TCP
      imagePullSecrets:
        - name: ghcr-secret
      restartPolicy: Always

#007-program-frontend-service.yml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f compose.yml
    kompose.version: 1.34.0 (HEAD)
  labels:
    io.kompose.service: program-frontend
  name: program-frontend
spec:
  ports:
    - name: program-frontend
      protocol: TCP
      port: 3000
      targetPort: program-frontend
  selector:
    io.kompose.service: program-frontend

#008-edit-program-service.yml
apiVersion: v1
kind: Service
metadata:
  name: program-frontend
spec:
  ports:
    - name: program-frontend
      port: 80
      targetPort: 3000
  selector:
    io.kompose.service: program-frontend

#009-program-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: program-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: program-frontend
            port: 
              name: program-frontend

#010-challenge.yml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: program-challenge
 namespace: default
spec:
 acme:
   email: [email protected]
   server: https://acme-v02.api.letsencrypt.org/directory
   privateKeySecretRef:
     name: program-issuer-account-key
   solvers:
     - http01:
         ingress:
           class: traefik

#011-ingress-rule.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
 name: program-ssl-ingress
 namespace: default
 annotations:
   cert-manager.io/issuer: "program-challenge"
spec:
 tls:
   - hosts:
       - program-demo.example.domain
     secretName: tls-program-ingress-http
 rules:
   - host: program-demo.example.domain
     http:
       paths:
         - path: /
           pathType: Prefix
           backend:
             service:
               name: program-frontend
               port:
                 name: program-frontend

#012-redirect-http-to-https.yml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: program-frontend-redirect
spec:
  redirectScheme:
    scheme: https
    permanent: true

If I understand correctly, I should then be able to reach https://program-demo.example.domain, but I can only reach http://program-demo.example.domain. Did I misread something in the documentation? Is there a flaw in my reasoning? Thanks in advance for your time.
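
Two gaps stand out in the manifests above. First, the Traefik deployment in #004 only declares the web (:80) and dashboard (:8080) ports, and #005 only exposes 80, so nothing is listening for HTTPS at all; Traefik needs a websecure entrypoint on :443 and the Service has to expose it. Second, the Middleware in #012 is a Traefik CRD, which is only read when the kubernetescrd provider is enabled, and it does nothing until an ingress references it. A hedged sketch of the additions (entrypoint names follow Traefik's documented conventions):

#004-traefik.yml (args and ports, extended)
          args:
            - --api.insecure
            - --providers.kubernetesingress
            - --providers.kubernetescrd
            - --entryPoints.web.address=:80
            - --entryPoints.websecure.address=:443
          ports:
            - name: web
              containerPort: 80
            - name: websecure
              containerPort: 443
            - name: dashboard
              containerPort: 8080

#005-traefik-service.yml (web service, extended)
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 80
      targetPort: web
    - name: websecure
      port: 443
      targetPort: websecure
  selector:
    app: traefik

With that in place, the tls: section of #011 is served on websecure, and the redirect middleware can be attached to the plain-HTTP ingress with the annotation traefik.ingress.kubernetes.io/router.middlewares: default-program-frontend-redirect@kubernetescrd (namespace-name@kubernetescrd is the required reference format).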

kubernetes
  • 1 Answer
  • 138 Views
Malkavian
Asked: 2024-12-12 18:09:21 +0800 CST

Error creating a Kubernetes Ingress resource (no matches for kind "Ingress")


In a Kubernetes cluster I have Traefik v3.2.1 and CertManager 1.16.1, plus a program I'm testing. When I try to apply this file, 022-red-ing.yml, I get this error:

error: error validating "022-red-ing.yml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false

I'd like to define the file correctly, but I'm missing information, because the documentation I copied it from doesn't say which apiVersion and kind to use.

The file currently looks like this:

cat 022-red-ing.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - program.example.domain
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: tls-program-ingress-http
      mode: SIMPLE
  - hosts:
    - program.example.domain
    port:
      name: program-frontend
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP

What apiVersion and kind should I set? If I add the following:

apiVersion: extensions/v1beta1
kind: Ingress

I get another error. Now when I do:

kubectl apply -f 022-red-ing.yml

I get:

error: error when retrieving current configuration of:
Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
Name: "", Namespace: "default"
from server for: "022-red-ing.yml": resource name may not be empty

What am I doing wrong?
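
For what it's worth, the spec above is not a Kubernetes Ingress schema at all: selector: istio: ingressgateway, servers:, port.number, tls.mode: SIMPLE and httpsRedirect are exactly the fields of an Istio Gateway, so the documentation it was copied from was presumably describing Istio. That explains both errors: no apiVersion/kind in the networking.k8s.io API fits this body, and the manifest also lacks the required metadata.name. If Istio were actually installed, the header would look like this sketch (the name is a placeholder; first server entry shown); with plain Traefik and no Istio, the networking.k8s.io/v1 Ingress from the earlier setup is the right resource instead:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: program-gateway    # metadata.name is required; its absence caused "resource name may not be empty"
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:                 # the three server entries from the file above, unchanged
  - hosts:
    - program.example.domain
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: tls-program-ingress-http
      mode: SIMPLE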

kubernetes
  • 1 Answer
  • 58 Views
Malkavian
Asked: 2024-12-11 21:40:46 +0800 CST

Kubernetes and Traefik: creating the appropriate Ingress resource for an application


I'm running a Kubernetes cluster and have a TestApplication running on TestPort (3000, actually). I managed to get Traefik v3.2.1 up and running, and CertManager 1.16.1 with an HTTP challenge for Let's Encrypt. I'd like to secure the TestApplication so that people go through Traefik on port 443 and on to TestApplication:TestPort. How do I create the appropriate Ingress resource for my application? So far I have:

#001-app-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f compose.yml
    kompose.version: 1.34.0 (HEAD)
  labels:
    io.kompose.service: app-frontend
  name: app-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app-frontend
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f compose.yml
        kompose.version: 1.34.0 (HEAD)
      labels:
        io.kompose.service: app-frontend
    spec:
      containers:
        - env:
            - name: API_GATEWAY_BASE_URL
              value: http://edge-thinghy:9000
          image: my-image-I-test
          name: app-frontend
          ports:
            -  name: app-frontend
               containerPort: 3000
               protocol: TCP
      imagePullSecrets:
        - name: ghcr-secret
      restartPolicy: Always

#010-app-service.yml
apiVersion: v1
kind: Service
metadata:
  name: app-frontend

spec:
  ports:
    - name: app-frontend
      port: 80
      targetPort: 3000

  selector:
    app: app-frontend

#011-app-ingress.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-frontend
            port:
              name: app-frontend

#012-challenge.yml

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: app-challenge
  namespace: default
spec:
  acme:
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: app-issuer-account-key
    solvers:
      - http01:
          ingress:
            class: traefik

#013-ingress-rule.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ssl-ingress
  namespace: default
  annotations:
    cert-manager.io/issuer: "app-challenge"
spec:
  tls:
    - hosts:
        - app.domain.example
      secretName: tls-app-ingress-http
  rules:
    - host: app.domain.example
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-frontend
                port:
                  name: app-frontend

Since the certificate was issued, I expected Traefik to work automatically, but when I go to https://app.domain.example it times out. I suppose I'm doing something wrong. If I open the Traefik pod logs I can see:

ERR Skipping service: no endpoints found ingress=app-ingress namespace=default providerName=kubernetes serviceName=app-frontend servicePort=&ServiceBackendPort{Name:app-frontend,Number:0,}
ERR Skipping service: no endpoints found ingress=app-ssl-ingress namespace=default providerName=kubernetes serviceName=app-frontend servicePort=&ServiceBackendPort{Name:app-frontend,Number:0,}

even though I can reach http://app.domain.example, just not HTTPS. And if I do:

kubectl get ingress
NAME               CLASS     HOSTS              ADDRESS   PORTS 
app-ingress       traefik   *                            80     
app-ssl-ingress   traefik   app.domain.example           80, 443
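
The Traefik errors point at the actual problem: "no endpoints found" means the Service selects no pods. In #010-app-service.yml the selector is app: app-frontend, but the pod template in #001-app-deployment.yml labels its pods io.kompose.service: app-frontend, so the Service never gets endpoints and Traefik skips it for both routers. A sketch of the Service with the selector aligned to the pod labels:

apiVersion: v1
kind: Service
metadata:
  name: app-frontend
spec:
  ports:
    - name: app-frontend
      port: 80
      targetPort: 3000
  selector:
    io.kompose.service: app-frontend   # must match the pod template's labels

Once kubectl get endpoints app-frontend lists the pod IP, both app-ingress and app-ssl-ingress should route, and the HTTPS side can serve the issued certificate.
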
kubernetes
  • 1 Answer
  • 57 Views
Malkavian
Asked: 2024-12-11 20:32:16 +0800 CST

Kubernetes: configuring Traefik v3 as an ingress provider


I have a running Kubernetes cluster and want to expose a service. I have a TestApplication running on TestPort 3000. How do I configure Traefik to act as an ingress provider? From the documentation I don't understand where I'm supposed to specify:

providers:
  kubernetesIngress: {}


Source: https://doc.traefik.io/traefik/providers/kubernetes-ingress/

Is this enough to specify that it must act as a Kubernetes ingress?

kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik

spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
        - name: traefik
          image: traefik:v3.2
          args:
            - --api.insecure
            - --providers.kubernetesingress
          ports:
            - name: web
              containerPort: 80
            - name: dashboard
              containerPort: 8080
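
The --providers.kubernetesingress flag in the args above is the CLI equivalent of the providers: kubernetesIngress: {} snippet from the documentation (Traefik's static configuration can be given as a file, CLI arguments, or environment variables), so this Deployment does enable the ingress provider, provided its ServiceAccount has the usual RBAC for services, endpointslices and ingresses. To make individual Ingress objects select Traefik explicitly, it is common to also register an IngressClass; a sketch:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
spec:
  controller: traefik.io/ingress-controller   # the controller name Traefik's provider matches

An Ingress then opts in with spec.ingressClassName: traefik.
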
kubernetes
  • 1 Answer
  • 30 Views
