
eagercoder's questions

eagercoder
Asked: 2024-02-09 03:05:50 +0800 CST

How can I get Cilium to pass the failing connectivity tests?


I am trying to deploy Cilium to my EKS cluster. For context, it is a private cluster running behind private subnets, routed to the internet through a NAT gateway and an internet gateway. I have been able to follow the Cilium installation guide here. My nodes are tainted, and I have patched the DaemonSet as the docs require.
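
For illustration, a DaemonSet patch of this general shape is what is meant here (a sketch only; the blanket Exists toleration is an assumption standing in for whatever taint key the guide actually specifies):

# Sketch: tolerate all taints so the cilium agent can schedule onto the tainted nodes.
kubectl -n kube-system patch daemonset cilium --type merge \
  -p '{"spec":{"template":{"spec":{"tolerations":[{"operator":"Exists"}]}}}}'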

When I run cilium status, everything looks fine:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
Containers:       cilium             Running: 3
                  cilium-operator    Running: 2
Cluster Pods:     2/2 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.15.0@sha256:9cfd6a0a3a964780e73a11159f93cc363e616f7d9783608f62af6cfdf3759619: 3
                  cilium-operator    quay.io/cilium/operator-aws:v1.15.0@sha256:cf45167a8bb336c763046553c6a97c0d7f12f7e2a498dfb2340fa27832a81b3a: 2

However, when I run cilium connectivity test, not all of the tests pass. The errors are shown below.

❌ 4/42 tests failed (30/321 actions), 13 tests skipped, 1 scenarios skipped:
Test [no-policies]:
  ❌ no-policies/pod-to-host/ping-ipv4-1: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> <NODE_IP> (<NODE_IP>:0)
  ❌ no-policies/pod-to-host/ping-ipv4-3: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> <NODE_IP> (<NODE_IP>:0)
  ❌ no-policies/pod-to-host/ping-ipv4-5: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> <NODE_IP> (<NODE_IP>:0)
  ❌ no-policies/pod-to-host/ping-ipv4-7: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> <NODE_IP> (<NODE_IP>:0)
  ❌ no-policies/pod-to-host/ping-ipv4-9: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> <NODE_IP> (<NODE_IP>:0)
  ❌ no-policies/pod-to-host/ping-ipv4-11: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> <NODE_IP> (<NODE_IP>:0)
Test [no-policies-extra]:
  ❌ no-policies-extra/pod-to-remote-nodeport/curl-0: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> cilium-test/echo-other-node (echo-other-node:8080)
  ❌ no-policies-extra/pod-to-remote-nodeport/curl-1: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> cilium-test/echo-other-node (echo-other-node:8080)
  ❌ no-policies-extra/pod-to-remote-nodeport/curl-2: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> cilium-test/echo-same-node (echo-same-node:8080)
  ❌ no-policies-extra/pod-to-remote-nodeport/curl-3: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> cilium-test/echo-same-node (echo-same-node:8080)
  ❌ no-policies-extra/pod-to-remote-nodeport/curl-4: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> cilium-test/echo-other-node (echo-other-node:8080)
  ❌ no-policies-extra/pod-to-remote-nodeport/curl-5: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> cilium-test/echo-other-node (echo-other-node:8080)
  ❌ no-policies-extra/pod-to-remote-nodeport/curl-6: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> cilium-test/echo-same-node (echo-same-node:8080)
  ❌ no-policies-extra/pod-to-remote-nodeport/curl-7: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> cilium-test/echo-same-node (echo-same-node:8080)
  ❌ no-policies-extra/pod-to-local-nodeport/curl-0: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> cilium-test/echo-other-node (echo-other-node:8080)
  ❌ no-policies-extra/pod-to-local-nodeport/curl-1: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> cilium-test/echo-same-node (echo-same-node:8080)
  ❌ no-policies-extra/pod-to-local-nodeport/curl-2: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> cilium-test/echo-other-node (echo-other-node:8080)
  ❌ no-policies-extra/pod-to-local-nodeport/curl-3: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> cilium-test/echo-same-node (echo-same-node:8080)
Test [allow-all-except-world]:
  ❌ allow-all-except-world/pod-to-host/ping-ipv4-1: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> 18.130.173.145 (<NODE_IP>:0)
  ❌ allow-all-except-world/pod-to-host/ping-ipv4-3: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> 18.171.241.88 (<NODE_IP>:0)
  ❌ allow-all-except-world/pod-to-host/ping-ipv4-5: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> 13.40.120.114 (<NODE_IP>:0)
  ❌ allow-all-except-world/pod-to-host/ping-ipv4-7: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> 18.130.173.145 (<NODE_IP>:0)
  ❌ allow-all-except-world/pod-to-host/ping-ipv4-9: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> 18.171.241.88 (<NODE_IP>:0)
  ❌ allow-all-except-world/pod-to-host/ping-ipv4-11: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> 13.40.120.114 (<NODE_IP>:0)
Test [host-entity]:
  ❌ host-entity/pod-to-host/ping-ipv4-1: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> <NODE_IP> (<NODE_IP>:0)
  ❌ host-entity/pod-to-host/ping-ipv4-3: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> <NODE_IP> (<NODE_IP>:0)
  ❌ host-entity/pod-to-host/ping-ipv4-5: cilium-test/client-846d67868c-mpfrc (10.0.1.217) -> <NODE_IP> (<NODE_IP>:0)
  ❌ host-entity/pod-to-host/ping-ipv4-7: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> <NODE_IP> (<NODE_IP>:0)
  ❌ host-entity/pod-to-host/ping-ipv4-9: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> <NODE_IP> (<NODE_IP>:0)
  ❌ host-entity/pod-to-host/ping-ipv4-11: cilium-test/client2-865b7d7b6f-469vq (10.0.1.178) -> <NODE_IP> (<NODE_IP>:0)
connectivity test failed: 4 tests failed

Question

How can I fix this and get Cilium working?

PS: I have swapped the node IP addresses out for the <NODE_IP> placeholder just for posting this question.

amazon-web-services
  • 1 answer
  • 22 views
eagercoder
Asked: 2022-10-25 09:03:19 +0800 CST

How do I add a security group as an inbound rule to another security group in Terraform?


I have a Terraform codebase that deploys a private EKS cluster, a bastion host, and other AWS services. I have also added some security groups in Terraform. One of them allows inbound traffic from my home IP to the bastion host so that I can SSH into that node. This security group is called bastionSG, and it works fine.

However, initially I could not run kubectl from the bastion host, which is the node I use for Kubernetes development against the EKS cluster nodes. The reason is that my EKS cluster is private and only allows communication from nodes within the same VPC, so I needed to add a security group allowing communication from my bastion host to the cluster control plane, which is what my bastionSG security group is for.

So my routine at the moment is: once Terraform has deployed everything, I find the auto-generated EKS security group and add my bastionSG to it as an inbound rule through the AWS console (UI), as shown in the screenshot below.

[Screenshot: bastionSG added as an inbound rule to the auto-generated EKS security group in the AWS console]

I do not want to do this through the UI, since I am already using Terraform to deploy my entire infrastructure.

I know I can query the existing security group like this:

data "aws_security_group" "selectedSG" {
  id = var.security_group_id
}

In this case, assume selectedSG is the security group created by EKS once Terraform has finished applying. I then want to add an inbound rule for bastionSG to it without overwriting the other rules that EKS added automatically. Something like the standalone rule sketched below is what I have in mind.
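
A minimal sketch of that idea, using a standalone aws_security_group_rule resource to append to the existing group without touching its other rules (the resource name, port, and description are hypothetical; port 443 assumes the target is the control-plane API endpoint):

# Hypothetical: appends one ingress rule to the EKS-created group,
# allowing HTTPS from bastionSG, without replacing existing rules.
resource "aws_security_group_rule" "bastion_to_eks" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = data.aws_security_group.selectedSG.id
  source_security_group_id = var.bastionSG_id
  description              = "Allow bastion host to reach the EKS control plane"
}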

Update: EKS node group

resource "aws_eks_node_group" "flmd_node_group" {
  cluster_name    = var.cluster_name
  node_group_name = var.node_group_name
  node_role_arn   = var.node_pool_role_arn
  subnet_ids      = [var.flmd_private_subnet_id]
  instance_types = ["t2.small"]

  scaling_config {
    desired_size = 3
    max_size     = 3
    min_size     = 3
  }

  update_config {
    max_unavailable = 1
  }

  remote_access {
    ec2_ssh_key = "MyPemFile"
    source_security_group_ids = [
      var.allow_tls_id,
      var.allow_http_id,
      var.allow_ssh_id,
      var.bastionSG_id,
    ]
  }

  tags = {
    "Name" = "flmd-eks-node"
  }
}

As shown above, the EKS node group includes the bastionSG security group. The intent is to allow connections from my bastion host to the EKS control plane.

EKS cluster

resource "aws_eks_cluster" "flmd_cluster" {
  name     = var.cluster_name
  role_arn = var.role_arn

  vpc_config {
    subnet_ids              = [var.flmd_private_subnet_id, var.flmd_public_subnet_id, var.flmd_public_subnet_2_id]
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = [var.bastionSG_id]
  }
}
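
Incidentally, the aws_eks_cluster resource also exports the security group that EKS creates for the cluster, which might avoid the data-source lookup above entirely (attribute per the AWS provider docs; worth verifying against your provider version):

# Reference the EKS-created cluster security group directly:
security_group_id = aws_eks_cluster.flmd_cluster.vpc_config[0].cluster_security_group_id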

bastionSG_id is an output of the security group created below, and it is passed into the code above as a variable (a sketch of that wiring follows the resource block).

bastionSG security group

resource "aws_security_group" "bastionSG" {
  name        = "Home to bastion"
  description = "Allow SSH - Home to Bastion"
  vpc_id      = var.vpc_id

  ingress {
    description      = "Home to bastion"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["<MY HOME IP address>"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "Home to bastion"
  }
}
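
For completeness, the wiring implied above would look roughly like this, assuming bastionSG lives in its own module whose output feeds var.bastionSG_id (the module name network is hypothetical):

# In the module that defines the security group:
output "bastionSG_id" {
  value = aws_security_group.bastionSG.id
}

# In the root configuration, passing it to the code shown earlier:
# bastionSG_id = module.network.bastionSG_id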
kubernetes terraform
  • 1 answer
  • 40 views

Sidebar

Stats

  • 问题 205573
  • 回答 270741
  • 最佳答案 135370
  • 用户 68524
  • 热门
  • 回答
  • Marko Smith

    新安装后 postgres 的默认超级用户用户名/密码是什么?

    • 5 个回答
  • Marko Smith

    SFTP 使用什么端口?

    • 6 个回答
  • Marko Smith

    命令行列出 Windows Active Directory 组中的用户?

    • 9 个回答
  • Marko Smith

    什么是 Pem 文件,它与其他 OpenSSL 生成的密钥文件格式有何不同?

    • 3 个回答
  • Marko Smith

    如何确定bash变量是否为空?

    • 15 个回答
  • Martin Hope
    Tom Feiner 如何按大小对 du -h 输出进行排序 2009-02-26 05:42:42 +0800 CST
  • Martin Hope
    Noah Goodrich 什么是 Pem 文件,它与其他 OpenSSL 生成的密钥文件格式有何不同? 2009-05-19 18:24:42 +0800 CST
  • Martin Hope
    Brent 如何确定bash变量是否为空? 2009-05-13 09:54:48 +0800 CST
  • Martin Hope
    cletus 您如何找到在 Windows 中打开文件的进程? 2009-05-01 16:47:16 +0800 CST

热门标签

linux nginx windows networking ubuntu domain-name-system amazon-web-services active-directory apache-2.4 ssh

Explore

  • 主页
  • 问题
    • 最新
    • 热门
  • 标签
  • 帮助

Footer

AskOverflow.Dev

关于我们

  • 关于我们
  • 联系我们

Legal Stuff

  • Privacy Policy

Language

  • Pt
  • Server
  • Unix

© 2023 AskOverflow.DEV All Rights Reserve