Is it possible to delete a snapshot from a QCOW2 image without shutting down the virtual machine (QEMU+KVM)? Would running `qemu-img snapshot -d ...` break data consistency? And if it would, what about running `virsh pause ...` before deleting the snapshot and `virsh resume ...` afterwards, would that still break consistency? What do you think?
Thanks!
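In case pausing does make the deletion safe, the sequence I have in mind would look something like this (the VM name, snapshot name, and image path are all hypothetical, adjust to your setup):

```shell
# Hypothetical VM name and image path -- not from a real setup.
VM=myvm
IMG=/var/lib/libvirt/images/myvm.qcow2

virsh pause "$VM"                    # freeze the vCPUs so the guest stops writing
qemu-img snapshot -l "$IMG"          # list internal snapshots in the image
qemu-img snapshot -d snap1 "$IMG"    # delete the snapshot named snap1
virsh resume "$VM"                   # unfreeze the guest
```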
I need to establish a VPN connection between a network and a Kubernetes cluster, so that applications hosted in that network can reach K8S services through a secure tunnel.

So, I have a bunch of K8S nodes in a self-hosted environment. I added a separate server to this environment that works as a VPN gateway; it is connected to the same VLAN as the cluster nodes. The nodes have the IP addresses 10.13.17.1/22, 10.13.17.2/22, 10.13.17.3/22 and so on, and the VPN gateway has 10.13.16.253/22.

The cluster IP CIDR is 10.233.0.0/18 and the pod IP CIDR is 10.233.64.0/18.

The VPN server maintains an IPSec site-to-site connection with a remote network, 10.103.103.0/24. I use Calico as the network manager, so I set up my VPN server to keep BGP sessions with all the K8S nodes. The VPN server's routing table is full of prefixes announced by the Calico nodes (10.233.0.0/18 is there too, of course), and the cluster nodes have 10.103.103.0/24 and a few other networks in their routing tables, so BGP seems to be working fine. So far, so good...
When I establish a connection from the VPN server to a service inside the cluster, everything works. The client (10.13.16.253) sends a SYN packet to the service (10.233.10.101:1337), a worker receives this packet, changes its destination IP address to the pod's IP address (10.233.103.49:1337) and changes its source IP address to an address (10.233.110.0) that lets the worker receive the reply and return it to the connection initiator. Here is what happens on the worker that receives this SYN packet.

The SYN packet arrives at the worker:
22:04:25.866546 IP 10.13.16.253.56297 > 10.233.10.101.1337: Flags [S], seq 3575679444, win 65228, options [mss 1460,nop,wscale 7,sackOK,TS val 1385938010 ecr 0], length 0
The SYN packet is SNATed and DNATed, then sent to the worker that runs the pod:
22:04:25.866656 IP 10.233.110.0.54430 > 10.233.103.49.1337: Flags [S], seq 3575679444, win 65228, options [mss 1460,nop,wscale 7,sackOK,TS val 1385938010 ecr 0], length 0
The reply comes back:
22:04:25.867313 IP 10.233.103.49.1337 > 10.233.110.0.54430: Flags [S.], seq 2017844946, ack 3575679445, win 28960, options [mss 1460,sackOK,TS val 1201488363 ecr 1385938010,nop,wscale 7], length 0
The reply is de-SNATed and de-DNATed and sent back to the connection initiator:
22:04:25.867533 IP 10.233.10.101.1337 > 10.13.16.253.56297: Flags [S.], seq 2017844946, ack 3575679445, win 28960, options [mss 1460,sackOK,TS val 1201488363 ecr 1385938010,nop,wscale 7], length 0
So the connection gets established and everybody is happy.
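For reference, the SNAT/DNAT translations described above can also be observed in the connection-tracking table on the receiving worker (a diagnostic sketch; the addresses are taken from the capture above and the `conntrack` tool must be installed):

```shell
# List tracked TCP connections to the service port and show the NATed tuples.
conntrack -L -p tcp --dport 1337 | grep 10.233.10.101
```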
But when I try to connect to the same service from the external network (10.103.103.0/24), the worker that receives the SYN packet does not change the source IP address; it only changes the destination IP address, so the packet's source IP stays unchanged.

The SYN packet arrives at the worker:
21:56:05.794171 IP 10.103.103.1.52132 > 10.233.10.101.1337: Flags [S], seq 3759345254, win 29200, options [mss 1460,sackOK,TS val 195801472 ecr 0,nop,wscale 7], length 0
The SYN packet is DNATed and resent to the worker that runs the pod:
21:56:05.794242 IP 10.103.103.1.52132 > 10.233.103.49.1337: Flags [S], seq 3759345254, win 29200, options [mss 1460,sackOK,TS val 195801472 ecr 0,nop,wscale 7], length 0
And there is no reply. :-(

So I can see that the destination IP address got changed, and I can see these packets on the worker that runs the pod, but there are no replies to them:
21:56:05.794602 IP 10.103.103.1.52132 > 10.233.103.49.1337: Flags [S], seq 3759345254, win 29200, options [mss 1460,sackOK,TS val 195801472 ecr 0,nop,wscale 7], length 0
The VPN server announces the external network (10.103.103.0/24) via BGP, so all the workers know that this network is reachable via 10.13.16.253. When I run a ping test from a host in the external network (10.103.103.1) to the service's IP address (10.233.10.101), the test passes, so the VPN works and the routing tables seem to be correct.
So, why does the network "trust" 10.13.16.253 but not 10.103.103.1? Why do the workers perform SNAT and DNAT on packets coming from 10.13.16.253 but not on packets coming from 10.103.103.1? Should I add some policy to allow this traffic?

Thanks in advance for any clues!
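In case it helps, this is roughly how I have been inspecting the NAT rules and Calico pools on the workers while debugging (a diagnostic sketch, not a fix; requires root and `calicoctl`):

```shell
# Dump the nat table programmed by kube-proxy/Calico and look for
# masquerade rules or anything mentioning the external network.
iptables -t nat -L -n -v | grep -E 'MASQ|10\.103\.103'

# Show the Calico IP pools, including whether natOutgoing is enabled.
calicoctl get ippool -o yaml
```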
I'd like to set up horizontal autoscaling for a deployment based on the metrics of an ingress controller deployed in another namespace.

I have a deployment (petclinic) in a namespace (petclinic).

I have an ingress controller (nginx-ingress) deployed in another namespace (nginx-ingress).

The ingress controller was deployed with Helm and Tiller, so I have the following ServiceMonitor entity:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"creationTimestamp":"2019-08-19T10:48:00Z","generation":5,"labels":{"app":"nginx-ingress","chart":"nginx-ingress-1.12.1","component":"controller","heritage":"Tiller","release":"nginx-ingress"},"name":"nginx-ingress-controller","namespace":"nginx-ingress","resourceVersion":"7391237","selfLink":"/apis/monitoring.coreos.com/v1/namespaces/nginx-ingress/servicemonitors/nginx-ingress-controller","uid":"0217c466-5b78-4e38-885a-9ee65deb2dcd"},"spec":{"endpoints":[{"interval":"30s","port":"metrics"}],"namespaceSelector":{"matchNames":["nginx-ingress"]},"selector":{"matchLabels":{"app":"nginx-ingress","component":"controller","release":"nginx-ingress"}}}}
  creationTimestamp: "2019-08-21T13:12:00Z"
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: nginx-ingress
  resourceVersion: "7663160"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/nginx-ingress/servicemonitors/nginx-ingress-controller
  uid: 33421be7-108b-4b81-9673-05db140364ce
spec:
  endpoints:
  - interval: 30s
    port: metrics
  namespaceSelector:
    matchNames:
    - nginx-ingress
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: nginx-ingress
I also have a Prometheus Operator instance, which found this entity and updated Prometheus' configuration with this stanza:
- job_name: nginx-ingress/nginx-ingress-controller/0
  honor_labels: false
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - nginx-ingress
  scrape_interval: 30s
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_app
    regex: nginx-ingress
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_component
    regex: controller
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_release
    regex: nginx-ingress
  - action: keep
    source_labels:
    - __meta_kubernetes_endpoint_port_name
    regex: metrics
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Node;(.*)
    replacement: ${1}
    target_label: node
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Pod;(.*)
    replacement: ${1}
    target_label: pod
  - source_labels:
    - __meta_kubernetes_namespace
    target_label: namespace
  - source_labels:
    - __meta_kubernetes_service_name
    target_label: service
  - source_labels:
    - __meta_kubernetes_pod_name
    target_label: pod
  - source_labels:
    - __meta_kubernetes_service_name
    target_label: job
    replacement: ${1}
  - target_label: endpoint
    replacement: metrics
I also have a Prometheus-Adapter instance, so I have the custom.metrics.k8s.io API in the list of available APIs.

The metrics are being collected and exposed, so the following command:
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests" | jq .
gives this result:
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Ingress",
        "namespace": "nginx-ingress",
        "name": "petclinic",
        "apiVersion": "extensions/v1beta1"
      },
      "metricName": "nginx_ingress_controller_requests",
      "timestamp": "2019-08-20T12:56:50Z",
      "value": "11"
    }
  ]
}
So far so good, right?

Now I need to create an HPA entity for my deployment. Something like this:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: petclinic
  namespace: petclinic
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: petclinic
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metricName: nginx_ingress_controller_requests
      target:
        apiVersion: extensions/v1beta1
        kind: Ingress
        name: petclinic
      targetValue: 10k
Of course this is incorrect, since nginx_ingress_controller_requests relates to the nginx-ingress namespace, so it doesn't work (well, just as expected):
annotations:
  autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2019-08-19T18:43:42Z","reason":"SucceededGetScale","message":"the
    HPA controller was able to get the target''s current scale"},{"type":"ScalingActive","status":"False","lastTransitionTime":"2019-08-19T18:55:26Z","reason":"FailedGetObjectMetric","message":"the
    HPA was unable to compute the replica count: unable to get metric nginx_ingress_controller_requests:
    Ingress on petclinic petclinic/unable to fetch metrics
    from custom metrics API: the server could not find the metric nginx_ingress_controller_requests
    for ingresses.extensions petclinic"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2019-08-19T18:43:42Z","reason":"DesiredWithinRange","message":"the
    desired count is within the acceptable range"}]'
  autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":""},{"type":"Resource","resource":{"name":"cpu","currentAverageUtilization":1,"currentAverageValue":"10m"}}]'
  autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Object","object":{"target":{"kind":"Ingress","name":"petclinic","apiVersion":"extensions/v1beta1"},"metricName":"nginx_ingress_controller_requests","targetValue":"10k"}}]'
  kubectl.kubernetes.io/last-applied-configuration: |
    {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"petclinic","namespace":"petclinic"},"spec":{"maxReplicas":10,"metrics":[{"object":{"metricName":"nginx_ingress_controller_requests","target":{"apiVersion":"extensions/v1beta1","kind":"Ingress","name":"petclinic"},"targetValue":"10k"},"type":"Object"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"petclinic"}}}
Here's what I see in the Prometheus-Adapter log file:
I0820 15:42:13.467236 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/petclinic/ingresses.extensions/petclinic/nginx_ingress_controller_requests: (6.124398ms) 404 [[kube-controller-manager/v1.15.1 (linux/amd64) kubernetes/4485c6f/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.103.98.0:37940]
The HPA looks for this metric in the deployment's namespace, but I need it to fetch it from the nginx-ingress namespace, like this:
I0820 15:44:40.044797 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests: (2.210282ms) 200 [[kubectl/v1.15.2 (linux/amd64) kubernetes/f627830] 10.103.97.0:35142]
Alas, the autoscaling/v2beta1 API has no spec.metrics.object.target.namespace field, so I can't "ask" it to fetch the value from another namespace. :-(
Can anyone help me solve this puzzle? Is there a way to set up autoscaling based on a custom metric that belongs to another namespace?

Or maybe there is a way to make this metric available in the same namespace this ingress.extension belongs to?

Thanks in advance for any clues and hints.
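One idea I've been toying with is a Prometheus-Adapter rule that maps one of the metric's labels onto the namespace resource, so the metric would show up under petclinic instead of nginx-ingress. This is only a sketch: the `exported_namespace` label name is an assumption on my part (it depends on how honor_labels renames the controller's own namespace label), so I'm not at all sure it works:

```yaml
rules:
- seriesQuery: 'nginx_ingress_controller_requests'
  resources:
    overrides:
      # ASSUMPTION: the ingress's own namespace ends up in exported_namespace
      exported_namespace: {resource: "namespace"}
      ingress: {resource: "ingress"}
  name:
    as: "nginx_ingress_controller_requests"
  metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```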
Is it possible to define an ingress (ingresses.networking.k8s.io) that forwards requests to a specific static IP address in a private network?

I have a service that doesn't run on the K8S cluster; it resides in a private network that all K8S pods can access. There are a lot of auxiliary services in this network that, for various reasons, I don't want to deploy into the K8S cluster. I'd like to expose some of these services via ingress-nginx, but at the moment I don't see a way to define a backend as a static IP address.

Is that possible?

Thanks!
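The closest thing I've sketched so far is a selector-less Service with a manually maintained Endpoints object, which an Ingress could then reference like any other Service (the name, port, and IP below are made-up examples):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-app        # hypothetical name
spec:
  ports:
  - port: 8080            # no selector: endpoints are managed by hand
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-app        # must match the Service name exactly
subsets:
- addresses:
  - ip: 192.0.2.10        # the static IP in the private network (example)
  ports:
  - port: 8080
```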
I have a question about how to partition access to the same Gluster between different K8S namespaces. Let's say I have 3 different volumes in one Gluster (vol-a, vol-b, vol-c) and I want to allow each namespace access to exactly one of them (namespace-a to vol-a, namespace-b to vol-b, namespace-c to vol-c).

Is it possible to implement such a scheme? Maybe there is a way to authenticate Gluster clients with some username and password? Is there a way to define these credentials in the endpoints configuration?

Thanks in advance!
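For context, I'm currently consuming the volumes through PersistentVolumes along these lines, one PV per Gluster volume, pre-bound to a claim in the matching namespace via claimRef (names are from the example above; the endpoints object name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol-a
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # Endpoints object listing the Gluster node IPs
    path: vol-a                    # Gluster volume name
    readOnly: false
  claimRef:                        # pre-bind this PV to a claim in namespace-a
    namespace: namespace-a
    name: vol-a-claim
```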
I've run into a really weird problem that I'd like to share with you. Maybe you can help me come up with some ideas about what's going on.

There are 3 virtual machines on a KVM-driven host. Actually, there are about 50 VMs there and all of them work just fine; it's these 3 that behave in a somewhat unusual way. When everything is OK, a TCP session between them (a "GET / HTTP/1.0" request answered with "HTTP 200 OK") looks like this:
00:58:43.885118 IP 192.168.111.2.55480 > 192.168.113.2.http: Flags [S], seq 926382744, win 14600, options [mss 1460,sackOK,TS val 277997 ecr 0,nop,wscale 7], length 0
00:58:43.885380 IP 192.168.113.2.http > 192.168.111.2.55480: Flags [S.], seq 1849545379, ack 926382745, win 14480, options [mss 1460,sackOK,TS val 3702103 ecr 277997,nop,wscale 7], length 0
00:58:43.885957 IP 192.168.111.2.55480 > 192.168.113.2.http: Flags [.], ack 1, win 115, options [nop,nop,TS val 277998 ecr 3702103], length 0
00:58:43.886000 IP 192.168.111.2.55480 > 192.168.113.2.http: Flags [P.], seq 1:213, ack 1, win 115, options [nop,nop,TS val 277998 ecr 3702103], length 212
00:58:43.886061 IP 192.168.113.2.http > 192.168.111.2.55480: Flags [.], ack 213, win 122, options [nop,nop,TS val 3702104 ecr 277998], length 0
00:58:43.922286 IP 192.168.113.2.http > 192.168.111.2.55480: Flags [P.], seq 1:372, ack 213, win 122, options [nop,nop,TS val 3702140 ecr 277998], length 371
00:58:43.922335 IP 192.168.113.2.http > 192.168.111.2.55480: Flags [F.], seq 372, ack 213, win 122, options [nop,nop,TS val 3702140 ecr 277998], length 0
00:58:43.923150 IP 192.168.111.2.55480 > 192.168.113.2.http: Flags [.], ack 372, win 123, options [nop,nop,TS val 278035 ecr 3702140], length 0
00:58:43.923622 IP 192.168.111.2.55480 > 192.168.113.2.http: Flags [F.], seq 213, ack 373, win 123, options [nop,nop,TS val 278036 ecr 3702140], length 0
00:58:43.923671 IP 192.168.113.2.http > 192.168.111.2.55480: Flags [.], ack 214, win 122, options [nop,nop,TS val 3702142 ecr 278036], length 0
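For reference, the traces in this post were captured on the KVM host with something like the following (the bridge interface name is hypothetical):

```shell
# Capture HTTP traffic between the two guests on the host-side interface.
tcpdump -ni vnet0 'tcp and host 192.168.113.2 and port 80'
```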
OK, so far so good.

Then we save the pfSense configuration, destroy that VM, create a new one, install pfSense from scratch and restore its configuration from the backup file.

Here's what we see afterwards:
00:46:39.218193 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [S], seq 3622924060, win 14600, options [mss 1460,sackOK,TS val 674608862 ecr 0,nop,wscale 7], length 0
00:46:39.218316 IP 192.168.113.2.http > 192.168.111.2.51674: Flags [S.], seq 152904245, ack 3622924061, win 14480, options [mss 1460,sackOK,TS val 2977436 ecr 674608862,nop,wscale 7], length 0
00:46:39.218570 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [.], ack 1, win 115, options [nop,nop,TS val 674608862 ecr 2977436], length 0
00:46:40.417623 IP 192.168.113.2.http > 192.168.111.2.51674: Flags [S.], seq 152904245, ack 3622924061, win 14480, options [mss 1460,sackOK,TS val 2978636 ecr 674608862,nop,wscale 7], length 0
00:46:40.417947 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [.], ack 1, win 115, options [nop,nop,TS val 674610062 ecr 2977436], length 0
00:46:43.158907 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674612803 ecr 2977436], length 16
00:46:43.360103 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674613004 ecr 2977436], length 16
00:46:43.761787 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674613406 ecr 2977436], length 16
00:46:44.565890 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674614210 ecr 2977436], length 16
00:46:46.174039 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674615818 ecr 2977436], length 16
00:46:49.389921 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674619034 ecr 2977436], length 16
00:46:51.753723 IP 192.168.113.2.http > 192.168.111.2.51672: Flags [F.], seq 1, ack 1, win 114, options [nop,nop,TS val 2989972 ecr 674560137], length 0
00:46:55.821824 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674625466 ecr 2977436], length 16
00:46:57.221625 IP 192.168.113.2.http > 192.168.111.2.51672: Flags [F.], seq 1, ack 1, win 114, options [nop,nop,TS val 2995440 ecr 674560137], length 0
00:47:08.157575 IP 192.168.113.2.http > 192.168.111.2.51672: Flags [F.], seq 1, ack 1, win 114, options [nop,nop,TS val 3006376 ecr 674560137], length 0
00:47:08.685886 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674638330 ecr 2977436], length 16
00:47:30.029609 IP 192.168.113.2.http > 192.168.111.2.51672: Flags [F.], seq 1, ack 1, win 114, options [nop,nop,TS val 3028248 ecr 674560137], length 0
00:47:34.413785 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [P.], seq 1:17, ack 1, win 115, options [nop,nop,TS val 674664058 ecr 2977436], length 16
00:47:40.478757 IP 192.168.113.2.http > 192.168.111.2.51674: Flags [F.], seq 1, ack 1, win 114, options [nop,nop,TS val 3038697 ecr 674610062], length 0
00:47:40.479216 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [FP.], seq 17:19, ack 2, win 115, options [nop,nop,TS val 674670123 ecr 3038697], length 2
00:47:45.946604 IP 192.168.113.2.http > 192.168.111.2.51674: Flags [F.], seq 1, ack 1, win 114, options [nop,nop,TS val 3044165 ecr 674610062], length 0
00:47:45.946979 IP 192.168.111.2.51674 > 192.168.113.2.http: Flags [.], ack 2, win 115, options [nop,nop,TS val 674675591 ecr 3044165,nop,nop,sack 1 {1:2}], length 0
It looks like... I don't know, like they just can't hear each other. They can ping each other, and even interact to some extent, but it looks like they simply ignore some of the packets.

Both VMs show the same capture, so pfSense isn't dropping any packets. And yet something seems to go wrong with the packets along the way, as if something was mangling them.

This is something I really can't wrap my head around. It would be great if you could share any ideas.

Thanks in advance, everyone!