Looking at the Kubernetes development documentation, I can see that additional logging is available on nodes. Ultimately, I'm trying to enable debug or trace logging on the kubelet so I can troubleshoot the problem I'm running into, but I can't find any guidance anywhere on how to adjust it.
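The only knob I'm aware of is the kubelet's --v verbosity flag (glog-style levels; --v=4 and up get quite chatty). This is a rough sketch of what I've been trying, assuming the kubelet runs as a systemd unit; the KUBELET_EXTRA_ARGS variable name and the override mechanism are assumptions on my part and will differ depending on how the node was provisioned:
# Sketch: raise kubelet verbosity via a systemd override (variable name assumed)
sudo systemctl edit kubelet
#   [Service]
#   Environment="KUBELET_EXTRA_ARGS=--v=4"
sudo systemctl daemon-reload
sudo systemctl restart kubelet
journalctl -u kubelet -f    # follow the more verbose output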
Brett Larson's questions
For the past few months we have been running into an issue that is causing us a lot of pain.
The problem seems to be that, occasionally, when we request a pod through the Kubernetes executor it fails to be created.
For example, a Spark pod may fail with events like the following:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreatePodSandBox 20m (x3 over 32m) kubelet, k8s-agentpool1-123456789-vmss00000q (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "spark-worker-cc1d28bf3de8428a826c04471e58487c-8577d5d654-2jg89": operation timeout: context deadline exceeded
Normal SandboxChanged 16m (x150 over 159m) kubelet, k8s-agentpool1-123456789-vmss00000q Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 5m7s (x14 over 161m) kubelet, k8s-agentpool1-123456789-vmss00000q Failed create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "spark-worker-cc1d28bf3de8428a826c04471e58487c-8577d5d654-2jg89": operation timeout: context deadline exceeded
Looking at the logs, we see the kubelet's "SyncLoop" picking up the request for the new pod:
Jul 16 16:33:58 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: I0716 16:33:58.001997 4797 kubelet.go:1908] SyncLoop (ADD, "api"): "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)
There are also logs showing that the volumes were mounted...
Jul 16 16:34:29 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: I0716 16:33:58.175573 4797 reconciler.go:252] operationExecutor.MountVolume started for volume "default-shared" (UniqueName: "kubernetes.io/glusterfs/8272d74f-a7e7-11e9-8f1c-000d3a7b202b-default-shared") pod "d9b3911585c4461c9728aefa39716c44" (UID: "8272d74f-a7e7-11e9-8f1c-000d3a7b202b")
We then see that a new pod sandbox is going to be created:
Jul 16 16:34:29 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: I0716 16:34:29.627374 4797 kuberuntime_manager.go:397] No sandbox for pod "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)" can be found. Need to start a new one
After that, we don't seem to see anything else until this is logged:
Jul 16 16:36:29 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: E0716 16:36:29.629252 4797 kuberuntime_manager.go:662] createPodSandbox for pod "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "d9b3911585c4461c9728aefa39716c44": operation timeout: context deadline exceeded
Jul 16 16:36:29 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: E0716 16:36:29.629301 4797 pod_workers.go:190] Error syncing pod 8272d74f-a7e7-11e9-8f1c-000d3a7b202b ("d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)"), skipping: failed to "CreatePodSandbox" for "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"d9b3911585c4461c9728aefa39716c44\": operation timeout: context deadline exceeded"
Jul 16 16:36:43 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: I0716 16:36:43.937085 4797 kuberuntime_manager.go:397] No sandbox for pod "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)" can be found. Need to start a new one
Jul 16 16:36:43 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: E0716 16:36:43.940691 4797 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "d9b3911585c4461c9728aefa39716c44": Error response from daemon: Conflict. The container name "/k8s_POD_d9b3911585c4461c9728aefa39716c44_default_8272d74f-a7e7-11e9-8f1c-000d3a7b202b_0" is already in use by container "2a7ecfd3725bbe6604b3006abf6c59a36eb8a5d7142e71a3791f5f7378bf5e27". You have to remove (or rename) that container to be able to reuse that name.
Jul 16 16:36:43 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: E0716 16:36:43.940731 4797 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "d9b3911585c4461c9728aefa39716c44": Error response from daemon: Conflict. The container name "/k8s_POD_d9b3911585c4461c9728aefa39716c44_default_8272d74f-a7e7-11e9-8f1c-000d3a7b202b_0" is already in use by container "2a7ecfd3725bbe6604b3006abf6c59a36eb8a5d7142e71a3791f5f7378bf5e27". You have to remove (or rename) that container to be able to reuse that name.
Jul 16 16:36:43 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: E0716 16:36:43.940747 4797 kuberuntime_manager.go:662] createPodSandbox for pod "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "d9b3911585c4461c9728aefa39716c44": Error response from daemon: Conflict. The container name "/k8s_POD_d9b3911585c4461c9728aefa39716c44_default_8272d74f-a7e7-11e9-8f1c-000d3a7b202b_0" is already in use by container "2a7ecfd3725bbe6604b3006abf6c59a36eb8a5d7142e71a3791f5f7378bf5e27". You have to remove (or rename) that container to be able to reuse that name.
Jul 16 16:36:43 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: E0716 16:36:43.940805 4797 pod_workers.go:190] Error syncing pod 8272d74f-a7e7-11e9-8f1c-000d3a7b202b ("d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)"), skipping: failed to "CreatePodSandbox" for "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"d9b3911585c4461c9728aefa39716c44\": Error response from daemon: Conflict. The container name \"/k8s_POD_d9b3911585c4461c9728aefa39716c44_default_8272d74f-a7e7-11e9-8f1c-000d3a7b202b_0\" is already in use by container \"2a7ecfd3725bbe6604b3006abf6c59a36eb8a5d7142e71a3791f5f7378bf5e27\". You have to remove (or rename) that container to be able to reuse that name."
Jul 16 16:36:44 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: W0716 16:36:44.221607 4797 docker_sandbox.go:384] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "d9b3911585c4461c9728aefa39716c44_default": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "2a7ecfd3725bbe6604b3006abf6c59a36eb8a5d7142e71a3791f5f7378bf5e27"
Jul 16 16:36:44 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: I0716 16:36:44.222749 4797 kubelet.go:1953] SyncLoop (PLEG): "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)", event: &pleg.PodLifecycleEvent{ID:"8272d74f-a7e7-11e9-8f1c-000d3a7b202b", Type:"ContainerDied", Data:"2a7ecfd3725bbe6604b3006abf6c59a36eb8a5d7142e71a3791f5f7378bf5e27"}
Jul 16 16:36:44 k8s-agentpool1-123456789-vmss00000T kubelet[4797]: I0716 16:36:44.739387 4797 kuberuntime_manager.go:415] No ready sandbox for pod "d9b3911585c4461c9728aefa39716c44_default(8272d74f-a7e7-11e9-8f1c-000d3a7b202b)" can be found. Need to start a new one
We thought this might be a CNI error, but looking at the Azure CNI logs it doesn't even appear to get to the point where it starts requesting an IP; we only see the DEL command completing with err:<nil>:
2019/07/16 16:23:06 [net] Deleting veth pair azv4ea5d9d9527 eth0.
2019/07/16 16:23:06 [net] Deleted endpoint &{Id:8e963d34-eth0 HnsId: SandboxKey: IfName:eth0 HostIfName:azv4ea5d9d9527 MacAddress:ce:93:bf:4d:e9:19 InfraVnetIP:{IP:<nil> Mask:<nil>} IPAddresses:[{IP:10.250.18.225 Mask:fffff800}] Gateways:[10.250.16.1] DNS:{Suffix: Servers:[168.63.129.16]} Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} Src:<nil> Gw:10.250.16.1 Protocol:0 DevName: Scope:0}] VlanID:0 EnableSnatOnHost:false EnableInfraVnet:false EnableMultitenancy:false NetworkNameSpace:/proc/10781/ns/net ContainerID:8e963d340597f1c9f789b93a7784e8d44ffb00687086de8ee6561338aab7c72d PODName:jupyter-some-person-10 PODNameSpace:default InfraVnetAddressSpace:}.
2019/07/16 16:23:06 [net] Save succeeded.
2019/07/16 16:23:06 [cni] Calling plugin azure-vnet-ipam DEL nwCfg:&{CNIVersion:0.3.0 Name:azure Type:azure-vnet Mode:bridge Master: Bridge:azure0 LogLevel: LogTarget: InfraVnetAddressSpace: PodNamespaceForDualNetwork:[] MultiTenancy:false EnableSnatOnHost:false EnableExactMatchForPodName:false CNSUrl: Ipam:{Type:azure-vnet-ipam Environment: AddrSpace: Subnet:10.250.16.0/21 Address:10.250.18.225 QueryInterval:} DNS:{Nameservers:[] Domain: Search:[] Options:[]} RuntimeConfig:{PortMappings:[] DNS:{Servers:[] Searches:[] Options:[]}} AdditionalArgs:[]}.
2019/07/16 16:23:06 [cni] Plugin azure-vnet-ipam returned err:<nil>.
2019/07/16 16:23:06 Get number of endpoints for ifname eth0 network azure
2019/07/16 16:23:06 [cni-net] DEL command completed with err:<nil>.
2019/07/16 16:23:06 [cni-net] Plugin stopped.
2019/07/16 16:36:38 [cni-net] Plugin azure-vnet version v1.0.18.
2019/07/16 16:36:38 [cni-net] Running on Linux version 4.15.0-1040-azure (buildd@lgw01-amd64-030) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.10)) #44-Ubuntu SMP Thu Feb 21 14:24:01 UTC 2019
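For anyone following along, this is roughly how we've been pulling the CNI side of the story on the affected node; the log file paths are what I believe to be the Azure CNI defaults, so treat them as assumptions:
# Azure CNI plugin and IPAM logs on the node (paths assumed)
sudo grep "8e963d340597" /var/log/azure-vnet.log             # correlate by container ID
sudo grep -E "ADD|DEL command completed" /var/log/azure-vnet.log | tail -n 50
sudo tail -n 100 /var/log/azure-vnet-ipam.log                # IPAM allocate/release activity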
Here is the Docker version output:
Client:
Version: 3.0.3
API version: 1.40
Go version: go1.11.4
Git commit: 48bd4c6d
Built: Wed Jan 23 16:17:56 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 3.0.4
API version: 1.40 (minimum version 1.12)
Go version: go1.11.4
Git commit: 8ecd530
Built: Fri Jan 25 01:45:38 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.2
GitCommit: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc:
Version: 1.0.0-rc6+dev
GitCommit: 96ec2177ae841256168fcf76954f7177af9446eb
docker-init:
Version: 0.18.0
GitCommit: fec3683
Here is the Kubernetes version information:
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
And here is our node information. We are using AKS Engine to stand up a Kubernetes cluster backed by Azure VMSS nodes.
Kernel Version: 4.15.0-1050-azure
OS Image: Ubuntu 16.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://3.0.4
Kubelet Version: v1.13.5
Kube-Proxy Version: v1.13.5
I'm somewhat at a loss as to what we can even do to troubleshoot this further, because we haven't been able to reproduce the problem on demand.
I know the "context deadline exceeded" message is a generic gRPC timeout, but I'm not sure which gRPC call is actually hanging.
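The one concrete lead is the "container name is already in use" conflict, so on an affected node we've been poking at the Docker side by hand while it happens; a sketch of that (the IDs are the ones from the logs above, and removing the stale sandbox is just what the daemon's own error message suggests):
# Look at the Docker side of the sandbox conflict on the affected node
docker ps -a --filter "name=k8s_POD_d9b3911585c4461c9728aefa39716c44"    # is the old sandbox still around?
docker inspect 2a7ecfd3725bb --format '{{.State.Status}} {{.State.FinishedAt}}'
sudo journalctl -u docker --since "30 min ago" | grep -iE "sandbox|timeout|deadline"
# Per the daemon error, removing (or renaming) the stale sandbox container frees up the name
docker rm 2a7ecfd3725bb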
I'm trying to collect Azure diagnostic data from some resources that live in one subscription and directory, and send it to an OMS/Log Analytics workspace in a different subscription/directory.
The account I'm using has access to both organizations, but when I run the command to "enable sending diagnostic logs to a Log Analytics workspace" (as described here):
Set-AzureRmDiagnosticSetting -ResourceId [your resource id] -WorkspaceId [resource id of the log analytics workspace] -Enabled $true
My command:
$resourceid = "/subscriptions/e12d538c-xxxx-xxxx-xxxx-e60xxxxx2144/resourceGroups/xxx-xxxx/providers/Microsoft.Cache/Redis/xxxxxxxxxx"
$workspaceid = "/subscriptions/6a9axxxx-8xxx-4xxx-92xx-1bxxxxxx5fc23/resourceGroups/xxxxx-oms-rg/providers/Microsoft.OperationalInsights/workspaces/xxxxxxx"
Set-AzureRmDiagnosticSetting -ResourceId $ResourceId -WorkspaceId $workspaceId -Enabled $true
I get the following error:
Set-AzureRmDiagnosticSetting : The access token has been obtained from the wrong issuer 'https://sts.windows.net/5xxxxxxx-cxxx-4xxx-axxx-2xxxxxxxxxxxxx/'. It must match the tenant 'https://sts.windows.net/2xxxxxxx-cxxx-2xxx-bxxx-3xxxxxxxxxxxxx/' associated with this subscription. Please use the authority (URL) 'https://login.windows.net/2xxxxxxx-cxxx-2xxx-bxxx-3xxxxxxxxxxxxx' to get the token. Note that if the subscription is transferred to another tenant there is no impact to the services, but information about the new tenant may take time to propagate (up to an hour). If you just transferred your subscription and see this error message, please try back later.
At line:1 char:1
+ Set-AzureRmDiagnosticSetting -ResourceId $ResourceId -WorkspaceId $w ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Set-AzureRmDiagnosticSetting], CloudException
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Insights.Diagnostics.SetAzureRmDiagnosticSettingCommand
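The error reads to me like my session's token was issued by one tenant while the subscription in the current context belongs to the other, so what I've been trying before the cmdlet is signing in explicitly against the tenant the error points at, roughly like this (IDs are placeholders; I also realise a single call may simply not be able to resolve resource IDs that live in two different tenants):
# Sketch: pin the session to a specific tenant and subscription first (placeholder IDs)
Login-AzureRmAccount -TenantId "2xxxxxxx-cxxx-2xxx-bxxx-3xxxxxxxxxxxxx"
Set-AzureRmContext -SubscriptionId "e12d538c-xxxx-xxxx-xxxx-e60xxxxx2144"
Set-AzureRmDiagnosticSetting -ResourceId $resourceid -WorkspaceId $workspaceid -Enabled $true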
I solved a problem earlier today, but I'm interested in understanding why the fix worked. We set up a new Hyper-V virtual machine and it turned out that HTTP traffic didn't work. HTTPS, ping, and everything else was fine.
After a couple of months of head scratching, I took a shot in the dark. On the Hyper-V host server, the physical NIC's "Max Ethernet Frame Size" advanced setting was set to 1500. After changing it to 1514, the problem was resolved. Setting it to 1512, on the other hand, did not fix it; 1514 was the magic number.
My best guess is that with the setting at 1500, incoming pings still worked because their payload is much smaller than that of HTTP traffic. As for HTTPS, I read about something called "Path MTU Discovery", which I assume is why HTTPS traffic got through, albeit more slowly.
Looking at this post, people agree that 1518 bytes is the maximum total frame size. Why did I not need to change the setting to 1518 instead of 1514 bytes? And if 1500 is the maximum size of the Ethernet payload rather than of the whole frame, why does the setting default to 1500?
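For reference, the byte accounting as I currently understand it (my own back-of-the-envelope sketch, not something from the driver documentation):
  1500 bytes  maximum Ethernet payload (the IP MTU)
+    6 bytes  destination MAC address
+    6 bytes  source MAC address
+    2 bytes  EtherType / length field
= 1514 bytes  largest frame the OS and driver normally deal with
+    4 bytes  frame check sequence (FCS), usually added and stripped by the NIC hardware
= 1518 bytes  largest untagged frame on the wire (an 802.1Q VLAN tag would add another 4 bytes)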
I'm trying to convert a physical Windows Server 2003 server into a virtual machine.
I'm using System Center Virtual Machine Manager 2008 R2 (Workgroup Edition) with a Microsoft Server 2008 R2 Datacenter Edition Hyper-V server.
Is it possible to do a test P2V with SCVMM without disrupting the production server? If so, how?