Questions [microk8s] (ubuntu)

EM90
Asked: 2020-12-20 02:30:45 +0800 CST

Cannot install MicroK8s on Ubuntu 20.04 on a Raspberry Pi 4

  • 0

I have installed Ubuntu Server 20.04 on a brand-new RPi 4, following the instructions included here.

I boot my system, it starts up correctly, and then I try to install MicroK8s from here, as recommended by Canonical.

The suggested command

sudo snap install microk8s --classic

gives the error

error: snap "microk8s" is not available on stable for
       this architecture (armhf) but exists on other
       architectures (amd64, arm64, ppc64el).

Why is this? I fully understand what such a message means (the architecture is not supported), but why is that the case?
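
For context, a minimal way to confirm which image is actually installed (not part of the original question, and assuming a stock Ubuntu Server image on the Pi):

uname -m                   # armv7l = 32-bit kernel, aarch64 = 64-bit kernel
dpkg --print-architecture  # armhf = 32-bit userspace; the microk8s snap is published for arm64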

server raspberrypi microk8s
  • 2 Answers
  • 2063 Views
Conor
Asked: 2020-06-09 13:47:37 +0800 CST

How do I access a service IP exposed externally by microk8s over the network?

  • 0

I'm running a microk8s instance as a vanilla install on an Ubuntu server, configured with MetalLB to dynamically allocate 10.0.2.1 to 10.0.2.200, and with the Nginx ingress controller enabled. I have installed the WordPress Helm chart (https://github.com/bitnami/charts/tree/master/bitnami/wordpress/#installing-the-chart) on this instance with the following command:

helm install wordpress \
  --set wordpressUsername=admin \
  --set wordpressPassword=password \
  --set mariadb.mariadbRootPassword=secretpassword \
  --set ingress.enabled=true \
  --set ingress.hostname=wordpress.internal \
    bitnami/wordpress

The service starts and runs successfully, and when I run

kubectl describe services wordpress

I get the following:

Name:                     wordpress
Namespace:                default
Labels:                   app.kubernetes.io/instance=wordpress
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=wordpress
                          helm.sh/chart=wordpress-9.3.10
Annotations:              meta.helm.sh/release-name: wordpress
                          meta.helm.sh/release-namespace: default
Selector:                 app.kubernetes.io/instance=wordpress,app.kubernetes.io/name=wordpress
Type:                     LoadBalancer
IP:                       10.152.183.73
LoadBalancer Ingress:     10.0.2.1
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31799/TCP
Endpoints:                10.1.70.14:8080
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30087/TCP
Endpoints:                10.1.70.14:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age                  From                Message
  ----    ------        ----                 ----                -------
  Normal  IPAllocated   32m                  metallb-controller  Assigned IP "10.0.2.1"
  Normal  nodeAssigned  6m41s (x3 over 31m)  metallb-speaker     announcing from node "k8s"

When I SSH into the node where microk8s is installed, the instance responds as I would expect:

curl 10.0.2.1

<!DOCTYPE html>

<html class="no-js" lang="en-US">

    <head>

        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0" >

        <link rel="profile" href="https://gmpg.org/xfn/11">

        <title>User&#039;s Blog! &#8211; Just another WordPress site</title>

However, when I run the same command on a Macbook on the same network, I get no response:

curl 10.0.2.1

curl: (7) Failed to connect to 10.0.2.1 port 80: Operation timed out
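
One hedged reading of the symptom: the MetalLB pool (10.0.2.0/24 here) is only announced on the microk8s host's own network, so a client on a different subnet has no route to it. A sketch of a workaround on macOS, assuming a hypothetical address of 192.168.1.50 for the Ubuntu host:

# Route the MetalLB pool via the microk8s host (hypothetical gateway address)
sudo route -n add -net 10.0.2.0/24 192.168.1.50
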
networking kubernetes microk8s
  • 1 Answer
  • 5705 Views
Xevious202
Asked: 2020-04-23 07:57:56 +0800 CST

Raspberry Pi 4 Microk8s cluster not starting containers?

  • 0

Thanks for your time,

I have a master and worker Microk8s cluster running in my homelab. My build documentation is here: GitHub

I get no errors when running sudo microk8s.inspect.

I can't even start a simple container without it going into CrashLoopBackOff:

sudo microk8s.kubectl run http --image=katacoda/docker-http-server:latest

The pod description is below:

ubuntu@MasterControl:~$ sudo microk8s.kubectl describe pod/http
Name:         http
Namespace:    default
Priority:     0
Node:         mastercontrol/192.168.123.10
Start Time:   Wed, 22 Apr 2020 15:33:41 +0000
Labels:       run=http
Annotations:  <none>
Status:       Running
IP:           10.1.39.13
IPs:
  IP:  10.1.39.13
Containers:
  http:
    Container ID:   containerd://2cb60ab4a7c25775b0b2acd5320145bf8b0f491b12fdeb32e879a68f18eb492f
    Image:          katacoda/docker-http-server:latest
    Image ID:       docker.io/katacoda/docker-http-server@sha256:76dc8a47fd019f80f2a3163aba789faf55b41b2fb06397653610c754cb12d3ee
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 22 Apr 2020 15:33:46 +0000
      Finished:     Wed, 22 Apr 2020 15:33:46 +0000
    Ready:          False
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j9x8g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-j9x8g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j9x8g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                    Message
  ----     ------     ----               ----                    -------
  Normal   Scheduled  <unknown>          default-scheduler       Successfully assigned default/http to mastercontrol
  Normal   Created    15s (x2 over 18s)  kubelet, mastercontrol  Created container http
  Normal   Started    15s (x2 over 17s)  kubelet, mastercontrol  Started container http
  Warning  BackOff    13s (x2 over 14s)  kubelet, mastercontrol  Back-off restarting failed container
  Normal   Pulling    1s (x3 over 19s)   kubelet, mastercontrol  Pulling image "katacoda/docker-http-server:latest"
  Normal   Pulled     1s (x3 over 18s)   kubelet, mastercontrol  Successfully pulled image "katacoda/docker-http-server:latest"

Any help is greatly appreciated, but I would love to know where I went wrong!
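
A hedged first diagnostic step (not from the original post): check the previous container's logs and the image's manifest, since an "exec format error" from an amd64-only image is a common cause of immediate crashes on a Raspberry Pi.

sudo microk8s.kubectl logs http --previous                   # output of the last failed run
docker manifest inspect katacoda/docker-http-server:latest   # shows the image manifest and, for multi-arch images, the supported platforms (requires Docker on some machine)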

docker raspberrypi microk8s
  • 1 Answer
  • 368 Views
MrMowgli
Asked: 2020-02-23 00:57:09 +0800 CST

How do I prevent a snap from starting at boot?

  • 2

Is there a way to keep a snap from starting automatically?

I installed the microk8s snap, which is very cool, but it starts automatically every time I reboot my computer. I can stop it after logging in, but it takes all my CPU and hogs disk time. Sometimes it can take several minutes before the GUI comes up.

I can stop the service with microk8s.stop once a terminal is running.

I'd like to be able to start the snap when I need it, but it seems to be integrated as a core service.

Any help is absolutely appreciated!
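
A sketch of one approach, assuming a reasonably recent snapd (I have not verified these flags on every release):

sudo snap stop --disable microk8s   # stop the snap's services and keep them from starting at boot
sudo snap start microk8s            # start them manually later, when you actually need the cluster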

snap autorun microk8s
  • 3 Answers
  • 3293 Views
Saurabh Gupta
Asked: 2019-12-19 00:09:59 +0800 CST

'/snap/bin' is not included in the PATH on Ubuntu 18.04

  • 0

While installing MicroK8s on a single node, I get the following error when checking the microk8s status:

The command could not be located because '/snap/bin' is not included in the PATH environment variable.
microk8s.status: command not found

I have already done export PATH=$PATH:/usr/bin

sudo nano /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"

I'm still getting the same error. How do I fix this?
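
A minimal sketch of the usual fix (the /etc/environment line shown above omits /snap/bin entirely):

export PATH=$PATH:/snap/bin                        # current shell only
echo 'export PATH=$PATH:/snap/bin' >> ~/.profile   # persists for future logins; alternatively append /snap/bin to the PATH line in /etc/environment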

docker microk8s
  • 1 Answer
  • 2125 Views
Zoyd
Asked: 2019-07-25 00:47:03 +0800 CST

microk8s snap is stuck and broken

  • 1

I'm trying to get Microk8s working using snap. Whenever I run any snap command related to microk8s, I get the following error message:

error : cannot perform the following tasks:
- Arrêter les services du paquet Snap "microk8s" ([stop snap.microk8s.daemon-kubelet.service] failed with exit status 5: Failed to stop snap.microk8s.daemon-kubelet.service: Unit snap.microk8s.daemon-kubelet.service not loaded.
)

(Arrêter les services du paquet means "stop the package's services".)

I get exactly the same message when I try to start Microk8s, remove the snap package, or do almost anything else. So not only can I not use it, I can't even uninstall it. Can anyone help?

Snap versions:

snap       2.39.3
snapd      2.39.3
series     16
linuxmint  19.1
kernel     4.18.0-25-generic
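
A hedged sketch of where one might look next (not from the question): snapd queues these operations as changes, and a stuck change can block everything else.

snap changes                # list recent/pending changes; note the id of the one stuck in Doing/Error
sudo snap abort NNN         # NNN is a placeholder for that change id
sudo snap remove microk8s   # retry the removal afterwards
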
snap microk8s
  • 1 Answer
  • 562 Views
Mirto Busico
Asked: 2019-06-05 03:47:12 +0800 CST

How do I run microk8s on LXD?

  • 0

When I try to run snap install microk8s on an LXD machine, it fails to start and produces the following errors:

sysop@hoseplavm:~$ lxc list
+------------+---------+----------------------+------+------------+-----------+
|    NAME    |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+------------+---------+----------------------+------+------------+-----------+
| kubernetes | RUNNING | 10.144.28.123 (eth0) |      | PERSISTENT |           |
+------------+---------+----------------------+------+------------+-----------+
sysop@hoseplavm:~$ lxc exec kubernetes bash
root@kubernetes:~# microk8s.inspect
Inspecting services
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
FAIL:  Service snap.microk8s.daemon-proxy is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-proxy
FAIL:  Service snap.microk8s.daemon-kubelet is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-kubelet
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system info
Copy network configuration to the final report tarball
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Inspect kubernetes cluster

Building the report tarball
Report tarball is at /var/snap/microk8s/522/inspection-report-20190604_133500.tar.gz
root@kubernetes:~#

Is it possible to install microk8s inside an LXD container?
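
For reference, a sketch of the kind of LXD profile microk8s typically needs (privileged, with nesting); the exact settings vary by microk8s version, so treat this as an assumption rather than the official profile:

lxc profile create microk8s
lxc profile set microk8s security.privileged true
lxc profile set microk8s security.nesting true
lxc launch ubuntu:18.04 kubernetes -p default -p microk8s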

lxd microk8s
  • 1 Answer
  • 1173 Views
Kalle Richter
Asked: 2019-04-14 09:10:12 +0800 CST

"microk8s.enable istio" on microk8s fails because of "field is immutable"

  • 0

microk8s.enable istio fails on microk8s due to field is immutable. The relevant failing output is:

Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{\"helm.sh/hook\":\"post-install\",\"helm.sh/hook-delete-policy\":\"hook-succeeded\"},\"labels\":{\"app\":\"istio-grafana\",\"chart\":\"grafana-1.0.5\",\"heritage\":\"Tiller\",\"release\":\"istio\"},\"name\":\"istio-grafana-post-install\",\"namespace\":\"istio-system\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app\":\"istio-grafana\",\"release\":\"istio\"},\"name\":\"istio-grafana-post-install\"},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"/tmp/grafana/run.sh\",\"/tmp/grafana/custom-resources.yaml\"],\"image\":\"quay.io/coreos/hyperkube:v1.7.6_coreos.0\",\"name\":\"hyperkube\",\"volumeMounts\":[{\"mountPath\":\"/tmp/grafana\",\"name\":\"tmp-configmap-grafana\"}]}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"istio-grafana-post-install-account\",\"volumes\":[{\"configMap\":{\"name\":\"istio-grafana-custom-resources\"},\"name\":\"tmp-configmap-grafana\"}]}}}}\n"},"labels":{"chart":"grafana-1.0.5","release":"istio"}},"spec":{"template":{"metadata":{"labels":{"release":"istio"}}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "istio-grafana-post-install", Namespace: "istio-system"
Object: &{map["apiVersion":"batch/v1" "kind":"Job" "metadata":map["annotations":map["helm.sh/hook":"post-install" "helm.sh/hook-delete-policy":"hook-succeeded" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{\"helm.sh/hook\":\"post-install\",\"helm.sh/hook-delete-policy\":\"hook-succeeded\"},\"labels\":{\"app\":\"istio-grafana\",\"chart\":\"grafana-0.1.0\",\"heritage\":\"Tiller\",\"release\":\"RELEASE-NAME\"},\"name\":\"istio-grafana-post-install\",\"namespace\":\"istio-system\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app\":\"istio-grafana\",\"release\":\"RELEASE-NAME\"},\"name\":\"istio-grafana-post-install\"},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"/tmp/grafana/run.sh\",\"/tmp/grafana/custom-resources.yaml\"],\"image\":\"quay.io/coreos/hyperkube:v1.7.6_coreos.0\",\"name\":\"hyperkube\",\"volumeMounts\":[{\"mountPath\":\"/tmp/grafana\",\"name\":\"tmp-configmap-grafana\"}]}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"istio-grafana-post-install-account\",\"volumes\":[{\"configMap\":{\"name\":\"istio-grafana-custom-resources\"},\"name\":\"tmp-configmap-grafana\"}]}}}}\n"] "creationTimestamp":"2019-03-06T17:31:59Z" "labels":map["app":"istio-grafana" "chart":"grafana-0.1.0" "heritage":"Tiller" "release":"RELEASE-NAME"] "name":"istio-grafana-post-install" "namespace":"istio-system" "resourceVersion":"1603" "selfLink":"/apis/batch/v1/namespaces/istio-system/jobs/istio-grafana-post-install" "uid":"bf3accfa-4035-11e9-99ce-208984866d4f"] "spec":map["backoffLimit":'\x06' "completions":'\x01' "parallelism":'\x01' "selector":map["matchLabels":map["controller-uid":"bf3accfa-4035-11e9-99ce-208984866d4f"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["app":"istio-grafana" "controller-uid":"bf3accfa-4035-11e9-99ce-208984866d4f" "job-name":"istio-grafana-post-install" "release":"RELEASE-NAME"] "name":"istio-grafana-post-install"] "spec":map["containers":[map["command":["/bin/bash" "/tmp/grafana/run.sh" "/tmp/grafana/custom-resources.yaml"] "image":"quay.io/coreos/hyperkube:v1.7.6_coreos.0" "imagePullPolicy":"IfNotPresent" "name":"hyperkube" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "volumeMounts":[map["mountPath":"/tmp/grafana" "name":"tmp-configmap-grafana"]]]] "dnsPolicy":"ClusterFirst" "restartPolicy":"OnFailure" "schedulerName":"default-scheduler" "securityContext":map[] "serviceAccount":"istio-grafana-post-install-account" "serviceAccountName":"istio-grafana-post-install-account" "terminationGracePeriodSeconds":'\x1e' "volumes":[map["configMap":map["defaultMode":'\u01a4' "name":"istio-grafana-custom-resources"] "name":"tmp-configmap-grafana"]]]]] "status":map["completionTime":"2019-03-06T17:34:00Z" "conditions":[map["lastProbeTime":"2019-03-06T17:34:00Z" "lastTransitionTime":"2019-03-06T17:34:00Z" "status":"True" "type":"Complete"]] "startTime":"2019-03-06T17:31:59Z" "succeeded":'\x01']]}
for: "/snap/microk8s/483/actions/istio/istio-demo.yaml": Job.batch "istio-grafana-post-install" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"istio-grafana-post-install", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"istio-grafana", "controller-uid":"bf3accfa-4035-11e9-99ce-208984866d4f", "job-name":"istio-grafana-post-install", "release":"istio"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"tmp-configmap-grafana", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(0xc00c2d6640), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"hyperkube", Image:"quay.io/coreos/hyperkube:v1.7.6_coreos.0", Command:[]string{"/bin/bash", "/tmp/grafana/run.sh", "/tmp/grafana/custom-resources.yaml"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"tmp-configmap-grafana", ReadOnly:false, MountPath:"/tmp/grafana", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc004976530), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"istio-grafana-post-install-account", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00354ccb0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{\"helm.sh/hook\":\"post-delete\",\"helm.sh/hook-delete-policy\":\"hook-succeeded\",\"helm.sh/hook-weight\":\"3\"},\"labels\":{\"app\":\"security\",\"chart\":\"security-1.0.5\",\"heritage\":\"Tiller\",\"release\":\"istio\"},\"name\":\"istio-cleanup-secrets\",\"namespace\":\"istio-system\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app\":\"security\",\"release\":\"istio\"},\"name\":\"istio-cleanup-secrets\"},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"kubectl get secret --all-namespaces | grep \\\"istio.io/key-and-cert\\\" |  while read -r entry; do\\n  ns=$(echo $entry | awk '{print $1}');\\n  name=$(echo $entry | awk '{print $2}');\\n  kubectl delete secret $name -n $ns;\\ndone\\n\"],\"image\":\"quay.io/coreos/hyperkube:v1.7.6_coreos.0\",\"name\":\"hyperkube\"}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"istio-cleanup-secrets-service-account\"}}}}\n"},"labels":{"chart":"security-1.0.5","release":"istio"}},"spec":{"template":{"metadata":{"labels":{"release":"istio"}}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "istio-cleanup-secrets", Namespace: "istio-system"
Object: &{map["apiVersion":"batch/v1" "kind":"Job" "metadata":map["annotations":map["helm.sh/hook":"post-delete" "helm.sh/hook-delete-policy":"hook-succeeded" "helm.sh/hook-weight":"3" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{\"helm.sh/hook\":\"post-delete\",\"helm.sh/hook-delete-policy\":\"hook-succeeded\",\"helm.sh/hook-weight\":\"3\"},\"labels\":{\"app\":\"security\",\"chart\":\"security-1.0.0\",\"heritage\":\"Tiller\",\"release\":\"RELEASE-NAME\"},\"name\":\"istio-cleanup-secrets\",\"namespace\":\"istio-system\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app\":\"security\",\"release\":\"RELEASE-NAME\"},\"name\":\"istio-cleanup-secrets\"},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-c\",\"kubectl get secret --all-namespaces | grep \\\"istio.io/key-and-cert\\\" |  while read -r entry; do\\n  ns=$(echo $entry | awk '{print $1}');\\n  name=$(echo $entry | awk '{print $2}');\\n  kubectl delete secret $name -n $ns;\\ndone\\n\"],\"image\":\"quay.io/coreos/hyperkube:v1.7.6_coreos.0\",\"name\":\"hyperkube\"}],\"restartPolicy\":\"OnFailure\",\"serviceAccountName\":\"istio-cleanup-secrets-service-account\"}}}}\n"] "creationTimestamp":"2019-03-06T17:31:59Z" "labels":map["app":"security" "chart":"security-1.0.0" "heritage":"Tiller" "release":"RELEASE-NAME"] "name":"istio-cleanup-secrets" "namespace":"istio-system" "resourceVersion":"1596" "selfLink":"/apis/batch/v1/namespaces/istio-system/jobs/istio-cleanup-secrets" "uid":"bf4f1cdb-4035-11e9-99ce-208984866d4f"] "spec":map["backoffLimit":'\x06' "completions":'\x01' "parallelism":'\x01' "selector":map["matchLabels":map["controller-uid":"bf4f1cdb-4035-11e9-99ce-208984866d4f"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["app":"security" "controller-uid":"bf4f1cdb-4035-11e9-99ce-208984866d4f" "job-name":"istio-cleanup-secrets" "release":"RELEASE-NAME"] "name":"istio-cleanup-secrets"] "spec":map["containers":[map["command":["/bin/bash" "-c" "kubectl get secret --all-namespaces | grep \"istio.io/key-and-cert\" |  while read -r entry; do\n  ns=$(echo $entry | awk '{print $1}');\n  name=$(echo $entry | awk '{print $2}');\n  kubectl delete secret $name -n $ns;\ndone\n"] "image":"quay.io/coreos/hyperkube:v1.7.6_coreos.0" "imagePullPolicy":"IfNotPresent" "name":"hyperkube" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"OnFailure" "schedulerName":"default-scheduler" "securityContext":map[] "serviceAccount":"istio-cleanup-secrets-service-account" "serviceAccountName":"istio-cleanup-secrets-service-account" "terminationGracePeriodSeconds":'\x1e']]] "status":map["completionTime":"2019-03-06T17:33:59Z" "conditions":[map["lastProbeTime":"2019-03-06T17:33:59Z" "lastTransitionTime":"2019-03-06T17:33:59Z" "status":"True" "type":"Complete"]] "startTime":"2019-03-06T17:31:59Z" "succeeded":'\x01']]}
for: "/snap/microk8s/483/actions/istio/istio-demo.yaml": Job.batch "istio-cleanup-secrets" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"istio-cleanup-secrets", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"security", "controller-uid":"bf4f1cdb-4035-11e9-99ce-208984866d4f", "job-name":"istio-cleanup-secrets", "release":"istio"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"hyperkube", Image:"quay.io/coreos/hyperkube:v1.7.6_coreos.0", Command:[]string{"/bin/bash", "-c", "kubectl get secret --all-namespaces | grep \"istio.io/key-and-cert\" |  while read -r entry; do\n  ns=$(echo $entry | awk '{print $1}');\n  name=$(echo $entry | awk '{print $2}');\n  kubectl delete secret $name -n $ns;\ndone\n"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc003271308), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"istio-cleanup-secrets-service-account", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00357ca10), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}: field is immutable
Failed to enable istio

The full output can be found at https://pastebin.com/v4CAec14.

This doesn't seem to be related to the known issues https://github.com/ubuntu/microk8s/issues/414 and https://github.com/ubuntu/microk8s/issues/386.

I'm experiencing this issue with the snap installs of both stable 1.14 (492) and edge 1.14.1 (522) on Ubuntu 18.10.
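
A hedged guess at a workaround, not taken from the question: the apply fails while patching Jobs that have already completed (a Job's spec.template is immutable), so deleting those Jobs before re-running the enable script may let it proceed.

microk8s.kubectl -n istio-system delete job istio-grafana-post-install istio-cleanup-secrets
microk8s.enable istio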

microk8s
  • 1 Answer
  • 564 Views
Kalle Richter
Asked: 2019-04-05 09:16:15 +0800 CST

How do I install applications such as helm on microk8s?

  • 1

To install applications with helm on Kubernetes on Google Cloud (gCloud), I start a cloud shell from the dashboard. How can I do that with helm on microk8s, or install it a different way?

I'm using microk8s 1.14 on Ubuntu 18.10.
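
A minimal sketch of one way to do it, assuming the snap-packaged helm client (these commands are an assumption, not from the question):

sudo snap install helm --classic                       # helm 2.x client in the 1.14 era
microk8s.kubectl config view --raw > ~/.kube/config    # export the cluster's kubeconfig (back up any existing file first)
helm init                                              # helm 2 installs Tiller into the cluster; helm 3 needs no init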

microk8s
  • 1 Answer
  • 4753 Views
Kalle Richter
Asked: 2019-03-09 00:42:16 +0800 CST

How do I access the dashboard web UI of microk8s?

  • 10

There is information on how to enable the dashboard add-on

microk8s.enable dashboard

(which I ran) and on how to show the URLs of other enabled add-ons, like this:

kubectl cluster-info

How do I get the URL of the dashboard of a microk8s installation running locally on Ubuntu 18.10?
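
A hedged sketch of how one might locate it (the service name is an assumption based on typical microk8s deployments of the dashboard add-on):

microk8s.kubectl cluster-info                                     # prints the URLs of running cluster services
microk8s.kubectl get services --all-namespaces | grep dashboard   # find the dashboard service's ClusterIP and port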

url kubernetes microk8s
  • 3 Answers
  • 12619 Views
