I'm trying to figure out which data type is better for storing time data in a MySQL or Cassandra database for a large application like Facebook or Instagram. There are many similar questions and answers, but in the end I couldn't work out which is the better choice.
So I decided to ask this question to see whether anyone knows which data type large applications use to store their time data. The options I have in mind are the following (see the sketch after the list):
- TIMESTAMP
- DATETIME
- UNIX timestamp
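To make the comparison concrete, here is a minimal TypeScript/NodeJS sketch of what each option corresponds to on the application side (the variable names are made up for illustration only and are not from any particular schema):

// UNIX timestamp: seconds (or milliseconds) since the epoch, stored in an integer column
const unixSeconds: number = Math.floor(Date.now() / 1000);

// TIMESTAMP / DATETIME: an absolute point in time; from NodeJS this is usually
// passed to the database driver as a Date object or as an ISO-8601 string
const createdAt: Date = new Date();
const createdAtIso: string = createdAt.toISOString(); // e.g. '2022-12-14T15:41:16.000Z'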
I'm writing a TypeScript-NodeJS application and want to handle the object id and the created_at TIMESTAMP in the NodeJS application itself, instead of using the UUID or TIMESTAMP generators built into MySQL or Cassandra.
1- First, I'd like to know whether it is a good idea to generate the id and created_at values in the web server application rather than letting the database generate them.
2- Second, I'd like to know whether using the database's built-in functions, such as uuid() and toTimestamp(now()) in Cassandra or UUID_TO_BIN(UUID()) in MySQL, adds overhead/latency to my application compared to using the NodeJS uuid() library.
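To make the comparison concrete, here is a minimal sketch of the application-side approach, using the NodeJS uuid library; the INSERT statement and the table/column names are made up purely for illustration:

import { v4 as uuidv4 } from 'uuid'; // npm install uuid

// Generate both values in the NodeJS application...
const id: string = uuidv4();        // random (version 4) UUID
const createdAt: Date = new Date(); // current timestamp

// ...and pass them to the database as ordinary bind parameters, e.g. for Cassandra:
//   INSERT INTO users (id, created_at) VALUES (?, ?)
// instead of having the database compute them with uuid() / toTimestamp(now())
// in Cassandra or UUID_TO_BIN(UUID()) in MySQL.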
I have a Skaffold file like this:
apiVersion: skaffold/v4beta1
kind: Config
build:
  artifacts:
    - image: app/auth
      context: auth
      sync:
        manual:
          - src: src/**/*.ts
            dest: .
      docker:
        dockerfile: Dockerfile
and a Deployment file like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: app/auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
and a Dockerfile like the one below:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install --only=prod
COPY . .
CMD ["npm", "start"]
But after running skaffold dev, I see a list of Docker images like this:
REPOSITORY TAG IMAGE ID CREATED SIZE
app/auth 429eadf4af2ad66af9f364eb3cfe8926e05276c4f579fe39c5112e3201b1e4ac 429eadf4af2a 15 minutes ago 345MB
app/auth latest 429eadf4af2a 15 minutes ago 345MB
Is it normal to have two different images for each container?
I'm trying to deploy Cassandra on a local Kind cluster running on my Ubuntu 22.04 machine. The only instructions I have found are this, which uses a StatefulSet for it. I'm just wondering: isn't a Deployment file the newer approach? Why don't they use a Deployment file instead of a StatefulSet? And if a Deployment file is the better choice, can someone help me convert this StatefulSet into a Deployment file?
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
        - name: cassandra
          image: gcr.io/google-samples/cassandra:v13
          imagePullPolicy: Always
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - nodetool drain
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "K8Demo"
            - name: CASSANDRA_DC
              value: "DC1-K8Demo"
            - name: CASSANDRA_RACK
              value: "Rack1-K8Demo"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - /ready-probe.sh
            initialDelaySeconds: 15
            timeoutSeconds: 5
          # These volume mounts are persistent. They are like inline claims,
          # but not exactly because the names need to match exactly one of
          # the stateful pod volumes.
          volumeMounts:
            - name: cassandra-data
              mountPath: /cassandra_data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other
  # ssd pd
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: fast
        resources:
          requests:
            storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
I have installed kubelet 1.26.0 on Ubuntu 22.04 using apt install kubelet, but when I run journalctl -xeu kubelet I get the following output:
░░
░░ The unit kubelet.service has entered the 'failed' state with result 'exit-code'.
Dec 14 15:41:16 a systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 86.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ Automatic restarting of the unit kubelet.service has been scheduled, as the result for
░░ the configured Restart= setting for the unit.
Dec 14 15:41:16 a systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
░░ Subject: A stop job for unit kubelet.service has finished
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A stop job for unit kubelet.service has finished.
░░
░░ The job identifier is 26301 and the job result is done.
Dec 14 15:41:16 a systemd[1]: Started kubelet: The Kubernetes Node Agent.
░░ Subject: A start job for unit kubelet.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A start job for unit kubelet.service has finished successfully.
░░
░░ The job identifier is 26301.
Dec 14 15:41:16 a kubelet[18015]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.367525 18015 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the rem>
Dec 14 15:41:16 a kubelet[18015]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.371255 18015 server.go:412] "Kubelet version" kubeletVersion="v1.26.0"
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.371272 18015 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.371499 18015 server.go:836] "Client rotation is on, will bootstrap in background"
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.372757 18015 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.373608 18015 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.399357 18015 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.399717 18015 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.399832 18015 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName>
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.399866 18015 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.399883 18015 container_manager_linux.go:308] "Creating device plugin manager"
Dec 14 15:41:16 a kubelet[18015]: I1214 15:41:16.399940 18015 state_mem.go:36] "Initialized new in-memory state store"
Dec 14 15:41:16 a kubelet[18015]: E1214 15:41:16.402173 18015 run.go:74] "command failed" err="failed to run Kubelet: validate service connection: CRI v1 runtime API is not implemented for endpoint \">
Dec 14 15:41:16 a systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ An ExecStart= process belonging to unit kubelet.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 14 15:41:16 a systemd[1]: kubelet.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ The unit kubelet.service has entered the 'failed' state with result 'exit-code'.
I don't know what the problem is or how I can fix it.
I'm trying to run sudo kubeadm init with kubeadm 1.26.0 on an Ubuntu 22.04 machine, but I get the following output:
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [a kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [a localhost] and IPs [192.168.1.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [a localhost] and IPs [192.168.1.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher