Istio Bookinfo deployment on Kubernetes

Problem description

I have one master node and two worker nodes (worker-1 and worker-2). All nodes are up and running without any issues. As a first step towards installing the Istio service mesh, I tried to deploy the sample Bookinfo application.
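
For context, the Bookinfo sample is normally deployed from the extracted Istio release directory with something like the following (paths assume the standard istio-1.9.1 layout that appears later in this post; adjust to your environment):

kubectl label namespace default istio-injection=enabled
kubectl apply -f ~/istio-1.9.1/samples/bookinfo/platform/kube/bookinfo.yaml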

After deploying Bookinfo, I verified the pod status with the command below:

root@master:~# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
details-v1-79c697d759-9k98l       2/2     Running   0          11h   10.200.226.104   worker-1   <none>           <none>
productpage-v1-65576bb7bf-zsf6f   2/2     Running   0          11h   10.200.226.107   worker-1   <none>           <none>
ratings-v1-7d99676f7f-zxrtq       2/2     Running   0          11h   10.200.226.105   worker-1   <none>           <none>
reviews-v1-987d495c-hsnmc         1/2     Running   0          21m   10.200.133.194   worker-2   <none>           <none>
reviews-v2-6c5bf657cf-jmbkr       1/2     Running   0          11h   10.200.133.252   worker-2   <none>           <none>
reviews-v3-5f7b9f4f77-g2s6p       2/2     Running   0          11h   10.200.226.106   worker-1   <none>           <none>

I noticed that two pods here (both on the worker-2 node) are not fully ready and show 1/2. I spent almost two days on this but could not find anything that fixes it. Here is the failing event from kubectl describe pod:

Warning  Unhealthy  63s (x14 over 89s)  kubelet            Readiness probe failed: Get "http://10.200.133.194:15021/healthz/ready":
dial tcp 10.200.133.194:15021: connect: connection refused
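
Port 15021 is the health/status port of the Envoy sidecar, so this event means the istio-proxy container on worker-2 never became ready. A quick way to double-check from inside an affected pod (assuming curl is available in the proxy image) is:

kubectl exec reviews-v1-987d495c-hsnmc -c istio-proxy -- curl -sS http://localhost:15021/healthz/ready
kubectl logs reviews-v1-987d495c-hsnmc -c istio-proxy --tail=20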

Then this morning I realized that the worker-2 node itself must be the problem whenever pods get stuck at 1/2, so I cordoned the node and recreated the pods as shown below:

kubectl cordon worker-2
kubectl delete pod <worker-2 pod>
kubectl get pod -o wide

After cordoning the worker-2 node, I can see all pods reaching 2/2 on the worker-1 node without any issue.

root@master:~# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
details-v1-79c697d759-9k98l       2/2     Running   0          11h   10.200.226.104   worker-1   <none>           <none>
productpage-v1-65576bb7bf-zsf6f   2/2     Running   0          11h   10.200.226.107   worker-1   <none>           <none>
ratings-v1-7d99676f7f-zxrtq       2/2     Running   0          11h   10.200.226.105   worker-1   <none>           <none>
reviews-v1-987d495c-2n4d9         2/2     Running   0          17s   10.200.226.113   worker-1   <none>           <none>
reviews-v2-6c5bf657cf-wzqpt       2/2     Running   0          17s   10.200.226.112   worker-1   <none>           <none>
reviews-v3-5f7b9f4f77-g2s6p       2/2     Running   0          11h   10.200.226.106   worker-1   <none>           <none>
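
Note that cordoning only marks the node unschedulable; once whatever is wrong with worker-2 is fixed, it has to be released again before anything can be scheduled there:

kubectl uncordon worker-2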

Could someone please help me resolve this issue so that the (pending) pods can be scheduled on the worker-2 node?

Note: whenever I redeploy to both nodes (worker-1 and worker-2), the pods go back to the 1/2 state. Here is the istio-proxy log from one of the affected pods:

root@master:~/istio-1.9.1/samples# kubectl logs -f ratings-v1-b6994bb9-wfckn -c istio-proxy
ates: 0 successful, 0 rejected
2021-04-21T07:12:19.941679Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2021-04-21T07:12:21.942096Z     warn    Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
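
The "config not received from Pilot" warning means the sidecar never established its xDS connection to istiod (by default on port 15012), which fits the pattern of only worker-2 pods failing, for example if inter-node traffic to istiod is blocked by a firewall or a CNI problem. Two checks worth running (istioctl assumed to be installed on the master):

kubectl get pods -n istio-system -o wide     # is istiod Running, and on which node?
istioctl proxy-status                        # sidecars absent from this list have no xDS connection to istiod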

Tags: kubernetes
