Single control plane node shows NotReady

Problem description

What happened: the master node no longer shows Ready. This may have happened after a failed update (the kubeadm and kubelet versions that were downloaded were too new):

s-rtk8s01     Ready                         Node     2y120d   v1.14.1
s-rtk8s02     Ready                         Node     2y173d   v1.14.1
s-rtk8s03     Ready                         Node     2y174d   v1.14.1
s-rtk8s04     Ready                         Node     2y174d   v1.14.1
s-rtk8s05     Ready                         Node     2y174d   v1.14.1
s-rtk8sma01   NotReady,SchedulingDisabled   master   2y174d   v1.14.1
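
Since the node went NotReady after an update that pulled in kubelet and kubeadm packages that were too new, it is worth confirming the actual skew first: the kubelet must not be newer than kube-apiserver, and kubeadm upgrades are done one minor version at a time. A minimal sketch for checking this (the `minor_of` helper is illustrative, and the version commands are guarded so they only run where the binaries exist):

```shell
#!/bin/sh
# Sketch: check for version skew between the node components.
# The kubelet must not be newer than kube-apiserver, and kubeadm
# only supports upgrading one minor version at a time.

minor_of() {
    # Extract the minor version from a string such as "v1.14.1" -> "14"
    printf '%s' "$1" | sed 's/^v\{0,1\}\([0-9][0-9]*\)\.\([0-9][0-9]*\).*/\2/'
}

# Guarded: print versions only where the binaries are installed
if command -v kubelet >/dev/null 2>&1; then kubelet --version; fi
if command -v kubeadm >/dev/null 2>&1; then kubeadm version -o short; fi
```

If the kubelet minor version is ahead of the control plane, downgrading the kubelet/kubeadm packages back to the cluster's version is usually the first step before attempting any upgrade.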

The scheduler no longer appears in the pod list (after it was force-deleted), but `docker ps` shows the static pod being started in the background. The pod list in kube-system:

NAME                                  READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-hvh6b               1/1     Running   56         288d
coredns-fb8b8dccf-x5r5h               1/1     Running   58         302d
etcd-s-rtk8sma01                      1/1     Running   45         535d
kube-apiserver-s-rtk8sma01            1/1     Running   13         535d
kube-controller-manager-s-rtk8sma01   1/1     Running   7          485d
kube-flannel-ds-2fmj4                 1/1     Running   6          485d
kube-flannel-ds-5g47f                 1/1     Running   5          485d
kube-flannel-ds-5k27n                 1/1     Running   5          485d
kube-flannel-ds-cj967                 1/1     Running   8          485d
kube-flannel-ds-drjff                 1/1     Running   9          485d
kube-flannel-ds-v4sfg                 1/1     Running   5          485d
kube-proxy-6ngn6                      1/1     Running   11         535d
kube-proxy-85g6c                      1/1     Running   10         535d
kube-proxy-gd5jb                      1/1     Running   13         535d
kube-proxy-grvsk                      1/1     Running   11         535d
kube-proxy-lpht9                      1/1     Running   13         535d
kube-proxy-pmdmj                      0/1     Pending   0          25h

The kubelet's systemd journal shows the following (I see the errors about the hostname casing, as well as errors about a missing mirror pod, perhaps the scheduler?):

kubelet_node_status.go:94] Unable to register node "s-rtk8sma01" with API server: nodes "s-rtk8sma01" is forbidden: node "S-RTK8SMA01" is not allowed to modify node "s-rtk8sma01"
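
This "forbidden" error comes from the Node authorizer: a kubelet that authenticates as `system:node:S-RTK8SMA01` may only modify the Node object of exactly that name, and Kubernetes node names must be lowercase (RFC 1123), so any casing mismatch blocks node registration. A minimal sketch for spotting the mismatch on the host (the `hostnamectl`/`systemctl` lines are deliberately left commented out; run them only on the real node):

```shell
#!/bin/sh
# Sketch: detect a machine hostname whose casing differs from the
# lowercase form Kubernetes uses for node names.

to_node_name() {
    # Kubernetes lowercases node names (RFC 1123 labels)
    printf '%s' "$1" | tr '[:upper:]' '[:lower:]'
}

current="$(uname -n)"
wanted="$(to_node_name "$current")"

if [ "$current" != "$wanted" ]; then
    echo "hostname '$current' differs from expected node name '$wanted'"
    # On the real node, normalize the hostname and restart the kubelet:
    # hostnamectl set-hostname "$wanted"
    # systemctl restart kubelet
fi
```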


setters.go:739] Error getting volume limit for plugin kubernetes.io/azure-disk
setters.go:739] Error getting volume limit for plugin kubernetes.io/cinder
setters.go:739] Error getting volume limit for plugin kubernetes.io/aws-ebs
setters.go:739] Error getting volume limit for plugin kubernetes.io/gce-pd

Generated UID "56ba6ffcb6b23178170f8063052292ee" pod "kube-scheduler" from /etc/kubernetes/manifests/kube-scheduler.yaml
Generated Name "kube-scheduler-s-rtk8sma01" for UID "56ba6ffcb6b23178170f8063052292ee" from URL /etc/kubernetes/manifests/kube-scheduler.yaml
Using namespace "kube-system" for pod "kube-scheduler-s-rtk8sma01" from /etc/kubernetes/manifests/kube-scheduler.yaml
Reading config file "/etc/kubernetes/manifests/kube-scheduler.yaml_bck"
Generated UID "56ba6ffcb6b23178170f8063052292ee" pod "kube-scheduler" from /etc/kubernetes/manifests/kube-scheduler.yaml_bck
Generated Name "kube-scheduler-s-rtk8sma01" for UID "56ba6ffcb6b23178170f8063052292ee" from URL /etc/kubernetes/manifests/kube-scheduler.yaml_bck
Using namespace "kube-system" for pod "kube-scheduler-s-rtk8sma01" from /etc/kubernetes/manifests/kube-scheduler.yaml_bck
Setting pods for source file
anager.go:445] Static pod "56ba6ffcb6b23178170f8063052292ee" (kube-scheduler-s-rtk8sma01/kube-system) does not have a corresponding mirror pod; skipping
anager.go:464] Status Manager: syncPod in syncbatch. pod UID: "24db95fbbd2e618dc6ed589132ed7158"
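
The log lines above explain the missing mirror pod: the kubelet parses every file in the static-pod directory, so the backup copy `/etc/kubernetes/manifests/kube-scheduler.yaml_bck` is read as a second manifest for the same pod, generating the same name and UID. The kubelet then skips mirror-pod creation ("does not have a corresponding mirror pod; skipping"), which is why the scheduler container runs in `docker ps` but never reappears in the API pod list. A sketch that moves backup files out of the manifests directory (both paths are kubeadm defaults and assumptions):

```shell
#!/bin/sh
# Sketch: move backup manifests out of the kubelet's static-pod
# directory so they are no longer parsed as pod definitions.
# Directory paths are the kubeadm defaults (assumptions).

move_backups() {
    src="$1"; dest="$2"
    [ -d "$src" ] || return 0       # nothing to do if the dir is absent
    mkdir -p "$dest"
    for f in "$src"/*_bck "$src"/*.bak; do
        [ -e "$f" ] || continue     # skip unmatched glob patterns
        mv "$f" "$dest"/ && echo "moved $f"
    done
}

move_backups /etc/kubernetes/manifests /etc/kubernetes/manifests-backup
# After cleaning up, the kubelet should drop the conflicting static pod
# on its own; restarting it is a safe extra step:
# systemctl restart kubelet
```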

`docker ps` shows:

aec23e01ee2a        2c4adeb21b4f           "etcd --advertise-cl…"   7 hours ago         Up 7 hours                              k8s_etcd_etcd-s-rtk8sma01_kube-system_24db95fbbd2e618dc6ed589132ed7158_59
97910491f3b2        20a2d7035165           "/usr/local/bin/kube…"   26 hours ago        Up 26 hours                             k8s_kube-proxy_kube-proxy-pmdmj_kube-system_3e807b5e-041d-11eb-a61a-001dd8b72689_0
37d87cdd8886        k8s.gcr.io/pause:3.1   "/pause"                 26 hours ago        Up 26 hours                             k8s_POD_kube-proxy-pmdmj_kube-system_3e807b5e-041d-11eb-a61a-001dd8b72689_0
83a8af0407e5        cfaa4ad74c37           "kube-apiserver --ad…"   39 hours ago        Up 39 hours                             k8s_kube-apiserver_kube-apiserver-s-rtk8sma01_kube-system_57d405cdab537a9a32ce375f1242e4b5_1
85250c421db4        k8s.gcr.io/pause:3.1   "/pause"                 39 hours ago        Up 39 hours                             k8s_POD_kube-apiserver-s-rtk8sma01_kube-system_57d405cdab537a9a32ce375f1242e4b5_1
984a3628068c        3fa2504a839b           "kube-scheduler --bi…"   40 hours ago        Up 40 hours                             k8s_kube-scheduler_kube-scheduler-s-rtk8sma01_kube-system_56ba6ffcb6b23178170f8063052292ee_7
4d5446906cc5        efb3887b411d           "kube-controller-man…"   40 hours ago        Up 40 hours                             k8s_kube-controller-manager_kube-controller-manager-s-rtk8sma01_kube-system_ffbb7c0e6913f72111f95f08ad36e944_3
544423226bed        k8s.gcr.io/pause:3.1   "/pause"                 40 hours ago        Up 40 hours                             k8s_POD_kube-scheduler-s-rtk8sma01_kube-system_56ba6ffcb6b23178170f8063052292ee_4
a75feece56b5        k8s.gcr.io/pause:3.1   "/pause"                 2 days ago          Up 2 days                               k8s_POD_etcd-s-rtk8sma01_kube-system_24db95fbbd2e618dc6ed589132ed7158_20
1b17cb3ef1c1        k8s.gcr.io/pause:3.1   "/pause"                 2 days ago          Up 2 days                               k8s_POD_kube-controller-manager-s-rtk8sma01_kube-system_ffbb7c0e6913f72111f95f08ad36e944_0
c7c7235ed0dc        ff281650a721           "/opt/bin/flanneld -…"   2 months ago        Up 2 months                             k8s_kube-flannel_kube-flannel-ds-v4sfg_kube-system_bc432e78-878f-11e9-9c4b-001dd8b72689_8
d56fe3708565        k8s.gcr.io/pause:3.1   "/pause"                 2 months ago        Up 2 months                             k8s_POD_kube-flannel-ds-v4sfg_kube-system_bc432e78-878f-11e9-9c4b-001dd8b72689_7

What you expected to happen: the master becomes Ready again and the static pods and daemonsets are created again, so that I can start upgrading the cluster.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?: At this point I am really lost. I have tried for many hours to find a solution on my own, and I am hoping for a little help from the experts to understand the problem, or to find some kind of workaround.

Environment

Does anyone know how to fix these mirror pod problems, and how to resolve the node-name casing issue?

What I have tried so far: I started the kubelet with a hostname override, but that had no effect.
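
A plain `--hostname-override` is unlikely to help here, because the Node authorizer decides based on the identity the kubelet authenticates as, i.e. the `system:node:<name>` common name in its client certificate, and that certificate was issued with the old name. A sketch for checking which identity the certificate actually carries (the path is the kubeadm default and an assumption):

```shell
#!/bin/sh
# Sketch: print the subject of the kubelet's client certificate to see
# which node identity (system:node:<name>) it authenticates as.
# The path below is the kubeadm default and may differ on your host.
CERT="${CERT:-/var/lib/kubelet/pki/kubelet-client-current.pem}"
if [ -r "$CERT" ]; then
    openssl x509 -in "$CERT" -noout -subject
else
    echo "certificate not found at $CERT" >&2
fi
```

If the subject still carries the uppercase name, the kubelet credentials (the kubeconfig in `/etc/kubernetes/kubelet.conf` and the certificates under `/var/lib/kubelet/pki`) would need to be re-issued for the lowercase node name; the exact procedure depends on your kubeadm version.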

Tags: kubernetes, kubelet
