Kubernetes the hard way on AWS - deploying and configuring cloud-controller-manager

Problem description

I have tested the guide Kubernetes the hard way and its AWS adaptation, Kubernetes The Hard Way - AWS.

Everything works fine, including the DNS add-on and even the dashboard, as described there.

However, if I create a Service of type LoadBalancer, it doesn't work, because no cloud-controller-manager is deployed (either as a master component or as a DaemonSet).
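For context, this is the kind of manifest that needs a cloud controller behind it. A minimal sketch (the nginx name, selector, and port are hypothetical, not from the cluster above):

```shell
# Hypothetical LoadBalancer Service manifest. Without a running
# cloud-controller-manager, the Service is created but its EXTERNAL-IP
# stays <pending>, because nothing calls the AWS ELB API on its behalf.
cat > /tmp/nginx-lb.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: nginx            # hypothetical selector
  ports:
  - port: 80
    targetPort: 80
EOF
# kubectl apply -f /tmp/nginx-lb.yaml
# kubectl get svc nginx-lb    # EXTERNAL-IP remains <pending> without a cloud provider
```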

I read https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/ for information on how to deploy it, but even after applying the required change (on the kubelet: --cloud-provider=external) and deploying this DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: cloud-controller-manager
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager
      containers:
      - name: cloud-controller-manager
        image: k8s.gcr.io/cloud-controller-manager:v1.8.0
        command:
        - /usr/local/bin/cloud-controller-manager
        - --cloud-provider=aws
        - --leader-elect=true
        - --use-service-account-credentials
        - --allocate-node-cidrs=true
        - --configure-cloud-routes=true
        - --cluster-cidr=${CLUSTERCIDR}
      tolerations:
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      nodeSelector:
        node-role.kubernetes.io/master: ""
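The kubelet change mentioned above can be sketched as follows, assuming a systemd-managed kubelet as in "Kubernetes the hard way". The drop-in path and the environment-variable layout are assumptions; adapt them to however your kubelet unit is actually defined:

```shell
# Sketch only: a systemd drop-in that adds --cloud-provider=external.
# On a real node the directory would be /etc/systemd/system/kubelet.service.d;
# /tmp is used here purely for illustration.
mkdir -p /tmp/kubelet-dropin
cat > /tmp/kubelet-dropin/20-cloud-provider.conf <<'EOF'
[Service]
# Tell the kubelet that an external cloud controller will initialize the node.
# The node then registers with the node.cloudprovider.kubernetes.io/uninitialized
# taint, which the cloud-controller-manager removes once it has set the node up.
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
EOF
# then: systemctl daemon-reload && systemctl restart kubelet
```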

The instances (controllers and workers) all have the correct IAM roles.

I can't even create a pod; its status stays "Pending"...

Do you know how to deploy cloud-controller-manager as a DaemonSet or as a master component on an AWS cluster (without using kops, kubeadm, ...)?

Do you know of a guide that could help me?

Could you give an example of a cloud-controller-manager DaemonSet configuration?

Thanks in advance

Update

When running kubectl get nodes, I get No resources found.

When describing a launched pod, the only event I get is:

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  28s (x2 over 28s)  default-scheduler  no nodes available to schedule pods

So the question now becomes: how do I get the nodes ready, with cloud-controller-manager deployed for AWS?
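Since kubectl get nodes returns nothing, the kubelets are not registering at all, so the first checks belong on the nodes themselves rather than in the cluster. A small diagnostic script along these lines may help (the commands are a sketch; note that the AWS cloud provider looks instances up by their EC2 private DNS name, so the kubelet's node name should match it):

```shell
# Hedged sketch: write a diagnostic script to run on each node that fails
# to register. /tmp is used for illustration; run it wherever convenient.
cat > /tmp/check-node.sh <<'EOF'
#!/bin/sh
# is the kubelet even running?
systemctl is-active kubelet
# last kubelet log lines - look for certificate, API-server, or cloud-provider errors
journalctl -u kubelet --no-pager | tail -n 50
# on AWS, the node name should match the EC2 private DNS name
hostname -f
curl -s http://169.254.169.254/latest/meta-data/local-hostname; echo
EOF
chmod +x /tmp/check-node.sh
# run it with: sh /tmp/check-node.sh
```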

Tags: amazon-web-services, kubernetes, cloud

Solution


As samhain1138 mentioned, your cluster does not look healthy enough to install anything. In simple cases it can be fixed, but sometimes it is better to reinstall everything.

Let's try to investigate the problem.
First, check the state of your master node. Usually this means the kubelet service should be running.
Check the kubelet logs for errors:

$ journalctl -u kubelet

Next, check the status of the static pods. You can find their list in the /etc/kubernetes/manifests directory:

$ ls /etc/kubernetes/manifests

etcd.yaml  
kube-apiserver.yaml  
kube-controller-manager.yaml  
kube-scheduler.yaml

$ docker ps

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
5cbdc1c13c25        8a7739f672b4           "/sidecar --v=2 --..."   2 weeks ago         Up 2 weeks                              k8s_sidecar_kube-dns-86c47599bd-l7d6m_kube-system_...
bd96ffafdfa6        6816817d9dce           "/dnsmasq-nanny -v..."   2 weeks ago         Up 2 weeks                              k8s_dnsmasq_kube-dns-86c47599bd-l7d6m_kube-system_...
69931b5b4cf9        55ffe31ac578           "/kube-dns --domai..."   2 weeks ago         Up 2 weeks                              k8s_kubedns_kube-dns-86c47599bd-l7d6m_kube-system_...
60885aeffc05        k8s.gcr.io/pause:3.1   "/pause"                 2 weeks ago         Up 2 weeks                              k8s_POD_kube-dns-86c47599bd-l7d6m_kube-system_...
93144593660c        9f355e076ea7           "/install-cni.sh"        2 weeks ago         Up 2 weeks                              k8s_install-cni_calico-node-nxljq_kube-system_...
b55f57529671        7eca10056c8e           "start_runit"            2 weeks ago         Up 2 weeks                              k8s_calico-node_calico-node-nxljq_kube-system_...
d8767b9c07c8        46a3cd725628           "/usr/local/bin/ku..."   2 weeks ago         Up 2 weeks                              k8s_kube-proxy_kube-proxy-lf8gd_kube-system_...
f924cefb953f        k8s.gcr.io/pause:3.1   "/pause"                 2 weeks ago         Up 2 weeks                              k8s_POD_calico-node-nxljq_kube-system_...
09ceddabdeb9        k8s.gcr.io/pause:3.1   "/pause"                 2 weeks ago         Up 2 weeks                              k8s_POD_kube-proxy-lf8gd_kube-system_...
9fc90839bb6f        821507941e9c           "kube-apiserver --..."   2 weeks ago         Up 2 weeks                              k8s_kube-apiserver_kube-apiserver-kube-master_kube-system_...
8ea410ce00a6        b8df3b177be2           "etcd --advertise-..."   2 weeks ago         Up 2 weeks                              k8s_etcd_etcd-kube-master_kube-system_...
dd7f9b381e4f        38521457c799           "kube-controller-m..."   2 weeks ago         Up 2 weeks                              k8s_kube-controller-manager_kube-controller-manager-kube-master_kube-system_...
f6681365bea8        37a1403e6c1a           "kube-scheduler --..."   2 weeks ago         Up 2 weeks                              k8s_kube-scheduler_kube-scheduler-kube-master_kube-system_...
0638e47ec57e        k8s.gcr.io/pause:3.1   "/pause"                 2 weeks ago         Up 2 weeks                              k8s_POD_etcd-kube-master_kube-system_...
5bbe35abb3a3        k8s.gcr.io/pause:3.1   "/pause"                 2 weeks ago         Up 2 weeks                              k8s_POD_kube-controller-manager-kube-master_kube-system_...
2dc6ee716bb4        k8s.gcr.io/pause:3.1   "/pause"                 2 weeks ago         Up 2 weeks                              k8s_POD_kube-scheduler-kube-master_kube-system_...
b15dfc9f089a        k8s.gcr.io/pause:3.1   "/pause"                 2 weeks ago         Up 2 weeks                              k8s_POD_kube-apiserver-kube-master_kube-system_...

You can see a detailed description of any pod's containers with:

$ docker inspect <container_id>

Or check the logs:

$ docker logs <container_id>

This should be enough to decide what to do next: either try to fix the cluster, or tear everything down and start from scratch.

To simplify the process of configuring a Kubernetes cluster, you can use kubeadm as follows:

# This instruction is for ubuntu VMs, if you use CentOS, the commands will be
# slightly different.

### These steps are the same for the master and the worker nodes
# become root
$ sudo su

# add repository and keys
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

# install components
$ apt-get update
$ apt-get -y install ebtables ethtool docker.io apt-transport-https kubelet kubeadm kubectl

# adjust sysctl settings
$ cat <<EOF >>/etc/ufw/sysctl.conf
net/ipv4/ip_forward = 1
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
EOF

$ sysctl --system
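You can confirm the forwarding setting actually took effect by reading the live kernel value:

```shell
# reads the running kernel's setting; should print 1 once the
# sysctl settings above have been applied
cat /proc/sys/net/ipv4/ip_forward
```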

### Next steps are for the master node only.

# Create Kubernetes cluster
$ kubeadm init --pod-network-cidr=192.168.0.0/16
# or, if you want to use the older KubeDNS instead of CoreDNS:
$ kubeadm init --pod-network-cidr=192.168.0.0/16 --feature-gates=CoreDNS=false

# Configure kubectl
$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config

# install Calico network
$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
# or install Flannel (not both)
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Untaint the master and/or join other nodes:
$ kubectl taint nodes --all node-role.kubernetes.io/master-

# run on master if you forgot the join command:
$ kubeadm token create --print-join-command
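The join command printed in the previous step has the following general shape (the address, token, and hash below are placeholders, not real values; always use the exact command kubeadm printed for your cluster):

```shell
# placeholder values only - copy the real command from kubeadm's output
join_cmd='kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>'
echo "$join_cmd" > /tmp/join-cmd.txt
cat /tmp/join-cmd.txt
```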

# run command printed on the previous step on the worker node to join it to the existing cluster.

# At this point you should have a ready-to-use Kubernetes cluster.
$ kubectl get nodes -o wide
$ kubectl get pods,svc,deployments,daemonsets --all-namespaces

After restoring the cluster, could you try installing cloud-controller-manager again and share the result?
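Once the cluster itself is healthy, verifying the cloud-controller-manager can be scripted roughly like this (the label and namespace match the DaemonSet from the question; everything else is a sketch):

```shell
# Hedged sketch of a post-reinstall check, written to a file for illustration.
cat > /tmp/verify-ccm.sh <<'EOF'
#!/bin/sh
# the DaemonSet pods should be Running on the master
kubectl -n kube-system get pods -l k8s-app=cloud-controller-manager -o wide
# nodes should have lost the node.cloudprovider.kubernetes.io/uninitialized taint
kubectl describe nodes | grep -i taint
# a LoadBalancer Service should eventually get an AWS ELB hostname as EXTERNAL-IP
kubectl get svc --all-namespaces -o wide
EOF
chmod +x /tmp/verify-ccm.sh
# run it with: sh /tmp/verify-ccm.sh
```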

