Kubernetes multi-master setup

Problem description

[SOLVED] Update: flannel did not work for me, so I switched to Weave Net, which works even if you do not provide the pod-network-cidr: "10.244.0.0/16" setting in config.yaml.

I want a multi-master Kubernetes setup and have tried many different approaches; even the last one does not work. The problem is that the DNS and flannel network plugin pods refuse to start: they end up in CrashLoopBackOff status every time. My procedure is as follows.

First, I created an external etcd cluster with this command on each node (changing only the addresses):

nohup etcd --name kube1 --initial-advertise-peer-urls http://192.168.100.110:2380 \
  --listen-peer-urls http://192.168.100.110:2380 \
  --listen-client-urls http://192.168.100.110:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.100.110:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380 \
  --initial-cluster-state new &
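Before pointing kubeadm at this etcd cluster, it is worth confirming that all three members actually joined. A minimal check, assuming the same three client addresses as in the command above (etcd 3.3's bundled etcdctl defaults to the v2 API, which is what cluster-health uses):

```shell
# Query the cluster through any member; the endpoints are the
# advertise-client-urls from the nohup commands above.
ETCDCTL_API=2 etcdctl \
  --endpoints "http://192.168.100.110:2379,http://192.168.100.108:2379,http://192.168.100.104:2379" \
  cluster-health

# A healthy cluster prints one "member ... is healthy" line per node
# followed by "cluster is healthy"; anything else will surface later
# as kubeadm or apiserver failures.
```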

Then I created a config.yaml file for the kubeadm init command:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.100.110
etcd:
  endpoints:
  - "http://192.168.100.110:2379"
  - "http://192.168.100.108:2379"
  - "http://192.168.100.104:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "192.168.100.110"
- "192.168.100.108"
- "192.168.100.104"
- "127.0.0.1"
token: "64bhyh.1vjuhruuayzgtykv"
tokenTTL: "0"
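Note that this config never tells kubeadm which pod CIDR to use, while flannel's stock manifest assumes 10.244.0.0/16 and expects the controller manager to assign each node a podCIDR. That mismatch is a common cause of the CrashLoopBackOff described above. A sketch of the missing section, assuming the same v1alpha1 MasterConfiguration schema as the file above:

```yaml
# Addition to the same config.yaml: without podSubnet the controller
# manager never allocates per-node pod CIDRs, and flannel crash-loops.
networking:
  podSubnet: "10.244.0.0/16"
```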

The init command: kubeadm init --config /root/config.yaml

Then I copy /etc/kubernetes/pki and the config to the other nodes and start the other masters the same way. But it does not work.
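The copy step can be sketched as follows (the target addresses are the other two masters from the etcd cluster; paths are assumptions based on the description above):

```shell
# Run on the first master after kubeadm init has generated the CA
# and certificates. Copying the whole pki directory lets the other
# masters reuse the same cluster CA instead of generating their own.
for host in 192.168.100.108 192.168.100.104; do
  scp -r /etc/kubernetes/pki "root@${host}:/etc/kubernetes/"
  scp /root/config.yaml "root@${host}:/root/config.yaml"
done
# Then, on each of the other masters, edit advertiseAddress in
# /root/config.yaml and run: kubeadm init --config /root/config.yaml
```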

So what is the correct way to initialize a multi-master Kubernetes cluster, and why does my flannel network fail to start?

Status of the flannel pod:

Events:
  Type     Reason                 Age               From            Message
  ----     ------                 ----              ----            -------
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "run"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "cni"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "flannel-token-swdhl"
  Normal   SuccessfulMountVolume  8m                kubelet, kube2  MountVolume.SetUp succeeded for volume "flannel-cfg"
  Normal   Pulling                8m                kubelet, kube2  pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
  Normal   Pulled                 8m                kubelet, kube2  Successfully pulled image "quay.io/coreos/flannel:v0.10.0-amd64"
  Normal   Created                8m                kubelet, kube2  Created container
  Normal   Started                8m                kubelet, kube2  Started container
  Normal   Pulled                 8m (x4 over 8m)   kubelet, kube2  Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
  Normal   Created                8m (x4 over 8m)   kubelet, kube2  Created container
  Normal   Started                8m (x4 over 8m)   kubelet, kube2  Started container
  Warning  BackOff                3m (x23 over 8m)  kubelet, kube2  Back-off restarting failed container
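The events above only say that the container keeps restarting; the actual error is in the container log. One way to pull it (the pod name here is a placeholder, take the real one from kubectl get pods; the -p flag shows the previous, crashed, instance):

```shell
# Find the crashing flannel pod on kube2, then dump its last log.
kubectl -n kube-system get pods -o wide | grep flannel
kubectl -n kube-system logs -p kube-flannel-ds-xxxxx -c kube-flannel
# With no pod CIDR assigned, flannel typically logs something like
# "failed to acquire lease: node ... pod cidr not assigned".
```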

Versions:

etcd --version
etcd Version: 3.3.6
Git SHA: 932c3c01f
Go Version: go1.9.6
Go OS/Arch: linux/amd64

kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

The last lines of etcd's nohup output:

2018-06-06 19:44:28.441304 I | etcdserver: name = kube1
2018-06-06 19:44:28.441327 I | etcdserver: data dir = kube1.etcd
2018-06-06 19:44:28.441331 I | etcdserver: member dir = kube1.etcd/member
2018-06-06 19:44:28.441334 I | etcdserver: heartbeat = 100ms
2018-06-06 19:44:28.441336 I | etcdserver: election = 1000ms
2018-06-06 19:44:28.441338 I | etcdserver: snapshot count = 100000
2018-06-06 19:44:28.441343 I | etcdserver: advertise client URLs = http://192.168.100.110:2379
2018-06-06 19:44:28.441346 I | etcdserver: initial advertise peer URLs = http://192.168.100.110:2380
2018-06-06 19:44:28.441352 I | etcdserver: initial cluster = kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380
2018-06-06 19:44:28.443825 I | etcdserver: starting member a4df4f699dd66909 in cluster 73f203cf831df407
2018-06-06 19:44:28.443843 I | raft: a4df4f699dd66909 became follower at term 0
2018-06-06 19:44:28.443848 I | raft: newRaft a4df4f699dd66909 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-06-06 19:44:28.443850 I | raft: a4df4f699dd66909 became follower at term 1
2018-06-06 19:44:28.447834 W | auth: simple token is not cryptographically signed
2018-06-06 19:44:28.448857 I | rafthttp: starting peer 9e0f381e79b9b9dc...
2018-06-06 19:44:28.448869 I | rafthttp: started HTTP pipelining with peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450791 I | rafthttp: started peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450803 I | rafthttp: added peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450809 I | rafthttp: starting peer fc9c29e972d01e69...
2018-06-06 19:44:28.450816 I | rafthttp: started HTTP pipelining with peer fc9c29e972d01e69
2018-06-06 19:44:28.453543 I | rafthttp: started peer fc9c29e972d01e69
2018-06-06 19:44:28.453559 I | rafthttp: added peer fc9c29e972d01e69
2018-06-06 19:44:28.453570 I | etcdserver: starting server... [version: 3.3.6, cluster version: to_be_decided]
2018-06-06 19:44:28.455414 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455431 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455445 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream MsgApp v2 reader)
2018-06-06 19:44:28.455578 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream Message reader)
2018-06-06 19:44:28.455697 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
2018-06-06 19:44:28.455704 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)

Tags: docker, kubernetes, kubectl, orchestration, kubeadm

Solution


If you have no hosting preference and can create the cluster on AWS, this is easy to do with kops:

https://github.com/kubernetes/kops

With kops you can easily configure an Auto Scaling group for the masters and specify how many masters and nodes you want in the cluster.
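A minimal sketch of such a cluster with kops, assuming an existing S3 state bucket and a configured AWS account (the cluster name, bucket, and zones are placeholders):

```shell
# Three masters spread across availability zones, each backed by its
# own Auto Scaling group, plus two workers.
kops create cluster \
  --name k8s.example.com \
  --state s3://example-kops-state \
  --zones eu-west-1a,eu-west-1b,eu-west-1c \
  --master-count 3 \
  --node-count 2 \
  --yes
```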

