Installing and Setting Up a K8S Cluster on CentOS 7


1. Environment preparation
CentOS Linux release 7.6.1810 (Core)       # check the OS release: cat /etc/redhat-release
Kernel version: 3.10.0-957.el7.x86_64      # check the kernel version: uname -a

Role                       Hostname      IP
Master, etcd, registry     k8s-master    10.10.1.103
Node1                      k8s-node1     10.10.1.104
Node2                      k8s-node2     10.10.1.105



1.1 Set the hostnames

Run the following script on each host:
++++++++++++++++++++++++++++++++++++++++++
#!/bin/bash
# Set the hostname on an Aliyun ECS host
# Usage: sh hostname.sh <hostname>

if [ $# -ne 1 ]; then
    echo "Usage: $0 hostname"
    exit 2
fi

HOSTNAME="$1"
# Hostname currently recorded in /etc/sysconfig/network (if any)
ORIGIN="$(awk -F= '/^HOSTNAME/ {print $2}' /etc/sysconfig/network)"
# First IPv4 address reported by ifconfig (depends on interface order)
LOCAL_IP="$(ifconfig -a | grep -Eo '[1-2][0-9]{1,2}(\.[0-9]{1,3}){3}' | head -1)"
sed -i "/^HOSTNAME/s/$ORIGIN/$HOSTNAME/" /etc/sysconfig/network
echo "$LOCAL_IP $ORIGIN $HOSTNAME" >> /etc/hosts

hostname "$HOSTNAME"
++++++++++++++++++++++++++++++++++++++++++

# on the master host
[root@localhost ~]# ./hostname.sh k8s-master

# on node1
[root@localhost ~]# ./hostname.sh k8s-node1

# on node2
[root@localhost ~]# ./hostname.sh k8s-node2
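
Note: the script above edits /etc/sysconfig/network, which CentOS 7 no longer reads for the hostname by default. A simpler alternative (a sketch using the native CentOS 7 tool; adjust the hostname per host) is:

[root@localhost ~]# hostnamectl set-hostname k8s-master    # persists across reboots
[root@localhost ~]# hostnamectl status                     # verify the change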



1.2 Disable the firewall and set SELinux to permissive
[root@k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld && setenforce 0
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@k8s-node1 ~]# systemctl stop firewalld && systemctl disable firewalld && setenforce 0
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@k8s-node2 ~]# systemctl stop firewalld && systemctl disable firewalld && setenforce 0
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
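
Note that setenforce 0 only switches SELinux to permissive mode until the next reboot. To make the setting persistent, a common additional step (run on all three hosts) is to edit /etc/selinux/config as well, for example:

[root@k8s-master ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
[root@k8s-master ~]# grep ^SELINUX= /etc/selinux/config    # should now show SELINUX=permissive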



1.3 Configure the hosts file on the master and on each node:
[root@k8s-master ~]# vim /etc/hosts
[root@k8s-node1 ~]# vim /etc/hosts
[root@k8s-node2 ~]# vim /etc/hosts
++++++++++++++++++++++++++++++++++++++++++
10.10.1.103 k8s-master
10.10.1.103 etcd
10.10.1.103 registry
10.10.1.104 k8s-node1
10.10.1.105 k8s-node2
++++++++++++++++++++++++++++++++++++++++++
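
A quick check (optional) confirms that the new entries resolve on each host:

[root@k8s-master ~]# getent hosts etcd registry k8s-node1 k8s-node2    # should print the IPs configured above
[root@k8s-master ~]# ping -c 1 k8s-node1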



1.4 Configure time synchronization on the master and on each node:
[root@k8s-master ~]# yum install ntpdate -y
[root@k8s-master ~]# ntpdate ntp1.aliyun.com
29 Jun 03:32:09 ntpdate[7611]: adjust time server 120.25.115.20 offset -0.000166 sec

[root@k8s-node1 ~]# yum install ntpdate -y
[root@k8s-node1 ~]# ntpdate ntp1.aliyun.com

[root@k8s-node2 ~]# yum install ntpdate -y
[root@k8s-node2 ~]# ntpdate ntp1.aliyun.com
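
A single ntpdate run does not keep the clocks synchronized afterwards. One possible approach (a sketch, assuming cron is used; chronyd or ntpd would be the longer-term options) is a periodic job on each host:

[root@k8s-master ~]# (crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1') | crontab -
[root@k8s-master ~]# crontab -l    # verify the new entry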






2. Deploy the master host

2.1 Install etcd
2.1.1 Kubernetes depends on etcd, so deploy etcd first.
[root@k8s-master ~]# yum install etcd -y

2.1.2 The etcd package installed via yum uses /etc/etcd/etcd.conf as its default configuration file; edit it:
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
++++++++++++++++++++++++++++++++++++++++++++++
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="master"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379"
++++++++++++++++++++++++++++++++++++++++++++++

2.1.3 Start etcd and verify that it is running:
[root@k8s-master ~]# systemctl start etcd
[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
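
As an additional sanity check (optional), a throw-away key can be written and read back through the same etcdctl v2 interface:

[root@k8s-master ~]# etcdctl -C http://etcd:2379 set /test ok
[root@k8s-master ~]# etcdctl -C http://etcd:2379 get /test
[root@k8s-master ~]# etcdctl -C http://etcd:2379 rm /test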


2.2 Install Docker and a private Docker registry

2.2.1 Install docker
[root@k8s-master ~]# yum install docker docker-distribution -y

2.2.2 Edit the Docker configuration file so images can be pulled from the private registry.
[root@k8s-master ~]# vim /etc/sysconfig/docker
++++++++++++++++++++++++++++++++++++++++++++++
# append --insecure-registry to the existing OPTIONS line (a second OPTIONS= assignment would override the first and drop these flags)
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
++++++++++++++++++++++++++++++++++++++++++++++

2.2.3 Start the services and enable them at boot:
[root@k8s-master ~]# systemctl start docker docker-distribution
[root@k8s-master ~]# systemctl enable docker docker-distribution
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker-distribution.service to /usr/lib/systemd/system/docker-distribution.service.
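
To confirm that the private registry started by docker-distribution is reachable (it listens on port 5000 by default), a small smoke test can be run on the master; the busybox image used here is just an illustration:

[root@k8s-master ~]# curl http://registry:5000/v2/_catalog       # the registry API should answer with a JSON repository list
[root@k8s-master ~]# docker pull busybox
[root@k8s-master ~]# docker tag busybox registry:5000/busybox
[root@k8s-master ~]# docker push registry:5000/busybox           # only works with the --insecure-registry option set above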


2.3 Install kubernetes
[root@k8s-master ~]# yum install -y kubernetes

The kubernetes components that must run on the master host are the API server (kube-apiserver), the controller manager (kube-controller-manager) and the scheduler (kube-scheduler). Modify the corresponding configuration files as follows:
2.3.1 /etc/kubernetes/apiserver
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
++++++++++++++++++++++++++++++++++++++++++++
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota" # ServiceAccount removed from the default admission-control list
KUBE_API_ARGS=""
++++++++++++++++++++++++++++++++++++++++++++

2.3.2 /etc/kubernetes/config
[root@k8s-master ~]# vim /etc/kubernetes/config
+++++++++++++++++++++++++++++++++++++++++++
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://k8s-master:8080"
+++++++++++++++++++++++++++++++++++++++++++

2.3.3 Finally, enable the services at boot and start them:
[root@k8s-master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master ~]# systemctl start kube-apiserver kube-controller-manager kube-scheduler
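
Once the three services are running, the master can be checked quickly (no authentication is needed because the apiserver listens insecurely on port 8080):

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get componentstatuses
[root@k8s-master ~]# curl http://k8s-master:8080/healthz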





3. Deploy the node hosts

3.1 Install docker
3.1.1 Install the Docker package

Install docker as in section 2.2; docker-distribution is not needed on the nodes.
[root@k8s-node1 ~]# yum install docker -y
[root@k8s-node2 ~]# yum install docker -y

3.1.2 Edit the Docker configuration file so images can be pulled from the private registry.
[root@k8s-node1 ~]# vim /etc/sysconfig/docker
[root@k8s-node2 ~]# vim /etc/sysconfig/docker
++++++++++++++++++++++++++++++++++++++++
# append --insecure-registry to the existing OPTIONS line, as on the master
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
++++++++++++++++++++++++++++++++++++++++

3.1.3 Start docker and enable it at boot:
[root@k8s-node1 ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@k8s-node2 ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
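
To verify that the nodes picked up the insecure-registry option, the running daemon can be inspected; the exact output format depends on the docker version shipped by CentOS:

[root@k8s-node1 ~]# docker info | grep -i -A 2 'insecure'        # should list registry:5000
[root@k8s-node1 ~]# docker pull registry:5000/busybox            # assumes an image was pushed to the private registry earlier (illustrative)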


3.2 Install kubernetes
3.2.1 Install the kubernetes package

[root@k8s-node1 ~]# yum install -y kubernetes
[root@k8s-node2 ~]# yum install -y kubernetes


The kubernetes components that must run on each node are kubelet and kube-proxy, so modify the following configuration files accordingly.

3.2.2 /etc/kubernetes/config
[root@k8s-node1 ~]# vim /etc/kubernetes/config
[root@k8s-node2 ~]# vim /etc/kubernetes/config
+++++++++++++++++++++++++++++++++++++
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://k8s-master:8080"
+++++++++++++++++++++++++++++++++++++



3.2.3 /etc/kubernetes/kubelet
[root@k8s-node1 ~]# vim /etc/kubernetes/kubelet
+++++++++++++++++++++++++++++++++++++++++++++
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node1" # must match the hostname of this node
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
+++++++++++++++++++++++++++++++++++++++++++++

[root@k8s-node2 ~]# vim /etc/kubernetes/kubelet
+++++++++++++++++++++++++++++++++++++++++++++
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node2" # must match the hostname of this node
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
+++++++++++++++++++++++++++++++++++++++++++++


3.2.4 Finally, start the services and enable them at boot:
[root@k8s-node1 ~]# systemctl start kubelet kube-proxy && systemctl enable kubelet kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

[root@k8s-node2 ~]# systemctl start kubelet kube-proxy && systemctl enable kubelet kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
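
If a node fails to register later, the service logs on that node are the first place to look, for example:

[root@k8s-node1 ~]# systemctl status kubelet kube-proxy
[root@k8s-node1 ~]# journalctl -u kubelet --no-pager -n 50       # the most recent kubelet log lines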



3.2.5 Check the cluster node status from the master:
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME        STATUS    AGE
k8s-node1   Ready     8s
k8s-node2   Ready     21m

[root@k8s-master ~]# kubectl get node
NAME        STATUS    AGE
k8s-node1   Ready     1h
k8s-node2   Ready     1h





4. Install the overlay network: Flannel

4.1 Install Flannel with yum
[root@k8s-master ~]# yum install -y flannel
[root@k8s-node1 ~]# yum install flannel -y
[root@k8s-node2 ~]# yum install flannel -y


4.2 Configure Flannel
Edit /etc/sysconfig/flanneld on the master and on each node, for example:
[root@k8s-master ~]# vim /etc/sysconfig/flanneld
[root@k8s-node1 ~]# vim /etc/sysconfig/flanneld
[root@k8s-node2 ~]# vim /etc/sysconfig/flanneld
+++++++++++++++++++++++++++++++++++++++++++++++++
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
+++++++++++++++++++++++++++++++++++++++++++++++++


4.3 Configure the flannel key in etcd
4.3.1 Create the network key in etcd

Flannel stores its configuration in etcd so that all flannel instances stay consistent, so the network key has to be created in etcd first. The key '/atomic.io/network/config' must match the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they differ, flanneld will fail to start.
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{"Network": "10.1.10.0/16"}'
{"Network": "10.1.10.0/16"}

4.3.2 Finally, start Flannel, then restart docker and the kubernetes services in turn.
a. On the master:
[root@k8s-master ~]# systemctl start flanneld && systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

[root@k8s-master ~]# systemctl restart docker kube-apiserver kube-controller-manager kube-scheduler


b. On the nodes:
[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node1 ~]# systemctl restart kubelet kube-proxy

[root@k8s-node2 ~]# systemctl start flanneld && systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node2 ~]# systemctl restart kubelet kube-proxy
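
After the restarts, flanneld leases a subnet to each host and docker0 should fall inside it. A final check on each host (illustrative; the interface is flannel0 with the default udp backend, flannel.1 with vxlan) might look like:

[root@k8s-node1 ~]# cat /run/flannel/subnet.env        # FLANNEL_SUBNET leased to this host
[root@k8s-node1 ~]# ip addr show flannel0              # flannel tunnel interface
[root@k8s-node1 ~]# ip addr show docker0               # should be inside FLANNEL_SUBNET
[root@k8s-master ~]# kubectl get nodes                 # nodes should still report Ready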
