
zjq-blogs 2020-11-30 19:15

(1) What is Kubernetes

1. Overview

Kubernetes was open-sourced by Google in 2014. It is the open-source descendant of Borg, the large-scale container management system Google has run internally for more than a decade.

Kubernetes is a container cluster management system: an open-source platform that automates the deployment, scaling, and maintenance of container clusters. With Kubernetes we can:

  • deploy applications quickly
  • scale applications quickly
  • roll out new application features seamlessly
  • save resources and make better use of hardware

The goal of Kubernetes is to foster an ecosystem of components and tools that ease the burden of running applications in public or private clouds.

2. Features

  • Portable: runs on public, private, hybrid, and multi-cloud (multiple public cloud) environments
  • Extensible: modular, pluggable, mountable, composable
  • Automated: automatic deployment, automatic restarts, automatic replication, automatic scaling

3. From traditional to containerized deployment

a. Traditional deployment

Traditionally, applications were installed on a host with packages or scripts. The drawback is that the application's runtime, configuration, management, and entire lifecycle are tied to the current operating system, which makes upgrades and rollbacks awkward. Some of this can be worked around by building virtual machines, but VMs are heavyweight and hurt portability.

b. Advantages of containerized deployment

  • Fast application creation and deployment: container images are much easier to create than VM images.
  • Continuous development, integration, and deployment: reliable, frequent image builds and deployments, with quick and simple rollbacks thanks to image immutability.
  • Separation of build and run: container images are created at build/release time, decoupling applications from the underlying infrastructure.
  • Consistency across development, testing, and production: the application runs the same locally as it does in production.
  • Portability across clouds and OS distributions: runs on Ubuntu, RHEL, CoreOS, on-prem, Google Container Engine, or any other environment.
  • Distributed, elastic, microservice-oriented: applications are split into smaller, independent pieces that can be deployed and managed dynamically.
  • Resource isolation
  • More efficient resource utilization

4. Why Kubernetes

Containerized applications can run on a Kubernetes cluster built from physical or virtual machines. Kubernetes provides a "container-centric infrastructure" that covers the common needs of running applications in production, such as (a small illustrative manifest follows this list):

  • multiple cooperating processes
  • mounting storage systems
  • application health checks
  • replicating application instances
  • automatic scaling
  • service registration and discovery
  • load balancing
  • rolling updates
  • resource monitoring
  • log access
  • application debugging
  • authentication and authorization
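
To make a few of these items concrete, here is a minimal, hypothetical Deployment manifest (the demo-nginx name and nginx image are placeholders, not from the original text) showing instance replication, a health check, rolling updates, and resource limits:

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-nginx              # hypothetical example name
    spec:
      replicas: 3                   # application instance replication
      strategy:
        type: RollingUpdate         # rolling updates
      selector:
        matchLabels:
          app: demo-nginx
      template:
        metadata:
          labels:
            app: demo-nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.17
            ports:
            - containerPort: 80
            livenessProbe:          # application health check
              httpGet:
                path: /
                port: 80
            resources:
              limits:               # resource isolation and quotas
                cpu: "500m"
                memory: "128Mi"
    EOF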

(2) Preparing the Kubernetes installation

1. Overview

This installation uses Ubuntu Server X64 18.04 LTS to build a Kubernetes cluster with one master node and two worker nodes. The virtual machines need to meet a few basic requirements:

  • OS: Ubuntu Server X64 18.04 LTS (the steps are the same on 16.04; earlier releases differ)
  • CPU: at least 1 CPU with 2 cores
  • Memory: at least 2 GB
  • Disk: at least 20 GB

Create three virtual machines, named as follows:

  • Ubuntu Server 18.04 X64 Kubernetes Master
  • Ubuntu Server 18.04 X64 Kubernetes Slave1
  • Ubuntu Server 18.04 X64 Kubernetes Slave2

Configure the operating system on each virtual machine (a combined sketch follows this list):

  • Turn off swap: sudo swapoff -a
  • Keep swap off after reboot: comment out the swap entry in /etc/fstab
  • Disable the firewall: sudo ufw disable
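
Putting the three items above together, a minimal sketch (the sed pattern is an assumption about a typical /etc/fstab swap entry; check the file before and after editing):

    # turn swap off for the running system
    sudo swapoff -a
    # comment out any swap entry so it stays off after a reboot (run once)
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
    # disable the firewall
    sudo ufw disable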

2. Installing Docker with APT

Install

    # Update the package index
    sudo apt-get update
    # Install required dependencies
    sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
    # Add the GPG key
    curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    # Add the Docker repository
    sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
    # Update the package index again
    sudo apt-get -y update
    # Install Docker CE
    sudo apt-get -y install docker-ce
 

Verify

    docker version
    Client:
     Version:           18.09.6
     API version:       1.39
     Go version:        go1.10.8
     Git commit:        481bc77
     Built:             Sat May  4 02:35:57 2019
     OS/Arch:           linux/amd64
     Experimental:      false
    
    Server: Docker Engine - Community
     Engine:
      Version:          18.09.6
      API version:      1.39 (minimum version 1.12)
      Go version:       go1.10.8
      Git commit:       481bc77
      Built:            Sat May  4 01:59:36 2019
      OS/Arch:          linux/amd64
      Experimental:     false
 

Configure a registry mirror

On systems that use systemd, write the following into /etc/docker/daemon.json (create the file if it does not exist):

    {
      "registry-mirrors": [
        "https://registry.docker-cn.com"
      ]
    }
 

Note: the file must be valid JSON, otherwise Docker will fail to start.
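
An optional quick check (an addition, not part of the original steps) to make sure the file parses as JSON before restarting Docker:

    python3 -m json.tool /etc/docker/daemon.json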

Verify that the mirror is configured:

    sudo systemctl restart docker
    docker info
    ...
    # the following output indicates the mirror is configured
    Registry Mirrors:
     https://registry.docker-cn.com/
    ...
 

3. Changing the hostname

Hostnames must be unique within the same LAN, so we need to change them. The steps below apply to Ubuntu 18.04; on 16.04 or earlier, simply edit the name in /etc/hostname.

Check the current hostname

    # check the current hostname
    hostnamectl
    # the output looks like this
       Static hostname: ubuntu
             Icon name: computer-vm
               Chassis: vm
            Machine ID: 33011e0a95094672b99a198eff07f652
               Boot ID: dc856039f0d24164a9f8a50c506be96d
        Virtualization: vmware
      Operating System: Ubuntu 18.04.2 LTS
                Kernel: Linux 4.15.0-48-generic
          Architecture: x86-64
 

Change the hostname

    # change it with hostnamectl; kubernetes-master is the new hostname
    hostnamectl set-hostname kubernetes-master
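
The worker nodes need unique hostnames as well. Assuming they follow the naming used for the virtual machines above (the exact names are an assumption), run on each worker respectively:

    # on the first worker node
    hostnamectl set-hostname kubernetes-slave1
    # on the second worker node
    hostnamectl set-hostname kubernetes-slave2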
 

Modify cloud.cfg

If the cloud-init package is installed, the cloud.cfg file also needs to be modified. This package is usually installed by default to handle cloud instance initialization, and without this change it may reset the hostname on reboot.

    # if the file exists
    vi /etc/cloud/cloud.cfg
    
    # this setting defaults to false; change it to true
    preserve_hostname: true
 

Verify

    root@kubernetes-master:~# hostnamectl
       Static hostname: kubernetes-master
             Icon name: computer-vm
               Chassis: vm
            Machine ID: 33011e0a95094672b99a198eff07f652
               Boot ID: 8c0fd75d08c644abaad3df565e6e4cbd
        Virtualization: vmware
      Operating System: Ubuntu 18.04.2 LTS
                Kernel: Linux 4.15.0-48-generic
          Architecture: x86-64

(3) Installing kubeadm

1. Overview

kubeadm is the Kubernetes cluster installation tool; it bootstraps a Kubernetes cluster quickly.

2. Configure the package repository

    # Install prerequisite tools
    apt-get update && apt-get install -y apt-transport-https
    # Add the GPG key
    curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
    # Add the repository. Note: our release codename is bionic, but the Aliyun mirror
    # does not provide it yet, so we keep using xenial from 16.04
    cat << EOF >/etc/apt/sources.list.d/kubernetes.list
    deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
    EOF
 

3. Install kubeadm, kubelet, and kubectl

    # Install the latest versions
    apt-get update  
    apt-get install -y kubelet kubeadm kubectl
    # Or pin a specific version:
    apt-get install -y kubelet=1.14.1-00 kubeadm=1.14.1-00 kubectl=1.14.1-00
    
    # The installation output looks like the following; note the kubeadm version number
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      conntrack cri-tools kubernetes-cni socat
    The following NEW packages will be installed:
      conntrack cri-tools kubeadm kubectl kubelet kubernetes-cni socat
    0 upgraded, 7 newly installed, 0 to remove and 96 not upgraded.
    Need to get 50.6 MB of archives.
    After this operation, 290 MB of additional disk space will be used.
    Get:1 http://mirrors.aliyun.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]
    Get:2 http://mirrors.aliyun.com/ubuntu bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]
    Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.12.0-00 [5,343 kB]
    Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB]
    Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.14.1-00 [21.5 MB]
    Get:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.14.1-00 [8,806 kB]
    Get:7 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.14.1-00 [8,150 kB]
    Fetched 50.6 MB in 5s (9,912 kB/s) 
    Selecting previously unselected package conntrack.
    (Reading database ... 67205 files and directories currently installed.)
    Preparing to unpack .../0-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb ...
    Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
    Selecting previously unselected package cri-tools.
    Preparing to unpack .../1-cri-tools_1.12.0-00_amd64.deb ...
    Unpacking cri-tools (1.12.0-00) ...
    Selecting previously unselected package kubernetes-cni.
    Preparing to unpack .../2-kubernetes-cni_0.7.5-00_amd64.deb ...
    Unpacking kubernetes-cni (0.7.5-00) ...
    Selecting previously unselected package socat.
    Preparing to unpack .../3-socat_1.7.3.2-2ubuntu2_amd64.deb ...
    Unpacking socat (1.7.3.2-2ubuntu2) ...
    Selecting previously unselected package kubelet.
    Preparing to unpack .../4-kubelet_1.14.1-00_amd64.deb ...
    Unpacking kubelet (1.14.1-00) ...
    Selecting previously unselected package kubectl.
    Preparing to unpack .../5-kubectl_1.14.1-00_amd64.deb ...
    Unpacking kubectl (1.14.1-00) ...
    Selecting previously unselected package kubeadm.
    Preparing to unpack .../6-kubeadm_1.14.1-00_amd64.deb ...
    Unpacking kubeadm (1.14.1-00) ...
    Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
    Setting up kubernetes-cni (0.7.5-00) ...
    Setting up cri-tools (1.12.0-00) ...
    Setting up socat (1.7.3.2-2ubuntu2) ...
    Setting up kubelet (1.14.1-00) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
    Setting up kubectl (1.14.1-00) ...
    Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
    # Note the version number here: we are using Kubernetes v1.14.1
    Setting up kubeadm (1.14.1-00) ...
    
    # Enable kubelet at boot and start it now
    systemctl enable kubelet && systemctl start kubelet
 
  • kubeadm: initializes the Kubernetes cluster
  • kubectl: the Kubernetes command-line tool, used mainly to deploy and manage applications, inspect resources, and create, delete, and update components
  • kubelet: responsible for starting Pods and containers
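
Optionally, to keep these packages at the installed version so that a routine apt-get upgrade does not move the cluster to a different release unexpectedly, you can hold them (a tip added here, not part of the original text):

    apt-mark hold kubelet kubeadm kubectl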

(4) Configuring kubeadm

1. Overview

Installing Kubernetes mostly means pulling its component images, and kubeadm already bundles the list of base images Kubernetes needs to run. However, because of network restrictions in mainland China, these images cannot be pulled from the default Google-hosted registry. Pointing kubeadm at the image repository provided by Aliyun solves the problem.

2. Create and edit the configuration

    # Export the default configuration to a file
    kubeadm config print init-defaults > kubeadm.yml
 
    # Edit the configuration so it looks like the following
    apiVersion: kubeadm.k8s.io/v1beta1
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      # change this to the master node's IP
      advertiseAddress: 192.168.141.130
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      name: kubernetes-master
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta1
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: ""
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    # Google's registry is not reachable from mainland China; use the Aliyun mirror instead
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    # set the Kubernetes version number
    kubernetesVersion: v1.14.1
    networking:
      dnsDomain: cluster.local
      # use Calico's default Pod network CIDR
      podSubnet: "192.168.0.0/16"
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
    ---
    # enable IPVS mode
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    featureGates:
      SupportIPVSProxyMode: true
    mode: ipvs
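
Note that kube-proxy's IPVS mode relies on the IPVS kernel modules, and the ipset/ipvsadm userland tools are handy for inspecting it. The original text does not cover this, so the following is only a sketch of what is typically needed:

    # userland tools (optional, useful for 'ipvsadm -Ln')
    sudo apt-get install -y ipset ipvsadm
    # load the IPVS kernel modules
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4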
 

3. List and pull the images

    # List the required images
    kubeadm config images list --config kubeadm.yml
    # Pull the images
    kubeadm config images pull --config kubeadm.yml

(5) Building a Kubernetes cluster with kubeadm

1. Install the Kubernetes master node

Run the command below to initialize the master node. It uses the configuration file we prepared above; the --experimental-upload-certs flag uploads the certificates so they are distributed automatically when additional nodes join later, and the trailing tee kubeadm-init.log saves the output to a log file.

Note: newer kubeadm versions report "Flag --experimental-upload-certs has been deprecated, use --upload-certs instead"; in that case replace --experimental-upload-certs with --upload-certs, i.e. run kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log

    kubeadm init --config=kubeadm.yml --experimental-upload-certs | tee kubeadm-init.log
    
    # On success the output looks like this
    [init] Using Kubernetes version: v1.14.1
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.141.130]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.141.130 127.0.0.1 ::1]
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.141.130 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 20.003326 seconds
    [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:
    2cd5b86c4905c54d68cc7dfecc2bf87195e9d5d90b4fff9832d9b22fc5e73f96
    [mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: abcdef.0123456789abcdef
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    # Worker nodes will later join the cluster with the command below
    kubeadm join 192.168.141.130:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:cab7c86212535adde6b8d1c7415e81847715cfc8629bb1d270b601744d662515
 

Note: if the Kubernetes version you installed does not match the downloaded image versions, initialization fails with a "timed out waiting for the condition" error. If initialization fails partway through, or you want to change the configuration, run kubeadm reset to reset the node and then initialize again.
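
The join token in the configuration has a TTL of 24 hours, so if you add a worker node later the token from the log above may have expired. A new join command can be printed on the master at any time (an extra tip, not part of the original text):

    kubeadm token create --print-join-command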

2. Configure kubectl

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    
    # run this if you are using a non-root user
    chown $(id -u):$(id -g) $HOME/.kube/config


3. Verify

    kubectl get node
    
    # If node information is printed, the master is working
    NAME                STATUS     ROLES    AGE     VERSION
    kubernetes-master   NotReady   master   8m40s   v1.14.1
 

At this point the master node is configured.
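
The NotReady status above is expected: the node only becomes Ready after a Pod network add-on has been installed, as the init output suggests. Since the configuration uses Calico's default Pod subnet (192.168.0.0/16), a Calico manifest is the natural choice; the file name below is a placeholder, so first download the manifest matching your Calico version from the Calico documentation:

    # apply the Calico manifest for your Calico version
    kubectl apply -f calico.yaml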

4. What kubeadm init does

  • init: starts initialization for the specified version
  • preflight: runs pre-flight checks and pulls the required Docker images
  • kubelet-start: writes the kubelet configuration file /var/lib/kubelet/config.yaml; without this file the kubelet cannot start, which is why the kubelet does not run successfully before initialization
  • certificates: generates the certificates Kubernetes uses and stores them in /etc/kubernetes/pki
  • kubeconfig: generates the kubeconfig files in /etc/kubernetes, which the components use to communicate with each other
  • control-plane: installs the master components from the YAML manifests in /etc/kubernetes/manifests
  • etcd: installs the etcd service from /etc/kubernetes/manifests/etcd.yaml
  • wait-control-plane: waits for the master components deployed by control-plane to start
  • apiclient: checks the health of the master components
  • upload-config: uploads the configuration used for the cluster
  • kubelet: configures the kubelet through a ConfigMap
  • patchnode: records CNI information on the Node object via annotations
  • mark-control-plane: labels the current node with the master role and applies a NoSchedule taint so that ordinary Pods are not scheduled onto the master by default
  • bootstrap-token: generates the token that kubeadm join uses later to add nodes to the cluster
  • addons: installs the CoreDNS and kube-proxy add-ons