Using a Ceph Cluster to Provide Dynamic Storage Volume Provisioning for a Kubernetes Cluster

imirsh 2020-07-22 14:40

  1. Create the storage pool
ceph-cluster]$ ceph osd pool create kube-cluster 64
ceph-cluster]$ ceph osd pool application enable kube-cluster rbd
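As a quick sanity check, the pool and its enabled rbd application can be inspected before moving on:
ceph-cluster]$ ceph osd pool ls detail
ceph-cluster]$ ceph osd pool application get kube-cluster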
  2. Authorize a Ceph user
ceph-cluster]$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube-cluster' -o ceph.client.kube.keyring 
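To confirm the capabilities that were actually granted to the new user:
ceph-cluster]$ ceph auth get client.kube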
  3. Get the Ceph admin key, Base64-encoded, so the Kubernetes cluster can manage the Ceph cluster
ceph-cluster]$ ceph auth get-key client.admin|base64
QVFDb21CWmYzQnZ0Q3hBQWxVOFlWanRkdTRMaitJblBlOHRYcUE9PQ==
  4. Get the regular Ceph user's key, Base64-encoded, for Pods to use
ceph-cluster]$ ceph auth get-key client.kube|base64
QVFEczFCZGZhbzM2TkJBQURzZFlyUjRxbHhYTmF3dEoyUlBVT2c9PQ==
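The two values above go into the data.key fields of the Secrets below. Kubernetes stores data values Base64-encoded and decodes them on use, so the rbd plugin ends up with the raw Ceph key; an optional round-trip check is to decode the value and compare it against the output of ceph auth get-key client.kube:
ceph-cluster]$ echo QVFEczFCZGZhbzM2TkJBQURzZFlyUjRxbHhYTmF3dEoyUlBVT2c9PQ== | base64 -d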
  5. Create the Secrets
ceph]# cat secret-cluster.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
data:
  key: QVFDb21CWmYzQnZ0Q3hBQWxVOFlWanRkdTRMaitJblBlOHRYcUE9PQ==
type: "kubernetes.io/rbd"
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-kube-secret
  namespace: default
data:
  key: QVFEczFCZGZhbzM2TkJBQURzZFlyUjRxbHhYTmF3dEoyUlBVT2c9PQ==
type: "kubernetes.io/rbd"
  6. Create the StorageClass object
ceph]# cat ceph-storageclass.yaml 
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: rbd-dynamic
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.124.161:6789,192.168.124.162:6789,192.168.124.163:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube-cluster
  userId: kube
  userSecretName: ceph-kube-secret
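Apply the StorageClass and check that it is registered (and, given the annotation above, marked as the default class):
ceph]# kubectl apply -f ceph-storageclass.yaml
ceph]# kubectl get storageclass rbd-dynamic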

Note: dynamic provisioning requires the rbd command to be available on the node where kube-controller-manager runs (installing the ceph-common package is enough). With a kubeadm deployment, however, kube-controller-manager runs as a Pod whose container image does not include this program and cannot have it installed, so the rbd management tooling has to be supplied through an external provisioner instead; a sketch of that approach follows.
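As a sketch of that external approach, based on the community rbd-provisioner from the kubernetes-incubator/external-storage project (the image tag, namespace and ServiceAccount name here are assumptions, and the RBAC manifests shipped with that project are also required but omitted), the provisioner runs as a Deployment and registers itself as ceph.com/rbd, which would then replace kubernetes.io/rbd in the provisioner field of the StorageClass above:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      serviceAccountName: rbd-provisioner   # assumed to be created by the project's RBAC manifests
      containers:
      - name: rbd-provisioner
        image: quay.io/external_storage/rbd-provisioner:latest   # assumed image reference
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd               # name to use in the StorageClass provisioner field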

  7. Create a test Pod
ceph]# cat rbd-pod-test.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-claim
spec:
  storageClassName: rbd-dynamic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: rbd-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && sleep 3600"
    volumeMounts:
      - name: rbd-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: rbd-pvc
      persistentVolumeClaim:
        claimName: rbd-claim
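Apply the manifest; the PVC should be bound almost immediately once the provisioner has created the backing RBD image:
ceph]# kubectl apply -f rbd-pod-test.yaml
ceph]# kubectl get pvc rbd-claim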
  8. Verify
ceph]# kubectl  get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
rbd-pod   1/1     Running   0          41s   172.20.2.8   192.168.124.221   <none>           <none>
~]# rbd showmapped
id pool         image                                                       snap device    
0  kube-cluster kubernetes-dynamic-pvc-1017c477-162c-491f-9027-56bcfcd75919 -    /dev/rbd0 
ceph]# kubectl  exec -it pod/rbd-pod -- ls /mnt
SUCCESS     lost+found
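To tie the pieces together, the dynamically provisioned PV and the corresponding image in the kube-cluster pool can also be listed:
ceph]# kubectl get pv
ceph-cluster]$ rbd ls -p kube-cluster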
