How to define a shared persistent volume between two different deployments in k8s?

Problem Description

I am trying to define a shared persistent volume between two different deployments in k8s, but I have run into some problems:

I have 2 pods in each deployment, and I am trying to configure a volume that is shared both within and across the deployments - meaning that if I create a txt file in deployment1/pod1 and then look inside deployment1/pod2, I cannot see the file.

The second problem is that the file is also not visible in the other deployment (deployment2). What currently happens is that each pod gets its own independent volume instead of sharing the same one.

Ultimately, my goal is a single volume shared across the pods and the deployments. It is important to note that I am running on GKE.
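
For reference, this is roughly how the behavior can be reproduced (the file name is just an example; kubectl exec deploy/<name> targets an arbitrary pod of that deployment):

    # Write a test file from a pod in deployment1...
    $ kubectl exec -n test deploy/app1 -- sh -c 'echo hello > /etc/test/configs/test.txt'
    # ...then check whether it is visible from a pod in deployment2:
    $ kubectl exec -n test deploy/app2 -- cat /etc/test/configs/test.txt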

Here is my current configuration:

Deployment 1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: test
spec:
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: server
          image: app1
          ports:
            - name: grpc
              containerPort: 11111
          resources:
            requests:
              cpu: 300m
            limits:
              cpu: 500m
          volumeMounts:
            - name: test
              mountPath: /etc/test/configs
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: my-claim

Deployment 2:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app2
      namespace: test
    spec:
      selector:
        matchLabels:
          app: app2
      template:
        metadata:
          labels:
            app: app2
        spec:
          containers:
            - name: server
              image: app2
              ports:
                - name: http
                  containerPort: 22222
              resources:
                requests:
                  cpu: 300m
                limits:
                  cpu: 500m
              volumeMounts:
                - name: test
                  mountPath: /etc/test/configs
          volumes:
            - name: test
              persistentVolumeClaim:
                claimName: my-claim

PersistentVolume, PersistentVolumeClaim, and StorageClass:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: test-pv
      namespace: test
    spec:
      capacity:
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
      - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: fast
      local:
        path: /etc/test/configs
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: cloud.google.com/gke-nodepool
              operator: In
              values:
          - default-pool
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim
      namespace: test
      annotations:
        volume.beta.kubernetes.io/storage-class: fast
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: fast
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fast
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-ssd
      fstype: ext4
      replication-type: regional-pd

And the kubectl describe output for the PV and PVC:

    $ kubectl describe pvc -n test
    Name:          my-claim
    Namespace:     test
    StorageClass:  fast
    Status:        Bound
    Volume:        test-pv
    Labels:        <none>
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
                   volume.beta.kubernetes.io/storage-class: fast
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      5Gi
    Access Modes:  RWX
    VolumeMode:    Filesystem
    Mounted By:    <none>
    Events:        <none>

    $ kubectl describe pv -n test
    Name:              test-pv
    Labels:            <none>
    Annotations:       pv.kubernetes.io/bound-by-controller: yes
    Finalizers:        [kubernetes.io/pv-protection]
    StorageClass:      fast
    Status:            Bound
    Claim:             test/my-claim
    Reclaim Policy:    Retain
    Access Modes:      RWX
    VolumeMode:        Filesystem
    Capacity:          5Gi
    Node Affinity:
      Required Terms:
        Term 0:        cloud.google.com/gke-nodepool in [default-pool]
    Message:
    Source:
        Type:  LocalVolume (a persistent volume backed by local storage on a node)
        Path:  /etc/test/configs
    Events:    <none>

Tags: kubernetes, google-cloud-platform, google-kubernetes-engine, persistent-storage, kubernetes-pvc

Solution

The GCE-PD CSI storage driver does not support ReadWriteMany; you need to use ReadOnlyMany instead. For ReadWriteMany you would need to use a GFS mount.
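
For illustration only, a volume that genuinely supports ReadWriteMany would be backed by a shared filesystem rather than a GCE persistent disk; the sketch below uses the generic nfs volume source with a hypothetical server address as a stand-in for whatever shared filesystem (GFS, Filestore, ...) is actually deployed:

# Sketch only: an NFS-style PV, unlike GCE-PD, can be mounted ReadWriteMany.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-rwx-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2   # hypothetical NFS endpoint
    path: /share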

From the documentation on how to use a persistent disk with multiple readers:

Create a PersistentVolume and a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  claimRef:
    namespace: default
    name: my-readonly-pvc
  gcePersistentDisk:
    pdName: my-test-disk
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  # Specify "" as the storageClassName so it matches the PersistentVolume's StorageClass.
  # A nil storageClassName value uses the default StorageClass. For details, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: ""
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi
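
Note that gcePersistentDisk references a pre-existing disk, so the disk named in pdName has to be created first; assuming the gcloud CLI, that could look like this (the zone is a placeholder):

    # Create the disk that pdName refers to; use your cluster's zone.
    $ gcloud compute disks create my-test-disk --size=10GB --zone=us-central1-a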

Use the PersistentVolumeClaim in a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: busybox
    command:
      - "sleep"
      - "3600"
    volumeMounts:
    - mountPath: /test-mnt
      name: my-volume
      readOnly: true
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-readonly-pvc
      readOnly: true

Now you can have multiple Pods, on different nodes, that all mount this PersistentVolumeClaim in read-only mode. However, you cannot attach a persistent disk in write mode on multiple nodes at the same time.
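
Applied to the question's setup, a minimal sketch of what one of the deployments would look like when consuming that claim (this assumes my-readonly-pvc is created in the same namespace as the deployments; ports and resource limits are omitted for brevity):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: test
spec:
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: server
          image: app1
          volumeMounts:
            - name: shared
              mountPath: /etc/test/configs
              readOnly: true
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: my-readonly-pvc
            readOnly: true

The same volumes and volumeMounts stanza would go into app2; since the claim is ReadOnlyMany, any number of replicas across both deployments can mount it at the same time.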

