How do I set an ECK cluster PV to an existing file share in my Azure storage account?

Problem description

I'm generally a beginner with ECK and Azure services, so I'd appreciate some help with the setup.

I'm trying to set up an Elasticsearch and Kibana cluster, with the PV pointing to an existing file share in my Azure storage account.

However, even after going through the tutorials, I couldn't figure it out.

Here are the PV and Elasticsearch cluster YAML files:

pv-storage.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-storage2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - dir_mode=0755
    - file_mode=0755
  # storageClassName: storage-azure
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-secret
    shareName: elasticsearchfile2
    readOnly: false

elastic-cluster.yaml:

apiVersion: elasticsearch.k8s.elastic.co/v1 
kind: Elasticsearch 
metadata: 
  name: quickstart
spec: 
  version: 7.11.1 # Make sure you use the version of your choice
  http: 
    service: 
      spec: 
        type: LoadBalancer # Adds an external IP
  nodeSets: 
  - name: default 
    count: 1 
    config: 
      node.master: true 
      node.data: true 
      node.ingest: true 
      node.store.allow_mmap: false 
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
        annotations:
          volume.beta.kubernetes.io/storage-class: ""
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
        volumeName: elastic-storage2

Error events from `kubectl describe` on the Elasticsearch pod:

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  17m                   default-scheduler  Successfully assigned default/quickstart-es-default-0 to aks-agentpool-23635388-vmss000001
  Normal   Pulled     17m                   kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.11.1" already present on machine
  Normal   Created    17m                   kubelet            Created container elastic-internal-init-filesystem
  Normal   Started    17m                   kubelet            Started container elastic-internal-init-filesystem
  Warning  Unhealthy  16m                   kubelet            Readiness probe failed: {"timestamp": "2021-02-25T04:27:10+00:00", "message": "readiness probe failed", "curl_rc": "7"}
  Normal   Started    15m (x4 over 17m)     kubelet            Started container elasticsearch
  Warning  Unhealthy  15m                   kubelet            Readiness probe failed: OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown
  Normal   Pulled     15m (x5 over 17m)     kubelet            Container image "docker.elastic.co/elasticsearch/elasticsearch:7.11.1" already present on machine
  Normal   Created    15m (x5 over 17m)     kubelet            Created container elasticsearch
  Warning  BackOff    2m13s (x62 over 16m)  kubelet            Back-off restarting failed container

Error logs from the Elasticsearch pod:

{"type": "deprecation", "timestamp": "2021-02-25T07:53:03,175Z", "level": "DEPRECATION", "component": "o.e.d.c.s.Settings", "cluster.name": "quickstart", "node.name": "quickstart-es-default-0", "message": "[node.data] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version." }
{"type": "server", "timestamp": "2021-02-25T07:53:03,596Z", "level": "ERROR", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "quickstart", "node.name": "quickstart-es-default-0", "message": "uncaught exception in thread [main]", 
"stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];",

Thanks in advance, and let me know if you need any more details.

Tags: elasticsearch, kubernetes

Solution
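
The `AccessDeniedException[/usr/share/elasticsearch/data/nodes]` in the logs suggests the Elasticsearch process cannot write to the mounted share. The official Elasticsearch image runs as the `elasticsearch` user (uid/gid 1000), while an `azureFile` volume is a CIFS mount owned by root with the permissions given in `mountOptions`, and ownership cannot be changed with `chown` after mounting. A likely fix (a sketch based on that assumption, not verified against this exact cluster) is to mount the share as uid/gid 1000 by adding `uid` and `gid` mount options to the PV:

pv-storage.yaml (updated mountOptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-storage2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - dir_mode=0755
    - file_mode=0755
    - uid=1000   # elasticsearch user in the official image
    - gid=1000
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-secret
    shareName: elasticsearchfile2
    readOnly: false

Since mount options take effect only at mount time, delete and recreate the PV (and the pod, so the volume is remounted) after making this change. Note that `node.store.allow_mmap: false` in the cluster spec is already correct here, since mmap is not supported on CIFS-backed Azure Files shares.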
