elasticsearch - kubernetes elasticsearch - failed to resolve host elasticsearch-master-headless
Problem description
I have a 3-node Kubernetes cluster - one of the nodes is the master. I installed Elasticsearch with helm install, using these custom values:
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100M
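For reference, values like these are normally passed to the chart at install time. A sketch, assuming the official Elastic chart repository and a hypothetical values file name my-values.yaml (Helm 3 syntax):

```shell
# Add the official Elastic chart repository (assumed chart source)
helm repo add elastic https://helm.elastic.co
helm repo update

# Install the chart with the custom values above (file name is illustrative)
helm install elasticsearch elastic/elasticsearch --version 7.6.0 -f my-values.yaml
```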
The rest of the configuration is left at its defaults. This is the output of kubectl get all:
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-master-0 0/1 Running 0 24m
pod/elasticsearch-master-1 0/1 Running 0 24m
pod/elasticsearch-master-2 0/1 Running 0 24m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch-master ClusterIP 10.110.224.134 <none> 9200/TCP,9300/TCP 24m
service/elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 24m
service/glusterfs-dynamic-3b22309e-ecb2-4874-9186-1d0d8e16e364 ClusterIP 10.99.52.135 <none> 1/TCP 24m
service/glusterfs-dynamic-5c51d80c-8a33-4240-bc1d-2a578aca6ff9 ClusterIP 10.100.75.99 <none> 1/TCP 24m
service/glusterfs-dynamic-75293711-ee36-4f98-b645-9162e362c4a9 ClusterIP 10.98.220.143 <none> 1/TCP 24m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27h
NAME READY AGE
statefulset.apps/elasticsearch-master 0/3 24m
Here is the result of describe for one of the pods; the remaining pods look similar:
Warning FailedScheduling 25m (x3 over 25m) default-scheduler error while running "VolumeBinding" filter plugin for pod "elasticsearch-master-1": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 25m default-scheduler Successfully assigned default/elasticsearch-master-1 to kube.node1
Normal Pulling 25m kubelet, kube.node1 Pulling image "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
Normal Pulled 20m kubelet, kube.node1 Successfully pulled image "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
Normal Created 20m kubelet, kube.node1 Created container configure-sysctl
Normal Started 20m kubelet, kube.node1 Started container configure-sysctl
Normal Pulled 20m kubelet, kube.node1 Container image "docker.elastic.co/elasticsearch/elasticsearch:7.6.0" already present on machine
Normal Created 20m kubelet, kube.node1 Created container elasticsearch
Normal Started 20m kubelet, kube.node1 Started container elasticsearch
Warning Unhealthy 10s (x124 over 20m) kubelet, kube.node1 Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
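The readiness probe shown above is simply polling cluster health with wait_for_status=green. The same check can be run by hand to see the actual cluster status; a sketch (not from the original post), assuming the default namespace:

```shell
# Forward the ClusterIP service's HTTP port to localhost
kubectl port-forward service/elasticsearch-master 9200:9200 &

# Query cluster health with the same parameters the probe uses
curl "http://localhost:9200/_cluster/health?wait_for_status=green&timeout=1s&pretty"
```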
The pod logs (kubectl logs pod/xxxxxxx) show this error every few seconds:
{"type": "server", "timestamp": "2020-02-13T17:49:28,299Z", "level": "WARN", "component": "o.e.d.SeedHostsResolver", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-2", "message": "failed to resolve host [elasticsearch-master-headless]",
"stacktrace": ["java.net.UnknownHostException: elasticsearch-master-headless",
"at java.net.InetAddress$CachedAddresses.get(InetAddress.java:798) ~[?:?]",
"at java.net.InetAddress.getAllByName0(InetAddress.java:1489) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1348) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1282) ~[?:?]",
"at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:528) ~[elasticsearch-7.6.0.jar:7.6.0]",
"at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:470) ~[elasticsearch-7.6.0.jar:7.6.0]",
"at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:801) ~[elasticsearch-7.6.0.jar:7.6.0]",
"at org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHostsLists$0(SeedHostsResolver.java:144) ~[elasticsearch-7.6.0.jar:7.6.0]",
"at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.6.0.jar:7.6.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]",
"at java.lang.Thread.run(Thread.java:830) [?:?]"] }
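An UnknownHostException for the headless service name usually points at in-cluster DNS rather than at Elasticsearch itself. One way to confirm is to resolve the name from a throwaway pod; a hedged sketch (pod name and image are illustrative, taken from the Kubernetes DNS debugging docs):

```shell
# Start a temporary pod with DNS tools and resolve the headless service FQDN
kubectl run dnsutils --rm -it --restart=Never \
  --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 -- \
  nslookup elasticsearch-master-headless.default.svc.cluster.local

# Check that CoreDNS itself is running
kubectl get pods -n kube-system -l k8s-app=kube-dns
```

If the lookup fails, the next things to inspect are the CoreDNS pod logs and, on CentOS 8 hosts, firewall rules (firewalld/nftables) that can block pod-to-DNS traffic.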
I tried changing the replica count to 1 - the pod runs with Ready 1/1, but the same problem resolving the headless service host remains. Can anyone help?
Environment: 3 hosts running CentOS 8, Kubernetes v1.17.3, storageClass: gluster-provisioner (default) with kubernetes.io/glusterfs and the heketi API.
Solution