Heap size of an AWS ElasticSearch cluster

Problem description

I have an AWS ElasticSearch domain running on 2 t2.medium instances with hardly any load on it, yet it keeps crashing all the time.

I see the following graph for the JVMMemoryPressure metric:

[Screenshot: JVMMemoryPressure metric]

When I go to Kibana, I see the following error message:

[Screenshot: Kibana status message]

Questions:

  1. Am I interpreting this correctly that the machine has only 64 MB of memory available, rather than the 4 GB that should come with this instance type? Is there somewhere else I can verify the absolute amount of heap memory, rather than only seeing it in Kibana when something goes wrong?
  2. If so, how can I change this behaviour?
  3. If this is normal, where can I look for the possible causes of ElasticSearch crashing when memory usage hits 100%? The load I put on the instance is minimal.

In the instance's logs I see many warnings like the ones below. They give no clue as to where to start debugging the problem.

[2018-08-15T07:36:37,021][WARN ][r.suppressed ] path: __PATH__ params:
{}

org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [__PATH__ master];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.handleBlockExceptions(TransportBulkAction.java:387) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:273) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$2.onTimeout(TransportBulkAction.java:421) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:317) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:244) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:578) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.1.jar:6.0.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_172]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]

Or:

[2018-08-15T07:36:37,691][WARN ][o.e.d.z.ZenDiscovery ] [U1DMgyE] not enough master nodes discovered during pinging (found [[Candidate{node={U1DMgyE}{U1DMgyE1Rn2gId2aRgRDtw}{F-tqTFGDRZaovQF8ILC44w}{__IP__}{__IP__}{__AMAZON_INTERNAL__, __AMAZON_INTERNAL__}, clusterStateVersion=207939}]], but needed [2]), pinging again

Or:

[2018-08-15T07:36:42,303][WARN ][o.e.t.n.Netty4Transport ] [U1DMgyE] write and flush on the network layer failed (channel: [id: 0x385d3b63, __PATH__ ! __PATH__])
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method) ~[?:1.8.0_172]
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51) ~[?:1.8.0_172]
at sun.nio.ch.IOUtil.write(IOUtil.java:148) ~[?:1.8.0_172]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504) ~[?:1.8.0_172]
at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:432) ~[netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:856) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:368) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:638) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]

Tags: elasticsearch, jvm, kibana

Solution


I know that number is not correct; I have no idea where it comes from. To get the correct memory usage, run the following query:

GET "<es_url>:9200/_nodes/stats"

