Cassandra repair causing node timeouts

Problem Description

We are running a Cassandra (3.10-1) cluster with 5 nodes - each node has 8 cores and 23Gi of memory, all in the same DC. Replication factor - 2. Consistency level - 2.

Recently, during the scheduled repair we run roughly once a week, queries against one node have been timing out heavily - and it is always the same node across different repairs. From looking at its logs:

debug.log shows several errors while building the Merkle trees for the repair.

DEBUG [MemtablePostFlush:7354] 2018-12-30 23:52:08,314 ColumnFamilyStore.java:954 - forceFlush requested but everything is clean in user_device_validation_configuration
ERROR [ValidationExecutor:973] 2018-12-30 23:52:08,315 Validator.java:268 - Failed creating a merkle tree for [repair #b3887f60-0c8d-11e9-b894-754001cf0917 on keyspace1/user_device_validation_configuration, [(-8096630393642361664,-8073407512196669022]]], /10.52.5.42 (see log for details)
DEBUG [AntiEntropyStage:1] 2018-12-30 23:52:08,318 RepairMessageVerbHandler.java:114 - Validating ValidationRequest{gcBefore=1545349928} org.apache.cassandra.repair.messages.ValidationRequest@5c1c2b28
DEBUG [ValidationExecutor:973] 2018-12-30 23:52:08,319 StorageService.java:3313 - Forcing flush on keyspace keyspace1, CF raw_sensors
DEBUG [MemtablePostFlush:7354] 2018-12-30 23:52:08,319 ColumnFamilyStore.java:954 - forceFlush requested but everything is clean in raw_sensors
ERROR [ValidationExecutor:973] 2018-12-30 23:52:08,319 Validator.java:268 - Failed creating a merkle tree for [repair #b3887f60-0c8d-11e9-b894-754001cf0917 on keyspace1/raw_sensors, [(-8096630393642361664,-8073407512196669022]]], /10.52.5.42 (see log for details)
DEBUG [AntiEntropyStage:1] 2018-12-30 23:52:08,321 RepairMessageVerbHandler.java:114 - Validating ValidationRequest{gcBefore=1545349928} org.apache.cassandra.repair.messages.ValidationRequest@5c1c2b28
DEBUG [AntiEntropyStage:1] 2018-12-30 23:52:08,321 RepairMessageVerbHandler.java:142 - Got anticompaction request AnticompactionRequest{parentRepairSession=b387e320-0c8d-11e9-b894-754001cf0917} org.apache.cassandra.repair.messages.AnticompactionRequest@d4b7ed7b
ERROR [AntiEntropyStage:1] 2018-12-30 23:52:08,322 RepairMessageVerbHandler.java:168 - Got error, removing parent repair session
ERROR [AntiEntropyStage:1] 2018-12-30 23:52:08,322 CassandraDaemon.java:229 - Exception in thread Thread[AntiEntropyStage:1,5,main]
java.lang.RuntimeException: java.lang.RuntimeException: Parent repair session with id = b387e320-0c8d-11e9-b894-754001cf0917 has failed.
    at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:171) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) ~[apache-cassandra-3.10.jar:3.10]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_131]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_131]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_131]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
    at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) [apache-cassandra-3.10.jar:3.10]
    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
Caused by: java.lang.RuntimeException: Parent repair session with id = b387e320-0c8d-11e9-b894-754001cf0917 has failed.
    at org.apache.cassandra.service.ActiveRepairService.getParentRepairSession(ActiveRepairService.java:400) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.service.ActiveRepairService.doAntiCompaction(ActiveRepairService.java:435) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:143) ~[apache-cassandra-3.10.jar:3.10]
    ... 7 common frames omitted
DEBUG [ValidationExecutor:973] 2018-12-30 23:52:08,323 StorageService.java:3313 - Forcing flush on keyspace keyspace1, CF mouse_interactions
DEBUG [MemtablePostFlush:7354] 2018-12-30 23:52:08,323 ColumnFamilyStore.java:954 - forceFlush requested but everything is clean in mouse_interactions
ERROR [ValidationExecutor:973] 2018-12-30 23:52:08,327 Validator.java:268 - Failed creating a merkle tree for [repair #b3887f60-0c8d-11e9-b894-754001cf0917 on keyspace1/mouse_interactions, [(-8096630393642361664,-8073407512196669022]]], /10.52.5.42 (see log for details)
DEBUG [GossipStage:1] 2018-12-30 23:52:10,643 FailureDetector.java:457 - Ignoring interval time of 2000164738 for /10.52.3.47
DEBUG [GossipStage:1] 2018-12-30 23:52:13,643 FailureDetector.java:457 - Ignoring interval time of 2000239040 for /10.52.3.47
DEBUG [ReadRepairStage:407] 2018-12-30 23:52:15,133 ReadCallback.java:242 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(7486012912397474412, 000467657474000020376337363933643363613837616231643531633936396564616234636363613400) (a0e45fcd73255bcd93a63b15d41e0843 vs 7dff1a87a755cf62150befc8352f59e8)
    at org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92) ~[apache-cassandra-3.10.jar:3.10]
    at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233) ~[apache-cassandra-3.10.jar:3.10]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
    at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) [apache-cassandra-3.10.jar:3.10]
    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
DEBUG [GossipStage:1] 2018-12-30 23:52:26,639 FailureDetector.java:457 - Ignoring interval time of 2000385682 for /10.52.3.47

After a few hours of looking at the GC logs, I noticed that GC is being invoked roughly every 20 seconds, and each invocation stops the application for more than 10 seconds:

2018-12-31T06:24:57.450+0000: 1184437.292: Total time for which application threads were stopped: 18.0318658 seconds, Stopping threads took: 0.0005000 seconds
2018-12-31T06:24:57.483+0000: 1184437.325: Total time for which application threads were stopped: 0.0053815 seconds, Stopping threads took: 0.0007325 seconds
2018-12-31T06:24:57.565+0000: 1184437.407: Total time for which application threads were stopped: 0.0118127 seconds, Stopping threads took: 0.0057652 seconds
2018-12-31T06:24:57.604+0000: 1184437.446: Total time for which application threads were stopped: 0.0064909 seconds, Stopping threads took: 0.0023037 seconds
2018-12-31T06:24:57.701+0000: 1184437.543: Total time for which application threads were stopped: 0.0066540 seconds, Stopping threads took: 0.0031299 seconds
{Heap before GC invocations=1377084 (full 108682):
 par new generation   total 943744K, used 943711K [0x00000005c0000000, 0x0000000600000000, 0x0000000600000000)
  eden space 838912K, 100% used [0x00000005c0000000, 0x00000005f3340000, 0x00000005f3340000)
  from space 104832K,  99% used [0x00000005f99a0000, 0x00000005ffff7ce0, 0x0000000600000000)
  to   space 104832K,   0% used [0x00000005f3340000, 0x00000005f3340000, 0x00000005f99a0000)
 concurrent mark-sweep generation total 7340032K, used 7340032K [0x0000000600000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 71629K, capacity 73844K, committed 75000K, reserved 1116160K
  class space    used 8521K, capacity 8909K, committed 9176K, reserved 1048576K
2018-12-31T06:24:58.029+0000: 1184437.870: [Full GC (Allocation Failure) 2018-12-31T06:24:58.029+0000: 1184437.871: [CMS: 7340032K->7340031K(7340032K), 15.2051822 secs] 8283743K->7443230K(8283776K), [Metaspace: 71629K->71629K(1116160K)], 15.2055451 secs] [Times: user=13.94 sys=1.28, real=15.20 secs] 
Heap after GC invocations=1377085 (full 108683):
 par new generation   total 943744K, used 103198K [0x00000005c0000000, 0x0000000600000000, 0x0000000600000000)
  eden space 838912K,  12% used [0x00000005c0000000, 0x00000005c64c7950, 0x00000005f3340000)
  from space 104832K,   0% used [0x00000005f99a0000, 0x00000005f99a0000, 0x0000000600000000)
  to   space 104832K,   0% used [0x00000005f3340000, 0x00000005f3340000, 0x00000005f99a0000)
 concurrent mark-sweep generation total 7340032K, used 7340031K [0x0000000600000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 71629K, capacity 73844K, committed 75000K, reserved 1116160K
  class space    used 8521K, capacity 8909K, committed 9176K, reserved 1048576K
}
2018-12-31T06:25:13.235+0000: 1184453.076: Total time for which application threads were stopped: 15.2103050 seconds, Stopping threads took: 0.0004204 seconds
2018-12-31T06:25:13.243+0000: 1184453.085: Total time for which application threads were stopped: 0.0047592 seconds, Stopping threads took: 0.0008416 seconds
2018-12-31T06:25:13.272+0000: 1184453.114: Total time for which application threads were stopped: 0.0085538 seconds, Stopping threads took: 0.0046376 seconds
2018-12-31T06:25:13.298+0000: 1184453.140: [GC (CMS Initial Mark) [1 CMS-initial-mark: 7340031K(7340032K)] 7536074K(8283776K), 0.0650538 secs] [Times: user=0.12 sys=0.01, real=0.06 secs] 
2018-12-31T06:25:13.364+0000: 1184453.206: Total time for which application threads were stopped: 0.0728487 seconds, Stopping threads took: 0.0039520 seconds
2018-12-31T06:25:13.364+0000: 1184453.206: [CMS-concurrent-mark-start]
{Heap before GC invocations=1377085 (full 108684):
 par new generation   total 943744K, used 943215K [0x00000005c0000000, 0x0000000600000000, 0x0000000600000000)
  eden space 838912K, 100% used [0x00000005c0000000, 0x00000005f3340000, 0x00000005f3340000)
  from space 104832K,  99% used [0x00000005f99a0000, 0x00000005fff7bd98, 0x0000000600000000)
  to   space 104832K,   0% used [0x00000005f3340000, 0x00000005f3340000, 0x00000005f99a0000)
 concurrent mark-sweep generation total 7340032K, used 7340031K [0x0000000600000000, 0x00000007c0000000, 0x00000007c0000000)
 Metaspace       used 71631K, capacity 73844K, committed 75000K, reserved 1116160K
  class space    used 8521K, capacity 8909K, committed 9176K, reserved 1048576K
2018-12-31T06:25:13.821+0000: 1184453.662: [Full GC (Allocation Failure) 2018-12-31T06:25:13.821+0000: 1184453.663: [CMS2018-12-31T06:25:16.960+0000: 1184456.802: [CMS-concurrent-mark: 3.592/3.596 secs] [Times: user=12.47 sys=0.38, real=3.60 secs]

So I started checking the data distribution across the cluster - we are using num_tokens: 32 on each node, and the data seems balanced, with each node owning ~40%:

UN  10.52.2.94  672.64 GiB  32           ?       ad3d1365-bbb7-4229-b586-40667ec22b41  rack1
UN  10.52.3.47  640.91 GiB  32           ?       cdba952b-9685-4769-aaf4-22e538a5c37f  rack1
UN  10.52.1.57  719.34 GiB  32           ?       13bb7573-eb30-489b-80c4-6e5a7c8e5f5e  rack1
UN  10.52.5.42  743.04 GiB  32           ?       c9e892c6-9281-4a49-b4c4-a147a387c3de  rack1
UN  10.52.4.43  691.1 GiB  32           ?       53e3724e-f5a9-46b1-b330-7bb542f15f90  rack1
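
(Side note: the "?" in the Owns column appears when nodetool status is run without a keyspace argument, since effective ownership depends on each keyspace's replication settings. Running it against the keyspace seen in the logs prints per-node effective ownership directly:

nodetool status keyspace1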

After checking the logs on the rest of the nodes, I could not find any cause for those timeouts on that specific node. Any ideas or thoughts about what could make this happen again and again on the same node?

Tags: cassandra

Solution


Seeing this on only one node is really strange. Double-check that the configurations are identical. Otherwise, you may be writing to / querying large partitions that this node is primarily responsible for.
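
One quick way to check for that (my addition - the keyspace and table names are taken from the logs above) is nodetool tablehistograms, which prints partition-size percentiles for a table; run it on the problem node for each suspect table:

nodetool tablehistograms keyspace1 raw_sensors

If the 99% or Max partition size is in the hundreds of megabytes, a few hot partitions owned by that node could explain the asymmetry.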

Replication factor - 2. Consistency level - 2.

In general, repair can cause nodes to become unable to serve requests, because building Merkle trees and streaming data are both quite resource-intensive. I see two problems here:

  1. Long GC pauses.
  2. An RF/CL ratio that cannot tolerate any node going down.

Starting with #2: when your RF=2 and you are requiring 2 replicas to respond to your queries, you are effectively querying at CONSISTENCY_ALL. Therefore, if a node becomes overwhelmed and runs short on resources, your queries will be unable to complete. If it were me, I would increase the RF to 3 and then run a repair (assuming the nodes have adequate storage), as sketched below. The other option would be to query at a consistency level of 1 (ONE), which is probably what you should be doing with RF=2 anyway.
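
As a minimal sketch of that change in cqlsh - the keyspace name comes from the logs above, but the replication strategy and the DC name 'DC1' are assumptions, so check DESCRIBE KEYSPACE keyspace1 first and reuse whatever it reports:

ALTER KEYSPACE keyspace1
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};

-- then stream the new replicas into place from the shell:
--   nodetool repair -full keyspace1

And for the consistency-level option, cqlsh can switch it per session:

CONSISTENCY ONE;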

Also, when diagnosing query problems, it usually helps to see the actual queries being run (which you have not provided). With Cassandra, query problems are usually the result of queries that do not fit the table design.
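
If you can reproduce a slow query by hand, cqlsh's built-in tracing will show which replicas were contacted and where the time went; the SELECT below is purely a hypothetical placeholder, including the column name:

TRACING ON;
SELECT * FROM keyspace1.raw_sensors WHERE sensor_id = 42;  -- hypothetical query
TRACING OFF;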

As for the long GC pauses, that is a trickier problem to solve. There is an old Cassandra JIRA ticket ( CASSANDRA-8150 ) which discusses optimal settings for the CMS garbage collector. It is worth a read.

What is your MAX_HEAP setting? I see your new generation is under 1GB, which is far too small. With 23GB of RAM on each node, I would start with the following settings for CMS GC (pulled together in a jvm.options sketch after the list):

  • A Max Heap Size ( Xmx ) of 8GB-12GB (you want to leave roughly 50% of RAM for the OS/page cache).
  • An Initial Heap Size ( Xms ) equal to the Max Heap Size, to avoid the performance cost of growing the heap.
  • A Heap New Size ( Xmn ) of 40%-50% of the Max Heap Size (so somewhere between 3GB-6GB). You want plenty of room for the new generation.
  • A MaxTenuringThreshold of 6 to 8, because you want objects to be passed around the new-gen survivor spaces until they die, rather than being promoted to the old generation. By default, I believe it is set to 1, which means objects get promoted far too quickly.
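
Here is how that could look in conf/jvm.options (Cassandra 3.x reads its JVM flags from that file); the 12GB/5GB figures are just one point inside the ranges above, assumed values rather than a tested recommendation:

# CMS collector with the sizing suggested above (assumed values)
-Xms12G
-Xmx12G
-Xmn5G
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:MaxTenuringThreshold=6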

Basically, new-to-old (and old-to-permanent) promotion is where the long pauses happen. Ideally, you want every object on the heap to be created, live, and die in the new generation, without ever being promoted.

Otherwise, it may be worth your time to try G1GC. With G1, I would go with a Max/Initial Heap of around 50% of RAM, a MaxGCPauseMillis of 400-500ms, and no Heap New Size at all (G1 computes that itself).
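
In jvm.options terms, that would look something like this (again with assumed sizing - ~50% of this cluster's 23Gi nodes):

-Xms12G
-Xmx12G
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
# note: deliberately no -Xmn - G1 sizes the young generation itself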

