Error "This server is not the leader for that topic-partition" when running Kafka performance traffic

Problem description

Update, August 15, 2018: I ran strace to monitor the mprotect system call and found that it is indeed blocked for several seconds:

     strace -f -e trace=mprotect,mmap,munmap -T -t -p `pidof java` 2>&1 |tee mp1.txt

     [pid 27007] 03:52:48 mprotect(0x7f9766226000, 4096, PROT_NONE) = 0 <3.631704>

But I could not determine the cause.

Update, August 14, 2018: I found that this is a JVM stop-the-world (STW) event. I debugged the JVM with the following options:

 -XX:+PrintGCApplicationStoppedTime
 -XX:+PrintSafepointStatistics
 -XX:PrintSafepointStatisticsCount=1
 -XX:+SafepointTimeout
 -XX:SafepointTimeoutDelay=500
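
If it helps anyone reproduce this: the flags can be handed to the broker JVM through the KAFKA_OPTS environment variable (a sketch assuming the stock start scripts, which append KAFKA_OPTS to the java command line; the properties path is the Confluent default and may differ on your install):

     KAFKA_OPTS="-XX:+PrintGCApplicationStoppedTime -XX:+PrintSafepointStatistics \
       -XX:PrintSafepointStatisticsCount=1 -XX:+SafepointTimeout \
       -XX:SafepointTimeoutDelay=500" kafka-server-start /etc/kafka/server.properties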

Some of the logs are below:

              vmop                    [threads: total initially_running wait_to_block]    [time: spin block sync cleanup vmop] page_trap_count
        488.188: no vm operation      [      73          1              1    ]      [     1     0  3301     0     0    ]  1

        2018-08-13T22:16:09.744-0400: 491.491: Total time for which application threads were stopped: 3.3021375 seconds, Stopping threads took: 3.3018193 seconds

What is strange is that the spin and block times are zero while the sync time is 3301 ms. I built a JVM from the OpenJDK 1.8 sources with some extra debug logging added, and found that it was blocked in the code below:

     void SafepointSynchronize::begin() {
     ... ...

       if (UseCompilerSafepoints && DeferPollingPageLoopCount < 0) {
         // Make polling safepoint aware
         guarantee (PageArmed == 0, "invariant") ;
         PageArmed = 1 ;
         // This is where it blocks: make_polling_page_unreadable() mprotect()s
         // the global polling page to PROT_NONE (the call visible in the
         // strace output above), so that compiled code touching it traps
         os::make_polling_page_unreadable();
       }
       ... ...
     }

Inside os::make_polling_page_unreadable, the JVM calls ::mprotect, which depends on a semaphore in the kernel:

// mm/mprotect.c in the Linux kernel: sys_mprotect takes the mmap_sem
// write lock, so it has to wait for every other user of the address space
down_write(&current->mm->mmap_sem);

I suspect that contention on the mmap_sem semaphore causes this STW event, but I don't know which function is holding it. Any help here?
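
To put the mmap_sem theory to the test, here is a minimal JDK 8 sketch (mine, not verified against the real workload; the class name and file paths are arbitrary) that hammers mmap(2)/munmap(2) from several threads. Run it with the safepoint flags above and watch whether the "Stopping threads took" times grow. The cast to sun.nio.ch.DirectBuffer is a JDK 8-only trick to force an immediate munmap:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MmapSemStress {
        public static void main(String[] args) {
            for (int t = 0; t < 8; t++) {          // several threads fighting for mmap_sem
                final int id = t;
                new Thread(() -> {
                    try (RandomAccessFile raf =
                             new RandomAccessFile("/tmp/mmap-stress-" + id, "rw");
                         FileChannel ch = raf.getChannel()) {
                        while (true) {
                            // map() ends in mmap(2) and clean() in munmap(2);
                            // both take mmap_sem for write in the kernel
                            MappedByteBuffer buf =
                                ch.map(FileChannel.MapMode.READ_WRITE, 0, 64 << 20);
                            buf.put(0, (byte) 1);  // fault one page in
                            ((sun.nio.ch.DirectBuffer) buf).cleaner().clean();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();
            }
        }
    }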


Original question

I am currently testing Kafka performance. I created a topic with 36 partitions and a replication factor of 4 on a 6-node cluster; a single ZooKeeper instance runs on a separate node.

kafka-topics --create --topic kf.p36.r4 --zookeeper l2 --partitions 36 --replication-factor 4

[root@g9csf002-0-0-3 kafka]#  kafka-topics --describe --zookeeper l2 --topic kf.p36.r4
Topic:kf.p36.r4 PartitionCount:36   ReplicationFactor:4 Configs:
Topic: kf.p36.r4    Partition: 0    Leader: 1   Replicas: 1,5,6,2   Isr: 5,2,6,1
Topic: kf.p36.r4    Partition: 1    Leader: 2   Replicas: 2,6,1,3   Isr: 1,3,6,2
Topic: kf.p36.r4    Partition: 2    Leader: 3   Replicas: 3,1,2,4   Isr: 3,4,2,1
Topic: kf.p36.r4    Partition: 3    Leader: 4   Replicas: 4,2,3,5   Isr: 3,2,4,5
Topic: kf.p36.r4    Partition: 4    Leader: 5   Replicas: 5,3,4,6   Isr: 3,6,4,5
Topic: kf.p36.r4    Partition: 5    Leader: 6   Replicas: 6,4,5,1   Isr: 4,5,6,1
Topic: kf.p36.r4    Partition: 6    Leader: 1   Replicas: 1,6,2,3   Isr: 3,6,2,1
Topic: kf.p36.r4    Partition: 7    Leader: 2   Replicas: 2,1,3,4   Isr: 3,4,2,1
Topic: kf.p36.r4    Partition: 8    Leader: 3   Replicas: 3,2,4,5   Isr: 3,2,4,5
Topic: kf.p36.r4    Partition: 9    Leader: 4   Replicas: 4,3,5,6   Isr: 3,6,4,5
Topic: kf.p36.r4    Partition: 10   Leader: 5   Replicas: 5,4,6,1   Isr: 4,5,6,1
Topic: kf.p36.r4    Partition: 11   Leader: 6   Replicas: 6,5,1,2   Isr: 5,2,6,1
Topic: kf.p36.r4    Partition: 12   Leader: 1   Replicas: 1,2,3,4   Isr: 3,4,2,1
Topic: kf.p36.r4    Partition: 13   Leader: 2   Replicas: 2,3,4,5   Isr: 3,2,4,5
Topic: kf.p36.r4    Partition: 14   Leader: 3   Replicas: 3,4,5,6   Isr: 3,6,4,5
Topic: kf.p36.r4    Partition: 15   Leader: 4   Replicas: 4,5,6,1   Isr: 4,5,6,1
Topic: kf.p36.r4    Partition: 16   Leader: 5   Replicas: 5,6,1,2   Isr: 5,2,6,1
Topic: kf.p36.r4    Partition: 17   Leader: 6   Replicas: 6,1,2,3   Isr: 3,2,6,1
Topic: kf.p36.r4    Partition: 18   Leader: 1   Replicas: 1,3,4,5   Isr: 3,4,5,1
Topic: kf.p36.r4    Partition: 19   Leader: 2   Replicas: 2,4,5,6   Isr: 6,2,4,5
Topic: kf.p36.r4    Partition: 20   Leader: 3   Replicas: 3,5,6,1   Isr: 3,5,6,1
Topic: kf.p36.r4    Partition: 21   Leader: 4   Replicas: 4,6,1,2   Isr: 4,2,6,1
Topic: kf.p36.r4    Partition: 22   Leader: 5   Replicas: 5,1,2,3   Isr: 3,5,2,1
Topic: kf.p36.r4    Partition: 23   Leader: 6   Replicas: 6,2,3,4   Isr: 3,6,2,4
Topic: kf.p36.r4    Partition: 24   Leader: 1   Replicas: 1,4,5,6   Isr: 4,5,6,1
Topic: kf.p36.r4    Partition: 25   Leader: 2   Replicas: 2,5,6,1   Isr: 1,6,2,5
Topic: kf.p36.r4    Partition: 26   Leader: 3   Replicas: 3,6,1,2   Isr: 3,2,6,1
Topic: kf.p36.r4    Partition: 27   Leader: 4   Replicas: 4,1,2,3   Isr: 3,4,2,1
Topic: kf.p36.r4    Partition: 28   Leader: 5   Replicas: 5,2,3,4   Isr: 3,2,4,5
Topic: kf.p36.r4    Partition: 29   Leader: 6   Replicas: 6,3,4,5   Isr: 3,6,4,5
Topic: kf.p36.r4    Partition: 30   Leader: 1   Replicas: 1,5,6,2   Isr: 5,2,6,1
Topic: kf.p36.r4    Partition: 31   Leader: 2   Replicas: 2,6,1,3   Isr: 1,3,6,2
Topic: kf.p36.r4    Partition: 32   Leader: 3   Replicas: 3,1,2,4   Isr: 3,4,2,1
Topic: kf.p36.r4    Partition: 33   Leader: 4   Replicas: 4,2,3,5   Isr: 3,2,4,5
Topic: kf.p36.r4    Partition: 34   Leader: 5   Replicas: 5,3,4,6   Isr: 3,6,4,5
Topic: kf.p36.r4    Partition: 35   Leader: 6   Replicas: 6,4,5,1   Isr: 4,5,6,1   

I ran two producer instances of kafka-producer-perf-test:

kafka-producer-perf-test --topic kf.p36.r4  --num-records  600000000 --record-size 1024 --throughput 120000 --producer-props bootstrap.servers=b3:9092,b4:9092,b5:9092,b6:9092,b7:9092,b8:9092 acks=1 

The total traffic is 240k tps, with 1024-byte messages. When I ran the 240k tps load, everything was fine at first, but after a while some error messages appeared:

[root@g9csf002-0-0-1 ~]# kafka-producer-perf-test --topic kf.p36.r4  --num-records  600000000 --record-size 1024 --throughput 120000 --producer-props bootstrap.servers=b3:9092,b4:9092,b5:9092,b6:9092,b7:9092,b8:9092 acks=1 
599506 records sent, 119901.2 records/sec (117.09 MB/sec), 4.8 ms avg latency, 147.0 max latency.
600264 records sent, 120052.8 records/sec (117.24 MB/sec), 2.0 ms avg latency, 13.0 max latency.
599584 records sent, 119916.8 records/sec (117.11 MB/sec), 1.9 ms avg latency, 13.0 max latency.
600760 records sent, 120152.0 records/sec (117.34 MB/sec), 1.9 ms avg latency, 13.0 max latency.
599764 records sent, 119904.8 records/sec (117.09 MB/sec), 2.0 ms avg latency, 35.0 max latency.
276603 records sent, 21408.9 records/sec (20.91 MB/sec), 103.0 ms avg latency, 10743.0 max latency.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.

I went through the Kafka broker logs and found that the communication between the brokers and ZooKeeper was having problems:

  [2018-08-06 01:28:02,562] WARN Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,562] INFO Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)

The ZooKeeper client is zookeeper-3.4.10.jar. I downloaded the source and added some logging to src/java/main/org/apache/zookeeper/ClientCnxn.java,

and found that the SendThread sometimes blocks when accessing the state variable:

  [2018-08-06 01:27:54,793] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:27:54,793] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:27:54,793] INFO ROVER: to = 4000 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:27:54,793] INFO ROVER: timeToNextPing = 2000 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:27:54,793] INFO ROVER: before clientCnxnSocket.doTransport, to = 2000 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:27:56,795] INFO ROVER: after clientCnxnSocket.doTransport (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: to = 1998 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: timeToNextPing = -1002 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: sendPing has done. (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: before clientCnxnSocket.doTransport, to = 1998 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: after clientCnxnSocket.doTransport (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,561] INFO ROVER: to = -3768 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,562] WARN Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062 (org.apache.zookeeper.ClientCnxn)
  [2018-08-06 01:28:02,562] INFO Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)

You can see that between 2018-08-06 01:27:56 and 2018-08-06 01:28:02 the thread was blocked and did nothing. The instrumented code looks like this:

                // If we are in read-only mode, seek for read/write server
                if (state == States.CONNECTEDREADONLY) {
                    long now = System.currentTimeMillis();
                    int idlePingRwServer = (int) (now - lastPingRwServer);
                    if (idlePingRwServer >= pingRwTimeout) {
                        lastPingRwServer = now;
                        idlePingRwServer = 0;
                        pingRwTimeout =
                            Math.min(2*pingRwTimeout, maxPingRwTimeout);
                        pingRwServer();
                    }
                    to = Math.min(to, pingRwTimeout - idlePingRwServer);
                }

                LOG.info("ROVER: before clientCnxnSocket.doTransport, to = " + to );
                clientCnxnSocket.doTransport(to, pendingQueue, outgoingQueue, ClientCnxn.this);
                LOG.info("ROVER: after clientCnxnSocket.doTransport");
                LOG.info("ROVER: state = " + state);
            } catch (Throwable e) {

The installed Kafka is confluent-kafka-2.11, and the Java runtime is:

[root@g9csf0002-0-0-12 kafka]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode) 

Right now I don't know how to fix this problem. Can anyone shed some light on it?

Tags: java, linux, apache-kafka, jvm, apache-zookeeper

Solution


I've run into this problem before. Sometimes the Kafka JVM goes into a long garbage-collection pause, or something strange happens on the internal network. I noticed that in our case the timeouts were all around 6 or 7 seconds (which looks similar to yours). The problem is that when Kafka cannot check in with ZooKeeper within the allotted window, it goes haywire: it starts reporting under-replicated partitions, and every now and then it takes the whole cluster down. So, if I remember correctly, we increased the timeout to 15 seconds, and after that everything ran fine with zero errors.

These are the corresponding settings on the Kafka brokers:

zookeeper.session.timeout.ms    Default: 6000ms
zookeeper.connection.timeout.ms  

We changed both, IIRC, but you should start by changing the session timeout.
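
For concreteness, the broker-side change would look roughly like this in server.properties (15000 ms is the value that worked for us; treat it as a starting point rather than a magic number):

    zookeeper.session.timeout.ms=15000
    zookeeper.connection.timeout.ms=15000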

