Spark Streaming, Kafka broker errors: "Failed to get records for spark-executor-… after polling for 512"

Problem description

We have a Spark Streaming application that reads data from Kafka. Data volume: 15 million records.

We are seeing the following error:

java.lang.AssertionError: assertion failed: Failed to get records for spark-executor- after ...polling for 512 at scala.Predef$.assert(Predef.scala:170)

The rest of the stack trace points at CachedKafkaConsumer:

at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:74)
at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:227)
at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:193)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:214)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:935)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:926)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:926) 
at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:670) 
at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:330)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:281)
at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) 
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

spark.streaming.kafka.consumer.poll.ms is at its default of 512 ms, and the other Kafka consumer timeouts are also at their defaults:

"request.timeout.ms" 
"heartbeat.interval.ms" 
"session.timeout.ms" 
"max.poll.interval.ms" 
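For context, the first of these is a Spark configuration property, while the other four are Kafka consumer properties passed through the kafkaParams map. A minimal sketch of where each would be set (the 10000 ms value, class name, and jar name are illustrative assumptions, not settings from this question):

```shell
# Illustrative only: raising the executor-side poll timeout above the
# default 512 ms is a common mitigation when fetches time out.
spark-submit \
  --conf spark.streaming.kafka.consumer.poll.ms=10000 \
  --class com.example.StreamingJob \
  streaming-job.jar
# request.timeout.ms, heartbeat.interval.ms, session.timeout.ms, and
# max.poll.interval.ms are not Spark confs; they go into the kafkaParams
# map handed to KafkaUtils.createDirectStream.
```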

Also, Kafka was recently upgraded from 0.8 to 0.10. We did not see this behavior on 0.8, and we have found no resource problems.

Any pointers?

Tags: kubernetes, apache-kafka, spark-streaming, spark-structured-streaming, apache-spark-2.0

Solution

