Spark broadcast failure

Problem description

I am new to Spark and am trying to filter one RDD based on another RDD, as described here.

My filter data lives in a CSV file on S3. The CSV file is 1.7 GB and has about 100M rows. Each row has a unique ID that is exactly 10 characters long. My plan is to extract these IDs from the CSV file into an in-memory set, broadcast that set, and use it to filter another RDD.

My code looks like this:

val sparkContext: SparkContext = new SparkContext()

val filterSet = sparkContext
  .textFile("s3://.../filter.csv") // this is the 1.7GB csv file
  .map(_.split(",")(0)) // each string here has exactly 10 chars (A-Z|0-9)
  .collect()
  .toSet // ~100M 10 char long strings in set.

val filterSetBC = sparkContext.broadcast(filterSet) // THIS LINE IS FAILING

val otherRDD = ...

otherRDD
  .filter(item => filterSetBC.value.contains(item.id))
  .saveAsTextFile("s3://...")

I am running this code on AWS EMR on 10 m4.2xlarge (16 vCores, 32 GB memory) EC2 instances and getting the following error:

18/09/06 17:15:33 INFO UnifiedMemoryManager: Will not store broadcast_2 as the required space (16572507620 bytes) exceeds our memory limit (13555256524 bytes)
18/09/06 17:15:33 WARN MemoryStore: Not enough space to cache broadcast_2 in memory! (computed 10.3 GB so far)
18/09/06 17:15:33 INFO MemoryStore: Memory use = 258.6 KB (blocks) + 1024.0 KB (scratch space shared across 1 tasks(s)) = 1282.6 KB. Storage limit = 12.6 GB.
18/09/06 17:15:33 WARN BlockManager: Persisting block broadcast_2 to disk instead.
18/09/06 17:18:54 WARN BlockManager: Putting block broadcast_2 failed due to exception java.lang.ArrayIndexOutOfBoundsException: 1073741865.
18/09/06 17:18:54 WARN BlockManager: Block broadcast_2 could not be removed as it was not found on disk or in memory
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1073741865
    at com.esotericsoftware.kryo.util.IdentityObjectIntMap.clear(IdentityObjectIntMap.java:382)
    at com.esotericsoftware.kryo.util.MapReferenceResolver.reset(MapReferenceResolver.java:65)
    at com.esotericsoftware.kryo.Kryo.reset(Kryo.java:865)
    at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:630)
    at org.apache.spark.serializer.KryoSerializationStream.writeObject(KryoSerializer.scala:241)
    at org.apache.spark.serializer.SerializationStream.writeAll(Serializer.scala:140)
    at org.apache.spark.serializer.SerializerManager.dataSerializeStream(SerializerManager.scala:174)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1$$anonfun$apply$7.apply(BlockManager.scala:1101)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1$$anonfun$apply$7.apply(BlockManager.scala:1099)
    at org.apache.spark.storage.DiskStore.put(DiskStore.scala:68)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1099)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1083)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1018)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
    at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:841)
    at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1404)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:123)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1482)

As far as I can tell from the logs, the set I am trying to broadcast is about 15 GB. Normally 100M × 10 characters should be about 1 GB, and with some Java overhead I expected it to come to around 5-6 GB.

Question 1: Why is my set data so large, and how can I minimize it?
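For a rough sense of where the overhead comes from: on a 64-bit JVM each 10-character java.lang.String typically costs on the order of 50-70 bytes (object header, cached hash field, reference, plus a padded backing array), and the hash set adds tens of bytes of structure per entry on top of that, so 10-15+ GB for ~100M entries is plausible. One way to shrink it, sketched below purely as an illustration (this is not from the original post), is to exploit the fact that every ID is exactly 10 characters from A-Z/0-9: packed base-36, an ID fits into a Long (36^10 ≈ 3.7 × 10^15 < 2^63), so a sorted Array[Long] plus binary search costs about 8 bytes per ID, roughly 800 MB for 100M IDs.

// Illustrative sketch only: pack each 10-char [A-Z0-9] ID into a Long (base 36),
// broadcast a sorted Array[Long], and use binary search for membership tests.
def packId(id: String): Long = {
  var acc = 0L
  var i = 0
  while (i < id.length) {
    val c = id.charAt(i)
    acc = acc * 36 + (if (c <= '9') c - '0' else c - 'A' + 10)
    i += 1
  }
  acc
}

val filterIds: Array[Long] = sparkContext
  .textFile("s3://.../filter.csv")
  .map(line => packId(line.split(",")(0)))
  .collect()
  .sorted

val filterIdsBC = sparkContext.broadcast(filterIds)

otherRDD
  .filter(item => java.util.Arrays.binarySearch(filterIdsBC.value, packId(item.id)) >= 0)
  .saveAsTextFile("s3://...")

An alternative worth weighing is to avoid collecting and broadcasting at all and instead join the two RDDs on the ID key, which keeps the filter data distributed.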

Even so, I configured my executors to use 22 GB (executor memory) + 2 GB (spark.executor.memoryOverhead) of memory.

Question 2: Why does Spark say it exceeds the memory limit (12.6 GB)? Where does this 12.6 GB limit come from?
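As a rough guess at where the 12.6 GB comes from, assuming the Spark 2.x UnifiedMemoryManager defaults (300 MB of reserved system memory and spark.memory.fraction = 0.6): the unified storage/execution pool is (max heap − 300 MB) × 0.6 of whatever heap the JVM actually reports, which is itself somewhat below -Xmx. Inverting the limit printed in the log is just arithmetic:

// Back-of-the-envelope only, assuming Spark 2.x UnifiedMemoryManager defaults:
//   unified pool = (reported max heap - 300 MB reserved) * spark.memory.fraction (0.6)
val reportedLimit  = 13555256524L              // bytes, "memory limit" from the log above
val memoryFraction = 0.6                       // spark.memory.fraction default
val reservedBytes  = 300L * 1024 * 1024        // reserved system memory
val impliedHeap    = (reportedLimit / memoryFraction + reservedBytes).toLong
println(f"implied JVM heap: ${impliedHeap / math.pow(1024, 3)}%.1f GiB")   // ≈ 21.3 GiB

So the 12.6 GB is not the full 22-24 GB configured: only the unified-memory fraction of the (smaller) reported heap is available for caching blocks, and a broadcast estimated at ~15.4 GB (16572507620 bytes in the log) cannot fit there.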

I think I messed up the spark-submit parameters. They look like this:

--deploy-mode cluster 
--class com.example.MySparkJob
--master yarn
--driver-memory 24G
--executor-cores 15
--executor-memory 22G
--num-executors 9
--deploy-mode client
--conf spark.default.parallelism=1200
--conf spark.speculation=true
--conf spark.rdd.compress=true
--conf spark.files.fetchTimeout=180s
--conf spark.network.timeout=300s
--conf spark.yarn.max.executor.failures=5000
--conf spark.dynamicAllocation.enabled=true   // also tried without this parameter, no changes
--conf spark.driver.maxResultSize=0
--conf spark.executor.memoryOverhead=2G
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer
--conf spark.kryo.registrator=com.example.MyKryoRegistrator
--driver-java-options -XX:+UseCompressedOops

Tags: apache-spark

Solution


First, don't allocate that much driver memory; 4 GB should be enough. Second, 15 executor cores is huge; 3-4 is enough (this will give you more executors instead of just a few). Third, if you have more memory, increase the number of executors from 9 to 45; if not, use 18 executors with 16 GB of executor memory.
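Made concrete, and only as a sketch of what this answer seems to suggest (not a verified configuration), the changed flags would look something like the following, with everything else left as before; the 18-executor / 16 GB variant is the "if you don't have more memory" branch:

--driver-memory 4G
--executor-cores 4
--executor-memory 16G
--num-executors 18

Note that as long as the filter set is still collect()-ed and broadcast from the driver, the driver heap has to be big enough to hold it, so a 4 GB driver only works once the set itself has been shrunk.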

