Exception in thread "broadcast-exchange-0" java.lang.OutOfMemoryError: Not enough memory to build and broadcast the table to all worker nodes

Problem description

I am running a Spark application on the following setup:

1 master node, 2 worker nodes.

I get the following exception when running the application:

Exception in thread "broadcast-exchange-0" java.lang.OutOfMemoryError: Not enough memory to build and broadcast the table to all worker nodes. As a workaround, you can either disable broadcast by setting spark.sql.autoBroadcastJoinThreshold to -1 or increase the spark driver memory by setting spark.driver.memory to a higher value
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:115)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:73)
        at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:97)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:72)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:72)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

The error message itself suggests two solutions:

  1. As a workaround, you can disable broadcasting by setting spark.sql.autoBroadcastJoinThreshold to -1 (a sketch of this follows the list).

    OR

  2. Increase the Spark driver memory by setting spark.driver.memory to a higher value.
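
For reference, the first workaround can be applied at runtime, before the join executes (a minimal sketch; sparkSession is assumed to be the application's existing SparkSession):

        // Disable automatic broadcast joins: any table below the threshold
        // would otherwise be built on the driver and shipped to every executor.
        sparkSession.conf().set("spark.sql.autoBroadcastJoinThreshold", "-1");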

I am trying to run with more driver memory, but I would like to understand the root cause of this problem. Can anyone explain it?

I am using Java in my code.

EDIT 1

I am using broadcast variables in my code.

EDIT 2

Adding the code that contains the broadcast variables.

        // 1. Read the currency table over JDBC, cache it, and collect the
        // currency codes to the driver as a List<String>.
        Dataset<Row> currencySet1 = sparkSession.read().format("jdbc")
                .option("url", connection).option("dbtable", CI_CURRENCY_CD).load();
        currencySetCache = currencySet1.select(CURRENCY_CD, DECIMAL_POSITIONS)
                .persist(StorageLevel.MEMORY_ONLY());
        Dataset<Row> currencyCodes = currencySetCache.select(CURRENCY_CD);
        currencySet = currencyCodes.as(Encoders.STRING()).collectAsList();

        // 2. Same pattern for the division table.
        Dataset<Row> divisionSet = sparkSession.read().format("jdbc")
                .option("url", connection).option("dbtable", CI_CIS_DIVISION).load();
        divisionSetCache = divisionSet.select(CIS_DIVISION)
                .persist(StorageLevel.MEMORY_ONLY());
        divisionList = divisionSetCache.as(Encoders.STRING()).collectAsList();

        // 3. Same pattern for the user table.
        Dataset<Row> userIdSet = sparkSession.read().format("jdbc")
                .option("url", connection).option("dbtable", SC_USER).load();
        userIdSetCache = userIdSet.select(USER_ID)
                .persist(StorageLevel.MEMORY_ONLY());
        userIdList = userIdSetCache.as(Encoders.STRING()).collectAsList();

        // Broadcast each driver-side list to the executors.
        ClassTag<List<String>> evidenceForDivision = scala.reflect.ClassTag$.MODULE$.apply(List.class);
        Broadcast<List<String>> broadcastVarForDiv = context.broadcast(divisionList, evidenceForDivision);

        ClassTag<List<String>> evidenceForCurrency = scala.reflect.ClassTag$.MODULE$.apply(List.class);
        Broadcast<List<String>> broadcastVarForCurrency = context.broadcast(currencySet, evidenceForCurrency);

        ClassTag<List<String>> evidenceForUserID = scala.reflect.ClassTag$.MODULE$.apply(List.class);
        Broadcast<List<String>> broadcastVarForUserID = context.broadcast(userIdList, evidenceForUserID);

        // Validation -- Start
        // Wrap the joined plan in a typed Dataset and validate each row,
        // reading the broadcast lists inside the map function.
        Encoder<RuleParamsBean> encoder = Encoders.bean(RuleParamsBean.class);
        Dataset<RuleParamsBean> ds = new Dataset<RuleParamsBean>(sparkSession, finalJoined.logicalPlan(), encoder);

        Dataset<RuleParamsBean> validateDataset = ds.map(ruleParamsBean -> validateTransaction(ruleParamsBean,
                broadcastVarForDiv.value(), broadcastVarForCurrency.value(), broadcastVarForUserID.value()), encoder);
        validateDataset.persist(StorageLevel.MEMORY_ONLY());
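
Note that the OutOfMemoryError is thrown by Spark SQL's broadcast-join machinery (the "broadcast-exchange-0" thread in the stack trace belongs to BroadcastExchangeExec), not directly by the explicit broadcast variables above. One way to check whether the plan behind finalJoined contains such a join is to print its physical plan, as in this sketch:

        // "BroadcastExchange" / "BroadcastHashJoin" nodes in the physical plan
        // mean Spark builds that side of the join on the driver before
        // shipping it to the executors.
        finalJoined.explain();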

Tags: java, apache-spark, apache-spark-sql, apache-spark-2.0

Solution


Likely root cause: the default value of spark.driver.memory is only 1 GB (it depends on the deployment), which is a very small number. A broadcast join builds the broadcasted table on the driver first (that is what the "broadcast-exchange-0" thread does) before shipping it to the executors, and the code above additionally pulls every lookup table to the driver via collectAsList(). If a large amount of data lands on the driver this way, an OutOfMemoryError follows easily, and the suggestions in the exception are correct.

Solution: increase spark.driver.memory and spark.executor.memory to at least 16 GB, as sketched below.
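
A minimal sketch of how this could look (the 16g values follow the recommendation above; the application name is a placeholder). Note that spark.driver.memory is read before the driver JVM starts, so it must be passed on the spark-submit command line or set in spark-defaults.conf; spark.executor.memory can also be set when building the session:

        import org.apache.spark.sql.SparkSession;

        SparkSession spark = SparkSession.builder()
                .appName("my-app")                      // placeholder name
                .config("spark.executor.memory", "16g") // memory per executor
                .getOrCreate();

        // spark.driver.memory cannot be raised from inside the running driver:
        //   spark-submit --driver-memory 16g --executor-memory 16g ...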

