Task not serializable - Java 1.8 and Spark 2.1.1

Problem description

I have a problem with Java 8 and Spark 2.1.1.

I have a (valid) regular expression saved in a variable named `pattern`. When I try to use this variable to filter the content loaded from a text file, a SparkException: Task not serializable is thrown. Can anyone help me? Here is the code:

    JavaRDD<String> lines = sc.textFile(path);
    JavaRDD<String> filtered = lines.filter(new Function<String, Boolean>() {
        @Override
        public Boolean call(String v1) throws Exception {
            return v1.contains(pattern);
        }
    });

Here is the stack trace:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2101)
at org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:387)
at org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:386)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.filter(RDD.scala:386)
at org.apache.spark.api.java.JavaRDD.filter(JavaRDD.scala:78)
at FileReader.filteredRDD(FileReader.java:47)
at FileReader.main(FileReader.java:68)

Caused by: java.io.NotSerializableException: FileReader
Serialization stack:
- object not serializable (class: FileReader, value: FileReader@6107165)
- field (class: FileReader$1, name: this$0, type: class FileReader)
- object (class FileReader$1, FileReader$1@7c447c76)
- field (class: org.apache.spark.api.java.JavaRDD$$anonfun$filter$1, name: f$1, type: interface org.apache.spark.api.java.function.Function)
- object (class org.apache.spark.api.java.JavaRDD$$anonfun$filter$1, <function1>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)

Tags: java, apache-spark

Solution


Going by the report Spark generates for the non-serializability:

 - object not serializable (class: FileReader, value: FileReader@6107165)
 - field (class: FileReader$1, name: this$0, type: class FileReader)
 - object (class FileReader$1, FileReader$1@7c447c76)

it suggests that FileReader, the class in which the closure is defined, is not serializable. This happens when Spark is not able to serialize only the method: the anonymous inner class FileReader$1 holds a hidden this$0 reference to its enclosing FileReader instance, so when Spark serializes the closure it is forced to try to serialize the whole class along with it.
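For illustration, the anonymous class compiles to roughly the following (a sketch only, not the actual synthetic class; the names FileReader$1 and this$0 come from the serialization stack above):

    // Illustrative sketch -- roughly what javac generates for the
    // anonymous Function; the real class is synthetic.
    class FileReader$1 implements org.apache.spark.api.java.function.Function<String, Boolean> {
        final FileReader this$0;  // hidden reference to the enclosing instance

        FileReader$1(FileReader outer) { this$0 = outer; }

        @Override
        public Boolean call(String v1) throws Exception {
            // `pattern` in your code really means this$0.pattern, so
            // serializing this object pulls in the whole FileReader
            return v1.contains(this$0.pattern);
        }
    }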

In your code, I believe the variable pattern is a class variable. This is what causes the problem: Spark has no way to serialize pattern without serializing the whole class.

Try passing pattern to the closure as a local variable, and this will work.
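A minimal sketch of that fix (localPattern is a name introduced here for illustration). One caveat: an anonymous inner class declared in an instance method still carries the hidden this$0 reference even if it only reads the local, at least with the Java 8 compiler, so a lambda, which captures only what it actually uses, is the safer form:

    // Copy the instance field into an (effectively final) local variable.
    // The lambda below then captures only this String, which is
    // serializable, and never touches the enclosing FileReader instance.
    final String localPattern = this.pattern;

    JavaRDD<String> lines = sc.textFile(path);
    JavaRDD<String> filtered = lines.filter(line -> line.contains(localPattern));

Alternatively, moving the closure into a static method, or making FileReader implement java.io.Serializable, also removes the failure; the latter works only if every field of FileReader is itself serializable, and it ships the entire object to the executors.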
