Scala Spark "Task not serializable" error in code

Problem description

I don't know what is wrong with the code below, but it throws a

org.apache.spark.SparkException: Task not serializable

error. I googled the error but could not resolve it.

Here is the code (you can reproduce it by copy-pasting it into a new Scala notebook on community.cloud.databricks.com and running it):

    import com.google.gson._
    import org.apache.spark.rdd.RDD

    object TweetUtils {
      case class Tweet(
        id: String,
        user: String,
        userName: String,
        text: String,
        place: String,
        country: String,
        lang: String
      )

      def parseFromJson(lines: Iterator[String]): Iterator[Tweet] = {
        val gson = new Gson
        lines.map(line => gson.fromJson(line, classOf[Tweet]))
      }

      def loadData(): RDD[Tweet] = {
        val pathToFile = "/FileStore/tables/reduced_tweets-57570.json"
        sc.textFile(pathToFile).mapPartitions(parseFromJson(_))
      }

      def tweetsByUser(): RDD[(String, Iterable[Tweet])] = {
        val tweets = loadData()
        tweets.groupBy(_.user)
      }
    }

    val res = TweetUtils.tweetsByUser()
    res.collect().take(5).foreach(println)

Here is the detailed error message:

at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:403)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:393)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2548)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:827)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:826)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:392)
    at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:826)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$TweetUtils$.loadData(command-3696793732897971:22)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$TweetUtils$.tweetsByUser(command-3696793732897971:25)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3696793732897971:30)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3696793732897971:84)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3696793732897971:86)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-3696793732897971:88)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw$$iw$$iw.<init>(command-3696793732897971:90)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw$$iw.<init>(command-3696793732897971:92)
    at line05db6d250e4b42e2b2c1d6b97ba83df533.$read$$iw$$iw.<init>(command-3696793732897971:94)

Thanks in advance,

Sri Lanka

Tags: scala, apache-spark

Solution


In the end, what worked was applying both Artem Aliev's and Partha's suggestions together: moving `case class Tweet` outside the `TweetUtils` object, and also declaring the object as `object TweetUtils extends Serializable`.

Thanks to both of you.
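Putting the two suggestions together, the corrected notebook code could look like the sketch below. Moving `Tweet` to the top level lets Gson instantiate it reflectively without a reference to an enclosing object, and `extends Serializable` lets Spark serialize `TweetUtils`, which gets captured by the closure passed to `mapPartitions`. This is a sketch under the assumption that `sc` is the `SparkContext` the Databricks notebook predefines.

```scala
import com.google.gson._
import org.apache.spark.rdd.RDD

// Top-level case class (Partha's suggestion): Gson can construct it
// without needing an outer-object reference.
case class Tweet(
  id: String,
  user: String,
  userName: String,
  text: String,
  place: String,
  country: String,
  lang: String
)

// Serializable object (Artem Aliev's suggestion): the closure passed to
// mapPartitions references parseFromJson, so the whole object is pulled
// into the serialized task and must itself be serializable.
object TweetUtils extends Serializable {

  def parseFromJson(lines: Iterator[String]): Iterator[Tweet] = {
    val gson = new Gson
    lines.map(line => gson.fromJson(line, classOf[Tweet]))
  }

  def loadData(): RDD[Tweet] = {
    val pathToFile = "/FileStore/tables/reduced_tweets-57570.json"
    // `sc` is the SparkContext provided by the notebook environment.
    sc.textFile(pathToFile).mapPartitions(parseFromJson(_))
  }

  def tweetsByUser(): RDD[(String, Iterable[Tweet])] =
    loadData().groupBy(_.user)
}

TweetUtils.tweetsByUser().take(5).foreach(println)
```

As a side note, `take(5)` directly on the RDD fetches only five elements to the driver, whereas the original `collect().take(5)` first materializes the entire result on the driver.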
