Exception: SparkException: Task not serializable

Problem description

Why does this code throw this exception, and how can I avoid it?

    // Imports needed by this snippet:
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    import scala.Tuple2;

    SparkConf conf = new SparkConf().setAppName("startingSpark").setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    List<Tuple2<Integer, Integer>> visitsRaw = new ArrayList<>();
    visitsRaw.add(new Tuple2<>(4, 18));
    visitsRaw.add(new Tuple2<>(6, 4));
    visitsRaw.add(new Tuple2<>(10, 9));

    List<Tuple2<Integer, String>> usersRaw = new ArrayList<>();
    usersRaw.add(new Tuple2<>(1, "John"));
    usersRaw.add(new Tuple2<>(2, "Bob"));
    usersRaw.add(new Tuple2<>(3, "Alan"));
    usersRaw.add(new Tuple2<>(4, "Doris"));
    usersRaw.add(new Tuple2<>(5, "Marybelle"));
    usersRaw.add(new Tuple2<>(6, "Raquel"));

    JavaPairRDD<Integer, Integer> visits = sc.parallelizePairs(visitsRaw);
    JavaPairRDD<Integer, String> users = sc.parallelizePairs(usersRaw);

    // Inner join on user id: keys 4 and 6 match; visit 10 has no matching user.
    JavaPairRDD<Integer, Tuple2<Integer, String>> joinedRdd = visits.join(users);

    // This line throws SparkException: Task not serializable.
    joinedRdd.foreach(System.out::println);
    sc.close();

Tags: java, apache-spark

Solution

The method reference 'System.out::println' is not serializable, so Spark cannot ship the foreach closure to the executors. It can be changed to:

joinedRdd.foreach(v -> System.out.println(v));
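
Why does the lambda succeed where the method reference fails? 'System.out::println' is a bound method reference: it captures the current 'System.out' PrintStream at creation time, and PrintStream is not serializable. The lambda 'v -> System.out.println(v)' captures nothing; the static 'System.out' field is only read on the executor when the lambda runs. Spark's foreach argument (VoidFunction) extends Serializable, which is why the closure has to survive serialization at all. The following standalone sketch, plain Java with no Spark involved (the ClosureDemo class and trySerialize helper are hypothetical names used here for illustration), makes the difference visible:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.util.function.Consumer;

    public class ClosureDemo {
        // Hypothetical helper: attempts to round-trip an object through Java serialization.
        static void trySerialize(String label, Object o) {
            try (ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream())) {
                oos.writeObject(o);
                System.out.println(label + ": serializable");
            } catch (IOException e) {
                // Expected for the method reference: NotSerializableException: java.io.PrintStream
                System.out.println(label + ": " + e);
            }
        }

        public static void main(String[] args) {
            // Captures nothing; System.out is looked up only when the lambda runs.
            Consumer<Object> lambda = (Consumer<Object> & Serializable) v -> System.out.println(v);
            // Captures the current System.out PrintStream, which is not serializable.
            Consumer<Object> methodRef = (Consumer<Object> & Serializable) System.out::println;

            trySerialize("lambda", lambda);
            trySerialize("method reference", methodRef);
        }
    }

Running this should report the lambda as serializable and fail on the method reference with NotSerializableException for java.io.PrintStream, the same root cause buried inside the SparkException.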

Or, to print the values on the driver node, you can use a construct like this:

joinedRdd.collect().forEach(System.out::println);
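
Note that collect() pulls every partition back into driver memory, so it is only appropriate for small results; the forEach here is the ordinary java.util.List method running locally on the driver, so no task serialization is involved. When the joined data may be large, a bounded variant such as take, sketched below, avoids pulling everything back:

    // Bring back at most 10 tuples instead of the whole RDD.
    joinedRdd.take(10).forEach(System.out::println);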
