Java 8 / Maven error: reference to filter is ambiguous

Problem description

I am running the Spark quick start application:

/* SimpleApp.java */
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "/data/software/spark-2.4.4-bin-without-hadoop/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();

    long numAs = logData.filter(s -> s.contains("a")).count();
    long numBs = logData.filter(s -> s.contains("b")).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

    spark.stop();
  }
}

As the official documentation says:

# Package a JAR containing your application
$ mvn package

When I run mvn package, it raises the following error:

[WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!
[INFO] Compiling 1 source file to /home/dennis/java/spark_quick_start/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR : 
[INFO] -------------------------------------------------------------
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[11,25] reference to filter is ambiguous
  both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[12,25] reference to filter is ambiguous
  both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[INFO] 2 errors 
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:00 min
[INFO] Finished at: 2020-01-13T15:04:55+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.3:compile (default-compile) on project simple-project: Compilation failure: Compilation failure: 
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[11,25] reference to filter is ambiguous
[ERROR]   both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[12,25] reference to filter is ambiguous
[ERROR]   both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

Here is the pom.xml:

<project>
  <groupId>edu.berkeley</groupId>
  <artifactId>simple-project</artifactId>
  <modelVersion>4.0.0</modelVersion>
  <name>Simple Project</name>
  <packaging>jar</packaging>
  <version>1.0</version>
  <dependencies>
    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.12</artifactId>
      <version>2.4.4</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>


<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.3</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>

</project>

Tags: java, maven

Solution


This means that each of your lambda expressions can be converted to either a scala.Function1&lt;T,java.lang.Object&gt; or an org.apache.spark.api.java.function.FilterFunction&lt;T&gt;, so the compiler cannot choose between the two overloads of Dataset.filter.

I don't know whether this is also ambiguous in Scala, but it is in Java. In that case you need to state the type explicitly:

long numAs = logData.filter((org.apache.spark.api.java.function.FilterFunction<String>) s -> s.contains("a")).count();

Alternatively, write the code in Scala, where the Function1 overload is chosen naturally.
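The same ambiguity can be reproduced without Spark. The sketch below (a hypothetical FilterAmbiguityDemo class, not part of the question's code) defines two overloads that mirror Dataset.filter's pair: one accepting a java.util.function.Function returning Object (like scala.Function1&lt;T,Object&gt;) and one accepting a boolean-returning functional interface (like FilterFunction&lt;T&gt;). A bare lambda matches both, so the compiler rejects it; a cast resolves it, exactly as in the answer above.

```java
import java.util.List;
import java.util.function.Function;

public class FilterAmbiguityDemo {

    // Stand-in for org.apache.spark.api.java.function.FilterFunction<T>:
    // a functional interface whose single method returns boolean.
    public interface Predicate1<T> {
        boolean call(T value);
    }

    // Overload 1: mirrors filter(scala.Function1<T, Object>).
    public static <T> long filter(List<T> data, Function<T, Object> f) {
        return data.stream().filter(x -> (Boolean) f.apply(x)).count();
    }

    // Overload 2: mirrors filter(FilterFunction<T>).
    public static <T> long filter(List<T> data, Predicate1<T> f) {
        return data.stream().filter(f::call).count();
    }

    public static void main(String[] args) {
        List<String> lines = List.of("alpha", "beta", "core");

        // Does NOT compile: the lambda fits both overloads, and neither
        // functional interface is "more specific" than the other, so
        // javac reports "reference to filter is ambiguous".
        // long n = filter(lines, s -> s.contains("a"));

        // Compiles: the cast commits the lambda to one target type.
        long n = filter(lines, (Predicate1<String>) s -> s.contains("a"));
        System.out.println("Lines with a: " + n); // prints "Lines with a: 2"
    }
}
```

The cast works because a lambda has no type of its own; it takes the type of its target functional interface, and the cast fixes that target so overload resolution sees only one candidate.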
