Microsoft Spark - JVM method execution failed: Nonstatic method 'csv' failed for class

Problem description

I am trying to save the result as a csv file on Windows Server 2019, using the Microsoft.Spark library. The write only creates an empty folder with no csv file inside. The query itself works.

SparkSession spark = SparkSession
    .Builder()
    .Master("local[*]")
    .AppName("sparkToMongo")
    .Config("spark.mongodb.input.uri", authURI)
    .GetOrCreate();

DataFrame df = spark.Read().Format("com.mongodb.spark.sql.DefaultSource").Load();

df.CreateOrReplaceTempView("table");

var beers = spark.Sql(sql).ToDF();

beers.Coalesce(1).Write().Option("mode", "append").Option("header", "true").Csv("result");

spark.Stop();

I get the following error:

[2021-06-10T14:17:04.3113758Z] [cdp-vm] [Exception] [JvmBridge] JVM method execution failed: Nonstatic method 'csv' failed for class '15' when called with 1 arguments ([Index=1, Type=String, Value=result], )
   at Microsoft.Spark.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object[] args)
Unhandled exception. System.Exception: JVM method execution failed: Nonstatic method 'csv' failed for class '15' when called with 1 arguments ([Index=1, Type=String, Value=result], )
 ---> Microsoft.Spark.JvmException: org.apache.spark.SparkException: Job aborted.
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)

Tags: c# .net apache-spark windows-server-2019 spark-csv

Solution
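The write fails inside Spark's `FileFormatWriter` even though the query itself runs, leaving only an empty output folder. On Windows this pattern is very commonly caused by Spark not finding the Hadoop `winutils.exe` binary, which it needs for local filesystem writes. A minimal setup sketch, assuming `winutils.exe` for the Hadoop version matching your Spark build has been placed in `C:\hadoop\bin` (the path is illustrative):

```shell
:: cmd.exe syntax; C:\hadoop is an assumed, illustrative location.
:: winutils.exe must match the Hadoop version your Spark distribution was built against.
set HADOOP_HOME=C:\hadoop
set PATH=%HADOOP_HOME%\bin;%PATH%
```

Restart the application (or the terminal that launches it) after setting the variables so the Spark process picks them up. Separately, note that the save mode is set through `DataFrameWriter.Mode()`, not through an option: `.Option("mode", "append")` has no effect when writing CSV, so the write line would more correctly read `beers.Coalesce(1).Write().Mode("append").Option("header", "true").Csv("result");`.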
