r - spark_write_csv no longer works (using sparklyr)
Problem description
The spark_write_csv function no longer works, possibly because I upgraded my Spark version. Can anyone help?
Here is a code example, followed by the error message:
library(sparklyr)
library(dplyr)
spark_conn <- spark_connect(master = "local")
iris <- copy_to(spark_conn, iris, overwrite = TRUE)
spark_write_csv(iris, path = "iris.csv")
Error: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
Solution
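The truncated trace above does not show the root cause; with FileFormatWriter failures the real error usually appears further down, in the executor portion of the log. A common culprit after a Spark upgrade is an unsupported Java runtime (Spark 3.x requires Java 8 or 11; newer JDKs can abort write jobs like this), or on Windows a missing winutils.exe/Hadoop setup. A minimal retry sketch, assuming local mode and that the version number and output path below are placeholders, not values from the question:

```r
library(sparklyr)
library(dplyr)

# Check which Spark builds sparklyr can install, then install one known
# to work with your Java version (Spark 3.x needs Java 8 or 11).
# spark_available_versions()
spark_install(version = "3.0.0")  # placeholder version

sc <- spark_connect(master = "local", version = "3.0.0")
iris_tbl <- copy_to(sc, iris, overwrite = TRUE)

# Write to an explicit directory path. Note that spark_write_csv creates
# a directory of part files, not a single iris.csv file.
spark_write_csv(iris_tbl, path = "file:///tmp/iris_csv", mode = "overwrite")

spark_disconnect(sc)
```

If the job still aborts, check `java -version` against your Spark version's requirements and read the full error further down in the Spark log rather than only the driver-side summary shown above.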