Installing PySpark 3.0+ with Hadoop 3.2, df.write error

Problem description

I tried spark-3.0.2 with bin-hadoop2.7 on Windows 10, and it works fine.

However...

When I try the pyspark-3.1.1 installation with bin-hadoop3.2, df.write fails.
I want a PySpark version 3.0+ with Hadoop 3.2+, because I need to connect Spark to Azure ADLS, and that is a hard requirement.

I followed the same steps as before (installed the matching PySpark version, added winutils, set PYSPARK_HOME and HADOOP_HOME, and added them to PATH). Although I can successfully start a Spark session and create a DataFrame with df = spark.range(0, 5), I get the following error with df.write.csv('mycsv.csv'), df.write.parquet('myp.parquet'), and so on.
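The setup steps above can be sanity-checked mechanically. The sketch below (stdlib only, not specific to any Spark version) encodes what the question describes: HADOOP_HOME must be set, its bin directory must be on PATH, and on Hadoop 3.x the bin directory needs hadoop.dll in addition to winutils.exe, since NativeIO calls native methods implemented in that DLL. The helper name is my own, not from any library.

```python
# Diagnostic sketch: verify the Windows/Hadoop environment described above.
import os

def check_hadoop_setup(env):
    """Return a list of problems found in a given environment mapping."""
    problems = []
    hadoop_home = env.get("HADOOP_HOME")
    if not hadoop_home:
        problems.append("HADOOP_HOME is not set")
        return problems
    bin_dir = os.path.join(hadoop_home, "bin")
    # winutils.exe alone is not enough on Hadoop 3.x: the JVM also loads
    # hadoop.dll for NativeIO, so both files must be present in bin.
    for native in ("winutils.exe", "hadoop.dll"):
        if not os.path.isfile(os.path.join(bin_dir, native)):
            problems.append(f"{native} missing from {bin_dir}")
    if bin_dir not in env.get("PATH", "").split(os.pathsep):
        problems.append(f"{bin_dir} is not on PATH")
    return problems
```

Running `check_hadoop_setup(os.environ)` before starting the Spark session reports any of these gaps up front instead of surfacing them as a JVM error mid-write.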

The error is:

Traceback (most recent call last):
  File "<input>", line 23, in <module>
  File "C:\Users\user\Work\RAP2.0\.rap2_env\lib\site-packages\pyspark\sql\readwriter.py", line 1158, in saveAsTable
    self._jwrite.saveAsTable(name)
  File "C:\Users\user\Work\RAP2.0\.rap2_env\lib\site-packages\py4j\java_gateway.py", line 1305, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "C:\Users\user\Work\RAP2.0\.rap2_env\lib\site-packages\pyspark\sql\utils.py", line 111, in deco
    return f(*a, **kw)
  File "C:\Users\user\Work\RAP2.0\.rap2_env\lib\site-packages\py4j\protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o49.saveAsTable.
: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:560)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:534)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:587)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:559)
    at org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:705)
    at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.liftedTree1$1(InMemoryCatalog.scala:121)
    at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.createDatabase(InMemoryCatalog.scala:118)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:137)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:124)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.$anonfun$catalog$1(BaseSessionStateBuilder.scala:140)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:98)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog(SessionCatalog.scala:98)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.databaseExists(SessionCatalog.scala:266)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireDbExists(SessionCatalog.scala:191)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableRawMetadata(SessionCatalog.scala:487)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:474)
    at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.loadTable(V2SessionCatalog.scala:65)
    at org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension.loadTable(DelegatingCatalogExtension.java:68)
    at org.apache.spark.sql.delta.catalog.DeltaCatalog.loadTable(DeltaCatalog.scala:147)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:637)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:623)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Unknown Source)

I suspect it is related to the winutils files I got for 3.2.0 from https://github.com/cdarlint/winutils. Maybe they are invalid?

Any ideas?

Tags: python, apache-spark, pyspark

Solution
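An UnsatisfiedLinkError on NativeIO$Windows.createDirectoryWithMode0 typically means the JVM either found no hadoop.dll or found one from a different Hadoop line than 3.2: that method is implemented natively in hadoop.dll, not in winutils.exe. The usual repair is to take both winutils.exe and hadoop.dll from the hadoop-3.2.x folder of the cdarlint/winutils repository, put them in %HADOOP_HOME%\bin, and make sure that directory is on PATH before any Spark session starts. A minimal sketch, assuming that diagnosis (the path and helper name are illustrative):

```python
# Sketch: point the process at a matching Hadoop 3.2.x native bundle
# *before* building the SparkSession, so the JVM can load hadoop.dll.
import os

def configure_hadoop_native(hadoop_home):
    """Set HADOOP_HOME and prepend its bin directory to PATH.

    bin must contain both winutils.exe and hadoop.dll, taken from the
    same Hadoop line (here 3.2.x) as the PySpark build in use.
    """
    bin_dir = os.path.join(hadoop_home, "bin")
    os.environ["HADOOP_HOME"] = hadoop_home
    # Prepend so the Windows DLL search (and java.library.path) sees it first:
    os.environ["PATH"] = bin_dir + os.pathsep + os.environ.get("PATH", "")
    return bin_dir

# Call this before creating the session, e.g.:
# configure_hadoop_native(r"C:\hadoop-3.2.0")   # illustrative path
# spark = SparkSession.builder.master("local[*]").getOrCreate()
```

Setting these inside the Python process only works if it happens before the JVM is launched; an alternative is to set the same variables system-wide, as the question already does for HADOOP_HOME.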

