Error when trying to save pyspark dataframe

Problem description

I have a Python script using pyspark that runs fine when executed through Jupyter. When run with spark-submit, for some reason it crashes when it tries to save the results with this line:

# Write the dataframe to MySQL over JDBC, replacing the target table.
df.write.format('jdbc').options(
    # useServerPrepStmts=false + rewriteBatchedStatements=true let the
    # MySQL connector batch the generated INSERTs instead of sending
    # them one at a time
    url='jdbc:mysql://{0}/{1}?useServerPrepStmts=false&rewriteBatchedStatements=true'.format(
        output_server, output_db),
    driver='com.mysql.jdbc.Driver',  # JDBC driver class; must be on the classpath
    dbtable=output_table,
    user='user',
    password='xxxx').mode('overwrite').save()

The error is:

Traceback (most recent call last):
  File "/opt/spark-2.1.0-bin-hadoop2.7/sbin/test.py", line 381, in <module>
    password='xxxx').mode('overwrite').save()
  File "/opt/spark-2.1.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 548, in save
  File "/opt/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/opt/spark-2.1.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
    if records_acum:
  File "/opt/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o55.save.
: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:38)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:78)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:78)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:78)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:426)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)

If I try running it with

/opt/Spark/spark-2.2.0_hadoop-2.7/bin/spark-submit --packages mysql:mysql-connector-java:5.1.40 test.py

then the crash is avoided, but the script never finishes; it just hangs on the same df.save line. In case it wasn't clear: I want the script to run to completion, saving the data successfully.

Tags: apache-spark, pyspark, apache-spark-sql, jupyter

Solution


I found the following: Add jars to a Spark Job - spark-submit. It should help with your loading problem. It looks like the executors cannot get hold of the MySQL driver.
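
For concreteness, a minimal sketch of the usual submit-time fix (the jar path here is hypothetical; adjust it to wherever the MySQL Connector/J jar lives on your machine):

spark-submit \
    --jars /path/to/mysql-connector-java-5.1.40.jar \
    --driver-class-path /path/to/mysql-connector-java-5.1.40.jar \
    test.py

--jars ships the jar to the executors, while --driver-class-path also puts it on the driver's classpath, which is where your stack trace shows the ClassNotFoundException being thrown. Note that in client mode spark.driver.extraClassPath cannot be set from inside the application (the driver JVM has already started by then), so it has to be supplied at submit time like this or via conf/spark-defaults.conf.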

