Spark example throws FileNotFoundException in client mode

Problem Description

I have: Ubuntu 14.04, Hadoop 2.7.7, Spark 2.2.0.

I have just installed everything.

When I try to run the Spark example:

bin/spark-submit --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.2.0.jar 10

I get the following error:

INFO yarn.Client:
	 client token: N/A
	 diagnostics: Application application_1552490646290_0007 failed 2 times due to AM Container for appattempt_1552490646290_0007_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://ip-123-45-67-89:8088/cluster/app/application_1552490646290_0007 Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/spark-f5879f52-6777-481a-8ecf-bbb55e376901/__spark_libs__6948713644593068670.zip does not exist
java.io.FileNotFoundException: File file:/tmp/spark-f5879f52-6777-481a-8ecf-bbb55e376901/__spark_libs__6948713644593068670.zip does not exist

        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
        at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:421)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:473)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
        at java.lang.Thread.run(Thread.java:748)

I get the same error in both client mode and cluster mode.

Tags: apache-spark, ubuntu, hadoop, bigdata, hadoop-yarn

Solution


First, this is about the path to the bundled jar that contains your application and all its dependencies. The URL must be globally visible inside your cluster, for instance, an hdfs:// path or a file:// path that exists on every node.
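One way to make the jar globally visible is to upload it to HDFS first. A minimal sketch, assuming HDFS is running and the target directory `/user/ubuntu/jars` is a hypothetical choice you would adjust to your own setup:

```shell
# Upload the examples jar to HDFS so every YARN node can fetch it.
# /user/ubuntu/jars is a hypothetical target directory -- pick your own.
hdfs dfs -mkdir -p /user/ubuntu/jars
hdfs dfs -put -f examples/jars/spark-examples_2.11-2.2.0.jar /user/ubuntu/jars/

# Verify the jar is visible cluster-wide.
hdfs dfs -ls /user/ubuntu/jars/
```

You would then pass `hdfs:///user/ubuntu/jars/spark-examples_2.11-2.2.0.jar` to spark-submit instead of the local path.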

Second, if you are running in YARN mode, you need to point the master to yarn when submitting the application, and put your jar file into HDFS:

# Run on a YARN cluster.
# Connect to a YARN cluster in client or cluster mode depending on the value
# of --deploy-mode. The cluster location is found via the HADOOP_CONF_DIR
# or YARN_CONF_DIR environment variable.

export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \  # can be client for client mode
  hdfs://path/to/spark-examples.jar \
  1000
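The `__spark_libs__*.zip` file in the error is the staging archive of Spark's own jars that the driver uploads for each application. A related, optional tweak is to publish those jars to HDFS once and point `spark.yarn.jars` at them, so YARN containers fetch them from HDFS instead of the driver's local `/tmp` staging copy. A sketch, assuming Spark is installed at `/usr/local/spark` (a hypothetical install path) and HDFS is running:

```shell
# Upload Spark's runtime jars to HDFS once.
# /spark/jars is a hypothetical HDFS directory; /usr/local/spark is an
# assumed install location -- adjust both to your environment.
hdfs dfs -mkdir -p /spark/jars
hdfs dfs -put -f /usr/local/spark/jars/*.jar /spark/jars/

# Tell Spark on YARN to use the HDFS copy (conf/spark-defaults.conf).
echo "spark.yarn.jars hdfs:///spark/jars/*.jar" >> /usr/local/spark/conf/spark-defaults.conf
```

With this in place, submissions no longer depend on the per-application zip under `/tmp` being readable by the NodeManagers.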
