Unable to access local file in pyspark

Problem description

I am trying to read a local file while running on the YARN framework in client mode, but I am unable to access the local file in client mode.

import os
import pyspark.sql.functions as F
from os import listdir, path

from pyspark import SparkConf, SparkContext

import argparse
from pyspark import SparkFiles
from pyspark.sql import SparkSession

def main():
    spark = SparkSession \
        .builder \
        .appName("Spark File load example") \
        .config("spark.jars", "/u/user/someuser/sqljdbc4.jar") \
        .config("spark.dynamicAllocation.enabled", "true") \
        .config("spark.shuffle.service.enabled", "true") \
        .config("hive.exec.dynamic.partition", "true") \
        .config("hive.exec.dynamic.partition.mode", "nonstrict") \
        .config("spark.sql.shuffle.partitions", "50") \
        .config("hive.metastore.uris", "thrift://******.hpc.****.com:9083") \
        .enableHiveSupport() \
        .getOrCreate()

    spark.sparkContext.addFile("/u/user/vikrant/testdata/EMPFILE1.csv")


    inputfilename=getinputfile(spark)
    print("input file path is:",inputfilename)
    data = processfiledata(spark,inputfilename)
    data.show()
    spark.stop()

def getinputfile(spark):

    spark_files_dir = SparkFiles.getRootDirectory()
    print("spark_files_dir:",spark_files_dir)
    inputfile = [filename
                   for filename in listdir(spark_files_dir)
                   if filename.endswith('EMPFILE1.csv')]
    if len(inputfile) != 0:
        path_to_input_file = path.join(spark_files_dir, inputfile[0])
    else:
        path_to_input_file = None
        print("file path not found in", spark_files_dir)

    print("inputfile name:", inputfile)
    return path_to_input_file


def processfiledata(spark, inputfilename):
    # Read the distributed CSV file into a DataFrame (file has no header row)
    dataframe = spark.read.format("csv").option("header", "false").load(inputfilename)
    return dataframe

if __name__ == "__main__":
    main()

Below is my shell script-->
    spark-submit --master yarn --deploy-mode client PysparkMainModulenew.py --files /u/user/vikrant/testdata/EMPFILE1.csv

Below is the error message-->

('spark_files_dir:', u'/h/tmp/spark-76bdbd48-cbb4-4e8f-971a-383b899f79b0/userFiles-ee6dcdec-b320-433b-8491-311927c75fe2')
('inputfile name:', [u'EMPFILE1.csv'])
('input file path is:', u'/h/tmp/spark-76bdbd48-cbb4-4e8f-971a-383b899f79b0/userFiles-ee6dcdec-b320-433b-8491-311927c75fe2/EMPFILE1.csv')
Traceback (most recent call last):
  File "/u/user/vikrant/testdata/PysparkMainModulenew.py", line 57, in <module>
    main()
  File "/u/user/vikrant/testdata/PysparkMainModulenew.py", line 31, in main
    data = processfiledata(spark,inputfilename)
  File "/u/user/vikrant/testdata/PysparkMainModulenew.py", line 53, in processfiledata
    dataframe = spark.read.format("csv").option("header","false").load(inputfilename)
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 166, in load
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py", line 69, in deco
pyspark.sql.utils.AnalysisException: u'Path does not exist: hdfs://hdd2cluster/h/tmp/spark-76bdbd48-cbb4-4e8f-971a-383b899f79b0/userFiles-ee6dcdec-b320-433b-8491-311927c75fe2/EMPFILE1.csv;'

Tags: apache-spark, pyspark

Solution


You have something like this. It does not work because PysparkMainModulenew.py needs to come after the --files option: spark-submit treats everything that follows the application script as arguments to the script itself, so a --files flag placed there is never interpreted by spark-submit. So this

spark-submit --master yarn --deploy-mode client PysparkMainModulenew.py --files /u/user/vikrant/testdata/EMPFILE1.csv

should be:

spark-submit --master yarn --deploy-mode client --files /u/user/vikrant/testdata/EMPFILE1.csv PysparkMainModulenew.py

Also, there is no need to use addFile in that case. You can copy both PysparkMainModulenew.py and EMPFILE1.csv into the same folder. And again, everything, including the application script, should come after the --files option:

spark-submit --master yarn --deploy-mode client --files /u/user/vikrant/testdata/EMPFILE1.csv /u/user/vikrant/testdata/PysparkMainModulenew.py
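
With the file shipped through --files this way, the spark.sparkContext.addFile(...) call becomes redundant, because --files uses the same distribution mechanism and already places EMPFILE1.csv in SparkFiles' root directory on the driver. As a rough illustration only, here is a minimal sketch of main() with that call dropped (most of the original .config() settings are omitted for brevity; the rest of the script stays as in the question):

def main():
    spark = SparkSession \
        .builder \
        .appName("Spark File load example") \
        .enableHiveSupport() \
        .getOrCreate()

    # No addFile() needed here: spark-submit --files already copies
    # EMPFILE1.csv into SparkFiles' root directory.
    inputfilename = getinputfile(spark)
    print("input file path is:", inputfilename)
    data = processfiledata(spark, inputfilename)
    data.show()
    spark.stop()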

Alternatively, you can also use the --py-files option.
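
For example, if the job also depended on a separate helper module (helpermodule.py below is a hypothetical name, used only for illustration), it could be shipped with --py-files while the data file still goes through --files; the main script again comes last:

spark-submit --master yarn --deploy-mode client --files /u/user/vikrant/testdata/EMPFILE1.csv --py-files /u/user/vikrant/testdata/helpermodule.py /u/user/vikrant/testdata/PysparkMainModulenew.py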

