Unable to fix UnknownHostException when reading a csv file from an HDFS directory

Problem description

My spark program is running on a server: serverA. I am running the code from the pyspark shell. From this program, I am trying to read a csv file from another cluster, set up on a different server -> server: serverB, HDFS cluster: clusterB, as follows:

 spark = (SparkSession.builder
     .master('yarn')
     .appName("Detector")
     .config('spark.app.name', 'dummy_App')
     .config('spark.executor.memory', '2g')
     .config('spark.executor.cores', '2')
     .config('spark.yarn.keytab', '/home/testuser/testuser.keytab')
     .config('spark.yarn.principal', 'krbtgt/HADOOP.NAME.COM@NAME.COM')
     .config('spark.executor.instances', '1')
     .config('hadoop.security.authentication', 'kerberos')
     .config('spark.yarn.access.hadoopFileSystems', 'hdfs://clusterB')
     .config('spark.yarn.principal', 'testuser@NAME.COM')
     .getOrCreate())

The file I want to read is on the cluster clusterB:

(base) testuser@hdptetl:[~] {46} $ hadoop fs -df -h
Filesystem          Size     Used  Available  Use%
hdfs://clusterB  787.3 T  554.5 T    230.7 T   70%

The keytab details I mentioned in the spark config (path to the keytab, KDC REALM) exist on server serverB. When I try to load the file as:

csv_df = spark.read.format('csv').load('hdfs://botest01/test/mr/wc.txt')

the code fails with an UnknownHostException as follows:

>>> tdf = spark.read.format('csv').load('hdfs://clusterB/test/mr/wc.txt')
20/07/15 15:40:36 WARN FileStreamSink: Error while looking for metadata directory.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/readwriter.py", line 166, in load
    return self._df(self._jreader.load(path))
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'java.net.UnknownHostException: clusterB'

Can anyone tell me what mistake I am making here, and how can I fix it?

Tags: apache-spark, hadoop, pyspark

Solution
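No answer body survives in this copy of the page, but a common cause of `java.net.UnknownHostException: clusterB` is that `clusterB` is not a real hostname at all: it is a logical HDFS nameservice name defined in clusterB's own hdfs-site.xml, so the client on serverA has no way to resolve it. One hedged sketch of a fix, assuming clusterB is an HA nameservice (the nn1/nn2 names, hostnames, and ports below are placeholders, not values from the original post), is to pass the nameservice definition to the Spark client through `spark.hadoop.*` properties:

```python
# Sketch: define the remote HA nameservice "clusterB" for the client on
# serverA. The NameNode hostnames and ports below are PLACEHOLDERS; copy
# the real values from clusterB's hdfs-site.xml.
nameservice = 'clusterB'
remote_hdfs_conf = {
    'spark.hadoop.dfs.nameservices': nameservice,
    f'spark.hadoop.dfs.ha.namenodes.{nameservice}': 'nn1,nn2',
    f'spark.hadoop.dfs.namenode.rpc-address.{nameservice}.nn1':
        'namenode1.example.com:8020',
    f'spark.hadoop.dfs.namenode.rpc-address.{nameservice}.nn2':
        'namenode2.example.com:8020',
    f'spark.hadoop.dfs.client.failover.proxy.provider.{nameservice}':
        'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
}

# Each entry would be passed to the session builder, e.g.:
#   builder = SparkSession.builder.master('yarn').appName('Detector')
#   for key, value in remote_hdfs_conf.items():
#       builder = builder.config(key, value)
#   spark = builder.getOrCreate()
```

Equivalently, copying the relevant `dfs.nameservices`, `dfs.ha.namenodes.*`, and `dfs.namenode.rpc-address.*` entries from clusterB's hdfs-site.xml into the Hadoop client configuration on serverA resolves the name without any code change.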
