pyspark - org.apache.spark.SparkException: No port number in pyspark.daemon's stdout
Problem description
I am running a spark-submit job on a Hadoop YARN cluster:
spark-submit /opt/spark/examples/src/main/python/pi.py 1000
but it fails with the error message below. It looks like the Python workers never start.
2018-12-20 07:25:14 INFO SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1161
2018-12-20 07:25:14 INFO DAGScheduler:54 - Submitting 1000 missing tasks from ResultStage 0 (PythonRDD[1] at reduce at /opt/spark/examples/src/main/python/pi.py:44) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
2018-12-20 07:25:14 INFO YarnScheduler:54 - Adding task set 0.0 with 1000 tasks
2018-12-20 07:25:14 INFO TaskSetManager:54 - Starting task 0.0 in stage 0.0 (TID 0, hadoop-slave2, executor 1, partition 0, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:14 INFO TaskSetManager:54 - Starting task 1.0 in stage 0.0 (TID 1, hadoop-slave1, executor 2, partition 1, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:15 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on hadoop-slave2:37217 (size: 4.2 KB, free: 93.3 MB)
2018-12-20 07:25:15 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on hadoop-slave1:35311 (size: 4.2 KB, free: 93.3 MB)
2018-12-20 07:25:15 INFO TaskSetManager:54 - Starting task 2.0 in stage 0.0 (TID 2, hadoop-slave2, executor 1, partition 2, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:15 INFO TaskSetManager:54 - Starting task 3.0 in stage 0.0 (TID 3, hadoop-slave1, executor 2, partition 3, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:16 WARN TaskSetManager:66 - Lost task 0.0 in stage 0.0 (TID 0, hadoop-slave2, executor 1): org.apache.spark.SparkException:
Error from python worker:
Traceback (most recent call last):
File "/usr/lib64/python2.6/runpy.py", line 104, in _run_module_as_main
loader, code, fname = _get_module_details(mod_name)
File "/usr/lib64/python2.6/runpy.py", line 79, in _get_module_details
loader = get_loader(mod_name)
File "/usr/lib64/python2.6/pkgutil.py", line 456, in get_loader
return find_loader(fullname)
File "/usr/lib64/python2.6/pkgutil.py", line 466, in find_loader
for importer in iter_importers(fullname):
File "/usr/lib64/python2.6/pkgutil.py", line 422, in iter_importers
__import__(pkg)
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/__init__.py", line 51, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/context.py", line 31, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/accumulators.py", line 97, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/serializers.py", line 71, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 246, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 270, in CloudPickler
NameError: name 'memoryview' is not defined
PYTHONPATH was:
/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/filecache/21/__spark_libs__3793296165132209773.zip/spark-core_2.11-2.4.0.jar: /tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip:/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/py4j-0.10.7-src.zip
org.apache.spark.SparkException: No port number in pyspark.daemon's stdout
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:204)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:122)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:95)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
Solution
I believe this problem occurs when the Python versions don't match. In this case the traceback shows the workers falling back to Python 2.6 (/usr/lib64/python2.6), and memoryview is a built-in that only exists in Python 2.7+, which is why PySpark's cloudpickle fails to import; Spark 2.4 requires Python 2.7 or later.
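You can confirm which interpreter a worker node falls back to with a quick sanity check (run on the node itself):
python -V              # e.g. Python 2.6.6 on the broken nodes
python -c "memoryview" # NameError on 2.6, succeeds silently on 2.7+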
Adding the following to my ~/.bash_profile worked for me:
alias spark-submit='PYSPARK_PYTHON=$(which python) spark-submit'
It should force Spark to use the same version of Python that you have loaded in your environment.
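On YARN the executors also need to pick up a compatible interpreter, so the alias alone may not help if the worker nodes default to Python 2.6. A minimal sketch of pointing both the application master and the executors at a newer interpreter at submit time, assuming Python 2.7 is installed at /usr/bin/python2.7 on every node (that path is an assumption; adjust it to your cluster):
# /usr/bin/python2.7 is an assumed path; verify it exists on each node
spark-submit \
  --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=/usr/bin/python2.7 \
  --conf spark.executorEnv.PYSPARK_PYTHON=/usr/bin/python2.7 \
  /opt/spark/examples/src/main/python/pi.py 1000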