Py4JJavaError: SparkException: Job aborted due to stage failure

Problem Description

I am using Spark through pyspark. I am running the following toy example (in a Jupyter Notebook):

import findspark
findspark.init()

import pyspark
import random

sc = pyspark.SparkContext(appName="Pi")
num_samples = 10000

def inside(p):
    # Monte Carlo sample: draw a random point in the unit square and
    # check whether it lands inside the unit quarter-circle.
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, num_samples)).filter(inside).count()

pi = 4 * count / num_samples
print(pi)

sc.stop()

It runs fine with num_samples = 100 or similar values, but with the number given above it returns the following error about the Python workers:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
    : org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 0.0 failed 1 times, most recent failure: Lost task 2.0 in stage 0.0 (TID 2, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
        [...]
    Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
        [...]
    Caused by: java.net.SocketTimeoutException: Accept timed out
        [...]

Tags: python, python-3.x, apache-spark, pyspark

Solution
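
This particular failure ("Python worker failed to connect back" followed by "java.net.SocketTimeoutException: Accept timed out") generally means the Spark JVM could not launch or reach the Python worker processes, for example because it resolves a different Python interpreter than the one running the notebook. A commonly suggested workaround is to point both the driver and the workers at the same interpreter via the standard PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON environment variables before the SparkContext is created. The sketch below illustrates that approach under the assumption that the notebook's own interpreter (sys.executable) is the one the workers should use; it is not a confirmed answer from the original post:

import os
import sys

# Make the workers use the same Python interpreter as the driver/notebook.
# PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are standard PySpark settings;
# sys.executable is the interpreter currently running this code.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

import findspark
findspark.init()

import pyspark

sc = pyspark.SparkContext(appName="Pi")

If the timeout persists after this change, it is also worth checking that a local firewall is not blocking the loopback connection the Python workers use to call back into the JVM.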

