Spark installation problem - TypeError: an integer is required (got type bytes) - spark-2.4.5-bin-hadoop2.7, hadoop 2.7.1, python 3.8.2

Problem description

I am trying to install Spark on my 64-bit Windows machine. I have Python 3.8.2 installed and pip version 20.0.2. I downloaded spark-2.4.5-bin-hadoop2.7, set the HADOOP_HOME and SPARK_HOME environment variables, and added pyspark to the PATH variable. When I run pyspark from cmd, I see the error given below:

C:\Users\aa>pyspark
Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Traceback (most recent call last):
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\shell.py", line 31, in <module>
    from pyspark import SparkConf
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\__init__.py", line 51, in <module>
    from pyspark.context import SparkContext
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\context.py", line 31, in <module>
    from pyspark import accumulators
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\accumulators.py", line 97, in <module>
    from pyspark.serializers import read_int, PickleSerializer
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\serializers.py", line 72, in <module>
    from pyspark import cloudpickle
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 145, in <module>
    _cell_set_template_code = _make_cell_set_template_code()
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 126, in _make_cell_set_template_code
    return types.CodeType(
TypeError: an integer is required (got type bytes)
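
For reference, the environment variables described above can be checked from Python before launching pyspark. This is only a small sanity-check sketch; it assumes SPARK_HOME points at the extracted download visible in the traceback:

import os

# Print the environment variables set above (via the Windows system settings)
for name in ("SPARK_HOME", "HADOOP_HOME"):
    print(name, "=", os.environ.get(name))

# SPARK_HOME should contain the bundled pyspark sources, e.g.
# C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7
pyspark_dir = os.path.join(os.environ.get("SPARK_HOME", ""), "python", "pyspark")
print("pyspark sources found:", os.path.isdir(pyspark_dir))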

I want to import pyspark into my Python code in PyCharm, but after I run my code file I hit the same error, TypeError: an integer is required (got type bytes). I uninstalled Python 3.8.2 and tried Python 2.7, but in that case I got a deprecation error. I received the error given below and updated the pip installer:

Could not find a version that satisfies the requirement pyspark (from versions: )
No matching distribution found for pyspark 

Then I ran python -m pip install --upgrade pip to update pip, but I ran into the TypeError: an integer is required (got type bytes) problem again.

C:\Users\aa>python --version
Python 3.8.2

C:\Users\aa>pip --version
pip 20.0.2 from c:\users\aa\appdata\local\programs\python\python38\lib\site-packages\pip (python 3.8)

C:\Users\aa>java --version
java 14 2020-03-17
Java(TM) SE Runtime Environment (build 14+36-1461)
Java HotSpot(TM) 64-Bit Server VM (build 14+36-1461, mixed mode, sharing)

How can I resolve and get past this problem? I currently have spark-2.4.5-bin-hadoop2.7 and Python 3.8.2. Thanks in advance!

Tags: python, apache-spark, hadoop, pyspark, apache-spark-sql

Solution


This is a compatibility problem between Python 3.8 and this Spark version; see https://github.com/apache/spark/pull/26194.

To make it work (to some extent), you need to:

  • Replace the cloudpickle.py file in the pyspark directory with its 1.1.1 version, found at https://github.com/cloudpipe/cloudpickle/blob/v1.1.1/cloudpickle/cloudpickle.py (a small check for locating this file is sketched after this list).
  • Edit the cloudpickle.py file to add:
# pyspark's serializers call cloudpickle.print_exec, which the 1.1.1 file no longer
# defines; add it back (it relies on sys and traceback being imported in the file)
def print_exec(stream):
    ei = sys.exc_info()
    traceback.print_exception(ei[0], ei[1], ei[2], None, stream)
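
Note that the file to replace lives inside the Spark download, not in site-packages. A minimal sketch (assuming SPARK_HOME is set as in the question) that prints its location and, after the edit, confirms print_exec is present:

import os

# cloudpickle.py ships inside the Spark download (the path matches the traceback above)
spark_home = os.environ["SPARK_HOME"]
cloudpickle_path = os.path.join(spark_home, "python", "pyspark", "cloudpickle.py")
print("Replace this file with the v1.1.1 version:", cloudpickle_path)

# After replacing the file and adding print_exec, this should print True
with open(cloudpickle_path, encoding="utf-8") as f:
    print("print_exec defined:", "def print_exec" in f.read())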

Then you will be able to import pyspark.
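
A quick way to confirm the patched setup works is a driver-side smoke test from the same interpreter (run it from the pyspark shell, or with SPARK_HOME\python on sys.path). This is only a minimal sketch; the app name is arbitrary, and worker-side jobs may still hit other Python 3.8 issues on Spark 2.4.x:

from pyspark import SparkConf, SparkContext

# If cloudpickle was replaced correctly, this import and context creation
# no longer raise "TypeError: an integer is required (got type bytes)"
conf = SparkConf().setMaster("local[1]").setAppName("cloudpickle-fix-check")
sc = SparkContext(conf=conf)
print("Spark version:", sc.version)
sc.stop()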

