Pyspark - ImportError: cannot import name 'SparkContext' from 'pyspark'

Problem description

While developing a data pipeline with Python and PySpark, I ran into the following error.

PS C:\Users\folder\Documents\folder\projects\code\etl-gd\src\jobs\greater-data> python test.py
Traceback (most recent call last):
  File "test.py", line 1, in <module>
    from pyspark.conf import SparkConf
  File "C:\Users\folder\AppData\Local\Programs\Python\Python37\lib\site-packages\pyspark\__init__.py", line 51, in <module>
    from pyspark.context import SparkContext
  File "C:\Users\folder\AppData\Local\Programs\Python\Python37\lib\site-packages\pyspark\context.py", line 43, in <module>
    from pyspark.profiler import ProfilerCollector, BasicProfiler
  File "C:\Users\folder\AppData\Local\Programs\Python\Python37\lib\site-packages\pyspark\profiler.py", line 18, in <module>
    import cProfile
  File "C:\Users\folder\AppData\Local\Programs\Python\Python37\lib\cProfile.py", line 10, in <module>
    import profile as _pyprofile
  File "C:\Users\folder\Documents\folder\projects\code\etl-gd\src\jobs\greater-data\profile.py", line 2, in <module>
    from awsglue.context import GlueContext
  File "C:\Users\folder\Documents\folder\projects\code\etl-gd\src\jobs\greater-data\awsglue\__init__.py", line 13, in <module>
    from .dynamicframe import DynamicFrame
  File "C:\Users\folder\Documents\folder\projects\code\etl-gd\src\jobs\greater-data\awsglue\dynamicframe.py", line 20, in <module>
    from pyspark.sql.dataframe import DataFrame
  File "C:\Users\folder\AppData\Local\Programs\Python\Python37\lib\site-packages\pyspark\sql\__init__.py", line 45, in <module>
    from pyspark.sql.types import Row
  File "C:\Users\folder\AppData\Local\Programs\Python\Python37\lib\site-packages\pyspark\sql\types.py", line 36, in <module>
    from pyspark import SparkContext
ImportError: cannot import name 'SparkContext' from 'pyspark' (C:\Users\folder\AppData\Local\Programs\Python\Python37\lib\site-packages\pyspark\__init__.py)

The code itself is trivial; it is just a smoke test:

from pyspark.conf import SparkConf

print("hello world")

Java, Spark, Python and PySpark are correctly installed, as shown below:

> PS C:\Users\folder\Documents\folder\projects\code\etl-gd\src\jobs\greater-data> java -version
> java version "1.8.0_241"
> Java(TM) SE Runtime Environment (build 1.8.0_241-b07)
> Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)


> PS C:\Users\folder\Documents\folder\projects\code\etl-gd\src\jobs\greater-data> python --version
> Python 3.7.6


> PS C:\Users\folder\Documents\folder\projects\code\etl-gd\src\jobs\greater-data> spark-shell --version
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
>       /_/
>
> Using Scala version 2.11.12, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_231
> Branch heads/v2.4.3
> Compiled by user vaviliv on 2019-09-17T17:31:05Z
> Revision c3e32bf06c35ba2580d46150923abfa795b4446a
> Url https://github.com/apache/spark
> Type --help for more information.


> PS
> C:\Users\folder\Documents\folder\projects\code\etl-gd\src\jobs\greater-data>
> pyspark --version
>     Welcome to
>           ____              __
>          / __/__  ___ _____/ /__
>         _\ \/ _ \/ _ `/ __/  '_/
>        /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
>           /_/
>     
>     Using Scala version 2.11.12, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_231
>     Branch heads/v2.4.3
>     Compiled by user vaviliv on 2019-09-17T17:31:05Z
>     Revision c3e32bf06c35ba2580d46150923abfa795b4446a
>     Url https://github.com/apache/spark
>     Type --help for more information.

Thanks in advance for your help.

Tags: python, python-3.x, apache-spark, pyspark

Solution


I figured it out. I created a separate virtual environment, because I had several versions of Python and Spark on my machine.
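
On Windows, the steps were roughly the following (the environment name is just an example, and pyspark is pinned to match the spark-shell version shown above):

PS C:\Users\folder> python -m venv spark-venv
PS C:\Users\folder> .\spark-venv\Scripts\Activate.ps1
(spark-venv) PS C:\Users\folder> pip install pyspark==2.4.3
(spark-venv) PS C:\Users\folder> python test.py
hello world

Inside the virtual environment, python.exe and pip resolve to a single interpreter and a single site-packages directory, so pyspark can no longer be picked up from a mismatched installation.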

