Running Spark/Python from a Jupyter Notebook

Problem Description

I created a shell script to launch PySpark from a Jupyter notebook. When I run the script, I get the error below.

sudo /home/scripts/jupyspark.sh test.py 
/home/scripts/jupyspark.sh: line 6: /bin/pyspark: No such file or directory

Here is my jupyspark script:

#!/bin/bash
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=True --NotebookApp.ip='localhost' --NotebookApp.port=8888"

${SPARK_HOME}/bin/pyspark \
--master local[4] \
--executor-memory 1G \
--driver-memory 1G \
--conf spark.sql.warehouse.dir="file:///tmp/spark-warehouse" \
--packages com.databricks:spark-csv_2.11:1.5.0 \
--packages com.amazonaws:aws-java-sdk-pom:1.10.34 \
--packages org.apache.hadoop:hadoop-aws:2.7.3

I have also done this step:

cat ~/.bash_profile 
export SPARK_HOME=/usr/local/spark
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
export HADOOP_HOME=/usr/local/hadoop
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native/:$LD_LIBRARY_PATH
export AWS_ACCESS_KEY_ID='MY_ACCESS_KEY'
export AWS_SECRET_ACCESS_KEY='MY_SECRET_ACCESS_KEY'

Do you have any ideas on how to fix this?

Tags: python, apache-spark, hadoop, pyspark, jupyter-notebook

Solution
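
The error message itself points at the cause: ${SPARK_HOME}/bin/pyspark expanded to /bin/pyspark, which means SPARK_HOME was empty when the script ran. sudo starts commands with a scrubbed environment (env_reset in most default sudoers configurations) and never sources your ~/.bash_profile, so the exports shown above are invisible to "sudo /home/scripts/jupyspark.sh". A quick way to confirm this, assuming the paths from the question:

echo $SPARK_HOME              # in your login shell: /usr/local/spark
sudo env | grep SPARK_HOME    # under sudo: no output, the variable is gone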


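The simplest fix is to stop depending on the caller's environment altogether. Below is a minimal sketch of the script with SPARK_HOME hard-coded (assuming Spark is installed at /usr/local/spark, as in your ~/.bash_profile). It also merges the three --packages flags into one: spark-submit does not accumulate repeated occurrences of that flag (the last one wins), so as written only hadoop-aws would be loaded; a comma-separated list loads all three.

#!/bin/bash
# Set SPARK_HOME explicitly; ~/.bash_profile is not read under sudo or cron.
export SPARK_HOME=/usr/local/spark

export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=True --NotebookApp.ip='localhost' --NotebookApp.port=8888"

# One --packages flag with a comma-separated list, instead of three flags.
"${SPARK_HOME}/bin/pyspark" \
--master local[4] \
--executor-memory 1G \
--driver-memory 1G \
--conf spark.sql.warehouse.dir="file:///tmp/spark-warehouse" \
--packages com.databricks:spark-csv_2.11:1.5.0,com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.3

With that change the script can be run as your own user, /home/scripts/jupyspark.sh, which is preferable anyway: recent Jupyter versions refuse to start as root unless --allow-root is passed. If sudo is genuinely required, sudo -E asks sudo to preserve the caller's environment (subject to the sudoers policy). Note also that the test.py argument is ignored: with PYSPARK_DRIVER_PYTHON=jupyter the script launches a notebook server rather than executing a file.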