Apache Spark spark.read not working as expected

Problem description

I am learning Apache Spark through an IBM course, using the HMP dataset. I followed the instructions in the tutorial, but the code is not working as expected. Here is my code:

!git clone https://github.com/wchill/HMP_Dataset

from pyspark.sql.types import StructType, StructField, IntegerType

schema = StructType([
    StructField("x",IntegerType(), True),
    StructField("y",IntegerType(), True),
    StructField("z",IntegerType(), True)
])

import os
file_list = os.listdir("HMP_Dataset")
file_list_filtered = [file for file in file_list if "_" in file]
from pyspark.sql.functions import lit

df = None  # accumulator for the unioned DataFrame
for cat in file_list_filtered:
    data_files = os.listdir("HMP_Dataset/" + cat)

    for data_file in data_files:
        print(data_file)

        temp_df = spark.read.option("header","false").option( "delimeter" , " ").csv("HMP_Dataset/" + cat + "/" + data_file, schema=schema)

        temp_df = temp_df.withColumn("class",lit(cat))
        temp_df = temp_df.withColumn("source",lit(data_file))

        if df is None:
            df = temp_df
        else:
            df = df.union(temp_df)

The x, y, z columns stay null when I call df.show(). Here is the output:

+----+----+----+-----------+--------------------+
|   x|   y|   z|      class|              source|
+----+----+----+-----------+--------------------+
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
|null|null|null|Brush_teeth|Accelerometer-201...|
+----+----+----+-----------+--------------------+
only showing top 20 rows

The x, y, and z columns should contain numbers. What am I doing wrong? I used the exact code shown in the tutorial video, and I am running it in IBM Watson Studio. Link to the tutorial: https://www.coursera.org/learn/advanced-machine-learning-signal-processing/lecture/8cfiW/introduction-to-sparkml

Tags: python, apache-spark, pyspark, ibm-cloud, apache-spark-ml

Solution


It looks like you have a typo in the option key: you wrote "delimeter", but the correct option name is "delimiter". Spark silently ignores unrecognized options, so the file is read with the default comma delimiter; each space-separated line then fails to parse into three integers against your schema, and x, y, z come back null.

Incorrect:

temp_df = spark.read.option("header","false").option( "delimeter" , " ").csv("HMP_Dataset/" + cat + "/" + data_file, schema=schema)

Correct:

temp_df = spark.read.option("header","false").option( "delimiter" , " ").csv("HMP_Dataset/" + cat + "/" + data_file, schema=schema)

You can also use the "sep" option to set the delimiter. For further reference, see the spark-csv project here or the Spark documentation: https://github.com/databricks/spark-csv
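The failure mode can be reproduced with plain Python's csv module (a minimal sketch; the sample line is a made-up accelerometer reading, assuming three space-separated integers per line as in the HMP files):

```python
import csv
import io

# A sample line like those in the HMP dataset files (hypothetical values).
line = "22 49 35\n"

# With the wrong delimiter (Spark's default ","), the whole line stays
# a single field, which cannot be cast to three integers -> nulls in Spark.
wrong = next(csv.reader(io.StringIO(line), delimiter=","))
print(wrong)   # ['22 49 35']

# With the correct delimiter " ", the line splits into three fields
# that each parse cleanly as integers.
right = next(csv.reader(io.StringIO(line), delimiter=" "))
print(right)   # ['22', '49', '35']
```

This is why the schema's IntegerType columns come back null rather than raising an error: Spark's CSV reader puts null in columns it cannot parse.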
