How to load a CSV file with multiline records in Spark Scala?

Problem description

I have a CSV file with a multiline field that I am trying to load into a DataFrame with Spark.

Cust_id, cust_address, city,zip
1, "1289 cobb parkway
Bufford", "ATLANTA",34343
2, "1234 IVY lane
Decatur", "ATLANTA",23435


val df = spark.read.format("csv")
              .option("multiLine", true)
              .option("header", true)
              .option("escape", "\"")
              .load("/home/SPARK/file.csv")

df.show()

This gives me a DataFrame like:

+--------+-------------------+-----+----+
| id     | address           | city| zip|
+--------+-------------------+-----+----+
|       1| "1289 cobb parkway| null|null|
|Bufford"|          "ATLANTA"|34343|null|
|       2|     "1234 IVY lane| null|null|
|Decatur"|          "ATLANTA"|23435|null|
+--------+-------------------+-----+----+
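The broken output above can be reproduced without Spark: Python's stdlib `csv` module follows the same quoting convention, where a quote is only recognized if it appears immediately after the delimiter. Because the sample data has a space after each comma, the `"` is treated as a literal character, the embedded newline ends the record, and the address spills into a second row. This is a minimal stand-in sketch, not Spark itself; `skipinitialspace` plays a role roughly analogous to Spark's `ignoreLeadingWhiteSpace`:

```python
import csv
import io

# A space before the opening quote, as in the sample data above.
data = 'id, addr\n1, "line one\nline two"\n'

# Default parsing: the quote is not at the start of the field, so it is
# treated literally and the newline splits the record into two rows.
rows_bad = list(csv.reader(io.StringIO(data)))

# Skipping the space after the delimiter lets the parser see the quote,
# so the multiline field is read as a single value.
rows_good = list(csv.reader(io.StringIO(data), skipinitialspace=True))

print(rows_bad)   # three rows: the quoted field was split
print(rows_good)  # two rows: header plus one intact record
```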

I want the output to look like:

+---+--------------------+-------+-----+
| id|             address|   city|  zip|
+---+--------------------+-------+-----+
|  1|1289 cobb parkway...|ATLANTA|34343|
|  2|1234 IVY lane Dec...|ATLANTA|23435|
+---+--------------------+-------+-----+

Tags: csv, dataframe, apache-spark, apache-spark-sql

Solution


The key options are `multiLine` (so a quoted field may span lines) and `ignoreLeadingWhiteSpace` (the sample data has a space after each comma, so without it the parser never sees the opening quote and treats it as literal text):

val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("delimiter", ",")
  .option("header", true)
  .option("quote", "\"")
  .option("multiLine", true)
  .option("inferSchema", true)
  .option("parserLib", "UNIVOCITY")
  .option("ignoreTrailingWhiteSpace", true)
  .option("ignoreLeadingWhiteSpace", true)
  .load(file_name)
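As an end-to-end check of the idea behind these options, the sample data from the question can be parsed with Python's stdlib `csv` module, again using `skipinitialspace` as a rough stand-in for `ignoreLeadingWhiteSpace`. Flattening the embedded newlines afterwards yields the desired one-row-per-customer layout (this is a plain-Python sketch for illustration, not the Spark solution itself):

```python
import csv
import io

# The exact sample data from the question.
raw = (
    "Cust_id, cust_address, city,zip\n"
    '1, "1289 cobb parkway\nBufford", "ATLANTA",34343\n'
    '2, "1234 IVY lane\nDecatur", "ATLANTA",23435\n'
)

# Skip the space after each delimiter so quoted multiline fields parse,
# then replace embedded newlines with spaces for display.
reader = csv.reader(io.StringIO(raw), skipinitialspace=True)
rows = [[field.replace("\n", " ") for field in row] for row in reader]

for row in rows:
    print(row)
# Each record comes out as one row, e.g.
# ['1', '1289 cobb parkway Bufford', 'ATLANTA', '34343']
```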
