How do I convert multiple Spark DataFrames into a Dataset[Map[String, Array]]?

Problem description

I need to take a Map[String, DataFrame] and turn it into a Dataset[Map[String, Array]].

// Assumes a spark-shell session: sc and spark.implicits._ are in scope.
val map_of_df = Map(
  "df1" -> sc.parallelize(1 to 4).map(i => (i, i * 1000)).toDF("id", "x").repartition(4),
  "df2" -> sc.parallelize(1 to 4).map(i => (i, i * 100)).toDF("id", "y").repartition(4)
)

// map_of_df: scala.collection.immutable.Map[String, org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]] = Map(df1 -> [id: int, x: int], df2 -> [id: int, y: int])

// magic here, I need a type of org.apache.spark.sql.Dataset[Map[String, Array[org.apache.spark.sql.Row]]] with four partitions,
// where the keys to the map are "df1" and "df2"

Tags: scala, apache-spark, apache-spark-sql

Solution


You just collect all of the DataFrames:

import org.apache.spark.sql.{Encoders, Row}
// Assumption: Row has no implicit Encoder, so a Kryo-based one is supplied here.
implicit val enc = Encoders.kryo[(String, Array[Row])]

map_of_df
  .mapValues(_.collect())
  .toSeq
  .toDS
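
If you need the literal requested type, a Dataset[Map[String, Array[Row]]] with four partitions, one possibility is to wrap the collected map in a single-element dataset. This is a minimal sketch under the same spark-shell assumptions, not part of the original answer; the ds and ds4 names are illustrative:

import org.apache.spark.sql.{Dataset, Encoders, Row}

// A single-element dataset holding the whole map; Kryo handles the encoding.
val ds: Dataset[Map[String, Array[Row]]] =
  spark.createDataset(Seq(map_of_df.mapValues(_.collect()).toMap))(
    Encoders.kryo[Map[String, Array[Row]]]
  )

// Matches the requested partition count, though three of the four partitions
// stay empty because the dataset contains a single element.
val ds4 = ds.repartition(4)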

Keep in mind that this will not scale: all of the data is loaded into driver memory. In other words, you don't need Spark for this.
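
Since everything ends up in driver memory anyway, the same structure can just as well be held in an ordinary Scala collection once the DataFrames are collected; a minimal sketch (the result name is illustrative):

import org.apache.spark.sql.Row

// A plain driver-side map; no distributed Dataset is involved after collect().
val result: Map[String, Array[Row]] = map_of_df
  .mapValues(_.collect())
  .toMap

// For example, print the four rows collected from "df1".
result("df1").foreach(println)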

