How to apply a group by on a PySpark dataframe and transform the resulting object

Problem description

I have a Spark dataframe:

+-------+-------------+---------------+
|item_id|attribute_key|attribute_value|
+-------+-------------+---------------+
|   id_1|        brand|        Samsung|
|   id_1|          ram|            6GB|
|   id_2|        brand|          Apple|
|   id_2|          ram|            4GB|
+-------+-------------+---------------+

I want to group this dataframe by item_id and output it as a file where each line is a JSON object:

{"id_1": {"properties": [{"brand": ["Samsung"]}, {"ram": ["6GB"]}]}}
{"id_2": {"properties": [{"brand": ["Apple"]}, {"ram": ["4GB"]}]}}

This is a large, distributed dataframe, so it cannot be converted to pandas. Is this transformation possible in PySpark?

Tags: json, apache-spark, pyspark, apache-spark-sql, pyspark-dataframes

Solution


This is in Scala, but the Python version will be very similar (the same functions live in sql.functions):

import org.apache.spark.sql.functions._
import spark.implicits._  // for toDF and the 'column symbol syntax

val df = Seq((1,"brand","Samsung"),(1,"ram","6GB"),(1,"ram","8GB"),(2,"brand","Apple"),(2,"ram","6GB"))
  .toDF("item_id","attribute_key","attribute_value")

+-------+-------------+---------------+
|item_id|attribute_key|attribute_value|
+-------+-------------+---------------+
|      1|        brand|        Samsung|
|      1|          ram|            6GB|
|      1|          ram|            8GB|
|      2|        brand|          Apple|
|      2|          ram|            6GB|
+-------+-------------+---------------+

df.groupBy('item_id, 'attribute_key)
  // one array of values per (item, attribute) pair, e.g. ["6GB","8GB"]
  .agg(collect_list('attribute_value).as("list2"))
  .groupBy('item_id)
  // wrap the per-attribute maps in a "properties" array
  .agg(map(lit("properties"), collect_list(map('attribute_key, 'list2))).as("prop"))
  // one JSON object per row, keyed by item_id
  .select(to_json(map('item_id, 'prop)).as("json"))
  .show(false)

Output:

+------------------------------------------------------------------+
|json                                                              |
+------------------------------------------------------------------+
|{"1":{"properties":[{"ram":["6GB","8GB"]},{"brand":["Samsung"]}]}}|
|{"2":{"properties":[{"brand":["Apple"]},{"ram":["6GB"]}]}}        |
+------------------------------------------------------------------+
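Since the question asks for PySpark, here is a minimal sketch of the same approach in Python. It mirrors the Scala answer one-for-one (create_map replaces Scala's map); the sample data, the coalesce(1), and the output path at the end are illustrative placeholders, not part of the original answer:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, collect_list, create_map, lit, to_json

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("id_1", "brand", "Samsung"), ("id_1", "ram", "6GB"),
     ("id_2", "brand", "Apple"), ("id_2", "ram", "4GB")],
    ["item_id", "attribute_key", "attribute_value"],
)

result = (
    df.groupBy("item_id", "attribute_key")
      # one array of values per (item, attribute) pair, e.g. ["6GB"]
      .agg(collect_list("attribute_value").alias("list2"))
      .groupBy("item_id")
      # wrap the per-attribute maps in a "properties" array
      .agg(create_map(lit("properties"),
                      collect_list(create_map(col("attribute_key"), col("list2"))))
           .alias("prop"))
      # one JSON object per row, keyed by item_id
      .select(to_json(create_map(col("item_id"), col("prop"))).alias("json"))
)

result.show(truncate=False)
# To produce the file the question asks for (one JSON object per line),
# write the single string column as plain text; the path is a placeholder:
# result.coalesce(1).write.text("/tmp/items_json")

Note that to_json stringifies the outer map's key (item_id), which is what produces the {"id_1": {...}} shape, and writing the resulting single string column with write.text yields one JSON object per line.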
