Pyspark agg function to "explode" rows into columns

Problem description

Basically, I have a dataframe that looks like this:

+----+-------+------+------+
| id | index | col1 | col2 |
+----+-------+------+------+
| 1  | a     | a11  | a12  |
+----+-------+------+------+
| 1  | b     | b11  | b12  |
+----+-------+------+------+
| 2  | a     | a21  | a22  |
+----+-------+------+------+
| 2  | b     | b21  | b22  |
+----+-------+------+------+

and my desired output is this:

+----+--------+--------+--------+--------+
| id | col1_a | col1_b | col2_a | col2_b |
+----+--------+--------+--------+--------+
| 1  | a11    | b11    | a12    | b12    |
+----+--------+--------+--------+--------+
| 2  | a21    | b21    | a22    | b22    |
+----+--------+--------+--------+--------+

So basically I want to "explode" the index column into new columns after I groupby id. Btw, the id counts are the same and each id has the same set of index values. I'm using pyspark.

Tags: apache-spark, pyspark

Solution


You can get the desired output with pivot. First build the sample dataframe:

from pyspark.sql import functions as F
df = spark.createDataFrame(
    [[1, "a", "a11", "a12"], [1, "b", "b11", "b12"],
     [2, "a", "a21", "a22"], [2, "b", "b21", "b22"]],
    ["id", "index", "col1", "col2"])
df.show()
+---+-----+----+----+                                                           
| id|index|col1|col2|
+---+-----+----+----+
|  1|    a| a11| a12|
|  1|    b| b11| b12|
|  2|    a| a21| a22|
|  2|    b| b21| b22|
+---+-----+----+----+

Pivot on the index column:

df3 = df.groupBy("id").pivot("index").agg(F.first(F.col("col1")), F.first(F.col("col2")))

collist=["id","col1_a","col2_a","col1_b","col2_b"]

Rename the columns:

df3.toDF(*collist).show()
+---+------+------+------+------+
| id|col1_a|col2_a|col1_b|col2_b|
+---+------+------+------+------+
|  1|   a11|   a12|   b11|   b12|
|  2|   a21|   a22|   b21|   b22|
+---+------+------+------+------+

Note that you may need to rearrange the columns to match your required order.
