Cumulative sum with DataFrame groupBy - PySpark

Problem description

My code:

df=temp_df.groupBy('date','id').count()
windowval = (Window.partitionBy('date','id').orderBy('date','id').rangeBetween(Window.unboundedPreceding, 0))
final_df = df.withColumn('cum_sum', F.sum('count').over(windowval)).orderBy('date','id').show()

Please correct my code; I think there is a problem with my use of Window (rangeBetween).

Thanks,

Input DF:
+-------------------+------------------+-----+
|               date|                id|count|
+-------------------+------------------+-----+
|2007-11-04 00:00:00|                 5|    4|
|2007-11-05 00:00:00|                 5|    7|
|2007-11-06 00:00:00|                 5|    3|
|2007-11-06 00:00:00|                 8|    3|
|2007-11-07 00:00:00|                 5|    7|
|2007-11-08 00:00:00|                 5|    2|
|2007-11-08 00:00:00|                 8|    4|
+-------------------+------------------+-----+

Expected output:

+-------------------+------------------+-----+-------+
|               date|                id|count|cum_sum|
+-------------------+------------------+-----+-------+
|2007-11-04 00:00:00|                 5|    4|      4|
|2007-11-05 00:00:00|                 5|    7|     11|
|2007-11-06 00:00:00|                 5|    3|     14|
|2007-11-06 00:00:00|                 8|    3|      3|
|2007-11-07 00:00:00|                 5|    7|     21|
|2007-11-08 00:00:00|                 5|    2|     23|
|2007-11-08 00:00:00|                 8|    4|      7|
+-------------------+------------------+-----+-------+

My output:

+-------------------+------------------+-----+-------+
|               date|                id|count|cum_sum|
+-------------------+------------------+-----+-------+
|2007-11-04 00:00:00|                 5|    4|      4|
|2007-11-05 00:00:00|                 5|    7|      7|
|2007-11-06 00:00:00|                 5|    3|      3|
|2007-11-06 00:00:00|                 8|    3|      3|
|2007-11-07 00:00:00|                 5|    7|      7|
|2007-11-08 00:00:00|                 5|    2|      2|
|2007-11-08 00:00:00|                 8|    4|      4|
+-------------------+------------------+-----+-------+


Tags: apache-spark, pyspark, apache-spark-sql

Solution


Simply change your current code to:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = temp_df.groupBy('date', 'id').count()

# Partition by id only; order by date so the sum accumulates across dates within each id.
windowval = Window.partitionBy('id').orderBy('date').rangeBetween(Window.unboundedPreceding, 0)

final_df = df.withColumn('cum_sum', F.sum('count').over(windowval)).orderBy('date', 'id')
final_df.show()

When you partition by both date and id, every (date, id) combination is unique, so each window contains only a single row and the "cumulative" sum is just that row's own count. You need to partition by id and order by date instead.
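For reference, here is a minimal, self-contained sketch of the corrected approach. The local SparkSession and the hard-coded sample rows are assumptions added only to make the snippet runnable on its own; the window definition itself is the one from the answer above.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.master("local[*]").appName("cum-sum-demo").getOrCreate()

# Recreate the already-aggregated DataFrame from the question (one row per (date, id), with a `count` column).
data = [
    ("2007-11-04 00:00:00", 5, 4),
    ("2007-11-05 00:00:00", 5, 7),
    ("2007-11-06 00:00:00", 5, 3),
    ("2007-11-06 00:00:00", 8, 3),
    ("2007-11-07 00:00:00", 5, 7),
    ("2007-11-08 00:00:00", 5, 2),
    ("2007-11-08 00:00:00", 8, 4),
]
df = spark.createDataFrame(data, ["date", "id", "count"])

# Partition by id only, order within each id by date, and sum from the start of
# the partition up to the current row. Because each (id, date) pair is unique after
# the groupBy, rowsBetween(Window.unboundedPreceding, Window.currentRow) would give
# the same result as rangeBetween here.
windowval = (
    Window.partitionBy("id")
    .orderBy("date")
    .rangeBetween(Window.unboundedPreceding, 0)
)

final_df = df.withColumn("cum_sum", F.sum("count").over(windowval)).orderBy("date", "id")
final_df.show()
# id 5 accumulates 4 -> 11 -> 14 -> 21 -> 23; id 8 accumulates 3 -> 7,
# matching the expected output in the question.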

