Pyspark window function on the entire dataframe

Problem description

Consider a pyspark dataframe. I would like to summarize the entire dataframe, per column, and append the result to every row.

+-----+----------+-----------+
|index|      col1|       col2|
+-----+----------+-----------+
|  0.0|0.58734024|0.085703015|
|  1.0|0.67304325| 0.17850411|

Expected result

+-----+----------+-----------+--------+---------+--------+---------+
|index|      col1|       col2|col1_min|col1_mean|col2_min|col2_mean|
+-----+----------+-----------+--------+---------+--------+---------+
|  0.0|0.58734024|0.085703015|      -5|      2.3|      -2|      1.4|
|  1.0|0.67304325| 0.17850411|      -5|      2.3|      -2|      1.4|

As I understand it, I need a window function whose window is the whole dataframe, so that the result is kept on every row (rather than, say, computing the statistics separately and then joining back to replicate them on every row).

My questions are:

  1. How do I write a window without any partitioning or ordering?

I know the standard window with a partition and an order, but not one that treats everything as a single partition:

w = Window.partitionBy("col1", "col2").orderBy(desc("col1"))
df = df.withColumn("col1_mean", mean("col1").over(w)))

How would I write a window that takes everything as a single partition?

  2. Is there any way to write this dynamically for all columns?

Suppose I have 500 columns; writing this out repeatedly does not look good:

df = df.withColumn("col1_mean", mean("col1").over(w))).withColumn("col1_min", min("col2").over(w)).withColumn("col2_mean", mean().over(w)).....

Say I want multiple statistics for each column, so every colx would spawn colx_min, colx_max, colx_mean.

Tags: dataframe, apache-spark, pyspark, apache-spark-sql, window-functions

Solution


Instead of using a window, you can achieve the same thing with a custom aggregation combined with a cross join:

import pyspark.sql.functions as F
from pyspark.sql.functions import broadcast
from itertools import chain

df = spark.createDataFrame([
  [1, 2.3, 1],
  [2, 5.3, 2],
  [3, 2.1, 4],
  [4, 1.5, 5]
], ["index", "col1", "col2"])

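# build (min, max, mean) aggregation expressions for every column whose name starts with "col"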
agg_cols = [(
             F.min(c).alias("min_" + c), 
             F.max(c).alias("max_" + c), 
             F.mean(c).alias("mean_" + c)) 

  for c in df.columns if c.startswith('col')]

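# aggregate into a single-row dataframe holding all the statistics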
stats_df = df.agg(*list(chain(*agg_cols)))

# the crossJoin has no real performance impact here: the right-hand table contains only one row,
# which we broadcast (Spark would most likely broadcast it anyway)
df.crossJoin(broadcast(stats_df)).show() 

# +-----+----+----+--------+--------+---------+--------+--------+---------+
# |index|col1|col2|min_col1|max_col1|mean_col1|min_col2|max_col2|mean_col2|
# +-----+----+----+--------+--------+---------+--------+--------+---------+
# |    1| 2.3|   1|     1.5|     5.3|      2.8|       1|       5|      3.0|
# |    2| 5.3|   2|     1.5|     5.3|      2.8|       1|       5|      3.0|
# |    3| 2.1|   4|     1.5|     5.3|      2.8|       1|       5|      3.0|
# |    4| 1.5|   5|     1.5|     5.3|      2.8|       1|       5|      3.0|
# +-----+----+----+--------+--------+---------+--------+--------+---------+

Note 1: By using broadcast we avoid a shuffle, since the broadcast dataframe is sent to all the executors.

Note 2: With chain(*agg_cols) we flatten the list of tuples that we created in the previous step.
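As a small illustration of that flattening step (plain strings stand in here for the Column expressions):

from itertools import chain

# agg_cols is a list of 3-tuples, one per column:
# [(min_col1, max_col1, mean_col1), (min_col2, max_col2, mean_col2)]
pairs = [("a", "b", "c"), ("d", "e", "f")]
print(list(chain(*pairs)))  # ['a', 'b', 'c', 'd', 'e', 'f']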

Update:

Here is the execution plan of the program above:

== Physical Plan ==
*(3) BroadcastNestedLoopJoin BuildRight, Cross
:- *(3) Scan ExistingRDD[index#196L,col1#197,col2#198L]
+- BroadcastExchange IdentityBroadcastMode, [id=#274]
   +- *(2) HashAggregate(keys=[], functions=[finalmerge_min(merge min#233) AS min(col1#197)#202, finalmerge_max(merge max#235) AS max(col1#197)#204, finalmerge_avg(merge sum#238, count#239L) AS avg(col1#197)#206, finalmerge_min(merge min#241L) AS min(col2#198L)#208L, finalmerge_max(merge max#243L) AS max(col2#198L)#210L, finalmerge_avg(merge sum#246, count#247L) AS avg(col2#198L)#212])
      +- Exchange SinglePartition, [id=#270]
         +- *(1) HashAggregate(keys=[], functions=[partial_min(col1#197) AS min#233, partial_max(col1#197) AS max#235, partial_avg(col1#197) AS (sum#238, count#239L), partial_min(col2#198L) AS min#241L, partial_max(col2#198L) AS max#243L, partial_avg(col2#198L) AS (sum#246, count#247L)])
            +- *(1) Project [col1#197, col2#198L]
               +- *(1) Scan ExistingRDD[index#196L,col1#197,col2#198L]
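A plan like this can be printed with explain(), for example:

# print the physical plan of the cross-join query to stdout
df.crossJoin(broadcast(stats_df)).explain()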

Here we see a BroadcastExchange of a SinglePartition broadcasting a single row, since stats_df fits into a SinglePartition. So the data being shuffled here is only one row, the minimum possible.
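For completeness, the single-partition window the question asks about can also be written with an empty partitionBy(), and the per-column statistics generated dynamically with a comprehension. A minimal sketch reusing the df from above (Spark will log a warning that all data is moved to a single partition):

import pyspark.sql.functions as F
from pyspark.sql import Window

# a window without partitioning or ordering: the whole dataframe is one partition
w = Window.partitionBy()

# build colx_min / colx_max / colx_mean dynamically for every "col*" column
stat_cols = [f(c).over(w).alias(c + "_" + name)
             for c in df.columns if c.startswith("col")
             for name, f in [("min", F.min), ("max", F.max), ("mean", F.mean)]]

df.select("*", *stat_cols).show()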

