How to count unique values over time in sparklyr (example given)?

Problem description

I am trying to count the unique devices seen over the past 10 minutes, based on a timestamp rounded to the minute. I can do this in data.table, but I don't know how to replicate it with sparklyr in R. The 540 is the number of seconds added to the current timestamp.

An example is provided below to explain my problem.

Given data

library(data.table)

df <- data.frame(device_subscriber_id = c("x", "a", "z", "x", "a", "z", "x", "y", "a", "z"),
                 start_timestamp = c("2020-12-11 14:21:00", "2020-12-11 14:22:00", "2020-12-11 14:23:00",
                                     "2020-12-11 14:26:00", "2020-12-11 14:24:00", "2020-12-11 14:25:00",
                                     "2020-12-11 14:26:00", "2020-12-11 14:28:00", "2020-12-11 14:31:00", "2020-12-11 14:38:00"))

df$start_timestamp <- as.POSIXct(df$start_timestamp, format = "%Y-%m-%d %H:%M:%S")
dt <- setDT(df)

Expected data

# Non-equi self-join: pair every timestamp t with all rows in [t - 540, t],
# then count the distinct devices per original timestamp.
expected_dt <- dt[dt[, .(start_timestamp3 = start_timestamp,
                         start_timestamp2 = start_timestamp - 540,
                         device_subscriber_id)],
                  on = .(start_timestamp >= start_timestamp2,
                         start_timestamp <= start_timestamp3),
                  allow.cartesian = TRUE][, .(unique_devices_seen = uniqueN(device_subscriber_id)),
                                          by = .(start_timestamp = start_timestamp + 540)]

expected_dt

   start_timestamp     unique_devices_seen
   2020-12-11 14:21:00                   1
   2020-12-11 14:22:00                   2
   2020-12-11 14:23:00                   3
   2020-12-11 14:26:00                   3
   2020-12-11 14:24:00                   3
   2020-12-11 14:25:00                   3
   2020-12-11 14:28:00                   4
   2020-12-11 14:31:00                   4
   2020-12-11 14:38:00                   2
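
For reference, the target semantics can be sanity-checked with a minimal base-R sketch (rolling_unique is a hypothetical helper, not part of the question): for each row it counts the distinct devices whose timestamps fall within the preceding 540 seconds, and it reproduces the counts above.

# Minimal base-R sketch of the intended semantics (hypothetical helper):
# for each timestamp t, count the distinct devices seen in [t - 540, t].
rolling_unique <- function(ids, ts, window_sec = 540) {
  sapply(seq_along(ts), function(i) {
    in_window <- ts >= ts[i] - window_sec & ts <= ts[i]
    length(unique(ids[in_window]))
  })
}

rolling_unique(df$device_subscriber_id, df$start_timestamp)
# [1] 1 2 3 3 3 3 3 4 4 2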

Tags: r, apache-spark, apache-spark-sql, data.table, sparklyr

Solution


I suggest using a SQL window function with OVER, ranging between the current row and the preceding 540 seconds. count(distinct device_subscriber_id) fails with Error: org.apache.spark.sql.AnalysisException: Distinct window functions are not supported. A workaround is to collect the set of unique IDs and return the size of that array. The timestamps are converted to epoch seconds so that the range can be expressed in seconds.

library(sparklyr)
library(tidyverse)

sc <- spark_connect(master = "local[4]", version = "3.0.1")

sdf <- copy_to(sc, df, name = "df", overwrite = TRUE)

# collect_set() gathers the distinct device IDs inside the window;
# size() returns the length of that array.
sdf_sql(sc, "
SELECT 
  start_timestamp,
  size(collect_set(device_subscriber_id) 
       OVER (ORDER BY start_ts_epoch ASC 
             RANGE BETWEEN 540 PRECEDING AND CURRENT ROW)) as unique_devices_seen
FROM (SELECT *, unix_timestamp(start_timestamp) as start_ts_epoch FROM `df`)")

Result:

# Source: spark<?> [?? x 2]
   start_timestamp     unique_devices_seen
   <dttm>                            <int>
 1 2020-12-11 13:21:00                   1
 2 2020-12-11 13:22:00                   2
 3 2020-12-11 13:23:00                   3
 4 2020-12-11 13:24:00                   3
 5 2020-12-11 13:25:00                   3
 6 2020-12-11 13:26:00                   3
 7 2020-12-11 13:26:00                   3
 8 2020-12-11 13:28:00                   4
 9 2020-12-11 13:31:00                   4
10 2020-12-11 13:38:00                   2
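
Note that the printed timestamps read 13:xx while the input was 14:xx. This is a time-zone rendering artifact (the epoch conversion and display go through the Spark session time zone) and does not affect the counts. As a sketch, assuming you want the two to agree, the session time zone can be pinned when connecting (UTC here is an arbitrary example choice):

# Sketch: pin the Spark session time zone so unix_timestamp() and the
# displayed timestamps agree; "UTC" is an arbitrary example choice.
conf <- spark_config()
conf$spark.sql.session.timeZone <- "UTC"
sc <- spark_connect(master = "local[4]", version = "3.0.1", config = conf)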

Reference: Spark SQL window function API


Bonus: if you also need the missing timestamps, you have to join the device data with a table containing all possible timestamps. The missing timestamps get a null device ID, which does not add to the count.

# One row per minute between the first and last observed timestamp
df_ts <- data.frame(start_timestamp = seq(min(df$start_timestamp), max(df$start_timestamp), by = "min"))
sdf_ts <- copy_to(sc, df_ts, name = "df_ts", overwrite = TRUE)

sdf_sql(sc, "
SELECT DISTINCT
  start_timestamp
  , size(collect_set(device_subscriber_id) 
         OVER (ORDER BY start_ts_epoch ASC 
               RANGE BETWEEN 540 PRECEDING AND CURRENT ROW)) as unique_devices_seen
  , concat_ws(',', collect_set(device_subscriber_id)
                   OVER (ORDER BY start_ts_epoch ASC 
                   RANGE BETWEEN 540 PRECEDING AND CURRENT ROW)) as unique_devices_seen_csv
FROM (SELECT 
        device_subscriber_id
        , df_ts.start_timestamp
        , unix_timestamp(df_ts.start_timestamp) as start_ts_epoch
      FROM df
      FULL JOIN df_ts ON (df.start_timestamp = df_ts.start_timestamp))") %>% print(n=30)

Note that I added unique_devices_seen_csv to show what happens behind the scenes: it concatenates the device IDs seen in the sliding window.

Result:

# Source: spark<?> [?? x 3]
   start_timestamp     unique_devices_seen unique_devices_seen_csv
   <dttm>                            <int> <chr>                  
 1 2020-12-11 13:21:00                   1 x                      
 2 2020-12-11 13:22:00                   2 x,a                    
 3 2020-12-11 13:23:00                   3 z,x,a                  
 4 2020-12-11 13:24:00                   3 z,x,a                  
 5 2020-12-11 13:25:00                   3 z,x,a                  
 6 2020-12-11 13:26:00                   3 z,x,a                  
 7 2020-12-11 13:27:00                   3 z,x,a                  
 8 2020-12-11 13:28:00                   4 z,y,x,a                
 9 2020-12-11 13:29:00                   4 z,y,x,a                
10 2020-12-11 13:30:00                   4 z,y,x,a                
11 2020-12-11 13:31:00                   4 z,y,x,a                
12 2020-12-11 13:32:00                   4 z,y,x,a                
13 2020-12-11 13:33:00                   4 z,y,x,a                
14 2020-12-11 13:34:00                   4 z,y,x,a                
15 2020-12-11 13:35:00                   3 y,x,a                  
16 2020-12-11 13:36:00                   2 y,a                    
17 2020-12-11 13:37:00                   2 y,a                    
18 2020-12-11 13:38:00                   2 z,a
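
The example data is already minute-aligned. If your raw timestamps are not, here is a sketch of rounding them down in Spark SQL before windowing (date_trunc is standard Spark SQL; the alias is illustrative):

# Sketch: truncate raw timestamps to the minute before computing windows.
sdf_sql(sc, "
SELECT date_trunc('MINUTE', start_timestamp) AS start_timestamp_minute,
       device_subscriber_id
FROM df")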
