Using pyspark, how can I add a column to a DataFrame as a key/value map of multiple known columns in the same DataFrame, excluding nulls?

Problem description

Given the following example:

d = [{'asset': '2', 'ts': 6,  'B':'123','C':'234'}, 
     {'asset': '1', 'ts': 5, 'C.1':'999', 'B':'888','F':'999'}]
df = spark.createDataFrame(d)
df.show(truncate=False)

+---+----+-----+---+----+----+
|B  |C   |asset|ts |C.1 |F   |
+---+----+-----+---+----+----+
|123|234 |2    |6  |null|null|
|888|null|1    |5  |999 |999 |
+---+----+-----+---+----+----+

I want to produce the following output:

+-----+---+--------------------------------+
|asset|ts |signals                         |
+-----+---+--------------------------------+
|2    |6  |[B -> 123, C -> 234]            |
|1    |5  |[B -> 888, C.1 -> 999, F -> 999]|
+-----+---+--------------------------------+

I tried the following:

from itertools import chain
from pyspark.sql.functions import *
all_signals=['B','C','C.1','F']
key_values = create_map(*(chain(*[(lit(name), col("`"+name+"`"))
                                  for name in all_signals])))

new_df = df.withColumn('signals',key_values).drop(*all_signals).show(truncate=False)

+-----+---+--------------------------------------+
|asset|ts |signals                               |
+-----+---+--------------------------------------+
|2    |6  |[B -> 123, C -> 234, C.1 ->, F ->]    |
|1    |5  |[B -> 888, C ->, C.1 -> 999, F -> 999]|
+-----+---+--------------------------------------+

But I don't want keys whose values are null, so I tried several ways to exclude null/None (an `if` condition, `when`/`otherwise`), and none of them worked. Here is one attempt:

key_values = create_map(*(chain(*[(lit(name), col("`"+name+"`")) 
                                  for name in all_signals 
                                  if col("`"+name+"`").isNotNull()])))
new_df = df.withColumn('signals',key_values).drop(*all_signals).show(truncate=False)


ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
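The error comes from the `if` in the list comprehension: it is evaluated by Python on the driver while the expression is being built, not per row, so `col(...).isNotNull()` is just an unresolved Column object that Python cannot turn into True or False. A minimal illustration (the column name is only for demonstration):

from pyspark.sql import functions as F

cond = F.col("`C`").isNotNull()
print(type(cond))   # <class 'pyspark.sql.column.Column'>, not a Python bool

# Forcing it into a Python boolean context raises exactly the error above:
# bool(cond)  ->  ValueError: Cannot convert column into bool: ...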

I got it working with a roundabout approach that I am not happy with (to_json drops null fields, so reading the JSON back as a map keeps only the non-null keys):

new_df= df.withColumn("signals", from_json(
                       to_json(struct(["`"+x+"`" for x in all_signals])),"MAP<STRING,STRING>"))
                      
new_df = new_df.drop(*all_signals)
new_df.show(truncate=False)

+-----+---+--------------------------------+
|asset|ts |signals                         |
+-----+---+--------------------------------+
|2    |6  |[B -> 123, C -> 234]            |
|1    |5  |[B -> 888, C.1 -> 999, F -> 999]|
+-----+---+--------------------------------+

But there must be a way to exclude nulls without going to JSON and back!

Tags: pyspark, apache-spark-sql, pyspark-dataframes

Solution


No UDF is needed: use the higher-order function filter together with arrays_zip and map_from_entries to get the output you want (Spark 2.4+).

from pyspark.sql import functions as F

all_signals=['B','C','C.1','F']

df.withColumn("all_one", F.array(*[F.lit(x) for x in all_signals]))\
  .withColumn("all_two", F.array(*["`"+x+"`" for x in all_signals]))\
  .withColumn("signals", F.expr("""map_from_entries(filter(arrays_zip(all_one,all_two),x-> x.all_two is not null))"""))\
  .drop("all_one","all_two").show(truncate=False)

#+---+----+-----+---+----+----+--------------------------------+
#|B  |C   |asset|ts |C.1 |F   |signals                         |
#+---+----+-----+---+----+----+--------------------------------+
#|123|234 |2    |6  |null|null|[B -> 123, C -> 234]            |
#|888|null|1    |5  |999 |999 |[B -> 888, C.1 -> 999, F -> 999]|
#+---+----+-----+---+----+----+--------------------------------+
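A possible alternative, not part of the answer above but a sketch that assumes Spark 3.1+ (where map_filter is exposed in the Python API; on Spark 3.0 the same call can be written with F.expr), is to keep the original create_map and simply strip the null-valued entries:

from itertools import chain
from pyspark.sql import functions as F

all_signals = ['B', 'C', 'C.1', 'F']

# Build the full map first (null values included), then drop the
# null-valued entries with the higher-order function map_filter.
full_map = F.create_map(*chain(*[(F.lit(c), F.col("`" + c + "`"))
                                 for c in all_signals]))

df.withColumn("signals", F.map_filter(full_map, lambda k, v: v.isNotNull()))\
  .drop(*all_signals)\
  .show(truncate=False)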
