Dot product of DataFrame rows with a fixed vector in Spark

Problem description

I have a DataFrame (df1) in Spark with m rows and n columns. I have another DataFrame (df2) with 1 row and n columns. How can I efficiently compute the dot product of each row of df1 with the single row of df2?

Tags: apache-spark, pyspark

Solution


We can use VectorAssembler to compute the dot product.

  1. Create the example DataFrames:
from pyspark.ml.linalg import Vectors
from pyspark.sql import functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.types import FloatType

# df1: m rows with feature columns v1, v2, v3; df2: a single row with the same columns
v = [('a', 1, 2, 3),
     ('b', 4, 5, 6),
     ('c', 9, 8, 7)]
df1 = spark.createDataFrame(v, ['id', 'v1', 'v2', 'v3'])
df2 = spark.createDataFrame([('d', 3, 2, 1)], ['id', 'v1', 'v2', 'v3'])
df1.show()
df2.show()

They look like this:

+---+---+---+---+
| id| v1| v2| v3|
+---+---+---+---+
|  a|  1|  2|  3|
|  b|  4|  5|  6|
|  c|  9|  8|  7|
+---+---+---+---+

+---+---+---+---+
| id| v1| v2| v3|
+---+---+---+---+
|  d|  3|  2|  1|
+---+---+---+---+

  2. Use VectorAssembler to combine the columns into a single vector column:
# Assemble v1, v2, v3 into one vector column named 'values'
vecAssembler = VectorAssembler(inputCols=["v1", "v2", "v3"], outputCol="values")
dfv1 = vecAssembler.transform(df1)
dfv2 = vecAssembler.transform(df2)
dfv1.show()
dfv2.show()

Now they look like this:

+---+---+---+---+-------------+
| id| v1| v2| v3|       values|
+---+---+---+---+-------------+
|  a|  1|  2|  3|[1.0,2.0,3.0]|
|  b|  4|  5|  6|[4.0,5.0,6.0]|
|  c|  9|  8|  7|[9.0,8.0,7.0]|
+---+---+---+---+-------------+

+---+---+---+---+-------------+
| id| v1| v2| v3|       values|
+---+---+---+---+-------------+
|  d|  3|  2|  1|[3.0,2.0,1.0]|
+---+---+---+---+-------------+
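
Optionally, you can confirm that the new values column really is a vector column. A quick check on the same dfv1 as above:

dfv1.printSchema()  # 'values' should appear with type 'vector'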

  3. Define a UDF to compute the dot product:
# Get the fixed vector from DataFrame dfv2
vm = Vectors.dense(dfv2.take(1)[0]['values'])

# UDF that takes each row's vector and returns its dot product with the fixed vector vm
dot_prod_udf = F.udf(lambda v: float(v.dot(vm)), FloatType())
dfv1 = dfv1.withColumn('dot_prod', dot_prod_udf('values'))

dfv1.show()

The final result is:

+---+---+---+---+-------------+--------+
| id| v1| v2| v3|       values|dot_prod|
+---+---+---+---+-------------+--------+
|  a|  1|  2|  3|[1.0,2.0,3.0]|    10.0|
|  b|  4|  5|  6|[4.0,5.0,6.0]|    28.0|
|  c|  9|  8|  7|[9.0,8.0,7.0]|    50.0|
+---+---+---+---+-------------+--------+
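
If you would rather avoid the Python UDF (and the VectorAssembler step entirely), the same dot product can be expressed with built-in column arithmetic, which Spark can optimize better. This is a minimal sketch, assuming the same df1 and df2 defined above:

# Multiply each column of df1 by the matching value from df2's single row, then sum
cols = ['v1', 'v2', 'v3']
row2 = df2.first()
dot_expr = sum(F.col(c) * float(row2[c]) for c in cols)
df1.withColumn('dot_prod', dot_expr).show()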

