Converting a vector column to a Double array column in Scala Spark

Problem Description

I have a dataframe, doubleSeq, whose structure is as follows:

res274: org.apache.spark.sql.DataFrame = [finalFeatures: vector]

The first record of that column looks like this:

res281: org.apache.spark.sql.Row = [[3.0,6.0,-0.7876947819954485,-0.21757635218517163,0.9731844373162398,-0.6641741696340383,-0.6860072219935377,-0.2990737363481845,-0.7075863760365155,0.8188108975549018,-0.8468559840943759,-0.04349947247406488,-0.45236764452589984,1.0333959313820456,0.6097566070878347,-0.7106619551471779,-0.7750330808435969,-0.08097610412658443,-0.45338437108038904,-0.2952869863393396,-0.30959772365257004,0.6988768123463287,0.17049117199049213,3.2674649019757385,-0.8333373234944124,1.8462942520757128,-0.49441222531240125,-0.44187299748074166,-0.300810826687287]]

I want to extract the array of doubles

[3.0,6.0,-0.7876947819954485,-0.21757635218517163,0.9731844373162398,-0.6641741696340383,-0.6860072219935377,-0.2990737363481845,-0.7075863760365155,0.8188108975549018,-0.8468559840943759,-0.04349947247406488,-0.45236764452589984,1.0333959313820456,0.6097566070878347,-0.7106619551471779,-0.7750330808435969,-0.08097610412658443,-0.45338437108038904,-0.2952869863393396,-0.30959772365257004,0.6988768123463287,0.17049117199049213,3.2674649019757385,-0.8333373234944124,1.8462942520757128,-0.49441222531240125,-0.44187299748074166,-0.300810826687287]

from this -

doubleSeq.head(1)(0)(0)

Any = [3.0,6.0,-0.7876947819954485,-0.21757635218517163,0.9731844373162398,-0.6641741696340383,-0.6860072219935377,-0.2990737363481845,-0.7075863760365155,0.8188108975549018,-0.8468559840943759,-0.04349947247406488,-0.45236764452589984,1.0333959313820456,0.6097566070878347,-0.7106619551471779,-0.7750330808435969,-0.08097610412658443,-0.45338437108038904,-0.2952869863393396,-0.30959772365257004,0.6988768123463287,0.17049117199049213,3.2674649019757385,-0.8333373234944124,1.8462942520757128,-0.49441222531240125,-0.44187299748074166,-0.300810826687287]

This does not solve my problem.

Scala Spark - split vector column into separate columns in a Spark DataFrame

does not solve my problem, but it is a pointer.

Tags: scala, apache-spark

Solution

So you want to extract a Vector from a Row and turn it into an array of doubles.

The problem with your code is that the get method (and the implicit apply method you are using) returns an object of type Any. Indeed, a Row is a generic, unparameterized container, so there is no way to know at compile time what types it holds. It is a bit like Lists in Java 1.4 and before. To solve this in Spark, you can use the getAs method, which you can parameterize with a type of your choosing.
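To make the typing issue concrete, here is a minimal sketch (assuming a dataframe df with the vector column from the question):

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row

val row: Row = df.head          // first row of the dataframe
val untyped: Any = row.get(0)   // get returns Any: the static type is lost
// val v: Vector = row.get(0)   // would not compile: Any is not a Vector
val typed: Vector = row.getAs[Vector](0)  // getAs restores the static type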

In your case, it appears you have a dataframe containing a vector (org.apache.spark.ml.linalg.Vector).

import org.apache.spark.ml.linalg._
val firstRow = df.head(1)(0) // or simply df.head
val vect: Vector = firstRow.getAs[Vector](0)
// or all in one: df.head.getAs[Vector](0)

// to transform it into a regular array:
val array: Array[Double] = vect.toArray

Note also that you can access columns by name, like this:

val vect: Vector = firstRow.getAs[Vector]("finalFeatures")
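If you need the conversion for every row rather than just the first one, a common approach (a sketch, not part of the original answer) is to wrap toArray in a UDF; the column name finalFeatures is taken from the question:

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// maps the vector column to an array<double> column across the whole dataframe
val vecToArray = udf((v: Vector) => v.toArray)
val withArray = df.withColumn("finalFeaturesArray", vecToArray(col("finalFeatures")))

On Spark 3.0 and later, the built-in org.apache.spark.ml.functions.vector_to_array does the same job without a hand-written UDF.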
