PySpark: accessing and exploding nested items of a JSON

Problem description

I am very new to Spark and I am trying to parse a JSON file containing data to aggregate, but I can't navigate its contents. I looked at other solutions, but I couldn't find anything that worked for me.

Here is the schema of the imported JSON dataframe:

root
  |-- UrbanDataset: struct (nullable = true)
  |    |-- context: struct (nullable = true)
  |    |    |-- coordinates: struct (nullable = true)
  |    |    |    |-- format: string (nullable = true)
  |    |    |    |-- height: long (nullable = true)
  |    |    |    |-- latitude: double (nullable = true)
  |    |    |    |-- longitude: double (nullable = true)
  |    |    |-- language: string (nullable = true)
  |    |    |-- producer: struct (nullable = true)
  |    |    |    |-- id: string (nullable = true)
  |    |    |    |-- schemeID: string (nullable = true)
  |    |    |-- timeZone: string (nullable = true)
  |    |    |-- timestamp: string (nullable = true)
  |    |-- specification: struct (nullable = true)
  |    |    |-- id: struct (nullable = true)
  |    |    |    |-- schemeID: string (nullable = true)
  |    |    |    |-- value: string (nullable = true)
  |    |    |-- name: string (nullable = true)
  |    |    |-- properties: struct (nullable = true)
  |    |    |    |-- propertyDefinition: array (nullable = true)
  |    |    |    |    |-- element: struct (containsNull = true)
  |    |    |    |    |    |-- codeList: string (nullable = true)
  |    |    |    |    |    |-- dataType: string (nullable = true)
  |    |    |    |    |    |-- propertyDescription: string (nullable = true)
  |    |    |    |    |    |-- propertyName: string (nullable = true)
  |    |    |    |    |    |-- subProperties: struct (nullable = true)
  |    |    |    |    |    |    |-- propertyName: array (nullable = true)
  |    |    |    |    |    |    |    |-- element: string (containsNull = true)
  |    |    |    |    |    |-- unitOfMeasure: string (nullable = true)
  |    |    |-- uri: string (nullable = true)
  |    |    |-- version: string (nullable = true)
  |    |-- values: struct (nullable = true)
  |    |    |-- line: array (nullable = true)
  |    |    |    |-- element: struct (containsNull = true)
  |    |    |    |    |-- coordinates: struct (nullable = true)
  |    |    |    |    |    |-- format: string (nullable = true)
  |    |    |    |    |    |-- height: double (nullable = true)
  |    |    |    |    |    |-- latitude: double (nullable = true)
  |    |    |    |    |    |-- longitude: double (nullable = true)
  |    |    |    |    |-- id: long (nullable = true)
  |    |    |    |    |-- period: struct (nullable = true)
  |    |    |    |    |    |-- end_ts: string (nullable = true)
  |    |    |    |    |    |-- start_ts: string (nullable = true)
  |    |    |    |    |-- property: array (nullable = true)
  |    |    |    |    |    |-- element: struct (containsNull = true)
  |    |    |    |    |    |    |-- name: string (nullable = true)
  |    |    |    |    |    |    |-- val: string (nullable = true)

A subset of the whole JSON is attached here.

My goal is to retrieve the structure from this schema and manipulate/aggregate all the val fields located at line.element.property.element.val.

I also tried exploding it to get every field in "csv-style" columns, but I got an error:

pyspark.sql.utils.AnalysisException: u"cannot resolve 'array(UrbanDataset.context, UrbanDataset.specification, UrbanDataset.values)' due to data type mismatch: input to function array should all be the same type"

import pyspark.sql.functions as psf
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.format('json').load('data1.json')
# Fails: psf.array() requires all of its inputs to have the same type,
# but context, specification and values are differently-typed structs
df.select(psf.explode(psf.array("UrbanDataset.*"))).show()

Thanks

Tags: python, json, pyspark

Solution


You cannot access nested arrays directly; you need to use explode first. It creates one row for each element of the array. The psf.array attempt above fails because context, specification and values are structs of different types, so they cannot be combined into a single array.

from pyspark.sql import functions as F

# one row per element of the UrbanDataset.values.line array
df.withColumn("line", F.explode("UrbanDataset.values.line"))
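
Both line and line.property are arrays, so reaching the val fields takes two explode steps. A minimal sketch of the full chain, assuming the data1.json file from the question and that val holds numeric strings (the cast below exists only for the sake of the aggregation example):

from pyspark.sql import functions as F

df = spark.read.format('json').load('data1.json')

# 1) one row per measurement line
lines = df.select(F.explode("UrbanDataset.values.line").alias("line"))

# 2) one row per (name, val) property of each line
props = lines.select(
    "line.id",
    F.explode("line.property").alias("property"),
)

# 3) flatten the property struct into plain "csv-style" columns
flat = props.select(
    "id",
    F.col("property.name").alias("name"),
    F.col("property.val").cast("double").alias("val"),
)

# 4) aggregate the vals, e.g. the average per property name
flat.groupBy("name").agg(F.avg("val").alias("avg_val")).show()

Each explode multiplies the number of rows, so if the file is large, apply any filters on line-level fields before the second explode.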
