Parse a list of JSON strings with a PySpark DataFrame

Problem description

I am trying to read a list of JSON strings into a PySpark DataFrame. My input data is below; the goal is a DataFrame with two columns: user (string) and ips (array[string]).

sampleJson = [ ('{"user":100, "ips" : ["191.168.192.101", "191.168.192.103", "191.168.192.96", "191.168.192.99"]}',), ('{"user":101, "ips" : ["191.168.192.102", "191.168.192.105", "191.168.192.103", "191.168.192.107"]}',), ('{"user":102, "ips" : ["191.168.192.105", "191.168.192.101", "191.168.192.105", "191.168.192.107"]}',), ('{"user":103, "ips" : ["191.168.192.96", "191.168.192.100", "191.168.192.107", "191.168.192.101"]}',), ('{"user":104, "ips" : ["191.168.192.99", "191.168.192.99", "191.168.192.102", "191.168.192.99"]}',), ('{"user":105, "ips" : ["191.168.192.99", "191.168.192.99", "191.168.192.100", "191.168.192.96"]}',), ]

Thank you for your help.

Tags: python, json, dataframe, apache-spark, pyspark

Solution


Use the from_json function with a defined schema.

Example:

from pyspark.sql.functions import col, from_json
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

sampleJson = [
    ('{"user":100, "ips" : ["191.168.192.101", "191.168.192.103", "191.168.192.96", "191.168.192.99"]}',),
    ('{"user":101, "ips" : ["191.168.192.102", "191.168.192.105", "191.168.192.103", "191.168.192.107"]}',),
    ('{"user":102, "ips" : ["191.168.192.105", "191.168.192.101", "191.168.192.105", "191.168.192.107"]}',),
    ('{"user":103, "ips" : ["191.168.192.96", "191.168.192.100", "191.168.192.107", "191.168.192.101"]}',),
    ('{"user":104, "ips" : ["191.168.192.99", "191.168.192.99", "191.168.192.102", "191.168.192.99"]}',),
    ('{"user":105, "ips" : ["191.168.192.99", "191.168.192.99", "191.168.192.100", "191.168.192.96"]}',),
]

# Each tuple becomes one row; the JSON string ends up in the default column "_1".
df1 = spark.createDataFrame(sampleJson)

# Schema of the JSON payload: user as a string, ips as an array of strings.
sch = StructType([StructField('user', StringType(), False), StructField('ips', ArrayType(StringType()))])

# Parse the JSON column and flatten the resulting struct into top-level columns.
df1.withColumn("n", from_json(col("_1"), sch)).select("n.*").show(10, False)
#+----+--------------------------------------------------------------------+
#|user|ips                                                                 |
#+----+--------------------------------------------------------------------+
#|100 |[191.168.192.101, 191.168.192.103, 191.168.192.96, 191.168.192.99]  |
#|101 |[191.168.192.102, 191.168.192.105, 191.168.192.103, 191.168.192.107]|
#|102 |[191.168.192.105, 191.168.192.101, 191.168.192.105, 191.168.192.107]|
#|103 |[191.168.192.96, 191.168.192.100, 191.168.192.107, 191.168.192.101] |
#|104 |[191.168.192.99, 191.168.192.99, 191.168.192.102, 191.168.192.99]   |
#|105 |[191.168.192.99, 191.168.192.99, 191.168.192.100, 191.168.192.96]   |
#+----+--------------------------------------------------------------------+


# Schema of the parsed columns:
df1.withColumn("n", from_json(col("_1"), sch)).select("n.*").printSchema()
#root
# |-- user: string (nullable = true)
# |-- ips: array (nullable = true)
# |    |-- element: string (containsNull = true)
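
For reference, from_json also accepts a DDL-formatted schema string, so the same result can be obtained without building the StructType by hand. This is only a minimal sketch of that variant (assuming Spark 2.4+ and the df1 created above):

# Same parse as above, with the schema expressed as a DDL string instead of a StructType.
df1.withColumn("n", from_json(col("_1"), "user string, ips array<string>")) \
    .select("n.*") \
    .show(10, False)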
