PySpark SQL overwrite returns an empty table

Problem description

I am migrating some data in a table and trying to change the values of the "date" column, but it seems like PySpark erases the data as it reads it.

I am performing the following steps (code below):

1. Read the table into a DataFrame over JDBC
2. Replace every "2019" in the "date" column with "YEAR"
3. Overwrite the table with the transformed DataFrame

When I check the data after these steps, my table is empty.

Here is my code:

table = "MY_TABLE" 

data_input = sqlContext.read.format("jdbc").options(url=JDBCURL, dbtable=table).load()
print("data_input.count()=", data_input.count())
print("'2019' in data_input:", data_input.where(col("date").contains("2019")).count())
print("'YEAR' in data_input:", data_input.where(col("date").contains("YEAR")).count())
# data_input.count()= 1000
# '2019' in data_input: 1000
# 'YEAR' in data_input: 0

data_output = data_input.withColumn("date", F.regexp_replace("date", "2019", "YEAR"))
print("data_output.count()=", data_output.count())
print("'2019' in data_output:", data_output.where(col("date").contains("2019")).count())
print("'YEAR' in data_output:", data_output.where(col("date").contains("YEAR")).count())
# data_output.count()= 1000
# '2019' in data_output: 0
# 'YEAR' in data_output: 1000

So far so good. Now let's overwrite the table:

from pyspark.sql import DataFrameWriter

# Overwrite MY_TABLE with the transformed rows
df_writer = DataFrameWriter(data_output)
df_writer.jdbc(url=JDBCURL, table=table, mode="overwrite")

# Let's check the data now
print("data_input.count()=", data_input.count())
print("'2019' in data_input:", data_input.where(col("date").contains("2019")).count())
print("'YEAR' in data_input:", data_input.where(col("date").contains("YEAR")).count())
# data_input.count()= 0
# '2019' in data_input: 0
# 'YEAR' in data_input: 0
# huh, weird

print("data_output.count()=", data_output.count())
print("'2019' in data_output:", data_output.where(col("date").contains("2019")).count())
print("'YEAR' in data_output:", data_output.where(col("date").contains("YEAR")).count())
# data_output.count()= 0
# '2019' in data_output: 0
# 'YEAR' in data_output: 0
# Still weird

The query SELECT * FROM MY_TABLE returns 0 rows.

Why does [Py]Spark do this? How can I change this behavior? Caching? Where is this explained in the documentation?

Tags: python, pyspark

Solution


I found a workaround by "caching" the DataFrame through Pandas. Spark evaluates DataFrames lazily: with mode="overwrite", the JDBC writer empties the target table before it pulls the rows from the source, and here the source is that same table, so by the time the rows are read there is nothing left to write. Collecting the data first materializes it before the table is touched:

# Materialize the transformed rows on the driver, then rebuild a Spark
# DataFrame from them so the write no longer reads from MY_TABLE itself.
data_pandas = data_output.toPandas()
data_spark = spark.createDataFrame(data_pandas)
data_spark.write.jdbc(url=JDBCURL, table=table, mode="overwrite")
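
If the table is too large to collect on the driver, an alternative is to stage the transformed rows outside the source table before overwriting it. A minimal sketch, assuming a writable staging path (the /tmp location below is a placeholder, not from the original post):

# Stage the transformed rows somewhere other than MY_TABLE, so the
# JDBC overwrite no longer reads from the table it is emptying.
tmp_path = "/tmp/my_table_staging"  # hypothetical staging location
data_output.write.mode("overwrite").parquet(tmp_path)

# Read the staged copy back and overwrite the original table from it.
staged = spark.read.parquet(tmp_path)
staged.write.jdbc(url=JDBCURL, table=table, mode="overwrite")

Note that a plain data_output.cache() followed by an action may also appear to work, but a cache is a hint rather than a guarantee: evicted or lost blocks are recomputed from the source, which by then is already empty.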
