Adding a nullable column in PySpark dataframe

Problem description

In Spark, literal columns, when added, are not nullable:

from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(1,)], ['c1'])

df = df.withColumn('c2', F.lit('a'))

df.printSchema()
#  root
#   |-- c1: long (nullable = true)
#   |-- c2: string (nullable = false)

How to create a nullable column?

Tags: python, apache-spark, pyspark, apache-spark-sql, nullable

Solution


The shortest method I've found is to use `when` with no `otherwise` clause. A column built from `when` is always marked nullable, because rows matching no condition would become null; here the condition is a trivially true literal, so every row still gets the value:

df = df.withColumn('c2', F.when(F.lit(1).isNotNull(), F.lit('a')))

Full test result:

from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(1,)], ['c1'])
df = df.withColumn('c2', F.when(F.lit(1).isNotNull(), F.lit('a')))

df.show()
#  +---+---+
#  | c1| c2|
#  +---+---+
#  |  1|  a|
#  +---+---+

df.printSchema()
#  root
#   |-- c1: long (nullable = true)
#   |-- c2: string (nullable = true)
