AWS Glue job with bookmarks enabled fails with "Datasource does not support writing empty or nested empty schemas"

Problem

I have a Python 3 AWS Glue (version 1.0) job with job bookmarks enabled. The job converts a JSON data source into Parquet files in an S3 bucket. It runs perfectly the first time, or whenever I reset the bookmark.

However, subsequent runs fail with the following error:

AnalysisException: '\nDatasource does not support writing empty or nested empty schemas.\nPlease make sure the data schema has at least one or more column(s).\n ;'

The script was generated by the AWS console and has not been modified. The source is JSON files in an S3 bucket registered in the Data Catalog, and the output is written to another bucket.

    import sys
    from awsglue.transforms import *
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job

    ## @params: [JOB_NAME]
    args = getResolvedOptions(sys.argv, ['JOB_NAME'])

    sc = SparkContext()
    glueContext = GlueContext(sc)
    spark = glueContext.spark_session
    job = Job(glueContext)
    job.init(args['JOB_NAME'], args)
    ## @type: DataSource
    ## @args: [database = "segment", table_name = "segment_zlw54zvojf", transformation_ctx = "datasource0"]
    ## @return: datasource0
    ## @inputs: []
    datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "segment", table_name = "segment_zlw54zvojf", transformation_ctx = "datasource0")
    ## @type: ApplyMapping
    ## @args: [mapping = [("channel", "string", "channel", "string"), ("context", "struct", "context", "struct"), ("event", "string", "event", "string"), ("integrations", "struct", "integrations", "struct"), ("messageid", "string", "messageid", "string"), ("projectid", "string", "projectid", "string"), ("properties", "struct", "properties", "struct"), ("receivedat", "string", "receivedat", "string"), ("timestamp", "string", "timestamp", "string"), ("type", "string", "type", "string"), ("userid", "string", "userid", "string"), ("version", "int", "version", "int"), ("anonymousid", "string", "anonymousid", "string"), ("partition_0", "string", "partition_0", "string")], transformation_ctx = "applymapping1"]
    ## @return: applymapping1
    ## @inputs: [frame = datasource0]
    applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("channel", "string", "channel", "string"), ("context", "struct", "context", "struct"), ("event", "string", "event", "string"), ("integrations", "struct", "integrations", "struct"), ("messageid", "string", "messageid", "string"), ("projectid", "string", "projectid", "string"), ("properties", "struct", "properties", "struct"), ("receivedat", "string", "receivedat", "string"), ("timestamp", "string", "timestamp", "string"), ("type", "string", "type", "string"), ("userid", "string", "userid", "string"), ("version", "int", "version", "int"), ("anonymousid", "string", "anonymousid", "string"), ("partition_0", "string", "partition_0", "string")], transformation_ctx = "applymapping1")
    ## @type: ResolveChoice
    ## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
    ## @return: resolvechoice2
    ## @inputs: [frame = applymapping1]
    resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
    ## @type: DropNullFields
    ## @args: [transformation_ctx = "dropnullfields3"]
    ## @return: dropnullfields3
    ## @inputs: [frame = resolvechoice2]
    dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
    ## @type: DataSink
    ## @args: [connection_type = "s3", connection_options = {"path": "s3://mydestination.datalake.raw/segment/iterable"}, format = "parquet", transformation_ctx = "datasink4"]
    ## @return: datasink4
    ## @inputs: [frame = dropnullfields3]
    datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://mydestination.datalake.raw/segment/iterable"}, format = "parquet", transformation_ctx = "datasink4")
    job.commit()

Any suggestions would be greatly appreciated.

Tags: apache-spark, parquet, aws-glue

Solution


So I found the root cause of this problem.

The source S3 bucket has new data written to it every day, but that data lands in new subfolders within the bucket.

For the AWS Glue job to pick up these new subfolders, I need to re-run the AWS crawler so that the source Data Catalog is updated.

If that isn't done, the new data is not recognized, and the default AWS-generated script tries to write an empty dataset and fails with the error above.
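As an extra safeguard, you could also skip the write when the bookmarked read returns no new records. This is a minimal sketch (not part of the generated script) that replaces the final sink and commit:

    # Hedged sketch: only write when the incremental DynamicFrame has rows,
    # which avoids the "empty or nested empty schemas" error on no-op runs.
    if dropnullfields3.count() > 0:
        datasink4 = glueContext.write_dynamic_frame.from_options(
            frame = dropnullfields3,
            connection_type = "s3",
            connection_options = {"path": "s3://mydestination.datalake.raw/segment/iterable"},
            format = "parquet",
            transformation_ctx = "datasink4")
    job.commit()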

To resolve this, I scheduled my crawler to run before my job executes.
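If you would rather orchestrate this in code than rely on a schedule, a boto3 sketch along these lines can start the crawler and wait for it to finish before the job reads from the catalog (the crawler name here is hypothetical; use whichever crawler populates the "segment" database):

    import time
    import boto3

    glue_client = boto3.client("glue")
    crawler_name = "segment-crawler"  # hypothetical name, replace with yours

    # Kick off the crawler so new S3 subfolders are added to the Data Catalog.
    glue_client.start_crawler(Name=crawler_name)

    # Poll until the crawler returns to the READY state before running the job.
    while glue_client.get_crawler(Name=crawler_name)["Crawler"]["State"] != "READY":
        time.sleep(30)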

