AWS Glue: ResolveChoice projecting to timestamp drops the field when converting to parquet

Problem description

I'm trying to convert a set of gzip-compressed JSON files to parquet format.

Along the way I'm attempting a few transformations (dropping fields, casting, etc.). After some debugging, it appears that a field goes missing from the resulting parquet files when I try to project it to a timestamp.

The relevant Python snippet:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_options(connection_type = "s3", connection_options = {"paths": ["<s3 path>"]}, format = "json", transformation_ctx = "read")

datasource0 = ApplyMapping.apply(frame = datasource0, mappings = [("timestamp", "string", "timestamp", "long"), ("name", "string", "name", "string"), ("value", "string", "value", "string"), ("type", "string", "type", "string")])
datasource0 = SelectFields.apply(frame = datasource0, paths = ["timestamp", "name", "value", "type"])

# here is where the parquet schema changes 
# the timestamp column is no longer there in parquet tools

datasource0 = ResolveChoice.apply(frame = datasource0, specs = [('timestamp','project:timestamp'), ('name','cast:string'), ('type','cast:string'), ('value','cast:string')])


glueContext.write_dynamic_frame.from_options(datasource0, connection_type = "s3", connection_options = {"path": "<another s3 path>"}, format = "parquet", format_options = {'compression': 'gzip'}, transformation_ctx = "write")

job.commit()

Up to the ResolveChoice call, if I run parquet-tools against the resulting parquet files I see all four fields.

However, after adding that line, I get this:

message spark_schema {
  optional binary name (UTF8);
  optional binary value (UTF8);
  optional binary type (UTF8);
}

The timestamp field is missing.

project: Resolves a potential ambiguity by projecting all the data to one of the possible data types. For example, if data in a column could be an int or a string, using the project:string action produces a column in the resulting DynamicFrame where all the int values have been converted to strings.
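As a toy model of that documented behavior (plain Python for illustration only, not Glue's actual implementation), projecting a mixed int/string "choice" column to string converts the ints rather than dropping them:

```python
# Toy illustration of the documented `project:string` semantics (assumption:
# this mirrors, but is not, Glue's implementation): every int in a mixed
# int/string choice column is converted to a string; nothing is dropped.
choice_column = [1, "two", 3]
projected = [str(v) if isinstance(v, int) else v for v in choice_column]
print(projected)  # ['1', 'two', '3']
```

By contrast, the snippet in the question has already cast the column to long via ApplyMapping before ResolveChoice runs, so `project:timestamp` finds no timestamp-typed values to keep, which would be consistent with the column disappearing from the output schema.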

So I'm wondering: is there a way to project to a timestamp type?

Tags: apache-spark, parquet, aws-glue

Solution
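One way to sidestep the projection (a sketch under the assumption that the source strings are parseable as timestamps, not a confirmed accepted answer) is to cast to timestamp directly in ApplyMapping, which accepts "timestamp" as a target type, instead of casting to long and trying to project afterwards:

```python
# Hedged sketch: cast straight to timestamp in the ApplyMapping step so no
# ResolveChoice projection is needed afterwards. The ApplyMapping.apply call
# itself only runs inside a Glue job, so it is shown commented out here.
mappings = [
    ("timestamp", "string", "timestamp", "timestamp"),  # target type changed from "long"
    ("name", "string", "name", "string"),
    ("value", "string", "value", "string"),
    ("type", "string", "type", "string"),
]
# datasource0 = ApplyMapping.apply(frame=datasource0, mappings=mappings)
print(mappings[0][3])  # timestamp
```

If the raw values are actually epoch seconds rather than timestamp strings, another commonly used route is to convert through the Spark DataFrame API (for example, `from_unixtime` followed by a cast to timestamp) before writing the parquet output.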

