Spark: strange partitionBy behavior, column becomes unreadable

Problem description

I have CSV records imported as a DataFrame:

+------+-----+--------------+
| name | age | entranceDate |
+------+-----+--------------+
| Tom  | 12  | 2019-10-01   |
| Mary | 15  | 2019-10-01   |
+------+-----+--------------+

When I use:

String[] partitions = new String[] { "name", "entranceDate" };

df.write()
  .partitionBy(partitions)
  .mode(SaveMode.Append)
  .parquet(parquetPath);

it writes my data out as Parquet files (.parquet). But strangely, when I try to read the Parquet back:

public static StructType createSchema() {
    final StructType schema = DataTypes.createStructType(Arrays.asList(
            DataTypes.createStructField("name", DataTypes.StringType, false),
            DataTypes.createStructField("age", DataTypes.StringType, false),
            DataTypes.createStructField("entranceDate", DataTypes.StringType, false)
    ));
    return schema;
}


sqlContext.read()
    .schema(createSchema())
    .parquet(pathToParquet)
    .show();

the name column becomes unreadable:

+---------------+------+--------------+
|           name|  age | entranceDate |
+---------------+------+--------------+
|?F...          | Tom  | 2019-10-01   |
|?F...          | Mary | 2019-10-01   |
+---------------+------+--------------+

How is this possible? I have tried it, and if I remove the .partitionBy(partitions) line, I can read everything back without any problem.

Can someone explain the root cause? I have been searching for a while but have not found the reason.

Edit: I tried to retrieve the "name" field (row.getString(0)), and the value I got looks like the following, which I cannot read:

?F??m???9??A?Aorg/apache/spark/sql/catalyst/expressions/codegen/UnsafeRowWriter??:??A?Aorg.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter??!:??A?Aorg/apache/spark/sql/catalyst/expressions/codegen/UnsafeRowWriter? ... (binary data truncated; the rest is similar garbage containing Spark class names such as UnsafeRowWriter, BaseGenericInternalRow, and TreeNode$$anonfun$transformDown$2)

Tags: java, scala, apache-spark

Solution


The columns get jumbled because of the way partitionBy saves the files. All columns specified in the partitionBy clause are encoded as a directory structure rather than stored inside the Parquet files themselves. In your case it will look like:

<<root-path>>/name=???/entranceDate=???/???.parquet

This forces the partition columns to the end of the schema, in the left-to-right order of the directory hierarchy.
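As a rough sketch (not Spark's actual implementation; the root path and values are made up), this is how partition column values become directory names, while only the remaining columns end up inside the Parquet files:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PartitionPath {
    // Each partition column contributes one "key=value" directory level,
    // in the order the columns were given to partitionBy.
    static String partitionPath(String root, Map<String, String> partitionValues) {
        StringBuilder sb = new StringBuilder(root);
        for (Map.Entry<String, String> e : partitionValues.entrySet()) {
            sb.append('/').append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> parts = new LinkedHashMap<>();
        parts.put("name", "Tom");
        parts.put("entranceDate", "2019-10-01");
        // Prints: /data/out/name=Tom/entranceDate=2019-10-01
        // The actual .parquet files under this directory only contain "age".
        System.out.println(partitionPath("/data/out", parts));
    }
}
```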

So if you specify the schema as [age, name, entranceDate] when reading the Parquet files back, it should produce the correct values.
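As a small sketch of the reordering rule (not Spark internals), the read-time column order is: non-partition columns first, as stored in the files, then the partition columns in partitionBy order:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReadOrder {
    // Compute the column order Spark expects when reading back a
    // partitioned dataset with an explicit schema.
    static List<String> readOrder(List<String> originalCols, List<String> partitionCols) {
        List<String> result = new ArrayList<>();
        for (String c : originalCols) {
            if (!partitionCols.contains(c)) {
                result.add(c);          // data columns keep their relative order
            }
        }
        result.addAll(partitionCols);   // partition columns move to the end
        return result;
    }

    public static void main(String[] args) {
        List<String> cols = readOrder(
                Arrays.asList("name", "age", "entranceDate"),
                Arrays.asList("name", "entranceDate"));
        System.out.println(cols);  // [age, name, entranceDate]
    }
}
```

Alternatively, omitting the explicit .schema(...) call and letting Spark infer the schema from the partition directories avoids the mismatch entirely.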

