Dataflow job: "Failed to copy Column partitioned table to Column partitioned meta table: not supported"

Problem description

I have an Apache Beam project that uses the Google Dataflow runner to process a fairly large amount of data stored in BigQuery. The pipeline reads one main table and uses three different side inputs. For every row in the input dataset we compute a "label", which fans the data out into five different output streams. The main BigQuery table we read is 60 GB, and the three side inputs are 2 GB, 51 GB, and 110 GB respectively. These are all converted to a PCollectionView<Map<String, Iterable<TableRow>>>.

Eventually, these five streams are merged and written back to BigQuery.
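
For context, a side input of that type is typically built with View.asMultimap(). A minimal sketch, assuming a hypothetical side table KPI.side_table and join column join_key (both placeholder names, not from the original pipeline):

import java.util.Map;

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollectionView;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;

// Build one side input: read a side table, key every row on a join column,
// and expose the result as a Map<String, Iterable<TableRow>> view.
private static PCollectionView<Map<String, Iterable<TableRow>>> buildSideInput(final Pipeline pipeline) {
    return pipeline
            .apply(BigQueryIO.readTableRows().from("my-project:KPI.side_table"))
            .apply(MapElements
                    .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptor.of(TableRow.class)))
                    .via((TableRow row) -> KV.of((String) row.get("join_key"), row)))
            .apply(View.asMultimap());
}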

When I run this job on a subset of the data (1 million rows), it works as expected, but when I run it on the full dataset (177 million rows) it fails with the following error: Failed to copy Column partitioned table to Column partitioned meta table: not supported.

What does this error mean, and how can I fix it? Thanks!

Full stack trace:

java.lang.RuntimeException: Failed to create copy job with id prefix beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00000, reached max retries: 3, last failed copy job: {
  "configuration" : {
    "copy" : {
      "createDisposition" : "CREATE_IF_NEEDED",
      "destinationTable" : {
        "datasetId" : "KPI",
        "projectId" : "bolcom-stg-kpi-logistics-f6c",
        "tableId" : "some_table_v1$20180811"
      },
      "sourceTables" : [ {
        "datasetId" : "KPI",
        "projectId" : "bolcom-stg-kpi-logistics-f6c",
        "tableId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00002_00000"
      }, {
        "datasetId" : "KPI",
        "projectId" : "bolcom-stg-kpi-logistics-f6c",
        "tableId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00001_00000"
      }, {
        "datasetId" : "KPI",
        "projectId" : "bolcom-stg-kpi-logistics-f6c",
        "tableId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00004_00000"
      }, {
        "datasetId" : "KPI",
        "projectId" : "bolcom-stg-kpi-logistics-f6c",
        "tableId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00003_00000"
      } ],
      "writeDisposition" : "WRITE_APPEND"
    }
  },
  "etag" : "\"HbYIGVDrlNbv2nDGLHCFlwJG0rI/oNgxlMGidSDy59VClvLIlEu08aU\"",
  "id" : "bolcom-stg-kpi-logistics-f6c:EU.beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00000-2",
  "jobReference" : {
    "jobId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00000-2",
    "location" : "EU",
    "projectId" : "bolcom-stg-kpi-logistics-f6c"
  },
  "kind" : "bigquery#job",
  "selfLink" : "https://www.googleapis.com/bigquery/v2/projects/bolcom-stg-kpi-logistics-f6c/jobs/beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00000-2?location=EU",
  "statistics" : {
    "creationTime" : "1533957446953",
    "endTime" : "1533957447111",
    "startTime" : "1533957447111"
  },
  "status" : {
    "errorResult" : {
      "message" : "Failed to copy Column partitioned table to Column partitioned meta table: not supported.",
      "reason" : "invalid"
    },
    "errors" : [ {
      "message" : "Failed to copy Column partitioned table to Column partitioned meta table: not supported.",
      "reason" : "invalid"
    } ],
    "state" : "DONE"
  },
  "user_email" : "595758839781-compute@developer.gserviceaccount.com"
}.
    at org.apache.beam.sdk.io.gcp.bigquery.WriteRename.copy(WriteRename.java:166)
    at org.apache.beam.sdk.io.gcp.bigquery.WriteRename.writeRename(WriteRename.java:107)
    at org.apache.beam.sdk.io.gcp.bigquery.WriteRename.processElement(WriteRename.java:80)

The table being written to is created as follows:

private static void write(final PCollection<TableRow> data) {
    // Write to BigQuery.
    data.apply(BigQueryIO.writeTableRows()
            .to(new GetPartitionFromTableRowFn("table_name"))
            .withSchema(getOutputSchema())
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
}

private static TableSchema getOutputSchema() {
    final List<TableFieldSchema> fields = new ArrayList<>();
    fields.add(new TableFieldSchema().setName(ORDER_LINE_REFERENCE).setType("INTEGER"));
    fields.add(new TableFieldSchema().setName(COLUMN_LABEL).setType("STRING"));
    fields.add(new TableFieldSchema().setName(COLUMN_INSERTION_DATETIME).setType("TIMESTAMP"));
    fields.add(new TableFieldSchema().setName(COLUMN_PARTITION_DATE).setType("DATE"));
    return new TableSchema().setFields(fields);
}

using the following SerializableFunction:

public class GetPartitionFromTableRowFn implements SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination> {
    private final String tableDestination;

    public GetPartitionFromTableRowFn(final String tableDestination) {
        this.tableDestination = tableDestination;
    }

    @Override
    public TableDestination apply(final ValueInSingleWindow<TableRow> element) {
        final TableDestination tableDestination;
        if (null != element.getValue()) {
            final TimePartitioning timePartitioning = new TimePartitioning().setType("DAY");
            timePartitioning.setField(Constants.COLUMN_PARTITION_DATE);
            final String formattedDate = element.getValue().get(Constants.COLUMN_PARTITION_DATE).toString().replaceAll("-", "");
            // e.g. output$20180801
            final String tableName = String.format("%s$%s", this.tableDestination, formattedDate);
            tableDestination = new TableDestination(tableName, null, timePartitioning);
        } else {
            tableDestination = new TableDestination(this.tableDestination, null);
        }

        return tableDestination;
    }
}

Tags: google-bigquery

Solution


1) You are trying to write to a column-partitioned table using a partition decorator as the table suffix (some_table_v1$20180811): this is not possible. The $-decorator syntax only works for ingestion-time partitioned tables.

Since your table is, per the error message, partitioned by column, this operation is not supported. To modify column-based partitions you have to run UPDATE or MERGE statements, and a single job is limited to changing 1,000 partitions. Alternatively, drop the column-based partitioning and use ingestion-time partitioned tables only. (For the plain append case, see the sketch after the list below.)

Note that BigQuery supports two kinds of partitioning:

  • ingestion-time based
  • column based
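
A minimal sketch of the append path, assuming the destination stays a DATE-column partitioned table: BigQuery load jobs route each row to its partition from the value of the partitioning column, so the destination function can simply return the undecorated table name:

    @Override
    public TableDestination apply(final ValueInSingleWindow<TableRow> element) {
        final TimePartitioning timePartitioning = new TimePartitioning()
                .setType("DAY")
                .setField(Constants.COLUMN_PARTITION_DATE);
        // No "$yyyymmdd" decorator: decorators are only valid for
        // ingestion-time partitioned tables. BigQuery derives the partition
        // from each row's COLUMN_PARTITION_DATE value instead.
        return new TableDestination(this.tableDestination, null, timePartitioning);
    }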

2) If that is not the case, then double-check your source tables (a verification sketch follows the list below):

When copying multiple partitioned tables, note the following:

  • If you copy multiple source tables into a partitioned table in the same job, the source tables cannot be a mix of partitioned and non-partitioned tables.
  • If all source tables are partitioned, the partition specification of every source table must match the partition specification of the destination table. Your settings determine whether the destination table is appended to or overwritten. The source and destination tables must be in datasets in the same location.
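
To verify that the specifications match, you can inspect each table's timePartitioning, for example with the google-cloud-bigquery client. A sketch; the table names are placeholders for your destination table and one of the temporary beam_load source tables:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.Table;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TimePartitioning;

public class CheckPartitioning {
    public static void main(String[] args) {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        // Compare the destination table with one of the temporary source tables.
        for (String tableName : new String[] {"some_table_v1", "beam_load_temp_table"}) {
            Table table = bigquery.getTable(TableId.of("KPI", tableName));
            StandardTableDefinition def = table.getDefinition();
            TimePartitioning tp = def.getTimePartitioning();
            System.out.printf("%s -> %s%n", tableName,
                    tp == null ? "not partitioned"
                               : tp.getType() + " on field " + tp.getField());
        }
    }
}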

P.S. Please post your table definitions for more detail.

3) Also have a look at this solution: BigQuery partitioning with Beam streams
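
If streaming inserts fit your volume, they write straight into the column-partitioned table and avoid the temporary-table copy jobs that fail here. A sketch against the write() method above, assuming a Beam version where withMethod is available:

    data.apply(BigQueryIO.writeTableRows()
            .to("table_name") // bare table name, no "$yyyymmdd" decorator
            .withSchema(getOutputSchema())
            // Streaming inserts bypass the batch load/copy jobs entirely.
            .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));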

