How to read a BigQuery table from Java Spark using the BigQuery connector

Problem description

I am trying to read a BigQuery table from Spark Java code, as follows:

    BigQuerySQLContext bqSqlCtx = new BigQuerySQLContext(sqlContext);
    bqSqlCtx.setGcpJsonKeyFile("sxxxl-gcp-1x4c0xxxxxxx.json");
    bqSqlCtx.setBigQueryProjectId("winged-standard-2xxxx");
    bqSqlCtx.setBigQueryDatasetLocation("asia-east1");
    bqSqlCtx.setBigQueryGcsBucket("dataproc-9cxxxxx39-exxdc-4e73-xx07-2258xxxx4-asia-east1");
    Dataset<Row> testds = bqSqlCtx.bigQuerySelect("select * from bqtestdata.customer_visits limit 100");

But I am running into the following issue:

19/01/14 10:52:01 WARN org.apache.spark.sql.SparkSession$Builder: Using an existing SparkSession; some configuration may not take effect.
19/01/14 10:52:01 INFO com.samelamin.spark.bigquery.BigQueryClient: Executing query select * from bqtestdata.customer_visits limit 100
19/01/14 10:52:02 INFO com.samelamin.spark.bigquery.BigQueryClient: Creating staging dataset winged-standard-2xxxxx:spark_bigquery_staging_asia-east1

Exception in thread "main" java.util.concurrent.ExecutionException: com.google.api.client.googleapis.json.GoogleJsonResponseException: 

400 Bad Request
{
  "code" : 400,
  "errors" : 
[ {
    "domain" : "global",
    "message" : "Invalid dataset ID \"spark_bigquery_staging_asia-east1\". Dataset IDs must be alphanumeric (plus underscores) and must be at most 1024 characters long.",
    "reason" : "invalid"
  } ],
  "message" : "Invalid dataset ID \"spark_bigquery_staging_asia-east1\". Dataset IDs must be alphanumeric (plus underscores) and must be at most 1024 characters long.",
  "status" : "INVALID_ARGUMENT"
}

Tags: java, apache-spark, google-bigquery, google-cloud-dataproc

Solution


The message in the response,

Dataset IDs must be alphanumeric (plus underscores)...

indicates that the dataset ID "spark_bigquery_staging_asia-east1" is invalid because it contains a hyphen, specifically in "asia-east1". As the log line "Creating staging dataset winged-standard-2xxxxx:spark_bigquery_staging_asia-east1" shows, the connector builds the staging dataset name by appending the configured dataset location (here asia-east1) to the prefix spark_bigquery_staging_, and BigQuery dataset IDs may only contain letters, digits, and underscores.
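The naming rule quoted in the error can be checked directly. The sketch below is illustrative only: `DatasetIdCheck` and `isValidDatasetId` are hypothetical helper names, not part of the connector or the BigQuery client library; it simply encodes the "alphanumeric plus underscores, at most 1024 characters" rule from the error message.

```java
import java.util.regex.Pattern;

public class DatasetIdCheck {
    // BigQuery dataset IDs: letters, digits, and underscores only,
    // and at most 1024 characters long (per the error message above).
    private static final Pattern VALID_ID = Pattern.compile("^[A-Za-z0-9_]{1,1024}$");

    static boolean isValidDatasetId(String id) {
        return VALID_ID.matcher(id).matches();
    }

    public static void main(String[] args) {
        // The staging dataset name the connector derived from the location:
        System.out.println(isValidDatasetId("spark_bigquery_staging_asia-east1"));  // false: contains a hyphen
        // The same name with the hyphen replaced by an underscore:
        System.out.println(isValidDatasetId("spark_bigquery_staging_asia_east1"));  // true
    }
}
```

This confirms that the hyphen inherited from the region name "asia-east1" is what makes the generated staging dataset ID invalid.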
