Unauthorized error setting Bigtable batch data host in Spark Streaming

Question

I am following the example here to write to Cloud Bigtable from Spark Streaming: https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/tree/master/scala/spark-streaming

In my case, I am consuming from Kafka, doing some transformations, and then need to write the results to my Bigtable instance. Initially, using all of the dependency versions from that example, I got UNAUTHENTICATED errors with timeouts when trying to access anything from Bigtable after the connection:

Refreshing the OAuth token Retrying failed call. Failure #1, got: Status{code=UNAUTHENTICATED, description=Unexpected failure get auth token,
cause=java.util.concurrent.TimeoutException 
at java.util.concurrent.FutureTask.get(FutureTask.java:205) 
at com.google.bigtable.repackaged.com.google.cloud.bigtable.grpc.io.RefreshingOAuth2CredentialsInterceptor.getHeader(RefreshingOAuth2CredentialsInterceptor.java:290)

I then bumped the bigtable-hbase-1.x-hadoop dependency to a newer version, 1.9.0, and got past authentication for the table admin work, but got an additional UNAUTHENTICATED error when actually trying to write via saveAsNewAPIHadoopDataset():

Retrying failed call. Failure #1, got: Status{code=UNAUTHENTICATED, description=Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. 
See https://developers.google.com/identity/sign-in/web/devconsole-project., cause=null} on channel 34. 
Trailers: Metadata(www-authenticate=Bearer realm="https://accounts.google.com/",bigtable-channel-id=34)
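For reference, the write in question has roughly this shape (a sketch based on the linked example; the `dstream` variable, `jobConf` table configuration, column family, and qualifier are placeholders, not names from the example repo):

```scala
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.util.Bytes

// Sketch of the failing write path: each micro-batch is converted to HBase
// Puts and written through saveAsNewAPIHadoopDataset(). The error above is
// raised inside this call when it hits the Bigtable data host.
dstream.foreachRDD { rdd =>
  rdd.map { case (key, value) =>
    val put = new Put(Bytes.toBytes(key))
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value))
    (new ImmutableBytesWritable(Bytes.toBytes(key)), put)
  }.saveAsNewAPIHadoopDataset(jobConf)
}
```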

I found that removing conf.set(BigtableOptionsFactory.BIGTABLE_HOST_KEY, BigtableOptions.BIGTABLE_BATCH_DATA_HOST_DEFAULT) from the setBatchConfigOptions() method allows the calls to authenticate against the default host, and several Kafka messages are processed, but the job then stalls, hangs, and eventually throws a No route to host error:

2019-07-25 17:29:12 INFO JobScheduler:54 - Added jobs for time 1564093750000 ms 
2019-07-25 17:29:21 INFO JobScheduler:54 - Added jobs for time 1564093760000 ms 
2019-07-25 17:29:31 INFO JobScheduler:54 - Added jobs for time 1564093770000 ms 
2019-07-25 17:29:36 WARN OperationAccountant:116 - No operations completed within the last 30 seconds. There are still 1 operations in progress. 
2019-07-25 17:29:36 WARN OperationAccountant:116 - No operations completed within the last 30 seconds. There are still 1 operations in progress. 
2019-07-25 17:29:36 WARN OperationAccountant:116 - No operations completed within the last 30 seconds. There are still 1 operations in progress. 
2019-07-25 17:29:36 WARN OperationAccountant:116 - No operations completed within the last 30 seconds. There are still 1 operations in progress. 
2019-07-25 17:29:36 WARN OperationAccountant:116 - No operations completed within the last 30 seconds. There are still 1 operations in progress. 
2019-07-25 17:29:36 WARN OperationAccountant:116 - No operations completed within the last 30 seconds. There are still 1 operations in progress. 
2019-07-25 17:29:36 WARN OperationAccountant:116 - No operations completed within the last 30 seconds. There are still 1 operations in progress. 
2019-07-25 17:29:36 WARN OperationAccountant:116 - No operations completed within the last 30 seconds. There are still 1 operations in progress. 
2019-07-25 17:29:38 WARN AbstractRetryingOperation:130 - Retrying failed call. 
Failure #1, got: Status{code=UNAVAILABLE, description=io exception, cause=com.google.bigtable.repackaged.io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: batch-bigtable.googleapis.com/2607:f8b0:400f:801:0:0:0:200a:443
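For context, the host setting being discussed looks roughly like this (a sketch assuming the BigtableConfiguration and BigtableOptionsFactory classes from bigtable-hbase; the project and instance IDs are placeholders):

```scala
import org.apache.hadoop.conf.Configuration
import com.google.cloud.bigtable.hbase.BigtableConfiguration
import com.google.cloud.bigtable.hbase.BigtableOptionsFactory
import com.google.cloud.bigtable.config.BigtableOptions

// Sketch of the relevant piece of the Hadoop Configuration handed to the job.
val conf: Configuration =
  BigtableConfiguration.configure("my-project", "my-instance")

// The setting in question: pointing the client at the batch data host
// (batch-bigtable.googleapis.com). Removing this line made authentication
// succeed against the default data host instead.
conf.set(BigtableOptionsFactory.BIGTABLE_HOST_KEY,
  BigtableOptions.BIGTABLE_BATCH_DATA_HOST_DEFAULT)
```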

I assume this is an issue with dependency versions, since the example is fairly old, but I can't find any newer examples of writing to Bigtable from Spark Streaming. I haven't had any luck finding a version combination that works with bigtable-hbase-2.x-hadoop.

Current POM:

<scala.version>2.11.0</scala.version>
<spark.version>2.3.3</spark.version>
<hbase.version>1.3.1</hbase.version>
<bigtable.version>1.9.0</bigtable.version>
<dependencies>
    <dependency>
        <groupId>com.google.protobuf</groupId>
        <artifactId>protobuf-java</artifactId>
        <version>3.7.1</version>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>26.0-jre</version>
    </dependency>
    <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-logging</artifactId>
        <version>1.74.0</version>
        <exclusions>
            <exclusion>
                <groupId>com.google.guava</groupId>
                <artifactId>guava</artifactId>
            </exclusion>
            <exclusion>
                <groupId>com.google.protobuf</groupId>
                <artifactId>protobuf-java</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>com.google.cloud.bigtable</groupId>
        <artifactId>bigtable-hbase-2.x-hadoop</artifactId>
        <version>${bigtable.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <version>${hbase.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>${hbase.version}</version>
    </dependency>
    <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-bigtable</artifactId>
        <version>0.95.0-alpha</version>
    </dependency>
</dependencies>

Tags: apache-spark, google-cloud-platform, spark-streaming, google-cloud-bigtable, spark-streaming-kafka

Solution

The authentication issues with batch mode were a known issue in the Bigtable API; version 1.12.0, which resolves them, was recently released. The NoRouteToHostException was isolated to local runs and turned out to be a corporate firewall issue, resolved by setting -Dhttps.proxyHost and -Dhttps.proxyPort.
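A minimal sketch of passing those proxy flags to a Spark job; the proxy host, port, class name, and jar path below are placeholders for your environment, not values from the answer:

```shell
# Hypothetical proxy host/port; substitute your corporate proxy.
spark-submit \
  --class com.example.BigtableStreamingJob \
  --driver-java-options "-Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128" \
  --conf "spark.executor.extraJavaOptions=-Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128" \
  target/my-streaming-job.jar
```

Setting the flags on both the driver and the executors matters for a job run on a cluster; for a purely local run, the driver options alone are enough.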

