google-cloud-storage - Flink checkpointing to Google Cloud Storage
Problem description
I am trying to configure checkpointing to GCS for a Flink job. Everything works fine if I run a test job locally (without Docker or any cluster setup), but if I run it with docker-compose or a cluster setup and deploy the fat jar with the job via the Flink dashboard, it fails with an error.
Any ideas? Thanks!
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'gs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:405)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:320)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStorage.<init>(FsCheckpointStorage.java:61)
at org.apache.flink.runtime.state.filesystem.FsStateBackend.createCheckpointStorage(FsStateBackend.java:441)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createCheckpointStorage(RocksDBStateBackend.java:379)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:247)
... 33 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:64)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:401)
The environment is configured like this:
StreamExecutionEnvironment env = applicationContext.getBean(StreamExecutionEnvironment.class);
CheckpointConfig checkpointConfig = env.getCheckpointConfig();
checkpointConfig.setFailOnCheckpointingErrors(false);
checkpointConfig.setCheckpointInterval(10000);
checkpointConfig.setMinPauseBetweenCheckpoints(5000);
checkpointConfig.setMaxConcurrentCheckpoints(1);
checkpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
RocksDBStateBackend rocksDBStateBackend = new RocksDBStateBackend(
String.format("gs://checkpoints/%s", jobClass.getSimpleName()), true);
env.setStateBackend((StateBackend) rocksDBStateBackend);
Here is my core-site.xml file:
<configuration>
<property>
<name>google.cloud.auth.service.account.enable</name>
<value>true</value>
</property>
<property>
<name>google.cloud.auth.service.account.json.keyfile</name>
<value>${user.dir}/key.json</value>
</property>
<property>
<name>fs.gs.impl</name>
<value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
<description>The FileSystem for gs: (GCS) uris.</description>
</property>
<property>
<name>fs.AbstractFileSystem.gs.impl</name>
<value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
<description>The AbstractFileSystem for gs: (GCS) uris.</description>
</property>
<property>
<name>fs.gs.application.name.suffix</name>
<value>-kube-flink</value>
<description>
Appended to the user-agent header for API requests to GCS to help identify
the traffic as coming from Dataproc.
</description>
</property>
</configuration>
The dependency on gcs-connector:
<dependency>
<groupId>com.google.cloud.bigdataoss</groupId>
<artifactId>gcs-connector</artifactId>
<version>1.9.4-hadoop2</version>
</dependency>
Update:
After some fiddling with the dependencies, I am now able to write checkpoints. My current setup is:
<dependency>
<groupId>com.google.cloud.bigdataoss</groupId>
<artifactId>gcs-connector</artifactId>
<version>hadoop2-1.9.5</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-statebackend-rocksdb_${scala.version}</artifactId>
<version>1.5.1</version>
</dependency>
I also switched the Flink image to version flink:1.5.2-hadoop28.
Unfortunately, I still cannot read the checkpoint data, because my job always fails while restoring state with the error:
java.lang.NoClassDefFoundError: com/google/cloud/hadoop/gcsio/GoogleCloudStorageImpl$6
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.open(GoogleCloudStorageImpl.java:666)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.open(GoogleCloudStorageFileSystem.java:323)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream.<init>(GoogleHadoopFSInputStream.java:136)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.open(GoogleHadoopFileSystemBase.java:1102)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:787)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.open(HadoopFileSystem.java:119)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.open(HadoopFileSystem.java:36)
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.open(SafetyNetWrapperFileSystem.java:80)
at org.apache.flink.runtime.state.filesystem.FileStateHandle.openInputStream(FileStateHandle.java:68)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.copyStateDataHandleData(RocksDBKeyedStateBackend.java:1005)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.transferAllDataFromStateHandles(RocksDBKeyedStateBackend.java:988)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.transferAllStateDataToDirectory(RocksDBKeyedStateBackend.java:974)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.restoreInstance(RocksDBKeyedStateBackend.java:758)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.restore(RocksDBKeyedStateBackend.java:732)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.restore(RocksDBKeyedStateBackend.java:443)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.restore(RocksDBKeyedStateBackend.java:149)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:151)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:123)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:276)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:132)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:227)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:730)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:295)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
at java.lang.Thread.run(Thread.java:748)
I believe this will be the last error...
Solution
In the end I found the solution here.
You have to create your own image and put the gcs-connector into the lib directory. Otherwise you will always run into classloading issues (user code vs. system classloader).
To create a custom Docker image, we create the following Dockerfile:
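For a docker-compose setup like the one in the question, the same effect can be achieved without baking a new image by mounting the connector jar and Hadoop config into the containers. This is a sketch, not the poster's actual file: the image tag matches the update above, but the service names and host-side paths are assumptions.

```yaml
# docker-compose.yml (sketch; host paths and service names are assumptions)
version: "2.1"
services:
  jobmanager:
    image: flink:1.5.2-hadoop28
    command: jobmanager
    ports:
      - "8081:8081"
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    volumes:
      # Put the connector on the *system* classpath, not in the fat jar
      - ./gcs-connector-latest-hadoop2.jar:/opt/flink/lib/gcs-connector-latest-hadoop2.jar
      # Directory containing core-site.xml and key.json
      - ./etc-hadoop:/opt/flink/etc-hadoop
  taskmanager:
    image: flink:1.5.2-hadoop28
    command: taskmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    volumes:
      - ./gcs-connector-latest-hadoop2.jar:/opt/flink/lib/gcs-connector-latest-hadoop2.jar
      - ./etc-hadoop:/opt/flink/etc-hadoop
```

The key point is the same either way: the jar must be visible to Flink's system classloader (the lib directory), not shaded into the user-code jar.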
FROM registry.platform.data-artisans.net/trial/v1.0/flink:1.4.2-dap1-scala_2.11

RUN wget -O lib/gcs-connector-latest-hadoop2.jar https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-latest-hadoop2.jar && \
    wget http://ftp.fau.de/apache/flink/flink-1.4.2/flink-1.4.2-bin-hadoop28-scala_2.11.tgz && \
    tar xf flink-1.4.2-bin-hadoop28-scala_2.11.tgz && \
    mv flink-1.4.2/lib/flink-shaded-hadoop2* lib/ && \
    rm -r flink-1.4.2*

RUN mkdir etc-hadoop
COPY <name of key file>.json etc-hadoop/
COPY core-site.xml etc-hadoop/

ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["jobmanager"]
The Docker image is based on the Flink image we provide as part of the dA Platform trial. We add the Google Cloud Storage connector, Flink's Hadoop package, and the key with the configuration file.
To build the custom image, the following files should be in your current directory: core-site.xml, the Dockerfile, and the key file (.json).
To finally trigger the build of the custom image, we run the following command:
$ docker build -t flink-1.4.2-gs .
Once the image is built, we upload it to Google's Container Registry. To configure Docker to properly access the registry, run this command once:
$ gcloud auth configure-docker
Next, we tag and upload the container:
$ docker tag flink-1.4.2-gs:latest eu.gcr.io/<your project id>/flink-1.4.2-gs
$ docker push eu.gcr.io/<your project id>/flink-1.4.2-gs
Once the upload is complete, we need to set the custom image for an Application Manager deployment. Send the following PATCH request:
PATCH /api/v1/deployments/<your AppMgr deployment id>
spec:
  template:
    spec:
      flinkConfiguration:
        fs.hdfs.hadoopconf: /opt/flink/etc-hadoop/
      artifact:
        flinkImageRegistry: eu.gcr.io
        flinkImageRepository: <your project id>/flink-1.4.2-gs
        flinkImageTag: latest
Alternatively, use the following curl command:
$ curl -X PATCH --header 'Content-Type: application/yaml' --header 'Accept: application/yaml' -d '
spec:
  template:
    spec:
      flinkConfiguration:
        fs.hdfs.hadoopconf: /opt/flink/etc-hadoop/
      artifact:
        flinkImageRegistry: eu.gcr.io
        flinkImageRepository: <your project id>/flink-1.4.2-gs
        flinkImageTag: latest
' 'http://localhost:8080/api/v1/deployments/<your AppMgr deployment id>'
With this change implemented, you will be able to checkpoint to Google Cloud Storage. Use the following pattern when specifying the checkpoint directory: gs:///checkpoints. For savepoints, set the state.savepoints.dir Flink configuration option.
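For reference, the relevant settings can also be pinned in flink-conf.yaml instead of in job code. A minimal sketch, assuming an image with the etc-hadoop directory from the Dockerfile above; the bucket name is a placeholder:

```yaml
# flink-conf.yaml (sketch; bucket name is a placeholder)
fs.hdfs.hadoopconf: /opt/flink/etc-hadoop/
state.backend: rocksdb
state.checkpoints.dir: gs://<your-bucket>/checkpoints
state.savepoints.dir: gs://<your-bucket>/savepoints
```

Configuring the directories here rather than in code means savepoints triggered from the CLI or dashboard also land in GCS without extra flags.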