Flink S3 read error: data read has a different length than expected

Problem description

We're using Flink 1.7.0, but have also seen this on Flink 1.8.0. When reading gzip-compressed objects from S3 through Flink's .readFile source, we hit frequent but somewhat random errors:

org.apache.flink.fs.s3base.shaded.com.amazonaws.SdkClientException: Data read has a different length than the expected: dataLength=9713156; expectedLength=9770429; includeSkipped=true; in.getClass()=class org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:151)
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:93)
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:76)
    at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AInputStream.closeStream(S3AInputStream.java:529)
    at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AInputStream.close(S3AInputStream.java:490)
    at java.io.FilterInputStream.close(FilterInputStream.java:181)
    at org.apache.flink.fs.s3.common.hadoop.HadoopDataInputStream.close(HadoopDataInputStream.java:89)
    at java.util.zip.InflaterInputStream.close(InflaterInputStream.java:227)
    at java.util.zip.GZIPInputStream.close(GZIPInputStream.java:136)
    at org.apache.flink.api.common.io.InputStreamFSInputWrapper.close(InputStreamFSInputWrapper.java:46)
    at org.apache.flink.api.common.io.FileInputFormat.close(FileInputFormat.java:861)
    at org.apache.flink.api.common.io.DelimitedInputFormat.close(DelimitedInputFormat.java:536)
    at org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator$SplitReader.run(ContinuousFileReaderOperator.java:336)
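For context, the failing code path corresponds to a readFile source along these lines; a minimal sketch, where the bucket, prefix, and scan interval are placeholders rather than the actual job:

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class S3GzipReadJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder path; .gz objects are decompressed transparently by the
        // input format (the GZIPInputStream in the stack trace above), and
        // each gzip file is read as a single, unsplittable split.
        String path = "s3://my-bucket/logs/";
        TextInputFormat format = new TextInputFormat(new Path(path));

        // This readFile source is what runs the ContinuousFileReaderOperator$SplitReader
        // at the bottom of the stack trace; it rescans the path every 60s.
        DataStream<String> lines = env.readFile(
                format, path, FileProcessingMode.PROCESS_CONTINUOUSLY, 60_000L);

        lines.print();
        env.execute("s3-gzip-read");
    }
}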

In a given job we typically see many/most of the reads succeed, but almost always at least one fails (out of, say, 50 files).

It appears the error actually originates in the AWS client, so perhaps Flink itself has nothing to do with it, but I'm hoping someone has insight into how to make these reads reliable.

When the error occurs, it ends up killing the source and cancelling all connected operators. I'm still new to Flink, but I would think this is something that could be recovered from a previous snapshot? Should I expect Flink to retry reading the file when this exception occurs?
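For reference, recovery depends on checkpointing being enabled and a restart strategy being configured: with checkpointing on, the file-monitoring and reader operators snapshot which splits have been processed, and the restart strategy governs whether the job retries at all. A minimal sketch, with arbitrary interval and retry values:

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RecoverySetup {
    public static void configure(StreamExecutionEnvironment env) {
        // Snapshot source state (processed files/splits and read offsets)
        // every 60s so a restart resumes instead of re-reading everything.
        env.enableCheckpointing(60_000L);

        // Bound the retries explicitly; with checkpointing enabled and no
        // strategy set, Flink otherwise defaults to effectively unlimited
        // fixed-delay restarts.
        env.setRestartStrategy(
                RestartStrategies.fixedDelayRestart(3, Time.seconds(10)));
    }
}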

Tags: amazon-s3, apache-flink

Solution


Perhaps you can try allowing more connections for s3a, for example:

flink:
  ...
  config: |
    fs.s3a.connection.maximum: 320
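If the job isn't deployed through a chart like the one above, the same option can usually be set directly in flink-conf.yaml; Flink's flink-s3-fs-hadoop filesystem mirrors fs.s3a.* keys into the underlying Hadoop/AWS client configuration (320 is just the value suggested here, not a universal recommendation):

# flink-conf.yaml, assuming the flink-s3-fs-hadoop filesystem is in use
fs.s3a.connection.maximum: 320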
