S3 / MinIO with Java / Scala: Saving byte buffer file chunks to object storage

Problem Description

So, say I have a Scala Vert.x Web REST API that receives file uploads via HTTP multipart requests. However, it doesn't receive the incoming file data as a single InputStream. Instead, each file is received as a series of byte buffers handed over via a few callback functions.

The callbacks basically look like this:

  // the callback that receives byte buffers (chunks) of the file being uploaded
  //  it is called multiple times until the full file has been received
  upload.handler { buffer =>
    // send chunk to backend
  }

  // the callback that gets called after the full file has been uploaded
  //  (i.e. after all chunks have been received)
  upload.endHandler { _ =>
    // do something after the file has been uploaded
  }

  // callback called if an exception is raised while receiving the file
  upload.exceptionHandler { e =>
    // do something to handle the exception
  }

Now, I want to use these callbacks to save the file into a MinIO bucket (in case you're not familiar with it, MinIO is basically self-hosted S3, and its API is nearly identical to the S3 Java API).

Since I don't have a file handle, I need to use putObject() to put an InputStream into MinIO.

The inefficient workaround I'm currently using with the MinIO Java API looks like this:

// this is all inside the context of handling a HTTP request
val out = new PipedOutputStream()
val in = new PipedInputStream()
var size = 0
in.connect(out)

upload.handler { buffer =>
    out.write(buffer.getBytes)
    size += buffer.length()
}

upload.endHandler { _ =>
    minioClient.putObject(
        PutObjectArgs.builder()
            .bucket("my-bucket")
            .object("my-filename")
            .stream(in, size, 50000000)
            .build())
}

Obviously, this isn't optimal. Since I'm using a plain java.io stream here, the whole file ends up being loaded into memory.

I don't want to save the file to disk on the server before putting it into object storage. I'd like to put it straight into my object store.

How can I accomplish this using the S3 API and the series of byte buffers given to me via the upload.handler callback?

Edit

I should add that I'm using MinIO because I can't use a commercially-hosted cloud solution like S3. However, as mentioned on MinIO's website, I can use Amazon's S3 Java SDK while using MinIO as my storage solution.
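For reference, pointing Amazon's S3 Java SDK (v1) at a MinIO server mostly amounts to overriding the endpoint and enabling path-style access. A minimal sketch, assuming MinIO is running at http://localhost:9000 with placeholder credentials:

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration
import com.amazonaws.services.s3.{AmazonS3, AmazonS3ClientBuilder}

// placeholder endpoint and credentials -- substitute your own MinIO settings
val s3Client: AmazonS3 = AmazonS3ClientBuilder.standard()
  .withEndpointConfiguration(new EndpointConfiguration("http://localhost:9000", "us-east-1"))
  .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("my-username", "my-password")))
  .withPathStyleAccessEnabled(true) // MinIO serves buckets at path-style URLs, not virtual-host-style
  .build()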

I tried following the guide on Amazon's website for uploading objects to S3 in parts.

The solution I tried looks like this:

      context.request.uploadHandler { upload =>
        println(s"Filename: ${upload.filename()}")

        val partETags = new util.ArrayList[PartETag]
        val initRequest = new InitiateMultipartUploadRequest("docs", "my-filekey")
        val initResponse = s3Client.initiateMultipartUpload(initRequest)

        upload.handler { buffer =>
          println("uploading part", buffer.length())
          try {
            val request = new UploadPartRequest()
              .withBucketName("docs")
              .withKey("my-filekey")
              .withPartSize(buffer.length())
              .withUploadId(initResponse.getUploadId)
              .withInputStream(new ByteArrayInputStream(buffer.getBytes()))

            val uploadResult = s3Client.uploadPart(request)
            partETags.add(uploadResult.getPartETag)
          } catch {
            case e: Exception => println("Exception raised: ", e)
          }
        }

        // this gets called for EACH uploaded file sequentially
        upload.endHandler { _ =>
          // upload successful
          println("done uploading")
          try {
            val compRequest = new CompleteMultipartUploadRequest("docs", "my-filekey", initResponse.getUploadId, partETags)
            s3Client.completeMultipartUpload(compRequest)
          } catch {
            case e: Exception => println("Exception raised: ", e)
          }
          context.response.setStatusCode(200).end("Uploaded")
        }
        upload.exceptionHandler { e =>
          // handle the exception
          println("exception thrown", e)
        }
      }

This works for small files (my small test file was 11 bytes), but not for large files.

With large files, the code inside upload.handler gets progressively slower as the file continues to upload. Also, upload.endHandler never gets called, and the file somehow continues uploading even after 100% of it has been received.

However, as soon as I comment out the s3Client.uploadPart(request) part inside upload.handler and the s3Client.completeMultipartUpload part inside upload.endHandler (basically throwing the file away instead of saving it to object storage), the file upload proceeds normally and terminates correctly.

Tags: java, scala, amazon-s3, vert.x, minio

Solution


I found out what I was doing wrong (when using the S3 client). I wasn't accumulating bytes inside my upload.handler. I need to accumulate bytes until the buffer is big enough to upload as a part, rather than uploading every time I receive a few bytes.

Since neither Amazon's S3 client nor the MinIO client did what I needed, I decided to dig into how putObject() is actually implemented and roll my own. This is what I came up with.

This implementation is specific to Vert.X, but it can easily be generalized to work with the built-in java.io InputStreams by using a while loop and a pair of Piped streams.
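As a rough sketch of that generalization (an illustration only, not part of the original answer): read the InputStream in a while loop and accumulate bytes until a part is big enough. The uploadPart parameter below is a hypothetical callback standing in for the real MinIO/S3 upload-part call.

import java.io.{ByteArrayOutputStream, InputStream}

// Read `data` in a while loop, accumulating bytes until a part reaches
// `minPartSize`, then hand the part off to the caller-supplied `uploadPart`.
def uploadFromStream(data: InputStream,
                     uploadPart: (Array[Byte], Int) => Unit,
                     minPartSize: Int = 5 * 1024 * 1024): Unit = {
  val readBuffer = new Array[Byte](8192)
  val partBuffer = new ByteArrayOutputStream()
  var partNumber = 1

  var bytesRead = data.read(readBuffer)
  while (bytesRead != -1) {
    partBuffer.write(readBuffer, 0, bytesRead)
    if (partBuffer.size() >= minPartSize) {
      uploadPart(partBuffer.toByteArray, partNumber)
      partNumber += 1
      partBuffer.reset()
    }
    bytesRead = data.read(readBuffer)
  }
  // the final part is allowed to be smaller than the 5 MiB minimum
  if (partBuffer.size() > 0) uploadPart(partBuffer.toByteArray, partNumber)
}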

This implementation is also MinIO-specific, but it can just as easily be adapted to use the S3 client since, for the most part, the two APIs are identical.
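For example, uploading one accumulated part with the AWS SDK classes from the attempt above might look like the following sketch, where partBytes is the accumulated byte array, and partNumber, partETags, initResponse, and s3Client are set up as in the question's multipart code:

import java.io.ByteArrayInputStream
import com.amazonaws.services.s3.model.UploadPartRequest

// upload a single accumulated part and record its ETag for the completion request
val uploadResult = s3Client.uploadPart(
  new UploadPartRequest()
    .withBucketName("docs")
    .withKey("my-filekey")
    .withUploadId(initResponse.getUploadId)
    .withPartNumber(partNumber) // parts are numbered 1..10000; the attempt above never set this
    .withPartSize(partBytes.length)
    .withInputStream(new ByteArrayInputStream(partBytes)))
partETags.add(uploadResult.getPartETag)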

In this example, Buffer is basically just a container around a ByteArray, and I'm not really doing anything special with it here. I replaced it with a plain byte array to make sure it would still work, and it did.

package server

import com.google.common.collect.HashMultimap
import io.minio.MinioClient
import io.minio.messages.Part
import io.vertx.core.buffer.Buffer
import io.vertx.core.streams.ReadStream

import scala.collection.mutable.ListBuffer

// Extend MinioClient so we can reach its protected low-level multipart methods
// (createMultipartUpload, uploadPart, completeMultipartUpload, abortMultipartUpload).
class CustomMinioClient(client: MinioClient) extends MinioClient(client) {
  def putReadStream(bucket: String = "my-bucket",
                    objectName: String,
                    region: String = "us-east-1",
                    data: ReadStream[Buffer],
                    objectSize: Long,
                    contentType: String = "application/octet-stream"
                   ) = {
    val headers: HashMultimap[String, String] = HashMultimap.create()
    headers.put("Content-Type", contentType)
    var uploadId: String = null

    try {
      val parts = new ListBuffer[Part]()
      val createResponse = createMultipartUpload(bucket, region, objectName, headers, null)
      uploadId = createResponse.result.uploadId()

      var partNumber = 1
      var uploadedSize = 0L // Long, so objects larger than 2 GiB don't overflow the counter

      // a buffer used to accumulate bytes from the incoming stream until there are enough to make an `uploadPart` request
      var partBuffer = Buffer.buffer()

      // S3's minimum part size is 5 MiB, except for the last part.
      // You should probably implement your own logic for deciding how big to make
      // each part, based on the total object size, to avoid unnecessary calls to S3 for small parts.
      val minPartSize = 5 * 1024 * 1024

      data.handler { buffer =>

        partBuffer.appendBuffer(buffer)

        val isMinPartSize = partBuffer.length >= minPartSize
        val isLastPart = uploadedSize + partBuffer.length == objectSize

        if (isMinPartSize || isLastPart) {

          val partResponse = uploadPart(
            bucket,
            region,
            objectName,
            partBuffer.getBytes,
            partBuffer.length,
            uploadId,
            partNumber,
            null,
            null
          )

          parts.addOne(new Part(partNumber, partResponse.etag))
          uploadedSize += partBuffer.length
          partNumber += 1

          // empty the part buffer since we have already uploaded it
          partBuffer = Buffer.buffer()
        }
      }


      data.endHandler { _ =>
        completeMultipartUpload(bucket, region, objectName, uploadId, parts.toArray, null, null)
      }

      data.exceptionHandler { exception =>
        // should also probably abort the upload here
        println("Handler caught exception in custom putObject: " + exception)
      }
    } catch {
      // and abort it here as well...
      case e: Exception =>
        println("Exception thrown in custom `putObject`: " + e)
        abortMultipartUpload(
          bucket,
          region,
          objectName,
          uploadId,
          null,
          null
        )
    }
  }
}

All of this is pretty easy to use.

First, set up the client:

  private val _minioClient = MinioClient.builder()
    .endpoint("http://localhost:9000")
    .credentials("my-username", "my-password")
    .build()

  private val myClient = new CustomMinioClient(_minioClient)

Then, wherever you receive the upload request:

      context.request.uploadHandler { upload =>
        myClient.putReadStream(objectName = upload.filename(), data = upload, objectSize = myFileSize)
        context.response().setStatusCode(200).end("done")
      }

The only problem with this implementation is that you need to know the file sizes of the request in advance.

However, this can easily be solved the way I did it, especially if you're using a web UI (a sketch follows the list below).

  • Before attempting to upload the files, send a request to the server containing a map of file names to file sizes.
  • That pre-request should generate a unique upload ID for the upload.
  • The server can store the group of filename -> file size mappings, using the upload ID as an index.
  • The server sends the upload ID back to the client.
  • The client sends the multipart upload request along with the upload ID.
  • The server pulls out the list of files and their sizes and uses it to call .putReadStream().
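As an illustration only, here is a minimal sketch of what those two routes might look like with Vert.x Web. The route paths, the JSON shape of the pre-request body, and the in-memory map are all assumptions for the sketch, not part of the original answer:

import java.util.UUID

import scala.collection.concurrent.TrieMap
import scala.jdk.CollectionConverters._

import io.vertx.core.json.JsonObject
import io.vertx.ext.web.Router

def registerUploadRoutes(router: Router, myClient: CustomMinioClient): Unit = {
  // upload ID -> (file name -> file size), shared between the two routes
  val pendingUploads = TrieMap[String, Map[String, Long]]()

  // hypothetical pre-request route: the client POSTs {"filename": size, ...}
  // and gets back a freshly generated upload ID
  router.post("/uploads/init").handler { context =>
    context.request().bodyHandler { body =>
      val sizes = body.toJsonObject
      val fileSizes = sizes.fieldNames().asScala
        .map(name => name -> sizes.getLong(name).longValue())
        .toMap
      val uploadId = UUID.randomUUID().toString
      pendingUploads.put(uploadId, fileSizes)
      context.response().end(new JsonObject().put("uploadId", uploadId).encode())
    }
  }

  // the actual multipart upload, keyed by the upload ID from the pre-request
  router.post("/uploads/:uploadId").handler { context =>
    val fileSizes = pendingUploads(context.pathParam("uploadId"))
    context.request().setExpectMultipart(true)
    context.request().uploadHandler { upload =>
      myClient.putReadStream(
        objectName = upload.filename(),
        data = upload,
        objectSize = fileSizes(upload.filename()))
    }
    context.request().endHandler { _ =>
      context.response().setStatusCode(200).end("done")
    }
  }
}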
