403 Forbidden when trying to upload a PDF as a blob to an S3 bucket with PUT

Problem description

What I'm trying to do

Upload a PDF file from a browser client without exposing any credentials or anything else unsavory. Based on this, I figured it could be done, but it doesn't seem to work for me.

The premise:

Get a presigned URL from S3

This part is simple and works for me. I just request a URL from S3 with this little JS block:

const s3Params = {
    Bucket: uploadBucket,
    Key: `${fileId}.pdf`,
    ContentType: 'application/pdf',
    Expires: 60,
    ACL: 'public-read',
}

let uploadUrl = s3.getSignedUrl('putObject', s3Params);

Upload the file to S3 using the presigned URL

This is the part that doesn't work, and I can't figure out why. This code basically sends a blob of data to the S3 bucket's presigned URL using a PUT request.

const result = await fetch(response.data.uploadURL, {
  method: 'put',
  body: blobData,
});

PUT or POST?

I found that using any kind of POST request resulted in a 400 Bad Request, so PUT it is.
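
For what it's worth, my understanding is that a URL signed with getSignedUrl('putObject', ...) is only ever valid for PUT; a browser POST upload would need the SDK's separate createPresignedPost API instead. A rough sketch of that alternative (the key name here is made up, the bucket is the one from my code below):

const aws = require('aws-sdk');
const s3 = new aws.S3();

// Sketch only: presigned POST uses its own API and returns form fields, not a bare URL.
const postParams = {
  Bucket: 'the-chumiest-bucket',
  Fields: { key: 'some-file-id.pdf' },               // hypothetical key
  Expires: 60,
  Conditions: [['content-length-range', 0, 10 * 1024 * 1024]], // optional size cap
};

s3.createPresignedPost(postParams, (err, data) => {
  if (err) throw err;
  // data.url is the form action; data.fields must be submitted as multipart form fields
  console.log(data.url, data.fields);
});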

What I've looked at

Content-Type (in my case it's application/pdf, i.e. blobData.type); it matches between the backend and the frontend.

The x-amz-acl header

More content types

Similar use cases. Looking at this one, it seems no headers need to be supplied with the PUT request at all, and the signed URL by itself is all that's required for the upload.

This one is a bit odd and I don't quite follow it. It looks like I might need to pass the file's length and type to the getSignedUrl call to S3 (see the sketch after this list).

Exposing my bucket to the public (no bueno)

Uploading a file to s3 using POST
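
If I understand the posts that mention it correctly, whatever parameters were passed to getSignedUrl are baked into the signature, so the PUT has to repeat them exactly and send nothing extra. A sketch of what I think that means, reusing the names from my code:

// Sketch only: ContentType is the only extra parameter signed here, so Content-Type is
// the only header the PUT needs to send, and it must match the signed value exactly.
const uploadUrl = s3.getSignedUrl('putObject', {
  Bucket: uploadBucket,
  Key: `${fileId}.pdf`,
  ContentType: 'application/pdf',
  Expires: 60,
});

const result = await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/pdf' }, // must match the signed ContentType
  body: blobData,
});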

Frontend (fileUploader.js, using Vue):

...

uploadFile: async function(e) {
      /* receives file from simple input element -> this.file */
      // get signed URL
      const response = await axios({
        method: 'get',
        url: API_GATEWAY_URL
      });

      console.log('upload file response:', response);

      let binary = atob(this.file.split(',')[1]);
      let array = [];

      for (let i = 0; i < binary.length; i++) {
        array.push(binary.charCodeAt(i));
      }

      let blobData = new Blob([new Uint8Array(array)], {type: 'application/pdf'});
      console.log('uploading to:', response.data.uploadURL);
      console.log('blob type sanity check:', blobData.type);

      const result = await fetch(response.data.uploadURL, {
        method: 'put',
        headers: {
          'Access-Control-Allow-Methods': '*',
          'Access-Control-Allow-Origin': '*',
          'x-amz-acl': 'public-read',
          'Content-Type': blobData.type
        },
        body: blobData,
      });

      console.log('PUT result:', result);

      this.uploadUrl = response.data.uploadURL.split('?')[0];
    }
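
(Side note on the blob handling above: this.file is a data URL string here, which is why it gets base64-decoded by hand. If I instead kept the raw File object from the input element around, say as this.rawFile, a name that doesn't exist in my code, it could be sent directly, since a File is already a Blob:)

// Sketch only: this.rawFile is a hypothetical name for the File kept from the <input>;
// a File can be passed to fetch() as the body as-is.
const blobData = this.rawFile;
// or, to force the MIME type explicitly:
// const blobData = new Blob([this.rawFile], { type: 'application/pdf' });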

Backend (fileReceiver.js):

'use strict';

const uuidv4 = require('uuid/v4');
const aws = require('aws-sdk');
const s3 = new aws.S3();

const uploadBucket = 'the-chumiest-bucket';
const fileKeyPrefix = 'path/to/where/the/file/should/live/';

const getUploadUrl = async () => {
  const fileId = uuidv4();
  const s3Params = {
    Bucket: uploadBucket,
    Key: `${fileId}.pdf`,
    ContentType: 'application/pdf',
    Expires: 60,
    ACL: 'public-read',
  }

  return new Promise((resolve, reject) => {
    let uploadUrl = s3.getSignedUrl('putObject', s3Params);
    resolve({
      'statusCode': 200,
      'isBase64Encoded': false,
      'headers': { 
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Headers': '*',
        'Access-Control-Allow-Credentials': true,
      },
      'body': JSON.stringify({
        'uploadURL': uploadUrl,
        'filename': `${fileId}.pdf`
      })
    });
  });
};

exports.handler = async (event, context) => {
  console.log('event:', event);
  const result = await getUploadUrl();
  console.log('result:', result);

  return result;
}
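
(Aside, not what's causing the 403: getSignedUrl returns the URL synchronously when called without a callback, so the Promise wrapper above shouldn't be necessary. Newer v2 versions of the SDK also expose getSignedUrlPromise; a sketch of what I believe the flatter equivalent would be:)

// Sketch only, assuming an aws-sdk v2 version that has getSignedUrlPromise.
const getUploadUrl = async () => {
  const fileId = uuidv4();
  const uploadUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: uploadBucket,
    Key: `${fileId}.pdf`,
    ContentType: 'application/pdf',
    Expires: 60,
    ACL: 'public-read',
  });

  return {
    statusCode: 200,
    isBase64Encoded: false,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Headers': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify({ uploadURL: uploadUrl, filename: `${fileId}.pdf` }),
  };
};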

Serverless config (serverless.yml):

service: ocr-space-service

provider:
  name: aws
  region: ca-central-1
  stage: ${opt:stage, 'dev'}
  timeout: 20

plugins:
  - serverless-plugin-existing-s3
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-plugin-include-dependencies

layers:
  spaceOcrLayer:
    package:
      artifact: spaceOcrLayer.zip
    allowedAccounts:
      - "*"

functions:
  fileReceiver:
    handler: src/node/fileReceiver.handler
    events:
      - http:
          path: /doc-parser/get-url
          method: get
          cors: true
  startStateMachine:
    handler: src/start_state_machine.lambda_handler
    role: 
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
    events:
      - existingS3:
          bucket: ingenio-documents
          events:
            - s3:ObjectCreated:*
          rules:
            - prefix: 
            - suffix: .pdf
  startOcrSpaceProcess:
    handler: src/start_ocr_space.lambda_handler
    role: 
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  parseOcrSpaceOutput:
    handler: src/parse_ocr_space_output.lambda_handler
    role: 
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  renamePdf:
    handler: src/rename_pdf.lambda_handler
    role: 
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  parseCorpSearchOutput:
    handler: src/node/pdfParser.handler
    role: 
    runtime: nodejs10.x
  saveFileToProcessed:
    handler: src/node/saveFileToProcessed.handler
    role: 
    runtime: nodejs10.x

stepFunctions:
  stateMachines:
    ocrSpaceStepFunc:
      name: ocrSpaceStepFunc
      definition:
        StartAt: StartOcrSpaceProcess
        States:
          StartOcrSpaceProcess:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-startOcrSpaceProcess"
            Next: IsDocCorpSearchChoice
            Catch:
            - ErrorEquals: ["HandledError"]
              Next: HandledErrorFallback
          IsDocCorpSearchChoice:
            Type: Choice
            Choices:
              - Variable: $.docIsCorpSearch
                NumericEquals: 1
                Next: ParseCorpSearchOutput
              - Variable: $.docIsCorpSearch
                NumericEquals: 0
                Next: ParseOcrSpaceOutput
          ParseCorpSearchOutput:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-parseCorpSearchOutput"
            Next: SaveFileToProcessed
            Catch:
              - ErrorEquals: ["SqsMessageError"]
                Next: CorpSearchSqsErrorFallback
              - ErrorEquals: ["DownloadFileError"]
                Next: CorpSearchDownloadFileErrorFallback
              - ErrorEquals: ["HandledError"]
                Next: HandledNodeErrorFallback
          SaveFileToProcessed:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-saveFileToProcessed"
            End: true
          ParseOcrSpaceOutput:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-parseOcrSpaceOutput"
            Next: RenamePdf
            Catch:
            - ErrorEquals: ["HandledError"]
              Next: HandledErrorFallback
          RenamePdf:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-renamePdf"
            End: true
            Catch:
              - ErrorEquals: ["HandledError"]
                Next: HandledErrorFallback
              - ErrorEquals: ["AccessDeniedException"]
                Next: AccessDeniedFallback
          AccessDeniedFallback:
            Type: Fail
            Cause: "Access was denied for copying an S3 object"
          HandledErrorFallback:
            Type: Fail
            Cause: "HandledError occurred"
          CorpSearchSqsErrorFallback:
            Type: Fail
            Cause: "SQS Message send action resulted in error"
          CorpSearchDownloadFileErrorFallback:
            Type: Fail
            Cause: "Downloading file from S3 resulted in error"
          HandledNodeErrorFallback:
            Type: Fail
            Cause: "HandledError occurred"

Error:

403 Forbidden

PUT response:

Response {type: "cors", url: "https://{bucket-name}.s3.{region-id}.amazonaw...nedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read", redirected: false, status: 403, ok: false, ...}
  body: (...)
  bodyUsed: false
  headers: Headers {}
  ok: false
  redirected: false
  status: 403
  statusText: "Forbidden"
  type: "cors"
  url: "https://{bucket-name}.s3.{region-id}.amazonaws.com/actionID.pdf?Content-Type=application%2Fpdf&X-Amz-Algorithm=SHA256&X-Amz-Credential=CREDZ-&X-Amz-Date=20190621T192558Z&X-Amz-Expires=900&X-Amz-Security-Token={token}&X-Amz-SignedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read"
  __proto__: Response

What I'm thinking

I think the parameters I'm providing to the getSignedUrl call in the AWS S3 SDK are incorrect, although they follow the structure suggested by the AWS docs (explained here). Beyond that, I really don't know why my request is being rejected. I've even tried exposing my bucket fully to the public and it still didn't work.

Edit

#1:

After reading this post, I tried structuring my PUT request like this:

      let authFromGet = response.config.headers.Authorization;      

      const putHeaders = {
        'Authorization': authFromGet,
        'Content-Type': blobData,
        'Expect': '100-continue',
      };

      ...

      const result = await fetch(response.data.uploadURL, {
        method: 'put',
        headers: putHeaders,
        body: blobData,
      });

This resulted in a 400 Bad Request instead of the 403; different, but still wrong. Apparently putting any headers on the request at all is wrong.

Tags: node.js, amazon-web-services, amazon-s3, serverless-framework

Solution


Digging into this further, it's happening because you're trying to upload an object with a public ACL into a bucket that doesn't allow public objects.

  1. Optionally remove the public ACL statement, or...

  2. Make sure the bucket is set up so that it is

    • publicly visible, or
    • not subject to any other policy blocking public access (for example, do you have an account policy that prevents publicly visible objects while you are trying to upload an object with a public ACL?)

Basically, you can't upload objects with a public ACL into a bucket that has some restriction in place against it; you'll get the 403 error you're describing. HTH.
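
A minimal sketch of option 1, using the code from the question: drop the ACL from the signing parameters and drop the x-amz-acl header from the PUT, so the request no longer asks for anything public.

// Backend: sign without an ACL
const s3Params = {
  Bucket: uploadBucket,
  Key: `${fileId}.pdf`,
  ContentType: 'application/pdf',
  Expires: 60,
  // ACL: 'public-read'  <-- removed
};
const uploadUrl = s3.getSignedUrl('putObject', s3Params);

// Frontend: PUT with only the signed Content-Type, no x-amz-acl header
const result = await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/pdf' },
  body: blobData,
});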

