Files larger than 57 MB uploaded to Elastic Beanstalk get a 405 status - the error does not occur on localhost

Problem description

I am currently developing a web application built with Vue.js and Spring Boot as a multi-module Maven project. The web app is deployed on AWS Elastic Beanstalk, with an SQL database and an S3 bucket attached. I want to implement uploading video files (.mp4) to the S3 bucket, and I am facing the following problem:

The puzzling part is as follows:

@Service
public class AmazonClient {

    private AmazonS3 s3client;

    @Value("${aws.endpointUrl}")
    private String endpointUrl;
    @Value("${aws.s3.bucket}")
    private String bucketName;
    @Value("${aws.access_key_id}")
    private String accessKey;
    @Value("${aws.secret_access_key}")
    private String secretKey;

    @Value("${aws.s3.region}")
    private String region;

    @Autowired
    UserRepository userRepository;

    @PostConstruct
    private void initializeAmazon() {
        AWSCredentials credentials = new BasicAWSCredentials(this.accessKey, this.secretKey);
        // this.s3client = new AmazonS3Client(credentials);
        this.s3client = AmazonS3ClientBuilder.standard().withRegion(Regions.fromName(region))
                .withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
    }


    public String uploadFilePublicRead(MultipartFile multipartFile) {
        String fileUrl = "";
        try {
            File file = convertMultiPartToFile(multipartFile);
            
            String fileName = generateFileName(multipartFile);
            fileUrl = endpointUrl + "/" + bucketName + "/" + fileName;
            uploadFileTos3bucketPublicRead(fileName, file);
            file.delete();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return fileUrl;
    }

    public String uploadMultiPartFilePublicRead(MultipartFile multipartFile) {
        String fileUrl = "";
        String fileName= "";
        File file = null;
        try {
            file = convertMultiPartToFile(multipartFile);
            
            fileName = generateFileName(multipartFile);
            fileUrl = endpointUrl + "/" + bucketName + "/" + fileName;

           // uploadFileTos3bucketPublicRead(fileName, file);

            
        } catch (Exception e) {
            e.printStackTrace();
        }

        // file stays null if convertMultiPartToFile threw, so guard against that too
        if (file == null || !file.exists()) return "File does not exist";

        int maxUploadThreads = 5;

        TransferManager tm = TransferManagerBuilder
                .standard()
                .withS3Client(s3client)
                .withMultipartUploadThreshold((long) (5 * 1024 * 1024))
                .withExecutorFactory(() -> Executors.newFixedThreadPool(maxUploadThreads))
                .build();

        ObjectMetadata metadata = new ObjectMetadata();
        // file.length() already returns the size in bytes
        metadata.setContentLength(file.length());
    
        

        ProgressListener progressListener =
                progressEvent -> System.out.println("Transferred bytes: " + progressEvent.getBytesTransferred());
          
        PutObjectRequest request = new PutObjectRequest(this.bucketName, fileName, file)
        .withCannedAcl(CannedAccessControlList.PublicRead)
        .withMetadata(metadata);

      

        request.setGeneralProgressListener(progressListener);

        Upload upload = tm.upload(request);
        
        
        try {
            upload.waitForCompletion();
            System.out.println("Upload complete.");
        } catch (AmazonClientException e) {
            System.out.println("Error occurred while uploading file: AmazonClientException");
            e.printStackTrace();
        } catch (InterruptedException e) {
            System.out.println("Error occurred while uploading file: InterruptedException");
            e.printStackTrace();
        }

        try {
            file.delete();
        } catch (Exception e) {
            System.out.println("Error occurred while deleting file.");
            e.printStackTrace();
        }

        return fileUrl;
    }


    private File convertMultiPartToFile(MultipartFile file) throws IOException {
        // Stream the upload to a file in the temp directory instead of pulling the
        // whole body into memory with getBytes() - important for large videos
        File convFile = new File(System.getProperty("java.io.tmpdir"), file.getOriginalFilename());
        file.transferTo(convFile);
        return convFile;
    }

    private String generateFileName(MultipartFile multiPart) {
        return new Date().toString().replace(" ", "-").replace(":", "_") + "-"
                + multiPart.getOriginalFilename().replace(" ", "_");
    }

    private void uploadFileTos3bucketPublicRead(String fileName, File file) {
        s3client.putObject(
                new PutObjectRequest(bucketName, fileName, file).withCannedAcl(CannedAccessControlList.PublicRead));

    }

    private void uploadFileTos3bucket(String fileName, File file) {
        s3client.putObject(new PutObjectRequest(bucketName, fileName, file));
    }
}
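One small aside on the listing above: generateFileName builds the S3 key from Date.toString(), whose output is locale-dependent and contains characters that need replacing. A timestamp-plus-UUID variant avoids both issues; this is purely an illustrative sketch, not part of the original code:

```java
import java.time.Instant;
import java.util.UUID;

public class S3KeyGenerator {
    // Build an S3 object key from a millisecond timestamp plus a random UUID,
    // avoiding the locale-dependent output of Date.toString()
    public static String generateKey(String originalFilename) {
        String safeName = originalFilename.replaceAll("\\s+", "_");
        return Instant.now().toEpochMilli() + "-" + UUID.randomUUID() + "-" + safeName;
    }

    public static void main(String[] args) {
        System.out.println(generateKey("my holiday video.mp4"));
    }
}
```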

spring.servlet.multipart.max-file-size=5000MB
spring.servlet.multipart.max-request-size=5000MB
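Besides the multipart limits above, embedded Tomcat enforces its own maxSwallowSize (2 MB by default): when an aborted or rejected request body exceeds it, Tomcat drops the connection instead of draining it, which can surface as odd status codes behind a proxy. A hedged sketch of the extra property, assuming the default embedded Tomcat is in use:

```properties
# Assumption: embedded Tomcat; -1 disables the swallow limit entirely
server.tomcat.max-swallow-size=-1
```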

To summarize: I do not know why files larger than 57 MB are rejected, even though I changed client_max_body_size (and the timeout settings) in the NGINX configuration file.
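One thing worth checking: on Elastic Beanstalk, edits made directly to nginx.conf on the instance are overwritten on redeployment, so the setting has to be shipped with the application bundle. A minimal sketch, assuming the Amazon Linux 2 platform (the file name and size value are illustrative):

```nginx
# .platform/nginx/conf.d/client_max_body_size.conf
client_max_body_size 5000M;
```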

Also: I have already tried changing my upload process (instead of a single putObject) to upload the video in small parts using the AWS Transfer Manager.

I hope I have described the problem adequately. Please do not hesitate to ask further questions. Thanks in advance for your help!

(screenshot: nginx.conf)

(screenshot: Postman request headers)

Tags: spring-boot, nginx, amazon-s3, file-upload, amazon-elastic-beanstalk

Solution

