Backing up EFS to S3 using Data Pipeline

Problem Description

I am putting together a solution that backs up an EFS file system to S3; each time a backup runs, it should delete the previous backup. I am implementing this with Terraform: the Data Pipeline is created through a CloudFormation stack, and I also create two S3 buckets, one for the Data Pipeline's logs and one for the backups of my EFS volume. When I apply the Terraform code, everything succeeds except the Data Pipeline, whose stack fails with ROLLBACK_COMPLETE. This is the exact error:

ROLLBACK_COMPLETE: ["The following resource(s) failed to create: [DataPipelineEFSBackup]. . Rollback requested by user." "Pipeline Definition failed to validate because of following Errors: [{ObjectId = 'ShellCommandActivityObj', errors = [Not a valid S3 Path. It must be of the form s3://bucket/key]}, {ObjectId = 'EC2ResourceObj', errors = [Not a valid S3 Path. It must be of the form s3://bucket/key]}] and Warnings: []"]

I cannot figure out why this is happening. Below is the code that creates my S3 buckets and my CloudFormation stack, along with the part of the Data Pipeline script that produces the error. Any input would be appreciated.
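One detail that may be relevant: the validator is complaining about the form of a value, not its existence. Pipeline fields that take an S3 location must be a full s3://bucket/key URI, as the error message itself states, whereas the id attribute of an aws_s3_bucket resource resolves to the bare bucket name. A minimal sketch of the difference, using the buckets defined below (the locals block and the /logs key are illustrative only):

locals {
  # aws_s3_bucket.logs.id is only the bucket name, e.g. "my-pipeline-logs".
  # A pipeline field that expects an S3 location rejects this with
  # "Not a valid S3 Path. It must be of the form s3://bucket/key".
  bare_bucket_name = aws_s3_bucket.logs.id

  # Prefixing the scheme and adding a key gives the form the validator accepts.
  valid_s3_log_uri = "s3://${aws_s3_bucket.logs.id}/logs"
}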

S3 Buckets

resource "aws_s3_bucket" "backup" {


bucket        = var.s3_backup
  force_destroy = true

  versioning {
    enabled = true
  }

  lifecycle_rule {
    enabled = true
    prefix  = "efs"

    noncurrent_version_expiration {
      days = var.noncurrent_version_expiration_days
    }
  }
}

resource "aws_s3_bucket" "logs" {
  bucket        = var.s3_logs
  force_destroy = true
}
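As a side note, unrelated to the pipeline error: on version 4 and later of the Terraform AWS provider, the inline versioning and lifecycle_rule blocks above are deprecated in favour of standalone resources. A rough equivalent for the backup bucket, assuming a 4.x or newer provider (the inline form shown above still works on 3.x):

resource "aws_s3_bucket_versioning" "backup" {
  bucket = aws_s3_bucket.backup.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "backup" {
  bucket = aws_s3_bucket.backup.id

  rule {
    id     = "efs-noncurrent-expiration"
    status = "Enabled"

    # Apply only to the "efs" prefix, mirroring the inline rule above.
    filter {
      prefix = "efs"
    }

    noncurrent_version_expiration {
      noncurrent_days = var.noncurrent_version_expiration_days
    }
  }
}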

Data Pipeline

resource "aws_cloudformation_stack" "datapipeline" {


name          = "${var.name}-datapipeline-stack"
  template_body = file("scripts/templates/datapipeline.yml")

  parameters = {
    myInstanceType             = var.datapipeline_config["instance_type"]
    mySubnetId                 = aws_subnet.public.id
    mySecurityGroupId          = aws_security_group.datapipeline.id
    myS3LogBucket              = aws_s3_bucket.logs.id
    myS3BackupsBucket          = aws_s3_bucket.backup.id
    myEFSId                    = aws_efs_file_system.efs.id
    myEFSSource                = aws_efs_mount_target.efs.dns_name
    myTopicArn                 = aws_cloudformation_stack.sns.outputs["TopicArn"]
    myImageId                  = data.aws_ami.amazon_linux.id
    myDataPipelineResourceRole = aws_iam_instance_profile.resource_role.name
    myDataPipelineRole         = aws_iam_role.datapipeline_role.name
    myKeyPair                  = aws_key_pair.key_pair.key_name
    myPeriod                   = var.datapipeline_config["period"]
    myExecutionTimeout         = var.datapipeline_config["timeout"]
    Tag                        = var.name
    myRegion                   = var.region
  }
}
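If datapipeline.yml forwards these parameter values directly into pipeline fields that expect an S3 location (for example a log URI on the activity and resource objects), those fields must end up in s3://bucket/key form. One possible adjustment, shown only as a sketch, is to build the full URI on the Terraform side; whether the prefix belongs here or inside the template depends on how the template actually uses the parameter. Note also that the shell command further down already prepends s3:// to its destination argument, so the backups bucket may need to stay a bare name.

  # Inside aws_cloudformation_stack.datapipeline (illustrative only):
  parameters = {
    # ...other parameters as above...

    # Full s3://bucket/key URI for the pipeline's log location, assuming the
    # template does not already prepend "s3://" itself. The "logs" key is a
    # placeholder.
    myS3LogBucket = "s3://${aws_s3_bucket.logs.id}/logs"
  }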

Data Pipeline Script

    - Id: ShellCommandActivityObj
      Name: ShellCommandActivityObj
      Fields:
        - Key: type
          StringValue: ShellCommandActivity
        - Key: runsOn
          RefValue: EC2ResourceObj
        - Key: command
          StringValue: |
            source="$1"
            region="$2"
            destination="$3"
            # Mount the EFS file system over NFS if it is not already mounted.
            sudo yum -y install nfs-utils
            [[ -d /backup ]] || sudo mkdir /backup
            if ! mount -l -t nfs4 | grep -qF "$source"; then
              sudo mount -t nfs -o nfsvers=4.1 -o rsize=1048576 -o wsize=1048576 -o timeo=600 -o retrans=2 -o hard "$source" /backup
            fi
            # Sync the mounted file system to the backup bucket, removing objects
            # that no longer exist locally.
            sudo aws s3 sync --delete --exact-timestamps /backup/ "s3://$destination/"
            backup_status="$?"
            # aws s3 sync exits with 2 when one or more files were skipped;
            # treat that as success.
            if [ "$backup_status" -eq 2 ]; then
              backup_status="0"
            fi
            exit "$backup_status"

Tags: amazon-web-services, amazon-s3, terraform, amazon-data-pipeline
