Files not archived by Terraform before upload to GCP

Problem description

Despite using the `depends_on` directive, the zip does not appear to be created before Terraform tries to put it into the bucket. Judging from the pipeline output, the archive step is simply skipped before the file is uploaded to the bucket. Both source files (index.js and package.json) exist.

resource "google_storage_bucket" "cloud-functions" {
  project       = var.project-1-id
  name          = "${var.project-1-id}-cloud-functions"
  location      = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name       = "start_instance.zip"
  bucket     = google_storage_bucket.cloud-functions.name
  source     = "${path.module}/start_instance.zip"
  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }
}
Terraform has been successfully initialized!
 $ terraform apply -input=false "planfile"
 google_storage_bucket_object.stop_instance: Creating...
 google_storage_bucket_object.start_instance: Creating...
 Error: open ./start_instance.zip: no such file or directory
   on cloud_functions.tf line 41, in resource "google_storage_bucket_object" "start_instance":
   41: resource "google_storage_bucket_object" "start_instance" {

Logs:

 2020-11-18T13:02:56.796Z [DEBUG] plugin.terraform-provider-google_v3.40.0_x5: 2020/11/18 13:02:56 [WARN] Failed to read source file "./start_instance.zip". Cannot compute md5 hash for it.
 2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.stop_instance, but we are tolerating it because it is using the legacy plugin SDK.
     The following problems may be the cause of any confusing errors from downstream operations:
       - .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)
 2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.start_instance, but we are tolerating it because it is using the legacy plugin SDK.
     The following problems may be the cause of any confusing errors from downstream operations:
       - .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)

Tags: google-cloud-platform, terraform, bucket

Solution


I had exactly the same issue with a GitLab CI/CD pipeline. After some digging, I found in a related discussion that with this setup the plan and apply stages run in separate containers, and the archiving step is executed in the plan stage.

One workaround is to create a dummy trigger with a `null_resource` and force `archive_file` to depend on it, so that the archive is created during the apply stage.

resource "null_resource" "dummy_trigger" {
  triggers = {
    timestamp = timestamp()
  }
}

resource "google_storage_bucket" "cloud-functions" {
  project       = var.project-1-id
  name          = "${var.project-1-id}-cloud-functions"
  location      = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name       = "start_instance.zip"
  bucket     = google_storage_bucket.cloud-functions.name
  source     = "${path.module}/start_instance.zip"
  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }
  
  depends_on = [
    null_resource.dummy_trigger,
  ]
}
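
As a side note, a common way to avoid a hard-coded path plus an explicit `depends_on` is to reference the data source's `output_path` attribute, which gives Terraform an implicit dependency on the archive. A minimal sketch:

```hcl
# Referencing data.archive_file.start_instance.output_path creates an
# implicit dependency, so Terraform orders the archive before the upload.
resource "google_storage_bucket_object" "start_instance" {
  name   = "start_instance.zip"
  bucket = google_storage_bucket.cloud-functions.name
  source = data.archive_file.start_instance.output_path
}
```

Note that in a split plan/apply pipeline this alone does not fix the error, since data sources are still evaluated during plan; the `null_resource` trigger above remains necessary in that case.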
