terraform plan recreates resources on every run when using the Terraform Cloud backend

Problem Description

I'm running into an issue where terraform plan recreates resources that shouldn't need to be recreated on every run. This is a problem because some steps depend on those resources being available, and since they get recreated on every run, the script can't complete.

My setup is GitHub Actions, Linode LKE, and Terraform Cloud.

My main.tf file looks like this:

terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname      = "app.terraform.io"
    organization  = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }    
  }
}

provider "linode" {
}

provider "helm" {
  debug   = true
  kubernetes {
    config_path = "${local_file.kubeconfig.filename}"
  }
}

resource "linode_lke_cluster" "lke_cluster" {
    label       = "MY-LABEL-HERE"
    k8s_version = "1.21"
    region      = "us-central"

    pool {
        type  = "g6-standard-2"
        count = 3
    }
}

And my output.tf file:

resource "local_file" "kubeconfig" {
  depends_on   = [linode_lke_cluster.lke_cluster]
  filename     = "kube-config"
  # filename     = "${path.cwd}/kubeconfig"
  content      = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}

resource "helm_release" "ingress-nginx" {
  # depends_on   = [local_file.kubeconfig]
  depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
  name       = "ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
}

resource "null_resource" "custom" {
  depends_on   = [helm_release.ingress-nginx]
  # change trigger to run every time
  triggers = {
    build_number = "${timestamp()}"
  }

  # download kubectl
  provisioner "local-exec" {
    command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
  }

  # apply changes
  provisioner "local-exec" {
    command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
  }
}

In GitHub Actions I'm running the following steps:

jobs:
  init-terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: 'privatebeta-kubes'
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
      - name: Terraform Init
        run: terraform init
      - name: Terraform Format Check
        run: terraform fmt -check -v
      - name: List terraform state
        run: terraform state list
      - name: Terraform Plan
        run: terraform plan
        id: plan
        env:
          LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}

When I look at the results of terraform state list I can see my resources:

Run terraform state list
  terraform state list
  shell: /usr/bin/bash -e {0}
  env:
    TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom

But my terraform plan fails, and the problem seems to stem from the fact that these resources are trying to be recreated.

Run terraform plan
  terraform plan
  shell: /usr/bin/bash -e {0}
  env:
    TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
    LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...
Waiting for the plan to start...

Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│ 
│   with helm_release.ingress-nginx,
│   on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│    8: resource "helm_release" "ingress-nginx" {

Is there a way to tell terraform that it doesn't need to recreate these resources?

Tags: kubernetes, terraform, kubernetes-helm, github-actions, terraform-cloud

Solution


Regarding the actual error shown, Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory ... this is referring to your output file ... I found this that may help you with that specific error: https://github.com/hashicorp/terraform-provider-helm/issues/418
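
One common workaround for that class of error is to configure the helm provider from the cluster's kubeconfig attribute instead of a file on disk, so the plan doesn't depend on a kube-config file existing on a fresh CI runner (or on Terraform Cloud's remote workers). A minimal sketch, replacing the existing provider "helm" block, and assuming the decoded LKE kubeconfig contains a single cluster and a token-based user (the local name lke_kubeconfig is just for illustration):

locals {
  # Assumption: the LKE kubeconfig attribute is a base64-encoded standard kubeconfig
  lke_kubeconfig = yamldecode(base64decode(linode_lke_cluster.lke_cluster.kubeconfig))
}

provider "helm" {
  kubernetes {
    host                   = local.lke_kubeconfig.clusters[0].cluster.server
    token                  = local.lke_kubeconfig.users[0].user.token
    cluster_ca_certificate = base64decode(local.lke_kubeconfig.clusters[0].cluster["certificate-authority-data"])
  }
}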

Another thing seems odd to me. Why does your output.tf declare a "resource" rather than an "output"? Shouldn't your output.tf look something like this?

output "local_file_kubeconfig" {
  value = "reference.to.resource"
}
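
For example (purely an illustration of the syntax, with a hypothetical output name), an output referencing your kubeconfig resource could look like:

output "kubeconfig_path" {
  # Hypothetical output: exposes the path of the generated kubeconfig file
  value = local_file.kubeconfig.filename
}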

I can also see that your state file / backend configuration appears to be set up correctly.

I'd suggest logging in to your Terraform Cloud account to verify that the workspace does in fact exist as expected. It's the state file that tells terraform not to recreate the resources it manages.

If the resources already exist and terraform is trying to recreate them, that could indicate they were created before terraform was used, or perhaps that they were created in another Terraform Cloud workspace or plan.

Did you by any chance rename your backend workspace at some point in this plan? I'm referring to your main.tf file, the part that reads MY-WORKSPACE-HERE:

terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname      = "app.terraform.io"
    organization  = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }    
  }
}

Unfortunately I'm not a Kubernetes expert, so more help may be needed.
