Terraform - Import an Azure VM into the state file when using modules

Problem description

I am creating VMs with the script below that starts with "# Script to create VM". That script is called from a higher-level directory so that the VM is built through a module; the call looks like the code below that starts with "#Template..". The problem is that we are missing state for some virtual machines that were created during earlier runs. I tried importing the VM itself, but looking at the state file, it bears no resemblance to one created with the script at the bottom. Any help would be great.

#Template to call VM Script below
module "<virtual_machine_name>" {
  source                = "./vm"
  virtual_machine_name  = "<virtual_machine_name>"
  resource_group_name   = "<resource_group_name>"
  availability_set_name = "<availability_set_name>"
  virtual_machine_size  = "<virtual_machine_size>"
  subnet_name           = "<subnet_name>"
  private_ip            = "<private_ip>"

  # Optional:
  # production     = true                    # default is false
  # data_disk_name = ["<disk1>", "<disk2>"]
  # data_disk_size = ["50", "100"]           # size is in GB
}

# Script to create VM
data "azurerm_resource_group" "rgdata02" {
  name = "${var.resource_group_name}"
}

data "azurerm_subnet" "sndata02" {
  name                 = "${var.subnet_name}"
  resource_group_name  = "${var.core_resource_group_name}"
  virtual_network_name = "${var.virtual_network_name}"
}

data "azurerm_availability_set" "availsetdata02" {
  name                = "${var.availability_set_name}"
  resource_group_name = "${var.resource_group_name}"
}

data "azurerm_backup_policy_vm" "bkpoldata02" {
  name                = "${var.backup_policy_name}"
  recovery_vault_name = "${var.recovery_services_vault_name}"
  resource_group_name = "${var.core_resource_group_name}"
}

data "azurerm_log_analytics_workspace" "law02" {
  name                = "${var.log_analytics_workspace_name}"
  resource_group_name = "${var.core_resource_group_name}"
}
#===================================================================
# Create NIC
#===================================================================
resource "azurerm_network_interface" "vmnic02" {
  name                = "nic${var.virtual_machine_name}"
  location            = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name = "${var.resource_group_name}"

  ip_configuration {
    name                          = "ipcnfg${var.virtual_machine_name}"
    subnet_id                     = "${data.azurerm_subnet.sndata02.id}"
    private_ip_address_allocation = "Static"
    private_ip_address            = "${var.private_ip}"
  }
}
#===================================================================
# Create VM with Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm02" {
  count = var.avail_set != "" ? 1 : 0
  depends_on            = [azurerm_network_interface.vmnic02]
  name                  = "${var.virtual_machine_name}"
  location              = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = [azurerm_network_interface.vmnic02.id]
  vm_size               = "${var.virtual_machine_size}"
  availability_set_id   = "${data.azurerm_availability_set.availsetdata02.id}"
  tags                  = var.tags

  # This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
  # NOTE: This may not be optimal in all cases.
  delete_os_disk_on_termination = true

  os_profile {
    computer_name  = "${var.virtual_machine_name}"
    admin_username = "__VMUSER__"
    admin_password = "__VMPWD__"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  storage_image_reference {
    id = "${var.image_id}"
  }

  storage_os_disk {
    name              = "${var.virtual_machine_name}osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
    os_type           = "Linux"     
  }

  boot_diagnostics {
    enabled = true
    storage_uri = "${var.boot_diagnostics_uri}"
  } 
}
#===================================================================
# Create VM without Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm03" {
  count = var.avail_set == "" ? 1 : 0
  depends_on            = [azurerm_network_interface.vmnic02]
  name                  = "${var.virtual_machine_name}"
  location              = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = [azurerm_network_interface.vmnic02.id]
  vm_size               = "${var.virtual_machine_size}"
  # availability_set_id   = "${data.azurerm_availability_set.availsetdata02.id}"
  tags                  = var.tags

  # This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
  # NOTE: This may not be optimal in all cases.
  delete_os_disk_on_termination = true

  os_profile {
    computer_name  = "${var.virtual_machine_name}"
    admin_username = "__VMUSER__"
    admin_password = "__VMPWD__"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  storage_image_reference {
    id = "${var.image_id}"
  }

  storage_os_disk {
    name              = "${var.virtual_machine_name}osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
    os_type           = "Linux"     
  }

  boot_diagnostics {
    enabled = true
    storage_uri = "${var.boot_diagnostics_uri}"
  } 
}
#===================================================================
# Set Monitoring and Log Analytics Workspace
#===================================================================
resource "azurerm_virtual_machine_extension" "oms_mma02" {
  count = var.bootstrap ? 1 : 0
  name                       = "${var.virtual_machine_name}-OMSExtension"
  # vm02 is created with count, so it must be referenced by index
  virtual_machine_id         = "${azurerm_virtual_machine.vm02[0].id}"
  publisher                  = "Microsoft.EnterpriseCloud.Monitoring"
  type                       = "OmsAgentForLinux"
  type_handler_version       = "1.8"
  auto_upgrade_minor_version = true

  settings = <<-SETTINGS
    {
      "workspaceId" : "${data.azurerm_log_analytics_workspace.law02.workspace_id}"
    }
  SETTINGS

  protected_settings = <<-PROTECTED_SETTINGS
    {
      "workspaceKey" : "${data.azurerm_log_analytics_workspace.law02.primary_shared_key}"
    }
  PROTECTED_SETTINGS
}
#===================================================================
# Associate VM to Backup Policy
#===================================================================
resource "azurerm_backup_protected_vm" "vm02" {
  count = var.bootstrap ? 1 : 0
  resource_group_name = "${var.core_resource_group_name}"
  recovery_vault_name = "${var.recovery_services_vault_name}"
  source_vm_id        = "${azurerm_virtual_machine.vm02[0].id}"
  backup_policy_id    = "${data.azurerm_backup_policy_vm.bkpoldata02.id}"
}

Tags: azure, import, virtual-machine, state, terraform

Solution


As far as I understand, you are not yet clear about what Terraform import does, so let me explain what it means.

  1. When you want to import pre-existing resources, you first need to configure those resources in your Terraform files so that the configuration matches how the existing resources are set up. All of those resources are then imported into the state file.

  2. Another caveat, at least currently, is that only one resource can be imported into the state file at a time (see the sketch just below this list).
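
For instance, a minimal sketch of importing a single Azure VM into a module-qualified address could look like the following; the module name, resource name, subscription ID, and resource group here are placeholders rather than values taken from your configuration:

# Import one existing VM into the address Terraform expects inside the module
terraform import 'module.<module_name>.azurerm_virtual_machine.<resource_name>' \
  "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/virtualMachines/<vm_name>"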

When you want to import resources into a module, I will assume the folder structure looks like this:

testingimportfolder
    └── main.tf
    └── terraform.tfstate
    └── terraform.tfstate.backup
    └───module
          └── main.tf

The main.tf file in the testingimportfolder directory sets up the module block like this:

module "importlab" {
    source = "./module"
    ...
}
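
The import commands then target those module-qualified addresses, one resource per command. A rough sketch, assuming the resource names shown in the state list below and placeholder Azure resource IDs:

terraform import module.importlab.azurerm_resource_group.rg \
  "/subscriptions/<subscription_id>/resourceGroups/<rg_name>"
terraform import module.importlab.azurerm_virtual_network.vnet \
  "/subscriptions/<subscription_id>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<vnet_name>"
terraform import module.importlab.azurerm_network_security_group.nsg \
  "/subscriptions/<subscription_id>/resourceGroups/<rg_name>/providers/Microsoft.Network/networkSecurityGroups/<nsg_name>"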

After you have imported all the resources into the state file, the output of the command terraform state list looks like this:

module.importlab.azurerm_network_security_group.nsg
module.importlab.azurerm_resource_group.rg
module.importlab.azurerm_virtual_network.vnet

All the resource names take the form module.module_name.azurerm_xxxx.resource_name. If you use a module inside another module, I will assume the folder structure looks like this:

importmodules
├── main.tf
├── modules
│   └── vm
│       ├── main.tf
│       └── module
│           └── main.tf

And the file importmodules/modules/vm/main.tf looks like this:

module "azurevm" {
        source = "./module"
        ...
}

Then, after you finish importing all the resources into the state file, the output of the command terraform state list looks like this:

module.vm.module.azurevm.azurerm_network_interface.example
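
Importing into a nested module works the same way; the address simply carries both module names. A sketch with a placeholder network interface ID:

terraform import module.vm.module.azurevm.azurerm_network_interface.example \
  "/subscriptions/<subscription_id>/resourceGroups/<rg_name>/providers/Microsoft.Network/networkInterfaces/<nic_name>"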

Yes, it looks just like what you have. When you import the resources into the modules one by one, the state file stores your existing resources under those module addresses. So you need to plan your code and modules carefully and clearly; otherwise you will confuse yourself.
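
As a sanity check (not part of the original answer, just a common follow-up step), running terraform plan after each import shows whether your configuration actually matches the imported resource:

terraform plan
# A plan that reports no changes means the configuration matches
# what was just imported into the state file.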

