
Automating Azure Infrastructure with Terraform and Azure DevOps

Daniel Makhoba Emmanuel


The Azure ecosystem offers a wide range of services at varying price points, from affordable to expensive. As a DevOps/Cloud engineer, your responsibility is to provision and configure these services properly, not just from an operational standpoint but also with regard to their operational expenses.

Managing these resources becomes more challenging when working in teams. Without a workflow in place to monitor and validate resources before and after provisioning, teams run into a range of issues such as conflicts during provisioning, increased operational expenses, ineffective asset management, and resource wastage.

Azure DevOps addresses these issues by integrating infrastructure-as-code tools (such as Terraform and Pulumi), adding pre-approval gates to the CI/CD pipeline, providing centralized repositories to track changes, and offering Kanban boards to ensure the right resources are provisioned.

In this article, you will learn how to improve your organization’s Azure Infrastructure provisioning and monitoring using Terraform and Azure DevOps.

Prerequisites

To follow this article, you need to know and set up a few prerequisites:

How Azure DevOps integrates with Infrastructure as Code (IaC)

Azure DevOps can integrate with various cloud-agnostic Infrastructure as Code (IaC) tools like Terraform, Pulumi, and Ansible, as well as vendor-specific tools such as AWS CloudFormation, Azure Bicep, and Azure Resource Manager (ARM).

To enable IaC support, you need to install the corresponding IaC extension. This can be done from the marketplace in Azure DevOps, found under "Organization Settings >> Extensions >> Browse Marketplace".
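If you prefer the command line, the Terraform extension used later in this article (published by Microsoft DevLabs) can also be installed with the Azure DevOps CLI. This is a sketch, assuming the azure-devops extension for az is already installed; replace the organization URL with your own:

az devops extension install \
  --publisher-id ms-devlabs \
  --extension-id custom-terraform-tasks \
  --organization https://dev.azure.com/<your-org>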

Creating an Azure Service Principal

A service principal is an identity created for an application or service to access specific Azure resources. Think of your Azure account credentials as your house keys. Using a service principal is like cutting a specific key for a cleaning service: it grants access to specific areas (resources) but not your entire house (all Azure resources). This offers numerous benefits, such as:

  • Automation of tasks
  • Improved security through the principle of least privilege (PoLP)
  • Creation of an audit trail
  • Transferability of credentials between users

It’s generally good practice to create a service principal when performing these types of tasks.

Setting up a Service Principal for Terraform

Service principals can be set up through various methods; in this article, you will authenticate your service principal using client secrets. To learn about the other methods, refer to this documentation.

First, create a folder in VS Code to house your configuration files. Then, open a new terminal and set it to "Git Bash".

Secondly, you'll have to sign in to your personal account. This is only temporary; it's required to create the service principal. Do this by running:

az login

This command will open a browser page, where you'll be prompted to sign in using your Microsoft account associated with Azure.

After successfully logging in through the browser, your VS Code terminal will automatically display the details of your account in this format:

[
  {
    "cloudName": "AzureCloud",
    "id": "20000000-0000-0000-0000-000000000000",
    "isDefault": true,
    "name": "PAYG Subscription",
    "state": "Enabled",
    "tenantId": "10000000-0000-0000-0000-000000000000",
    "user": {
      "name": "user@example.com",
      "type": "user"
    }
  }
]

Since these details will be used later, create a separate file called "secrets.txt" to store them, along with all the other sensitive information you'll get during this process.

Now, before you can create the service principal, you'll need to run this command to fix an automatic path conversion error that occurs when using Git Bash on Windows with certain tools:

export MSYS_NO_PATHCONV=1

You can now create the service principal by running this command:

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/20000000-0000-0000-0000-000000000000"

Replace the subscription ID portion with the "id" value from your account details obtained earlier. Once the command finishes running, it will display your service principal credentials in the format below, which you'll use to log in:

{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "azure-cli-2017-06-05-10-41-15",
  "name": "http://azure-cli-2017-06-05-10-41-15",
  "password": "0000-0000-0000-0000-000000000000",
  "tenant": "00000000-0000-0000-0000-000000000000"
}

Now that the service principal has been created, simply log in using the provided credentials with this command:

az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID

Where:

  • "CLIENT_ID" refers to the "appId"
  • "CLIENT_SECRET" refers to the "password"
  • "TENANT_ID" refers to the "tenant""

Finally, the last step in setting up your service principal is to store these credentials by exporting them as environment variables, which Terraform uses to authenticate with Azure. This is done by running these commands individually in your terminal:

export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="12345678-0000-0000-0000-000000000000"
export ARM_TENANT_ID="10000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="20000000-0000-0000-0000-000000000000"

And with this final step, the service principal setup is complete. To view the service principal, you can navigate to the Azure Portal and open Microsoft Entra ID; under "App registrations", you'll find your service principal.
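You can also confirm it from the terminal (a quick check, using the appId from the earlier output):

# Confirm the service principal exists (replace with your appId)
az ad sp show --id 00000000-0000-0000-0000-000000000000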

Creating Infrastructure with Terraform and Azure DevOps

You'll be provisioning a simple virtual machine as the infrastructure in this article. The configuration is divided into a handful of files, described below.

  • provider.tf, configures the azurerm provider for Terraform. The "skip_provider_registration" setting is commented out because you already authenticated the service principal.

    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "=3.0.0"
        }
      }
    }
    
    # Configure the Microsoft Azure Provider
    provider "azurerm" {
      #skip_provider_registration = true # This is only required when the User, Service Principal, or Identity running Terraform lacks the permissions to register Azure Resource Providers.
      features {}
    }

  • main.tf, houses the main portion of the infrastructure, which consists of a virtual machine, virtual network, subnet, and network interface card.
resource "azurerm_resource_group" "example" {
  name     = "${var.prefix}-resources"
  location = var.location
}

resource "azurerm_virtual_network" "main" {
  name                = "${var.prefix}-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "main" {
  name                = "${var.prefix}-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "main" {
  name                  = "${var.prefix}-vm"
  location              = azurerm_resource_group.example.location
  resource_group_name   = azurerm_resource_group.example.name
  network_interface_ids = [azurerm_network_interface.main.id]
  vm_size               = "Standard_DS1_v2"

  # Uncomment this line to delete the OS disk automatically when deleting the VM
  # delete_os_disk_on_termination = true

  # Uncomment this line to delete the data disks automatically when deleting the VM
  # delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
  storage_os_disk {
    name              = "myosdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
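  # NOTE: a hard-coded password is fine for this demo; prefer SSH keys or a secret store in production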
    admin_password = "Password1234!"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  tags = {
    environment = "staging"
  }
}

  • variables.tf, declares the input variables used in main.tf.
variable "prefix" {
  type        = string
  description = "Prefix used to name the resources"
}

variable "location" {
  type        = string
  description = "Azure region to deploy the resources into"
}
  • terraform.tfvars, supplies values for those variables.
prefix   = "Test"
location = "West Europe"

Configuring Remote Backend
The configuration files created so far can provision infrastructure in Azure as-is. However, the state file that records what exists in the infrastructure is stored locally. This presents a risk: if the file is accidentally deleted, Terraform would have no way of knowing the current state of the infrastructure, potentially leading to errors or duplication of resources during provisioning.

As a best practice, it's recommended to create a remote backend. This process is relatively straightforward and involves creating a storage account where the Terraform state file will be stored.

Firstly, create a storage account (which needs a globally unique name) in a new resource group and configure its location and redundancy. The location should match the location in your configuration files.

Next, create a blob container called "tfstate" to hold the tfstate file for your infrastructure; if you'd rather script this than click through the portal, see the CLI sketch below. With the storage account set up, you will create the remaining two files needed.
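A rough Azure CLI equivalent of those portal steps (the resource group and storage account names are placeholders; storage account names must be globally unique, 3-24 lowercase alphanumeric characters):

# Resource group dedicated to holding Terraform state
az group create --name tfstate-rg --location westeurope

# Storage account for the state file
az storage account create \
  --name <yourstorageaccount> \
  --resource-group tfstate-rg \
  --location westeurope \
  --sku Standard_LRS

# Blob container the backend will write to
az storage container create \
  --name tfstate \
  --account-name <yourstorageaccount>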

  • backend.tf, configures the remote backend. The "key" is simply the name the state file will be stored under, while the remaining values are the details of the storage account you just created.
terraform {
  backend "azurerm" {
    resource_group_name  = "xxxxxxxxxxxxxxx" 
    storage_account_name = "xxxxxxxxxxxxxxx"                      
    container_name       = "tfstate"                      
    key                  = "prod.terraform.tfstate"        
  }
}
  • .gitignore, since secrets.txt contains sensitive information, you don't want it getting out into the repository when you push. To prevent this, simply add it to the .gitignore file.
secrets.txt
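It's also common, though optional for this walkthrough, to keep Terraform's local working files out of the repository:

# Terraform's local working directory and any local state files
.terraform/
*.tfstate
*.tfstate.backup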

And with this, the infrastructure is set up and ready to be pushed to Azure Repos and used by Azure Pipelines. If you don't know how to push this to Azure Repos, you can check out this article on setting up repositories in Azure DevOps.

Creating “Build” Pipeline using Azure Pipelines

After pushing your repository to Azure Repos, you can create the build pipeline by either clicking "Build" on the top right of Azure Repos or by clicking "New pipeline" in Azure Pipelines.

At this stage, you will be creating the YAML pipeline from scratch. To do so, you will create an "empty starter pipeline" and paste this YAML code into it.

trigger: 
- main
stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: TerraformTaskV4@4
      displayName: Tf init
      inputs:
        provider: 'azurerm'
        command: 'init'
        backendServiceArm: '{Your Azure Subscription}'
        backendAzureRmResourceGroupName: 'xxxxxxxxxxxxx'
        backendAzureRmStorageAccountName: 'xxxxxxxxxxxxx'
        backendAzureRmContainerName: 'tfstate'
        backendAzureRmKey: 'prod.terraform.tfstate'
    - task: TerraformTaskV4@4
      displayName: Tf Validate
      inputs:
        provider: 'azurerm'
        command: 'validate'
    - task: TerraformTaskV4@4
      displayName: Tf fmt
      inputs:
        provider: 'azurerm'
        command: 'custom'
        customCommand: 'fmt'
        outputTo: 'console'
        environmentServiceNameAzureRM: '{Your Azure Subscription}'
    - task: TerraformTaskV4@4
      displayName: Tf plan
      inputs:
        provider: 'azurerm'
        command: 'plan'
        commandOptions: '-out=$(Build.SourcesDirectory)/tfplanfile'
        environmentServiceNameAzureRM: '{Your Azure Subscription}'
    - task: ArchiveFiles@2
      inputs:
        rootFolderOrFile: '$(Build.SourcesDirectory)/'
        includeRootFolder: false
        archiveType: 'zip'
        archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
        replaceExistingArchive: true
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: '$(Build.BuildId)-build'
        publishLocation: 'Container'

In this code, the build pipeline automatically retrieves the source from Azure Repos and then performs the following Terraform commands: init, validate, fmt, and plan. The terraform plan command writes a plan file (tfplanfile) describing the resources this pipeline will provision.

It's important to note that the YAML file does not include a Terraform install step. Some hosted agents already come with Terraform pre-installed, and the ubuntu-latest image used here had it. However, it's considered best practice to install it explicitly, as you can't be certain which agents have it.
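If you'd rather install it explicitly and pin a version, the Microsoft DevLabs extension provides an installer task that can be slotted in before the init step (a sketch; set terraformVersion to whatever your team standardizes on):

    - task: TerraformInstaller@0
      displayName: Install Terraform
      inputs:
        terraformVersion: 'latest'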

Afterwards, the pipeline archives the files into '$(Build.ArtifactStagingDirectory)', a temporary location Azure DevOps provides for staging artifacts, and publishes them as a build artifact. Using the predefined pipeline variables eliminates the need to hard-code paths.

You can now proceed to click "Save and run" to initiate the pipeline build.

Creating “Release” Pipeline using Azure Pipelines

To create the release pipeline, you'll first configure the artifact to be available to the pipeline.

Afterwards, enable continuous deployment so that the release pipeline is triggered by each completed run of the "build" pipeline. To do this, select the lightning-bolt icon in the artifacts section, enable the trigger, and set it to the default branch.

Two stages will be created: "Deployment" and "Destroy". The Deployment stage applies the configuration files to provision the infrastructure, while the Destroy stage tears it down. The two stages are nearly identical, with only one command differentiating them.


Deployment stage

Firstly, the “Agent job” has to be configured to use “ubuntu-latest”, with “Azure Pipelines” set as the agent pool.

Next, add an “Extract files” task with the configuration below; it can be found by clicking the “+” and searching for “Extract files”.
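Since the screenshot of that configuration isn't reproduced here, this is a rough YAML equivalent of the task (the destination folder is an assumption, chosen to match the working directory the init task below expects):

steps:
- task: ExtractFiles@1
  displayName: Extract build artifact
  inputs:
    archiveFilePatterns: '**/*.zip'
    destinationFolder: '$(Build.ArtifactStagingDirectory)'
    cleanDestinationFolder: true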

Terraform is then installed onto the agent, using the same TerraformInstaller task sketched earlier.

A Terraform init task is then added and configured. Due to its length, the YAML is supplied here instead of a screenshot; it corresponds to the options offered in the task prompt:

steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV4@4
  displayName: 'Terraform : init'
  inputs:
    workingDirectory: '$(Build.ArtifactStagingDirectory)'
    backendServiceArm: '{Your Azure Subscription}'
    backendAzureRmResourceGroupName: 'Infrastructure-state'
    backendAzureRmStorageAccountName: danielsinfrastorage
    backendAzureRmContainerName: tfstate
    backendAzureRmKey: prod.terraform.tfstate

Closing out this stage, a Terraform apply task is then added.
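As with init, here is a YAML sketch of the apply step rather than a screenshot (the working directory and subscription placeholder simply mirror the init task above):

steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV4@4
  displayName: 'Terraform : apply'
  inputs:
    provider: 'azurerm'
    command: 'apply'
    workingDirectory: '$(Build.ArtifactStagingDirectory)'
    environmentServiceNameAzureRM: '{Your Azure Subscription}'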

And with that, the Deployment stage is complete.

Destroy stage

This stage is created by cloning the Deployment stage and changing the Terraform “apply” command to “destroy”, as seen below.
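A YAML sketch of the cloned task after that change (all other settings mirror the apply step):

steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV4@4
  displayName: 'Terraform : destroy'
  inputs:
    provider: 'azurerm'
    command: 'destroy'
    workingDirectory: '$(Build.ArtifactStagingDirectory)'
    environmentServiceNameAzureRM: '{Your Azure Subscription}'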

To prevent the resources from being destroyed immediately after the Deployment stage, a “Pre-deployment condition” is added to the Destroy stage, requiring approval from a designated approver before the stage runs.

And with this, you can now run the pipeline and successfully provision your infrastructure.

Conclusion

This article covers how to integrate Azure DevOps with Terraform to gain greater control over your organization's infrastructure. Hopefully, it encourages you to consider implementing Azure DevOps as a means to monitor and scale your cloud infrastructure.


