Terraforming from zero to pipelines as code with Azure DevOps

Are you ready to Terraform? Are you ready to deploy to Azure via Pipelines as code? Then you are going to enjoy this blog post!

I’ve blogged a lot recently about Azure DevOps and Terraform; both are very relevant topics when deploying IaC to Azure in a pipeline. In this blog post, I am going to bring it all under one “roof” – one blog to contain it all and get you “Terraforming from zero to pipelines as code with Azure DevOps”.

This blog post is part of the Azure Festive Calendar:- https://festivetechcalendar.com

What will this blog post contain?

  • What is Terraform?
  • From zero:- The Terraform commands and workflow
  • Time to get warmed up – it’s time to run a pipeline!
  • We’ve run a pipeline, now what?
    • Triggers
    • Variable Groups
    • Should I test what I’ve deployed?
  • Key takeaways
  • That CI/CD journey was fun, I hope you continue your journey!

What is Terraform?

Terraform enables you to safely and predictably create, change, and improve infrastructure

Terraform.io

A quick summary:-

  • A way to manage Azure
  • Easy to read and write
  • Declarative
  • Driven via the Azure API
  • Open source (it’s free!)
  • Disposable Environments
  • Lowers the potential for human errors while deploying and managing infrastructure

Terraform Terminology

Remember these four bullet points – a short example follows the list!

  • Providers represent a cloud provider or a local provider
  • Resources can be invoked to create/update infrastructure locally or in the cloud
  • State is a representation of the infrastructure created/updated by Terraform
  • Data Sources are “read-only” resources
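
A minimal configuration touching these concepts might look like this – a sketch, assuming the azurerm provider and a pre-existing resource group; the names example-rg and examplestorageacct are hypothetical. State is then produced automatically the first time you apply it:

provider "azurerm" {
  features {}
}

# Data source:- read an existing resource group ("read-only")
data "azurerm_resource_group" "existing" {
  name = "example-rg"
}

# Resource:- create a storage account inside that resource group
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = data.azurerm_resource_group.existing.name
  location                 = data.azurerm_resource_group.existing.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}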

There are five main commands within Terraform:-

  • Terraform Init:- Initialises a Terraform working directory
  • Terraform Plan:- Generates and shows an execution plan
  • Terraform Apply:- Builds or changes infrastructure
  • Terraform Output:- Reads an output from the state file
  • Terraform Destroy:- Destroys Terraform-managed infrastructure

The Terraform Journey

You write Terraform as you would any configuration language code: in an editor of your choice, storing any changes in a version-controlled repository, whether local or remote!

I highly recommend you store your configuration code in a remote repository!

So… as you make progress on your Terraform configuration, you will run several Terraform plans to confirm your syntax is correct and to iron out any syntax errors or misconfigurations you notice. Doing this ensures your Terraform configuration is coming together as expected.

The plan is looking good? Time to commit that change! Once you commit the change, you run Terraform apply, which will add, remove, or change the infrastructure defined in your Terraform configuration.

This core workflow is a continuous loop throughout any project; for the next change, addition, or removal you want to make, you follow the exact same process.
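
In terminal form, that core loop looks something like this:

# Initialise the working directory (first run, or after adding providers/modules)
terraform init

# Iterate:- preview the changes your configuration would make
terraform plan

# Happy with the plan? Commit the change, then apply it
terraform apply

# Disposable environment no longer needed?
terraform destroy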

This is what we call “Infrastructure as Code” (IaC) – in this blog post I am going to show how to write IaC and deploy it via CI/CD, including writing Azure Pipelines as code!

Time to get warmed up, let’s run a pipeline

What is Azure DevOps?

If you are already deploying resources into Azure, you have probably come across Azure DevOps: a hosted service by Microsoft that provides an end-to-end DevOps toolchain for developing and deploying software, including hosted CI/CD pipelines.

Initial requirements before you can begin deploying

There are some requirements to complete before we can deploy Terraform using Azure DevOps. These are:-

  • Where to store the Terraform state file?
  • Azure DevOps Project
  • Azure Service Principal
  • Sample Terraform code

Let’s have a look at each of these requirements; I will include an example of each and how to configure it.

Where to store the Terraform state file?

When deploying with Terraform, it must store a state file; this file is used by Terraform to map Azure resources to the configuration you want to deploy, keep track of metadata, and improve performance for larger Azure resource deployments.

In this deployment, I want to store the state file remotely in Azure; I will be storing my state file in a Storage Account container called:- tfstatedevops
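
In the Terraform configuration itself, that remote backend is declared like this (a sketch matching the names above; note the pipeline tasks later in this post pass their backend values in at init time, in which case an empty backend "azurerm" {} block in the configuration is enough):

terraform {
  backend "azurerm" {
    resource_group_name  = "tamopstf"
    storage_account_name = "tamopstf"
    container_name       = "tfstatedevops"
    key                  = "terraform.tfstate"
  }
}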

Let’s deploy the required storage container called tfstatedevops in Storage Account tamopstf inside Resource Group tamopstf.

#Create Resource Group
New-AzureRmResourceGroup -Name "tamopstf" -Location "eastus2"
 
#Create Storage Account
New-AzureRmStorageAccount -ResourceGroupName "tamopstf" -AccountName "tamopstf" -Location eastus2 -SkuName Standard_LRS
 
#Create Storage Container
New-AzureRmStorageContainer -ResourceGroupName "tamopstf" -AccountName "tamopstf" -ContainerName "tfstatedevops"
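
Note:- the above uses the older AzureRm PowerShell module; if you are using the newer Az module, the equivalent would look something like this (a sketch):

#Create Resource Group
New-AzResourceGroup -Name "tamopstf" -Location "eastus2"

#Create Storage Account
New-AzStorageAccount -ResourceGroupName "tamopstf" -Name "tamopstf" -Location "eastus2" -SkuName Standard_LRS

#Create Storage Container
New-AzRmStorageContainer -ResourceGroupName "tamopstf" -StorageAccountName "tamopstf" -Name "tfstatedevops"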

Azure DevOps Project

Deploying Terraform using Azure DevOps requires a project; in this blog I will create a new project.

This is already documented by Microsoft here; I recommend this guide to show you how to set up a DevOps Project similar to mine below.

The DevOps Project in my example will be called TamOpsTerraform as below

Azure Service Principal

A Service Principal (SPN) is considered a best practice for DevOps within your CI/CD pipeline. It is used as an identity to authenticate against your Azure subscription, allowing the pipeline to deploy the relevant Terraform code.

In this blog, I will show you how to create this manually (it can also be done via PowerShell / CLI, but in this example I want you to understand the initial setup).
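
For reference, the CLI route would look something like this (a sketch; the SPN name is hypothetical and the subscription ID is a placeholder):

# Create an SPN with Contributor access scoped to the subscription
az ad sp create-for-rbac --name "tamopstf-spn" --role Contributor --scopes "/subscriptions/<subscription-id>"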

To begin creation, within your newly created Azure DevOps Project – select Project Settings

Select Service Connections

Select Create Service Connection -> Azure Resource Manager -> Service Principal (Automatic)

For scope level I selected Subscription and then entered the details as below; for Resource Group I selected tamopstf, which I created earlier.

Once created you will see similar to below

You can select Manage Service Principal to review further

When creating it this way, I like to give it a relevant name so I can reference my SPN more easily within my Subscription. This is done within “Manage Service Principal”.

Settings -> Properties and change Name as below

Naming it also makes the SPN easier to reference if you want to give it further IAM control over your subscription; in this setup I also give the SPN “Contributor” access to my subscription.

Documented role assignment here by Microsoft

Azure Pipeline Breakdown

Azure pipelines as code are created using YAML syntax; the pipeline or pipelines that you create are versioned the same way as any code inside a Git repository. Making a change to an Azure pipeline? You can follow a pull-request process to ensure changes are verified and approved before being merged.

Pipelines as code – what are the basics that are needed?

  • Every pipeline that you create must have at least one job
  • A job is a step, or a series of steps, that run sequentially as a unit
  • Moving from jobs to stages; each pipeline may even contain multiple stages, with each stage containing multiple jobs!

Some examples of Pipelines as code:-

An example of a single-step pipeline:
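
This is a minimal sketch (the pool, job name and script step here are illustrative placeholders):

trigger: none

pool:
  vmImage: 'ubuntu-latest'

jobs:
  - job: single_job
    steps:
      - script: echo "Hello from a pipeline as code!"
        displayName: 'run a single step'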

And a multi-stage pipeline example:
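
Again a minimal sketch – two stages, each containing one job:

trigger: none

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: build
    jobs:
      - job: build
        steps:
          - script: echo "Building..."
            displayName: 'build'
  - stage: deploy
    dependsOn: [build]
    jobs:
      - job: deploy
        steps:
          - script: echo "Deploying..."
            displayName: 'deploy'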

I’ve got a pipeline created – how can it run?

Azure Pipelines can be run manually via the Azure DevOps portal; but we don’t want to continually do that!

Pipelines can be triggered! That sounds much better. Use a trigger to run your pipeline automatically; there are many triggers out there – let’s have a look at three triggers I use on a regular basis.

Pull Request:- Created a pull request to be merged into a branch? Time to run the pipeline – use this trigger.

Scheduled Triggers:- I want to run a pipeline at a specific time, no problem – create a schedule that can be used to run your pipeline

Pipeline Triggers:- One of my favourites! Trigger one pipeline after another! Add this trigger to run a pipeline after the successful completion of another
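
As a flavour of the syntax, a branch trigger and a scheduled trigger might look like this (sketches; the branch names and cron schedule are illustrative):

# Branch trigger:- run when commits land on develop
trigger:
  branches:
    include:
      - develop

# Scheduled trigger:- run every day at 03:00 UTC
schedules:
  - cron: '0 3 * * *'
    displayName: 'nightly run'
    branches:
      include:
        - main

A pipeline trigger example appears later in this post, when production triggers DR.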

Variable Groups

I really do like variable groups; they are awesome! Use a variable group to store values that you want to control and make available across multiple pipelines. You can even use variable groups to store secrets and other values that might need to be passed into a YAML pipeline.

Read more here about variable groups and their usage 

To use a variable from a variable group, you need to add a reference to the group in your YAML file:
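
For example, the pipelines later in this post reference their variable group like this:

variables:
  - group: azurefestivecalendar-develop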

Viewing this group inside the Azure DevOps Portal, we can see a reference to an environment is being used


Azure Pipelines

Hopefully you are still following this blog post! Enough of the theory – I’ve covered an intro to Terraform and some background on Azure DevOps; it’s now time to run some pipelines!

All code for the pipelines can be found here.

Time to take you from a basic pipeline to a more progressive set of pipelines that will allow you to deploy the develop, production & DR environments simultaneously!

See this blog post on how to set up a pipeline.

For my setup, I am going to be using two branches:-

develop:- to deploy the develop environment
main:- to deploy the production and DR environments

Here is an example pipeline (Run.Terraform.yaml) that can be used to deploy a develop environment; notice there are no triggers or schedules – it is a manually run pipeline.

Throughout my pipelines; I use Terraform tasks that can be installed here

The Terraform task enables running Terraform commands as part of Azure build pipelines, providing support for the following Terraform commands:-

  • init
  • validate
  • plan
  • apply
  • destroy

In my pipeline, I have three stages:

Validate:- Validates my Terraform code; if validation fails, the pipeline fails (consists of Terraform init & validate)

Plan:- Displays a Terraform plan for review

Apply:- If validation is successful, the pipeline moves on to deploying the Terraform code to create the required Azure resources (consists of Terraform plan & apply)

Throughout the pipeline, notice my references to the previously created Resource Group, Storage Account and container for the Terraform state file, along with the newly created SPN? (extract below)

backendServiceArm: 'tamopstf'
backendAzureRmResourceGroupName: 'tamopstfstates'
backendAzureRmStorageAccountName: 'tfstatedevops'
backendAzureRmContainerName: 'azurefestivecalendar'
backendAzureRmKey: 'terraformdev.tfstate'

Here is the manual pipeline:-

name: $(BuildDefinitionName)_$(date:yyyyMMdd)$(rev:.r)

variables:
  - group: azurefestivecalendar-develop
  
# Only run against develop
trigger: none

pool: linuxtamops

# Don't run against PRs
pr: none

stages:
  - stage: validate
    jobs:
    - job: validate
      continueOnError: false
      steps:
      - task: TerraformInstaller@0
        displayName: 'install'
        inputs:
          terraformVersion: '0.13.3'
      - task: TerraformTaskV1@0
        displayName: 'init'
        inputs:
          provider: 'azurerm'
          command: 'init'
          backendServiceArm: 'tamopstf'
          backendAzureRmResourceGroupName: 'tamopstfstates'
          backendAzureRmStorageAccountName: 'tfstatedevops'
          backendAzureRmContainerName: 'azurefestivecalendar'
          backendAzureRmKey: 'terraformdev.tfstate'
          workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'
      - task: TerraformTaskV1@0
        displayName: 'validate'
        inputs:
          provider: 'azurerm'
          command: 'validate'
          
  - stage: plan
    dependsOn: [validate]
    condition: succeeded('validate')
    jobs:
      - job: terraform_plan_develop
        steps:
              - checkout: self
              - task: TerraformInstaller@0
                displayName: 'install'
                inputs:
                  terraformVersion: '0.13.3'
              - task: TerraformTaskV1@0
                displayName: 'init'
                inputs:
                  provider: 'azurerm'
                  command: 'init'
                  backendServiceArm: 'tamopstf'
                  backendAzureRmResourceGroupName: 'tamopstfstates'
                  backendAzureRmStorageAccountName: 'tfstatedevops'
                  backendAzureRmContainerName: 'azurefestivecalendar'
                  backendAzureRmKey: 'terraformdev.tfstate'
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'
              - task: TerraformTaskV1@0
                displayName: 'plan'
                inputs:
                  provider: 'azurerm'
                  command: 'plan'
                  commandOptions: '-input=false -var-file="../vars/$(Environment)/$(Environment).tfvars"'
                  environmentServiceNameAzureRM: 'tamopstf'
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'

  - stage: apply
    dependsOn: [plan]
    condition: succeeded('plan')
    jobs:
      - job: terraform_apply_develop
        steps:
              - checkout: self
              - task: TerraformInstaller@0
                displayName: 'install'
                inputs:
                  terraformVersion: '0.13.3'
              - task: TerraformTaskV1@0
                displayName: 'init'
                inputs:
                  provider: 'azurerm'
                  command: 'init'
                  backendServiceArm: 'tamopstf'
                  backendAzureRmResourceGroupName: 'tamopstfstates'
                  backendAzureRmStorageAccountName: 'tfstatedevops'
                  backendAzureRmContainerName: 'azurefestivecalendar'
                  backendAzureRmKey: 'terraformdev.tfstate' 
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'
              - task: TerraformTaskV1@0
                displayName: 'plan'
                inputs:
                  provider: 'azurerm'
                  command: 'plan'
                  commandOptions: '-input=false -var-file="../vars/$(Environment)/$(Environment).tfvars"'
                  environmentServiceNameAzureRM: 'tamopstf'
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'
              - task: TerraformTaskV1@0
                displayName: 'apply'
                inputs:
                  provider: 'azurerm'
                  command: 'apply'
                  commandOptions: '-input=false -auto-approve -var-file="../vars/$(Environment)/$(Environment).tfvars"'
                  environmentServiceNameAzureRM: 'tamopstf'
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'

Once you configure & save the above pipeline, you will see it begin to run and can review each stage.

After a few minutes, the build pipeline will run through and, if all stages are successful, you will see something similar to below.

Reviewing the job, you will see a more thorough breakdown of the tasks, and we can view the Terraform plan output.

Awesome, you have now run a manual Azure DevOps pipeline!

What has been deployed?

Now that we have run a manual pipeline, let’s quickly look at the branching strategy I referenced above between the develop and main branches.

Branching Strategy

I am not going to cover the full branching strategy in this blog post; it would be a different blog altogether.

New to development and CI/CD? I do recommend looking at a feature branch strategy:-

Feature Branching Using Feature Flags
Image Reference:- https://launchdarkly.com/blog/feature-branching-using-feature-flags/

A good blog post to go into the Feature Branching strategy further

For this blog, I am going to be using two branches as mentioned:-

  • Develop Branch:- To deploy the develop environment
  • Main branch:- To deploy the production & DR environments

In theory:- I will write any changes or additions to develop; once merged into develop, a branch trigger will run the develop environment pipeline. Once the develop environment has completed successfully, another pipeline will run to deploy the production environment, and finally a third pipeline will run to create the DR environment. This will all be achieved using pipeline triggers!

Triggers

Use a trigger to run a pipeline automatically. Azure Pipelines supports quite a number of triggers; I recommend reading this post to view more trigger types and, depending on what you are looking to do, selecting the appropriate one.

Branch Trigger

Branch triggers are used to run a pipeline automatically once a branch has been updated. I will be using them:-

  • When a pull request has been approved into develop

Pipeline Trigger

Pipeline triggers fire whenever another pipeline completes successfully. Deploying an app? You could chain multiple pipelines together with pipeline triggers – now you are starting to get into the “CI/CD” world.

Awesome; so far I’ve covered a recommended branching strategy and the triggers that we will be using – now let’s look at Terraform and continue the CI/CD journey!

Moving into Develop

Prior to any changes going into develop, I want to ensure the Terraform syntax is valid. I will show how to create a branch policy that runs a CI pipeline to validate the Terraform code, along with a Terraform plan, during a pull request in Azure DevOps – and I will include the YAML CI pipeline.

Branch policies help teams protect their important branches of development. Policies enforce your team’s code quality and change management standards.

docs.microsoft.com

Depending on how you create and test your Terraform code, you will probably run this type of test locally; but running it during a pull request gives peace of mind to the reviewer(s) that the pull request has been successfully validated, along with a plan that can be reviewed.

In my validation pipeline, I have two stages:

Validate:- Validates my Terraform code; if validation fails, the pipeline fails (consists of Terraform init & validate)

Plan:- If validation is successful, it moves to the next stage of the pipeline: planning the Terraform code to output a Terraform plan that can be reviewed as part of the pull request (consists of Terraform plan)

The below YAML Pipeline will validate and plan your Terraform code:-

name: $(BuildDefinitionName)_$(date:yyyyMMdd)$(rev:.r)

variables:
  - group: azurefestivecalendar-develop

trigger: none

pool: linuxtamops

stages:
  - stage: validate
    jobs:
    - job: validate
      continueOnError: false
      steps:
      - task: TerraformInstaller@0
        displayName: 'install'
        inputs:
          terraformVersion: '0.13.4'
      - task: TerraformTaskV1@0
        displayName: 'init'
        inputs:
          provider: 'azurerm'
          command: 'init'
          backendServiceArm: 'tamopstf'
          backendAzureRmResourceGroupName: 'tamopstfstates'
          backendAzureRmStorageAccountName: 'tfstatedevops'
          backendAzureRmContainerName: 'azurefestivecalendar'
          backendAzureRmKey: 'terraformdev.tfstate'
          workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'
      - task: TerraformTaskV1@0
        displayName: 'validate'
        inputs:
          provider: 'azurerm'
          command: 'validate'
          
  - stage: plan
    dependsOn: [validate]
    condition: succeeded('validate')
    jobs:
      - job: terraform_plan_develop
        steps:
              - checkout: self
              - task: TerraformInstaller@0
                displayName: 'install'
                inputs:
                  terraformVersion: '0.13.4'
              - task: TerraformTaskV1@0
                displayName: 'init'
                inputs:
                  provider: 'azurerm'
                  command: 'init'
                  backendServiceArm: 'tamopstf'
                  backendAzureRmResourceGroupName: 'tamopstfstates'
                  backendAzureRmStorageAccountName: 'tfstatedevops'
                  backendAzureRmContainerName: 'azurefestivecalendar'
                  backendAzureRmKey: 'terraformdev.tfstate'
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'
              - task: TerraformTaskV1@0
                displayName: 'plan'
                inputs:
                  provider: 'azurerm'
                  command: 'plan'
                  commandOptions: '-input=false -var-file="../vars/$(Environment)/$(Environment).tfvars"'
                  environmentServiceNameAzureRM: 'tamopstf'
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform/'

Now you have a pipeline ready to be part of your branch policy; once the pipeline has been configured in a branch policy, it will run automatically as part of the pull request process.

Apply Branch Policy

In Azure DevOps select Repos -> Branches and you will see a screen similar to below with your branches available.

In my example, I mentioned that I will be applying the branch policy to Develop.

Select … (to the right of the branch) -> Branch Policies

We will be creating a Build Validation; this is used to “Validate code by pre-merging and building pull request changes”.

Add a build policy by selecting + on Build Validation.

Below is the build policy I added

  • Build pipeline:- Assign the pipeline that was created earlier in this blog post
  • Trigger:- Automatic
  • Policy requirement:- Required
  • Build expiration:- Immediately when Develop is updated
  • Display Name:- Accurate display name of the build validation

Test the Branch Policy

A branch policy has now been created along with a build pipeline to validate and plan your Terraform code.

Create a pull request to the Develop Branch

Reviewing the pull request you will see in the Overview section the CI Pipeline that was created

This pipeline will run automatically, and the pull request cannot be completed until the pipeline has run successfully.

Awesome! We have now configured a branch policy that will run a CI pipeline to validate and plan your Terraform code during a Pull Request.

We now have validation applied for develop; you can apply the same approach to the main branch also.

From Develop to main

I have all my pipelines set up prior to this blog post.

Creating and approving a pull request from develop to main will trigger the pipeline Azure-Festive-Calendar-Production, due to the trigger set in the pipeline:

trigger:
  batch: true 
  branches:
    include:
      - main

Within this pipeline, I’ve also added an approval stage, where you can review the plan and approve the terraform apply stage, providing the plan is accurate and as expected.

I’ve blogged how to set this up here.

Reviewing the pipeline – you can see an approval is waiting

Review the plan stage & confirm the changes are as expected & then approve!

You will also notice that there is an additional final stage on this pipeline:- test

Why test?

  • InSpec-Azure is a resource pack provided by Chef that uses the Azure REST API to allow you to write tests for resources that you have deployed in Microsoft Azure
  • These tests can be used to validate the Azure resources that were deployed via code, using Terraform or even Azure RM templates
  • InSpec is an open-source framework that is used for testing and auditing your infrastructure
  • It could even be used as a separate pipeline that runs on a schedule to test your Azure resources (a minimal test sketch follows below)
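
As a flavour, a minimal InSpec control might look like this (a sketch, assuming the inspec-azure resource pack and the resource group name used earlier in this post):

# Check that the resource group deployed earlier exists
describe azurerm_resource_groups do
  its('names') { should include 'tamopstf' }
end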

How I set this up is covered in a previous blog post here.

How to trigger from Production to DR?

As mentioned previously, the production & DR environments are deployed from the same branch, main. I only want DR to be deployed once the production pipeline has run successfully; I have achieved this using a pipeline resource, as below.

The source: is the pipeline that must complete successfully before this pipeline runs.

trigger: none 
resources:
  pipelines:
    - pipeline: Azure-Festive-Calendar-DR 
      source: Azure-Festive-Calendar-Production  
      trigger:
        branches:
          include:
            - main       

DR Pipeline as code is found here

The finish:- Multibranch pipeline!

Reviewing the resource groups in Azure, you will see the three that have been created, with a storage account in each.

The end of the journey, so far!

Key takeaways:-

  • Hopefully this is the beginning of your CI/CD deployment journey
  • Terraform is readable and user friendly
  • Use triggers for automation
  • Begin testing your code outside of your initial pipelines
  • InSpec testing is a great addition

Thank you for reading such a lengthy blog post – I hope it assists you with your CI/CD pipeline journey using Terraform!

Comments

  1. I am using Azure DevOps Pipelines with a GitHub repository and the only pipeline that is triggered is the azure-pipelines.yml in the root folder.

    1. Hey,

      Thanks for the comment – this post was based on Azure DevOps Repos & Pipelines – although, it should work for GitHub repo too.

      Can you link your repo? What are you trying to do?

      Thanks

      Thomas

      1. This is my repo tree and as you can see I have a pipelines folder that contains two pipelines: one that should be triggered when pushing to a feature/* branch and another when opening a pull request against the main branch.

        The funny part is that without the azure-pipelines.yml on the root folder, the Azure DevOps pipeline is showing an error message on UI saying that the project is missing that file and is not triggering anything at all.

        ├── README.md
        ├── azure-pipelines.yml
        ├── envs
        │   ├── dev
        │   │   └── dev.tfvars
        │   ├── prod
        │   │   └── prod.tfvars
        │   └── test
        │   └── test.tfvars
        ├── main.tf
        ├── pipelines
        │   ├── azure-pipeline-pullrequest.yml
        │   └── azure-pipelines-feature-branch.yml
        ├── provider.tf
        ├── terraform.tf
        └── variables.tf

      2. Sounds like you may need to create a pipeline in Azure DevOps Pipelines for what is inside your folder
        ├── pipelines
        │ ├── azure-pipeline-pullrequest.yml
        │ └── azure-pipelines-feature-branch.yml

        If these are actual pipelines and not a template for azure-pipelines.yaml

        By default, in Azure DevOps Pipelines the file will be azure-pipelines.yml; for the error mentioned, you need to edit the actual pipeline in Azure DevOps to point it at the new file location.

  2. I am building a multistage pipeline. I want to pass variables from the pipeline [variable groups – these are different for the test and dev environments]. How do I reference those variables in .tf, and how do I pass them from a release pipeline also?

  3. Hi Thomas, I need some help please. I have pretty much what you’ve outlined above, slightly different. Backend storage in a storage account etc.

    I upgraded the TF in the pipelines from 1.1.7 to 1.1.9. The Apply worked fine. However, all subsequent pipelines are failing, during the “init”, with the error “Backend configuration changed”. I haven’t changed the backend though. Only one thing happened. key1 expired and didn’t update the key vault for some reason. So I manually updated the key vault with the key1 secret. However that hasn’t fixed things.

    The TF error indicates that I should run one of 2 commands; terraform init -migrate-state, and/or terraform init -reconfigure.

    Trouble is, how do I run those commands against the DevOps repo? I assume I’ll probably need to add a new pipeline task to run it. But then what if those commands are interactive & need responses?

    Is there a way to connect to a DevOps repo in a command line on my local machine & run the terraform commands against that remote repo? Not sure if that’s even possible…

    Look forwards to your response
    BW
    Martin

    1. Hi Martin,

      You can run terraform locally to potentially resolve. Check the init task and run it locally to init into the terraform state file stored on your storage account.

      Let me know how it goes

      Thanks

  4. Yep – tried that – and a local “init” works perfectly fine. Yet within each DevOps pipeline it fails… I don’t understand what’s happening TBH.

    I can run a local fmt/validate/plan etc fine too…

    I know the contents of my tfstate are fine – they exist as per the last Apply that worked. I can do a state pull fine too.

    1. Have you tried either “init” commands?

      Potentially – terraform init -reconfigure?

      Can you upload your pipeline potentially, so I can have a quick look to GitHub?

      Thanks

      2. Potentially thinking the .terraform folder is cached, if you are using your own self-hosted agent rather than an MS-hosted one

      1. Hi Thomas. You’ve hit the nail on the head here. That’s exactly what it was! I’d been talking to “maxb” in a Hashicorp forum & he pointed at the same. It did turn out that there was a cached .terraform folder causing this. I didn’t know self-hosted agent could cache between pipeline runs. After finding that out I set “Clean=true” in the pipeline & that fixed it.

        One of my previous pipelines had finished with “Bash code exit 1”. I think this must have left a .terraform folder & perhaps corruption/incompleteness or whatever.

        Here’s that forum discussion.
        https://discuss.hashicorp.com/t/backend-configuration-changed-reconfigure-and-migrate-state-dont-fix/39511/17

        So the question I now have is, should we have “Clean = true” in all Terraform pipelines? …to stop the potential of this ever happening again. And if so, what setting:
        1. Sources
        2. Sources and output directory
        3. Sources directory
        4. All build directories

        I think it should true BTW, just not sure which setting.

        Thanks Thomas.

      2. Great news! Yeah the .terraform folder can have some interesting issues over time!

        I would probably clean all tbh, I try to treat self-hosted like MS hosted, download artifacts etc each time. Saves issues like this.

        Have you tried looking at hosting them possible as container driven or even on k8s? They spin up/down only when needed.

        Thanks

        Thomas

  5. Hi Thomas, I love your work on your GitHub man. really appreciate what you’re doing.
    Quick question: is it a safe practice to check var files (dev.tfvars, prod.tfvars) into the repo? Since they may contain sensitive info, I’m not sure whether to include them in the repo?

    1. Hi Gautam,

      Glad you are enjoying my content.

      Var files are fine to upload to github/source control. If you have a sensitive variable, you’d be referencing it as an Azure Key Vault secret and not plain text in a .tfvars.

      Only use .tfvars for non sensitive information

      Let me know if this helps

      Thanks

  6. Hi Thomas,
    Thanks for the useful explanation/example.

    One question I have is: what is the benefit of creating two stages (Apply and Plan) instead of one stage where you do all the work (the init, install and apply)?
    Now in the Plan stage you have to repeat the checkout, install and init again.
    Why not put it all together in one stage, which is simpler and faster?

    1. Hi Arjen,

      Great question! The decision to separate the stages into “Plan” and “Apply” is driven by a desire to optimise the CI/CD pipeline for enhanced visibility and control. (It also allows conditions – for example, if not on the main branch, the “Apply” stage does not run.)

      By breaking down the process into distinct stages, we gain several advantages:

      1. Isolation of Concerns: Each stage has a specific responsibility, making it easier to identify and address issues in the pipeline. If a problem occurs during the “Plan” stage, for instance, it can be addressed without affecting the subsequent “Apply” stage.
      2. Improved Debugging: Separating stages facilitates more granular debugging. If an issue arises during the “Apply” stage, you can review the “Plan” stage output to pinpoint where the problem originated.

      While it may seem like repeating certain tasks, such as checkout, install, and init, introduces redundancy, it ensures that each stage operates in an independent and reproducible environment. This separation enhances the overall reliability of the pipeline.

      Ultimately, the decision to structure the pipeline into multiple stages is a trade-off between simplicity and flexibility.

      Happy to discuss further, there really is some +/- to both – can depend on your scenario for example, which is better for that usecase

      Thanks

      Thomas
