Building and deploying to an AKS cluster using Terraform and Azure DevOps with Kubernetes and Helm providers

I have written a few blogs now on deploying Azure Kubernetes Service (AKS) in different scenarios, such as deploying AKS with Application Gateway Ingress. In this blog post I am going to be building and deploying to an AKS cluster using Terraform and Azure DevOps. As the title suggests, it also references the Kubernetes and Helm providers; we will be looking at how Terraform can be used to deploy to AKS as well, once the cluster itself is deployed. I am a huge fan of GitOps within AKS, but to test an application or a small environment, this approach can certainly be useful!

What will Terraform be deploying?

Initially Terraform will be used to deploy the AKS environment:

  • Virtual Network
  • Log Analytics
  • AKS cluster

The initial Terraform code for this can be found [HERE]. In this blog post I won't go deeper into the deployment of AKS itself, as I have covered it in various other blogs; feel free to check those out.

Azure DevOps Pipeline

As with most of my pipelines, I make heavy use of variables; you can review these [HERE].

The Azure DevOps pipeline consists of 4 stages:

  • terraform_plan: running with action: plan runs this stage and outputs a plan for the above Terraform
  • terraform_apply: running with action: apply runs this stage and applies the above Terraform
  • bootstrap: also triggered by action: apply, this stage runs the Terraform below using the Kubernetes and Helm providers
  • terraform_destroy: running with action: destroy destroys the environment

Full pipeline can be found [HERE]
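
The stage layout can be sketched as below. This is a minimal outline only, assuming a pipeline parameter named action; stage names match the list above, but the job contents and any template references are placeholders, so treat the full pipeline linked above as the reference:

```yaml
# Minimal outline of the stage layout; job contents are placeholders
parameters:
  - name: action
    displayName: Terraform action
    type: string
    default: plan
    values: [plan, apply, destroy]

stages:
  - stage: terraform_plan
    condition: eq('${{ parameters.action }}', 'plan')
    jobs:
      - job: plan        # terraform init + plan steps

  - stage: terraform_apply
    condition: eq('${{ parameters.action }}', 'apply')
    jobs:
      - job: apply       # terraform init + apply steps

  - stage: bootstrap
    dependsOn: terraform_apply
    condition: eq('${{ parameters.action }}', 'apply')
    jobs:
      - job: bootstrap   # kube context setup, then terraform apply for the Kubernetes/Helm config

  - stage: terraform_destroy
    condition: eq('${{ parameters.action }}', 'destroy')
    jobs:
      - job: destroy     # terraform destroy steps
```

Gating each stage on the action parameter keeps one pipeline definition covering plan, apply and destroy runs.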

Deploying to Kubernetes with Terraform using Kubernetes and Helm providers

Once this has been deployed, let's look at using Terraform with the Kubernetes and Helm providers to deploy an example namespace and a basic Redis Helm chart.

The folder structure for this Terraform consists of two files:

terraform-module-example
├── main.tf
└── providers.tf

Let's look at providers.tf; notice the reference to the helm and kubernetes providers, along with config_path:

provider "helm" {
  kubernetes {
    config_path = "/home/vsts/.kube/config"
  }
}

provider "kubernetes" {
  config_path    = "/home/vsts/.kube/config"
  config_context = "tamopsakstest-admin"
}

terraform {
  backend "azurerm" {
  }
}
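
One thing worth adding alongside the backend block (not shown above) is a required_providers block to pin where the helm and kubernetes providers come from. A minimal sketch; the version constraints here are illustrative, not what the original repo pins:

```hcl
terraform {
  # Pin provider sources (and optionally versions) for reproducible init
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0" # illustrative constraint
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0" # illustrative constraint
    }
  }
}
```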

The config_path points at the kubeconfig file on the build agent, which holds the Kubernetes context. I populate it by running some Azure CLI within my aks_cluster_config stage here.
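
That step can be sketched as an AzureCLI task like the below; the script lines come from the log output further down, but the service connection name and displayName are assumptions:

```yaml
# Sketch of the aks_cluster_config step; service connection name is an assumption
- task: AzureCLI@2
  displayName: Get AKS credentials
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      AKS_RG=tamopsakstest-rg
      AKS_NAME=tamopsakstest
      # Merges the cluster's admin context into /home/vsts/.kube/config on the agent
      az aks get-credentials -g $AKS_RG -n $AKS_NAME --admin
```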

Example output:

/usr/bin/az account set --subscription 04109105-f3ca-44ac-a3a7-66b4936112c3
/usr/bin/bash /home/vsts/work/_temp/azureclitaskscript1667379970075.sh
+ AKS_RG=tamopsakstest-rg
+ AKS_NAME=tamopsakstest
+ az aks get-credentials -g tamopsakstest-rg -n tamopsakstest --admin
WARNING: Merged "tamopsakstest-admin" as current context in /home/vsts/.kube/config

Great! Now that the context and config_paths are set up, it's time to deploy some example resources using these providers. I will be deploying an example Kubernetes namespace and Helm release as below:

resource "kubernetes_namespace" "test" {
  metadata {
    name = "test"
  }
}

resource "helm_release" "redis" {
  name = "redis"

  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
  namespace  = "test"
}
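
If you need to customise the chart, helm_release also supports set blocks for overriding chart values, and referencing the namespace resource gives Terraform an explicit ordering between the two. A variant of the above; the chart keys shown are Bitnami Redis chart values, picked for illustration:

```hcl
resource "helm_release" "redis" {
  name       = "redis"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"

  # Referencing the namespace resource creates an implicit dependency,
  # so the namespace is guaranteed to exist before the release installs
  namespace = kubernetes_namespace.test.metadata[0].name

  # Override chart values (keys from the Bitnami Redis chart)
  set {
    name  = "architecture"
    value = "standalone"
  }

  set {
    name  = "auth.enabled"
    value = "false"
  }
}
```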

Reviewing the apply stage, we can see it is planning to create these two resources (log output):

Terraform will perform the following actions:

  # helm_release.redis will be created
  + resource "helm_release" "redis" {
      + atomic                     = false
      + chart                      = "redis"
      + cleanup_on_fail            = false
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "redis"
      + namespace                  = "test"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "17.3.7"
    }

  # kubernetes_namespace.test will be created
  + resource "kubernetes_namespace" "test" {
      + id = (known after apply)

      + metadata {
          + name             = "test"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

Apply log output:

kubernetes_namespace.test: Creating...
kubernetes_namespace.test: Creation complete after 0s [id=test]
helm_release.redis: Creating...
helm_release.redis: Still creating... [10s elapsed]
helm_release.redis: Still creating... [20s elapsed]
helm_release.redis: Still creating... [30s elapsed]
helm_release.redis: Still creating... [40s elapsed]
helm_release.redis: Still creating... [50s elapsed]
helm_release.redis: Still creating... [1m0s elapsed]
helm_release.redis: Still creating... [1m10s elapsed]
helm_release.redis: Still creating... [1m20s elapsed]
helm_release.redis: Still creating... [1m30s elapsed]
helm_release.redis: Still creating... [1m40s elapsed]
helm_release.redis: Still creating... [1m50s elapsed]
helm_release.redis: Still creating... [2m0s elapsed]
helm_release.redis: Still creating... [2m10s elapsed]
helm_release.redis: Still creating... [2m20s elapsed]
helm_release.redis: Still creating... [2m30s elapsed]
helm_release.redis: Creation complete after 2m38s [id=redis]

Great! We have successfully deployed a Kubernetes environment in Azure, and used the Helm and Kubernetes Terraform providers to deploy an example namespace and Helm chart!

Finally, logging into the AKS cluster, we can see both the namespace and the redis Helm release have been deployed successfully.

Namespace:

redis helm release:
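
If you prefer the command line over screenshots, the same check can be done with kubectl and helm, assuming the admin context merged earlier:

```shell
# Verify the namespace exists
kubectl get namespace test

# Verify the Helm release and its workload pods
helm list --namespace test
kubectl get pods --namespace test
```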

Although this is just a proof of concept, depending on your environment I do recommend looking at GitOps for your Kubernetes cluster, especially when deploying at scale!

Thanks for reading another one of my blogs! As always, if you have any queries, do reach out!
