In this blog post, I am going to show how you can upload the contents of a folder to Azure Blob Storage using Terraform – this works great for keeping the contents of the folder in source control!
To achieve the file upload functionality, I will be using the fileset function within Terraform: https://www.terraform.io/language/functions/fileset

From the documentation: fileset enumerates a set of regular file names given a path and pattern. The path is automatically removed from the resulting set of file names and any result still containing path separators always returns forward slash (/) as the path separator for cross-system compatibility.
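As a quick illustration of fileset on its own (the folder and file names here are hypothetical), you could preview the matched paths with an output:

```hcl
# Hypothetical example: list every .txt file directly under ./file_uploads
output "matched_files" {
  value = fileset(path.module, "file_uploads/*.txt")
}

# The "*" pattern does not recurse into subfolders;
# use "file_uploads/**" to also match files in nested folders.
```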
The Terraform
The below Terraform will be used to create the prerequisite resources:
- Resource Group
- Storage Account
- Storage Container
resource "azurerm_resource_group" "tamopsrg" {
  name     = "tamopsdatarg"
  location = "uksouth"
}

resource "azurerm_storage_account" "tamopssa" {
  name                     = "tamopsdatasa"
  resource_group_name      = azurerm_resource_group.tamopsrg.name
  location                 = azurerm_resource_group.tamopsrg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "tamopssacontainer" {
  name                  = "tamopsdata"
  storage_account_name  = azurerm_storage_account.tamopssa.name
  container_access_type = "blob"
}
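The snippets above assume the azurerm provider is already configured; a minimal setup would look something like this:

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

# The azurerm provider requires an (optionally empty) features block
provider "azurerm" {
  features {}
}
```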
Let's look at the azurerm_storage_blob resource that will be used to upload the folder contents to Blob Storage. It uses a for_each loop combined with the fileset function to iterate over all the contents of a specific folder – awesome!
resource "azurerm_storage_blob" "tamopsblobs" {
  for_each = fileset(path.module, "file_uploads/*")

  name                   = trimprefix(each.key, "file_uploads/")
  storage_account_name   = azurerm_storage_account.tamopssa.name
  storage_container_name = azurerm_storage_container.tamopssacontainer.name
  type                   = "Block"
  source                 = each.key
}
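One subtlety worth noting: the blob name is derived by stripping the folder prefix from each.key, and trimprefix is the safe way to do it. Terraform's trim removes any of the listed *characters* from both ends of the string, not an exact leading substring, so it only appears to work when file names happen to start with a character outside that set. A small sketch (the file name apple.txt is a made-up example):

```hcl
# trimprefix removes an exact leading string:
#   trimprefix("file_uploads/apple.txt", "file_uploads/")  => "apple.txt"
#
# trim removes any of the characters f,i,l,e,_,u,p,o,a,d,s,/ from both
# ends, so "apple" is consumed as well:
#   trim("file_uploads/apple.txt", "file_uploads/")        => ".txt"
locals {
  blob_names = [
    for f in fileset(path.module, "file_uploads/*") : trimprefix(f, "file_uploads/")
  ]
}
```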
Let's look at the folder contents:

Reviewing the plan, we can see the section that is creating the storage blobs and uploading the files:
  # azurerm_storage_blob.tamopsblobs["file_uploads/test.txt"] will be created
  + resource "azurerm_storage_blob" "tamopsblobs" {
      + access_tier            = (known after apply)
      + content_type           = "application/octet-stream"
      + id                     = (known after apply)
      + metadata               = (known after apply)
      + name                   = "test.txt"
      + parallelism            = 8
      + size                   = 0
      + source                 = "file_uploads/test.txt"
      + storage_account_name   = "tamopsdatasa"
      + storage_container_name = "tamopsdata"
      + type                   = "Block"
      + url                    = (known after apply)
    }

  # azurerm_storage_blob.tamopsblobs["file_uploads/test2.txt"] will be created
  + resource "azurerm_storage_blob" "tamopsblobs" {
      + access_tier            = (known after apply)
      + content_type           = "application/octet-stream"
      + id                     = (known after apply)
      + metadata               = (known after apply)
      + name                   = "test2.txt"
      + parallelism            = 8
      + size                   = 0
      + source                 = "file_uploads/test2.txt"
      + storage_account_name   = "tamopsdatasa"
      + storage_container_name = "tamopsdata"
      + type                   = "Block"
      + url                    = (known after apply)
    }

  # azurerm_storage_blob.tamopsblobs["file_uploads/test3.txt"] will be created
  + resource "azurerm_storage_blob" "tamopsblobs" {
      + access_tier            = (known after apply)
      + content_type           = "application/octet-stream"
      + id                     = (known after apply)
      + metadata               = (known after apply)
      + name                   = "test3.txt"
      + parallelism            = 8
      + size                   = 0
      + source                 = "file_uploads/test3.txt"
      + storage_account_name   = "tamopsdatasa"
      + storage_container_name = "tamopsdata"
      + type                   = "Block"
      + url                    = (known after apply)
    }

  # azurerm_storage_container.tamopssacontainer will be created
  + resource "azurerm_storage_container" "tamopssacontainer" {
      + container_access_type   = "blob"
      + has_immutability_policy = (known after apply)
      + has_legal_hold          = (known after apply)
      + id                      = (known after apply)
      + metadata                = (known after apply)
      + name                    = "tamopsdata"
      + resource_manager_id     = (known after apply)
      + storage_account_name    = "tamopsdatasa"
    }
Reviewing the Azure Portal, we can see the files have been uploaded successfully.

I mentioned at the start of this blog post that this works great for keeping the contents of the folder in source control! I will now rename one file, delete another, and add a new one – let's review the updated Terraform plan:

  # azurerm_storage_blob.tamopsblobs["file_uploads/newfile.txt"] will be created
  + resource "azurerm_storage_blob" "tamopsblobs" {
      + access_tier            = (known after apply)
      + content_type           = "application/octet-stream"
      + id                     = (known after apply)
      + metadata               = (known after apply)
      + name                   = "newfile.txt"
      + parallelism            = 8
      + size                   = 0
      + source                 = "file_uploads/newfile.txt"
      + storage_account_name   = "tamopsdatasa"
      + storage_container_name = "tamopsdata"
      + type                   = "Block"
      + url                    = (known after apply)
    }

  # azurerm_storage_blob.tamopsblobs["file_uploads/test.txt"] will be destroyed
  - resource "azurerm_storage_blob" "tamopsblobs" {
      - access_tier            = "Hot" -> null
      - content_type           = "application/octet-stream" -> null
      - id                     = "https://tamopsdatasa.blob.core.windows.net/tamopsdata/test.txt" -> null
      - metadata               = {} -> null
      - name                   = "test.txt" -> null
      - parallelism            = 8 -> null
      - size                   = 0 -> null
      - source                 = "file_uploads/test.txt" -> null
      - storage_account_name   = "tamopsdatasa" -> null
      - storage_container_name = "tamopsdata" -> null
      - type                   = "Block" -> null
      - url                    = "https://tamopsdatasa.blob.core.windows.net/tamopsdata/test.txt" -> null
    }

  # azurerm_storage_blob.tamopsblobs["file_uploads/test2.txt"] will be destroyed
  - resource "azurerm_storage_blob" "tamopsblobs" {
      - access_tier            = "Hot" -> null
      - content_type           = "application/octet-stream" -> null
      - id                     = "https://tamopsdatasa.blob.core.windows.net/tamopsdata/test2.txt" -> null
      - metadata               = {} -> null
      - name                   = "test2.txt" -> null
      - parallelism            = 8 -> null
      - size                   = 0 -> null
      - source                 = "file_uploads/test2.txt" -> null
      - storage_account_name   = "tamopsdatasa" -> null
      - storage_container_name = "tamopsdata" -> null
      - type                   = "Block" -> null
      - url                    = "https://tamopsdatasa.blob.core.windows.net/tamopsdata/test2.txt" -> null
    }

  # azurerm_storage_blob.tamopsblobs["file_uploads/testrename.txt"] will be created
  + resource "azurerm_storage_blob" "tamopsblobs" {
      + access_tier            = (known after apply)
      + content_type           = "application/octet-stream"
      + id                     = (known after apply)
      + metadata               = (known after apply)
      + name                   = "testrename.txt"
      + parallelism            = 8
      + size                   = 0
      + source                 = "file_uploads/testrename.txt"
      + storage_account_name   = "tamopsdatasa"
      + storage_container_name = "tamopsdata"
      + type                   = "Block"
      + url                    = (known after apply)
    }
Plan: 2 to add, 0 to change, 2 to destroy.
- 2 files to be destroyed (one due to the file rename, the other due to the file removal)
- 2 files to be added (one is the renamed file, the other is the new file)
Reviewing now in Azure, the files have been changed:

You may be wondering: renaming, removing, and adding files works, but does it work if I change the content within a file? Yes – that can be achieved as well! Adding content_md5 to the resource below means Terraform will also check the MD5 sum of each file on every plan, awesome!
resource "azurerm_storage_blob" "tamopsblobs" {
  for_each = fileset(path.module, "file_uploads/*")

  name                   = trimprefix(each.key, "file_uploads/")
  storage_account_name   = azurerm_storage_account.tamopssa.name
  storage_container_name = azurerm_storage_container.tamopssacontainer.name
  type                   = "Block"
  content_md5            = filemd5(each.key)
  source                 = each.key
}
After modifying some content in a test file, I can see the plan is looking to replace the file because its MD5 hash has changed:
  -/+ resource "azurerm_storage_blob" "tamopsblobs" {
        ~ access_tier = "Hot" -> (known after apply)
        ~ content_md5 = "36c7bb2ea5a4564acaafeee4015f47f4" -> "098f6bcd4621d373cade4e832627b4f6" # forces replacement
        ~ id          = "https://tamopsdatasa.blob.core.windows.net/tamopsdata/test1.txt" -> (known after apply)
        ~ metadata    = {} -> (known after apply)
          name        = "test1.txt"
        ~ url         = "https://tamopsdatasa.blob.core.windows.net/tamopsdata/test1.txt" -> (known after apply)
          # (7 unchanged attributes hidden)
      }