Using Azure Blob storage to store data?
Over the lifetime of your data, its access requirements can vary: data may rarely be needed again after a few days, or it may expire a month after it was created.
Depending on how much data you keep in Azure Blob storage, this can become costly over time.
Azure has introduced Storage Lifecycle Management for Blob Storage; it offers rule-based policy creation for GPv2 and Blob Storage accounts.
What can a policy let you do?
Policies assist you with how storage is handled over a period of time, whether from a compliance perspective or for a cost benefit. Creating a policy lets you:
- Delete blobs at the end of their lifecycle
- Transition blobs to cooler storage tiers after a period of time (hot to cool, hot to archive, or cool to archive) for a cost benefit
- Delete blob snapshots (a minimal sketch follows this list)
- Create rules that run once per day at the Storage Account level
- Create rules scoped to prefix matches
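The snapshot clean-up action does not appear in the walkthrough below, so here is a minimal sketch of a rule that uses it, written as a Python dict following the lifecycle policy schema (the rule name and the 90-day threshold are illustrative, not from this post):

# Illustrative rule: delete blob snapshots 90 days after they were created.
snapshot_rule = {
    "enabled": True,
    "name": "SnapshotCleanup",  # hypothetical rule name
    "type": "Lifecycle",
    "definition": {
        "actions": {
            # Snapshot-level action, separate from baseBlob actions.
            "snapshot": {
                "delete": {"daysAfterCreationGreaterThan": 90}
            }
        },
        # Lifecycle rules apply to block blobs.
        "filters": {"blobTypes": ["blockBlob"]}
    }
}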
Let's create a policy
Policies can be created via the Portal, PowerShell, the CLI and REST APIs. In my example I will be using the Azure Portal, along with the JSON output from the policy created.
I have created a GPv2 Storage Account: blogstoragelifecycle
To begin creating a policy, select Lifecycle Management within the Storage Account

Select Add Rule

Now create your rule. In my example:
- Move blob to cool storage: 20 days after last modification
- Move blob to archive storage: 25 days after last modification
- Delete blob: 30 days after last modification
Note: any blob that is moved to Archive is subject to an Archive early deletion period of 180 days, and any blob that is moved to Cool is subject to a Cool early deletion period of 30 days. With the values above, a blob deleted 30 days after its last modification has only spent around 5 days in Archive, so it would incur a prorated early deletion charge for the remaining days.

Next, we will create the filter set(s) to which the newly created policy will apply – currently you can use up to 10 prefixes as filters; example below

Review the configuration and then select Add once ready

To assist with automating these steps, let's look at the JSON output by selecting Code View below

{
  "rules": [
    {
      "enabled": true,
      "name": "TestRule",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 20
            },
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 25
            },
            "delete": {
              "daysAfterModificationGreaterThan": 30
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "/blobfolder",
            "thomas*"
          ]
        }
      }
    }
  ]
}
As mentioned above, this JSON can be added to your automation for the creation of a Storage Lifecycle Management policy.
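As a sketch of what that automation could look like, here is the exported JSON being applied with the Azure SDK for Python (assuming the azure-mgmt-storage and azure-identity packages); the subscription ID, resource group and file name are placeholders:

import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholders - substitute your own values.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "blog-rg"
ACCOUNT_NAME = "blogstoragelifecycle"

# Load the JSON exported from the portal's code view.
with open("policy.json") as f:
    policy = json.load(f)

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# A storage account has a single management policy, always named "default".
client.management_policies.create_or_update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    "default",
    {"policy": policy},
)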
Pretty cool, and definitely worth a look for help with data lifecycle management and with keeping costs down.
Can this lifecycle management be done across different storage accounts – e.g. I have a hot account and a cool account separately?
Not currently; that would be an additional copy job that you would have to create (a rough sketch follows below).
You may want to consider the blob type, as below:
“Transition blobs to a cooler storage tier (hot to cool, hot to archive, or cool to archive) to optimize for performance and cost”
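Returning to the cross-account question, a rough sketch of what that copy job could look like with the azure-storage-blob Python package; the connection strings and the container name are placeholders, and the same container is assumed to exist in both accounts:

from azure.storage.blob import BlobServiceClient

# Placeholders - substitute your own connection strings and container name.
source = BlobServiceClient.from_connection_string("<hot-account-connection-string>")
target = BlobServiceClient.from_connection_string("<cool-account-connection-string>")
container = "mycontainer"  # hypothetical container, present in both accounts

for blob in source.get_container_client(container).list_blobs():
    src = source.get_blob_client(container, blob.name)
    # Server-side, asynchronous copy; the source URL must be readable by the
    # target account (e.g. via a SAS token appended to src.url).
    target.get_blob_client(container, blob.name).start_copy_from_url(src.url)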