Deploying a number of YAML files or Helm charts as part of your Kubernetes deployments? Unsure whether they follow best practices? KubeLinter will help you achieve best practices within your YAML configurations and Helm charts. In this post, I will show how you can add this tool to your CI tooling or general pipeline running on Azure DevOps!
KubeLinter is a static analysis tool that can check both Helm charts and Kubernetes YAML files. It runs a number of default checks designed to help ensure best practices are being met. Checks can also be disabled, giving you full control over what is suitable for your environment: review the recommendations, and if one isn't suitable, disable it!
Sound good? In this blog post, I am going to show how you can set up and use KubeLinter as part of your Azure DevOps pipeline, whether as part of your CI process or during a regular pipeline run.
Setting up KubeLinter
KubeLinter can be set up in several ways for local testing:
You can install locally to test KubeLinter on your various YAML files/Helm Charts.
Using GitHub Actions instead of Azure DevOps? There is a native GitHub Action available for this too.
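As a sketch, a GitHub Actions workflow using that action might look like the following. The action name and the `directory` input are assumptions based on the stackrox/kube-linter-action README at the time of writing; verify them against the current documentation before use.

```yaml
# Hypothetical GitHub Actions workflow (not Azure DevOps) running KubeLinter
# on every push/PR; 'directory' points at the folder of manifests to lint
name: kube-linter
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan manifests with kube-linter
        uses: stackrox/kube-linter-action@v1
        with:
          directory: invalid
```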
Test files for KubeLinter
In this blog post, I will use two example files provided by KubeLinter to show the tool working in Azure DevOps.
YAML with no best practices
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
YAML with best practices
apiVersion: apps/v1
kind: Deployment
metadata:
  name: compliant
  namespace: my-namespace
  annotations:
    team: database
spec:
  replicas: 1
  minReadySeconds: 15
  selector:
    matchLabels:
      app: compliant
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: my-namespace
      labels:
        app: compliant
    spec:
      serviceAccountName: my-service-account
      containers:
      - image: nginx:1.20
        name: nginx
        securityContext:
          runAsNonRoot: true
          readOnlyRootFilesystem: true
        resources:
          requests:
            memory: "1Gi"
            cpu: "1"
          limits:
            memory: "4Gi"
            cpu: "2"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace
  labels:
    app.kubernetes.io/name: my-app
  annotations:
    team: database
Setting up the Azure DevOps pipeline for KubeLinter
Azure-DevOps-KubeLinter-Scanning
├── ignore-check
│   └── ignore-check.yaml
├── invalid
│   └── invalid.yaml
├── valid
│   └── valid.yaml
└── azure-pipeline.yaml
In the Azure DevOps pipeline, I will run KubeLinter using a Bash@3 task and Docker.
The task:
- task: Bash@3
  displayName: "KubeLinter Checks"
  inputs:
    targetType: 'inline'
    script: |
      docker run --rm -v $(pwd):/examples stackrox/kube-linter lint /examples/invalid
With the line highlighted above, notice the use of stackrox/kube-linter lint /examples/invalid? Currently I have it set to a directory, but a file path can also be used. Further usage references for KubeLinter are available here.
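For example, to lint a single file rather than a whole directory, the same task can point at one manifest. This sketch assumes the folder structure shown earlier (the file name invalid/invalid.yaml comes from that layout):

```yaml
- task: Bash@3
  displayName: "KubeLinter Checks (single file)"
  inputs:
    targetType: 'inline'
    script: |
      # Lint one specific manifest instead of a whole directory
      docker run --rm -v $(pwd):/examples stackrox/kube-linter lint /examples/invalid/invalid.yaml
```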
KubeLinter unsuccessful pipeline with best practices to review
Let's add the task to a pipeline and test a run, using the folder structure mentioned above.
name: $(BuildDefinitionName)_$(date:yyyyMMdd)$(rev:.r)

trigger:
  batch: true
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: KubeLinter
    jobs:
      - job: "KubeLinter"
        steps:
          - task: Bash@3
            displayName: "KubeLinter Checks"
            inputs:
              targetType: 'inline'
              script: |
                docker run --rm -v $(pwd):/examples stackrox/kube-linter lint /examples/invalid
Notice the pipeline has failed because lint errors were found? Awesome! This stops Kubernetes YAML files that don't follow best practices from being deployed into your environment!

Full output of errors:
/examples/invalid/bad.yaml: (object: <no namespace>/nginx-deployment apps/v1, Kind=Deployment) object has 3 replicas but does not specify inter pod anti-affinity (check: no-anti-affinity, remediation: Specify anti-affinity in your pod specification to ensure that the orchestrator attempts to schedule replicas on different nodes. Using podAntiAffinity, specify a labelSelector that matches pods for the deployment, and set the topologyKey to kubernetes.io/hostname. Refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity for details.)
/examples/invalid/bad.yaml: (object: <no namespace>/nginx-deployment apps/v1, Kind=Deployment) container "nginx" does not have a read-only root file system (check: no-read-only-root-fs, remediation: Set readOnlyRootFilesystem to true in the container securityContext.)
/examples/invalid/bad.yaml: (object: <no namespace>/nginx-deployment apps/v1, Kind=Deployment) container "nginx" is not set to runAsNonRoot (check: run-as-non-root, remediation: Set runAsUser to a non-zero number and runAsNonRoot to true in your pod or container securityContext. Refer to https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ for details.)
/examples/invalid/bad.yaml: (object: <no namespace>/nginx-deployment apps/v1, Kind=Deployment) container "nginx" has cpu request 0 (check: unset-cpu-requirements, remediation: Set CPU requests and limits for your container based on its requirements. Refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits for details.)
/examples/invalid/bad.yaml: (object: <no namespace>/nginx-deployment apps/v1, Kind=Deployment) container "nginx" has cpu limit 0 (check: unset-cpu-requirements, remediation: Set CPU requests and limits for your container based on its requirements. Refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits for details.)
/examples/invalid/bad.yaml: (object: <no namespace>/nginx-deployment apps/v1, Kind=Deployment) container "nginx" has memory request 0 (check: unset-memory-requirements, remediation: Set memory requests and limits for your container based on its requirements. Refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits for details.)
/examples/invalid/bad.yaml: (object: <no namespace>/nginx-deployment apps/v1, Kind=Deployment) container "nginx" has memory limit 0 (check: unset-memory-requirements, remediation: Set memory requests and limits for your container based on its requirements. Refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits for details.)
Notice the references to https://kubernetes.io/docs/concepts/ for each error? Very cool! The linked documentation helps you understand why KubeLinter considers each item a best practice.
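If you want to capture these results rather than only read them in the console log, kube-linter can also emit machine-readable output; its documentation describes a --format flag (e.g. json), though you should verify the flag and values against your kube-linter version. A sketch of saving the report and publishing it as a pipeline artifact:

```yaml
- task: Bash@3
  displayName: "KubeLinter Checks (JSON report)"
  inputs:
    targetType: 'inline'
    script: |
      # Write lint results to a JSON file; '|| true' keeps the step alive
      # so the report is published even when lint errors are found
      docker run --rm -v $(pwd):/examples stackrox/kube-linter lint /examples/invalid --format json > kube-linter-report.json || true

- task: PublishPipelineArtifact@1
  displayName: "Publish KubeLinter report"
  inputs:
    targetPath: 'kube-linter-report.json'
    artifact: 'kube-linter-report'
```

Note that swallowing the exit code this way means the stage no longer fails on lint errors, so you would typically add a separate gating step or parse the report afterwards.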
KubeLinter successful pipeline
Let's run the same pipeline, but update the kube-linter scanning location to:
docker run --rm -v $(pwd):/examples stackrox/kube-linter lint /examples/valid
Notice the output is now successful? No lint errors found!

KubeLinter testing with ignoring some recommended checks
There may be times when some recommended best practices cannot be followed, so let's look at how checks can be ignored.
A couple of the previous errors related to not setting memory or CPU requests and limits; see the Kubernetes documentation on memory and CPU limits for background.
The full list of KubeLinter default checks is available here.
How can I ignore a check? You add a specific annotation to your YAML file for each check you want to ignore. Example below:
metadata:
  annotations:
    ignore-check.kube-linter.io/unset-cpu-requirements: "cpu requirements not required"
    ignore-check.kube-linter.io/unset-memory-requirements: "memory requirements not required"
With the above annotations added and the container resource requests removed, the KubeLinter checks still run successfully:
containers:
- image: nginx:1.20
  name: nginx
  resources:
    limits:
      memory: "4Gi"
      cpu: "2"
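Prefer to manage exclusions centrally rather than annotating every manifest? KubeLinter also supports a configuration file passed via its --config flag. The sketch below uses the check names from the earlier errors; the file name .kube-linter.yaml is just a convention here, and the exact config schema should be verified against the KubeLinter configuration docs.

```yaml
# .kube-linter.yaml - disable checks globally instead of per-object
checks:
  exclude:
    - unset-cpu-requirements
    - unset-memory-requirements
```

You would then mount the file into the container alongside your manifests, for example: docker run --rm -v $(pwd):/examples stackrox/kube-linter lint /examples/valid --config /examples/.kube-linter.yaml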
To finish off
I hope you enjoyed this blog post on using KubeLinter as part of your Azure DevOps pipeline to help you with Kubernetes best practices. It really is a great tool! It will be very beneficial in your CI tool chain, helping ensure the YAML files and Helm charts you deploy to Kubernetes follow recommended best practices.
There is more documentation available on how you can configure KubeLinter even further!