As infrastructure-as-code becomes more complex, I wanted to create a systematic way to ensure every Terraform change meets security, cost, and compliance standards before reaching production.
One area that’s particularly ripe for improvement is Terraform plan reviews.
If you’ve ever spent half an hour combing through a `terraform plan` output, trying to spot subtle issues, you’ll know how easy it is for things to slip past even experienced reviewers.
That’s why I built an AI-powered GitHub Action that automatically analyses Terraform changes across 11 specialised domains before they reach production – consistent, comprehensive, and instant.
In this post, I’ll share why I built it, how it works, and how you can start using it in your CI/CD pipeline in just minutes.
What I Built (and Why)
As teams scale and infrastructure grows more complex, maintaining consistent review quality becomes challenging.
I wanted to create a solution that could systematically analyse every Terraform change with the same level of rigor, regardless of who’s available to review or when the PR is opened.
The GitHub Action I built does exactly that:
Traditional manual approach
- Run `terraform plan`
- Wait for an engineer to review the output
- Review quality varies by reviewer and timing
- Easy to miss subtle issues in complex plans
Manual Terraform reviews, while valuable, have inherent limitations that create opportunities for improvement:
1. Time-intensive – Senior engineers spending hours on review means less time for architecture and innovation
2. Variable consistency – Review depth naturally varies based on reviewer bandwidth and context
3. Complex pattern recognition – Identifying subtle security or cost patterns across hundreds of resources is challenging at scale
4. Scaling challenges – As infrastructure and teams grow, manual review becomes a bottleneck
Automated AI-powered approach
- Run `terraform plan` and generate JSON output
- AI automatically analyses across 11 specialised domains
- Structured findings posted directly to the PR
- Consistent, comprehensive review on every single PR
The result? Every infrastructure change gets expert-level analysis instantly – establishing a reliable baseline of quality that teams can build upon.
I saw an opportunity to use AI to augment human expertise – handling the systematic analysis while engineers focus on strategic decisions and business logic.
Do check it out here: https://github.com/thomast1906/terraform-review-ai-action
Why AI Makes Terraform Reviews Better
AI doesn’t replace your team’s expertise – it amplifies it. Here’s what makes AI-powered analysis so effective:
Comprehensive Coverage Across 11 Domains
The analysis covers areas that are easy to overlook manually:
- Security – Exposed resources, missing encryption, overly permissive RBAC
- Cost – Oversized instances, unnecessary resources, missed savings opportunities
- Compliance – Regulatory requirements, organisational policies, governance rules
- Performance – Resource sizing, scaling policies, network optimisation
- Reliability – High availability, disaster recovery, fault tolerance
- Observability – Logging, monitoring, alerting configurations
- Networking – Security groups, firewall rules, connectivity issues
- Data Protection – Encryption, backup strategies, storage optimisation
- Best Practices – Provider-specific recommendations, infrastructure patterns
- Deployment Safety – Breaking changes, data loss risks, rollback readiness
- Governance – Tagging, naming conventions, resource organisation
This is the kind of checklist no one realistically runs by hand for every change – but AI can run it on every pull request.
Instant Feedback on Every Pull Request
No more waiting for someone to review your Terraform changes. The moment you open a PR:
- AI analysis runs automatically
- Findings are posted as PR comments with severity levels (🔴 Critical, 🟡 Warning, 🔵 Recommendation, ✅ Good Practice)
- Team sees exactly what needs attention before merging
- Updates happen in the same comment (no spam)
This means faster iteration, faster deployments, and fewer surprises.
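The "same comment" behaviour is a common pattern for PR bots: the action can tag its comment with a hidden marker, then look for that marker on later runs before deciding whether to update or create. A minimal sketch of that decision logic (the marker string and function names here are illustrative, not the action's actual internals):

```python
# Sketch of the "update in place" PR comment pattern.
# HIDDEN_MARKER is a hypothetical HTML comment the action would embed
# in its own comment body so it can find it again on subsequent runs.
HIDDEN_MARKER = "<!-- terraform-review-ai -->"

def find_existing_comment(comments):
    """Return the first comment dict carrying our marker, or None."""
    for comment in comments:
        if HIDDEN_MARKER in comment.get("body", ""):
            return comment
    return None

def upsert_review_comment(comments, new_body):
    """Decide whether to update an existing comment or create a new one.

    Returns ("update", comment_id, body) or ("create", None, body).
    A real implementation would follow up with the GitHub REST API:
    PATCH .../issues/comments/{id} or POST .../issues/{number}/comments.
    """
    body = f"{HIDDEN_MARKER}\n{new_body}"
    existing = find_existing_comment(comments)
    if existing:
        return ("update", existing["id"], body)
    return ("create", None, body)
```

Because the marker travels inside the comment body, re-running the workflow on a new commit edits the old findings rather than stacking fresh comments.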
Works with ANY Terraform Provider
Whether you’re using AWS, Azure, GCP, Kubernetes, or any of the 1000+ community providers (Auth0, Datadog, MongoDB, Vault, etc.), the analysis adapts automatically.
The system:
- Dynamically detects providers from your plan
- Surfaces provider-specific best practices
- Integrates with HashiCorp’s official Terraform MCP Server for real-time registry access
- Provides tailored recommendations based on your infrastructure
No configuration needed – it just works.
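Dynamic provider detection is possible because `terraform show -json` records the full provider address on every resource change. A small illustration of pulling the provider set out of a plan (the JSON shape follows Terraform's documented plan representation; the sample data is made up):

```python
import json

def detect_providers(plan_json: str) -> set:
    """Extract short provider names from a Terraform JSON plan.

    Each entry in resource_changes carries a provider_name such as
    "registry.terraform.io/hashicorp/aws"; the last path segment is
    the familiar short name ("aws", "azurerm", "datadog", ...).
    """
    plan = json.loads(plan_json)
    return {
        change["provider_name"].rsplit("/", 1)[-1]
        for change in plan.get("resource_changes", [])
    }

# Minimal made-up plan fragment for illustration:
sample_plan = json.dumps({
    "resource_changes": [
        {"address": "aws_s3_bucket.data",
         "provider_name": "registry.terraform.io/hashicorp/aws"},
        {"address": "datadog_monitor.cpu",
         "provider_name": "registry.terraform.io/datadog/datadog"},
    ]
})

print(detect_providers(sample_plan))  # {'aws', 'datadog'}
```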
Customisable Analysis for Your Workflow
Not every review needs the same depth. You can customise analysis based on your needs:
Analysis Presets:
- security-audit – Deep security and compliance review
- cost-optimisation – Focus on cost savings and performance
- production-ready – Comprehensive production deployment checks
- quick-check – Fast security and best-practice scan for CI/CD
- complete – All 11 domains analysed in detail
Analysis Depth:
- standard – Balanced comprehensive analysis (~8K tokens)
- quick – High-level critical findings (~4K tokens)
- detailed – Exhaustive examination with learning opportunities (~12K tokens)
Analysis Modes:
- plan-only – Fast analysis of just the Terraform plan JSON
- comprehensive – Deep dive including source files for full context
This flexibility means you can run lightweight checks on every commit, and deeper reviews on production deployments.
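One way to apply that split in a single workflow (an illustrative snippet using the same inputs shown later in this post; the expression-based switching is my suggestion, not a built-in feature): run a quick scan on feature-branch PRs and the full production-ready pass only when the PR targets `main`:

```yaml
# Hypothetical example: lighter analysis on feature branches,
# full depth when merging to main.
- name: AI Review
  uses: thomast1906/terraform-review-ai-action@v1
  with:
    ai-provider: 'github-models'
    github-models-token: ${{ secrets.GITHUB_TOKEN }}
    github-token: ${{ secrets.GITHUB_TOKEN }}
    analysis-preset: ${{ github.base_ref == 'main' && 'production-ready' || 'quick-check' }}
    analysis-depth: ${{ github.base_ref == 'main' && 'detailed' || 'quick' }}
```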
Getting Started in Minutes
Check it out here: https://github.com/thomast1906/terraform-review-ai-action
You can integrate this into your CI/CD pipeline with minimal setup.
Here’s the quickest route from “interesting idea” to “live in production”:
Add the GitHub Action to Your Workflow
```yaml
name: Terraform AI Review

on:
  pull_request:
    paths: ['**.tf', '**.tfvars']

permissions:
  contents: read
  pull-requests: write
  models: read  # Required for GitHub Models

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: hashicorp/setup-terraform@v3

      - name: Terraform Plan
        run: |
          terraform init
          terraform plan -out=tfplan.binary
          terraform show -json tfplan.binary > tfplan.json

      - name: AI Review
        uses: thomast1906/terraform-review-ai-action@v1
        with:
          ai-provider: 'github-models'
          github-models-token: ${{ secrets.GITHUB_TOKEN }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          analysis-preset: 'production-ready'
          analysis-depth: 'detailed'
```
Choose Your AI Provider
Two options, both work great:
Option 1: GitHub Models (Easiest)
- Uses GitHub’s built-in AI models
- No additional setup required
- Just use `${{ secrets.GITHUB_TOKEN }}`
- Add the `models: read` permission
Option 2: Azure OpenAI (More Control)
- Use your own Azure OpenAI deployment
- Control costs and model versions
- Add secrets: `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_DEPLOYMENT`
```yaml
- name: AI Review
  uses: thomast1906/terraform-review-ai-action@v1
  with:
    ai-provider: 'azure'
    azure-openai-api-key: ${{ secrets.AZURE_OPENAI_API_KEY }}
    azure-openai-endpoint: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
    azure-openai-deployment: 'gpt-4'
    github-token: ${{ secrets.GITHUB_TOKEN }}
    analysis-preset: 'production-ready'
```
Open a PR and See It in Action
That’s it. The next time you open a Terraform PR:
- The action runs automatically
- AI analysis appears as a PR comment
- Team reviews the findings
- Merge with confidence
HashiCorp MCP Integration
The action integrates with HashiCorp’s official Terraform Model Context Protocol (MCP) Server for enhanced validation:
What you get:
- Real-time access to Terraform Registry documentation
- Module recommendations from the registry
- Resource validation against HashiCorp standards
- Latest provider versions and compatibility checks
- Direct links to official documentation
Enabled by default, but you can tune or disable it via these inputs:

```yaml
skip-mcp-validation: false  # set to true to disable MCP validation
show-mcp-details: true      # show detailed MCP diagnostics
```
The MCP integration makes recommendations more specific and always up-to-date with the latest Terraform ecosystem.
Real-World Benefits (From Production Use)
Since deploying this action, I’ve observed significant improvements in infrastructure quality and team efficiency:
Proactive Issue Detection
The action systematically identifies opportunities for improvement:
- Public access configurations that should be private
- Encryption settings that can be enhanced
- Resource sizing optimisations (regularly identifying $500+/month in savings)
- Monitoring gaps that would impact observability
- RBAC policies that can follow least-privilege principles more closely
All surfaced automatically during PR review – enabling teams to address them before deployment.
Accelerated Code Reviews
Review workflow becomes streamlined:
1. Review the AI-generated comprehensive summary
2. Focus discussion on critical findings and business logic
3. Approve with confidence in the baseline quality
Typical review time: 30+ minutes → under 5 minutes for standard changes.
Enhanced Team Learning
Every PR becomes a learning opportunity:
- Specific explanations of findings with context
- Actionable remediation steps with examples
- Direct links to relevant documentation
- Best practice patterns reinforced consistently
It creates a continuous learning environment where infrastructure knowledge compounds across the team.
Consistent Standards Across Teams
Every PR receives the same comprehensive analysis. Particularly valuable for:
- Multi-team organisations maintaining unified standards
- Distributed teams across different timezones
- Teams with varying levels of Terraform expertise
Real Example: Before and After
Before (Manual Review):
> "Looks good to me 👍" _(merges PR with public S3 bucket)_
After (AI Review):
Critical Issues (🔴)
1. S3 Bucket Public Access - `aws_s3_bucket.data`
- Issue: Bucket allows public read access via ACL
- Risk: Sensitive data exposure, compliance violation
- Fix: Set `acl = "private"` and use bucket policies for controlled access
- Impact: HIGH - Potential data breach
- Effort: 2 minutes
Warnings (🟡)
1. EC2 Instance Sizing - `aws_instance.web`
- Current: t3.2xlarge (8 vCPU, 32GB RAM)
- Recommended: t3.large (2 vCPU, 8GB RAM)
- Savings: ~$180/month
- Reason: CPU utilisation historically <15%
Recommendations (🔵)
1. Enable CloudWatch Monitoring - `aws_instance.web`
- Add: `monitoring = true` to enable detailed monitoring
- Benefit: Better visibility into instance health
Good Practices (✅)
1. Encryption enabled on all EBS volumes
2. Proper tagging applied (Environment, Owner, CostCenter)
3. Using latest AMI with security patches
See the difference? Specific, actionable, measurable.
Some Common Questions
- Q: Does this replace manual code reviews?
No – it enhances them. Humans still review the logic and architecture. AI catches the security, cost, and compliance issues that are easy to miss.
- Q: What if I use a niche Terraform provider?
It works with any provider. The system dynamically detects providers from your plan and adapts its analysis.
- Q: Is my Terraform plan data secure?
Yes. Sensitive data (passwords, API keys, secrets, etc.) is scrubbed before being sent to the AI. You can also disable scrubbing if needed: `enable-data-scrubbing: 'false'`
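For intuition, this kind of scrubbing typically walks the plan JSON and masks any value whose key looks sensitive before anything leaves the runner. A simplified sketch of the idea (the key patterns and function name are illustrative, not the action's actual implementation):

```python
import re

# Hypothetical key patterns that flag a value as sensitive.
SENSITIVE_KEY = re.compile(r"(password|secret|token|api_?key)", re.IGNORECASE)

def scrub(value):
    """Recursively mask values stored under sensitive-looking keys."""
    if isinstance(value, dict):
        return {
            k: "***REDACTED***" if SENSITIVE_KEY.search(k) else scrub(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [scrub(v) for v in value]
    return value
```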
- Q: What about costs?
GitHub Models uses your existing GitHub subscription (free for public repos, included with GitHub Enterprise). Azure OpenAI costs depend on your deployment and token usage (~$0.01-0.10 per review with GPT-4).
- Q: Can I customise the analysis?
Absolutely. You can:
- Edit system prompts in `prompts/` directory
- Add company-specific policies
- Adjust focus areas
- Control output formatting
Wrapping Up
I built this action to bring systematic, comprehensive analysis to Terraform reviews – creating a baseline of quality that teams can rely on for every infrastructure change.
After months of production use, the value is clear:
- Proactive identification of optimisation opportunities
- Dramatically faster review cycles
- Consistent standards across all teams
- Automatic cost and security analysis
- Continuous learning built into the workflow
And the implementation is straightforward – under 5 minutes to get running.
Get started today:
1. Add the GitHub Action to your workflow
2. Choose GitHub Models or Azure OpenAI
3. Open a PR and see the comprehensive analysis
4. Iterate and improve your infrastructure systematically
The result? More reliable infrastructure, faster deployments, and teams that can focus on innovation rather than manual review.
I’m sharing this openly because I believe automated, intelligent infrastructure review should be accessible to every team working with Terraform.
Give it a try and let me know what improvements you see!