If you’ve been using AI tools to help with Terraform, you’ve probably noticed they sometimes suggest outdated provider arguments or resource configurations that don’t quite match the latest documentation. That’s where the HashiCorp Terraform MCP Server changes the game.
The Terraform MCP Server is HashiCorp’s official implementation of the Model Context Protocol – essentially a live bridge to everything in the Terraform ecosystem. It gives you direct access to:
- Latest provider versions from the Terraform Registry
- Module recommendations for specific providers
- Resource documentation straight from the registry
- Real-time validation against official schemas
In this post, I’m going to show you how I integrated the HashiCorp MCP server into GitHub Actions workflows.
The result? Intelligent Terraform validation that runs automatically in CI/CD without relying on potentially outdated AI training data.
## Why Bother with Terraform MCP Server in GitHub Actions?
Traditional Terraform validation in CI/CD is limited:
- `terraform validate` only checks syntax
- `terraform plan` validates against current state, not best practices
- Custom linting rules need constant maintenance
The MCP Server fills the gaps:
- Version checking – verify you’re using the latest provider versions
- Automatic documentation – fetch official resource documentation
- Module discovery – find recommended modules for your provider
- AI-enhanced analysis – combine accurate schemas with intelligent insights
- Works with everything – AWS, Azure, GCP, Kubernetes, and 1000+ community providers
The big win? Your team gets consistent validation backed by HashiCorp’s actual documentation, not AI guesses.
## The GitHub Actions Terraform MCP Server Step Explained
Here’s the GitHub Actions step I use to start the Terraform MCP Server. It handles errors gracefully so your workflows don’t break if Docker isn’t available:
```yaml
- name: Start Terraform MCP Server
  id: mcp-server
  shell: bash
  run: |
    echo "Starting HashiCorp Terraform MCP Server..."

    # Check if Docker is available
    if ! command -v docker &> /dev/null; then
      echo "Warning: Docker not available, skipping MCP server startup"
      echo "mcp_available=false" >> $GITHUB_OUTPUT
      exit 0
    fi

    # Pull the latest MCP server image
    echo "Pulling HashiCorp Terraform MCP Server image..."
    if ! docker pull hashicorp/terraform-mcp-server:latest; then
      echo "Warning: Failed to pull MCP server image, continuing without MCP validation"
      echo "mcp_available=false" >> $GITHUB_OUTPUT
      exit 0
    fi

    # Test JSON-RPC stdio mode
    echo "Testing MCP server JSON-RPC stdio mode..."
    if echo '{"jsonrpc": "2.0", "id": "test", "method": "initialize", "params": {"protocolVersion": "2024-11-05", "clientInfo": {"name": "test", "version": "1.0.0"}}}' | timeout 15 docker run --rm -i hashicorp/terraform-mcp-server:latest 2>/dev/null | grep -q '"result"'; then
      echo "MCP server JSON-RPC stdio mode is working"
      echo "mcp_available=true" >> $GITHUB_OUTPUT
    else
      echo "Warning: MCP server stdio mode test failed, continuing without MCP validation"
      echo "mcp_available=false" >> $GITHUB_OUTPUT
    fi
```
From hands-on use, here’s what makes this reliable:
- Graceful degradation – If Docker isn’t available, the workflow continues without MCP validation rather than failing. This is crucial for runners that don’t have Docker installed.
- Network resilience – The 15-second timeout on the JSON-RPC test prevents hanging workflows. I’ve seen network hiccups cause indefinite hangs without this.
- Conditional execution – Downstream steps check `mcp_available` before calling MCP, so you can use the same workflow across different environments.
- Clear logging – The echo statements make debugging simple: you’ll see exactly why MCP didn’t start if something goes wrong.
## Complete Working Example
Here’s the full workflow that validates Terraform plans using the MCP Server and GitHub Models. This is production-ready code you can use immediately:
```yaml
name: Terraform MCP Validation

on:
  pull_request:
    paths:
      - '**.tf'
      - '.github/workflows/terraform-mcp.yml'

permissions:
  contents: read
  pull-requests: write
  id-token: write
  models: read

jobs:
  validate:
    name: Validate Terraform with MCP
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v5

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.13.4

      - name: Start Terraform MCP Server
        id: mcp-server
        shell: bash
        run: |
          echo "Starting HashiCorp Terraform MCP Server..."

          # Check if Docker is available
          if ! command -v docker &> /dev/null; then
            echo "Warning: Docker not available, skipping MCP server startup"
            echo "mcp_available=false" >> $GITHUB_OUTPUT
            exit 0
          fi

          # Pull the latest MCP server image
          echo "Pulling HashiCorp Terraform MCP Server image..."
          if ! docker pull hashicorp/terraform-mcp-server:latest; then
            echo "Warning: Failed to pull MCP server image, continuing without MCP validation"
            echo "mcp_available=false" >> $GITHUB_OUTPUT
            exit 0
          fi

          # Test JSON-RPC stdio mode
          echo "Testing MCP server JSON-RPC stdio mode..."
          if echo '{"jsonrpc": "2.0", "id": "test", "method": "initialize", "params": {"protocolVersion": "2024-11-05", "clientInfo": {"name": "test", "version": "1.0.0"}}}' | timeout 15 docker run --rm -i hashicorp/terraform-mcp-server:latest 2>/dev/null | grep -q '"result"'; then
            echo "MCP server JSON-RPC stdio mode is working"
            echo "mcp_available=true" >> $GITHUB_OUTPUT
          else
            echo "Warning: MCP server stdio mode test failed, continuing without MCP validation"
            echo "mcp_available=false" >> $GITHUB_OUTPUT
          fi

      - name: Terraform Init
        run: terraform init -backend=false

      - name: Terraform Validate
        run: terraform validate

      - name: Use Static Terraform Plan
        run: |
          echo "Using static tfplan.json (already in repository)"
          ls -la tfplan.json

      - name: Setup Python
        if: steps.mcp-server.outputs.mcp_available == 'true'
        uses: actions/setup-python@v6
        with:
          python-version: '3.14'

      - name: Install MCP Client
        if: steps.mcp-server.outputs.mcp_available == 'true'
        run: pip install mcp requests

      - name: Validate with MCP
        if: steps.mcp-server.outputs.mcp_available == 'true'
        env:
          MCP_AVAILABLE: ${{ steps.mcp-server.outputs.mcp_available }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: python validate_terraform.py

      - name: Comment on PR
        if: github.event_name == 'pull_request' && steps.mcp-server.outputs.mcp_available == 'true'
        uses: actions/github-script@v8
        with:
          script: |
            const fs = require('fs');

            let aiAnalysis = '';
            try {
              aiAnalysis = fs.readFileSync('ai_analysis.txt', 'utf8');
            } catch (error) {
              aiAnalysis = 'AI analysis was not available for this run.';
            }

            const comment = `## 🔍 Terraform MCP Validation Complete

            The Terraform plan has been validated using HashiCorp's official MCP Server.

            ✅ All resource schemas validated against official provider documentation
            ✅ Configuration syntax verified
            ✅ AI-enhanced analysis completed via GitHub Models

            ### 🤖 AI Analysis

            ${aiAnalysis}

            ---
            *Powered by HashiCorp Terraform MCP Server & GitHub Models*`;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: comment
            });
```
## The Python Validation Script
The real magic happens in `validate_terraform.py`. This script connects to the MCP Server (via Docker in stdio mode) and:
- Starts a JSON-RPC session
- Extracts unique providers from your Terraform plan
- Calls `get_latest_provider_version` for each provider
- Uses `search_modules` to find recommended modules
- Retrieves resource documentation
- Summarises planned actions (create/update/delete/replace)
- Passes this authoritative data into GitHub Models for analysis
By grounding the analysis in current registry data, you avoid the pitfalls of outdated AI suggestions.
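To make the stdio handshake concrete, here is a minimal sketch of the JSON-RPC plumbing such a script needs. The helper names (`initialize_request`, `tool_call_request`, `encode`) are illustrative, not the actual functions in `validate_terraform.py`; the protocol version simply mirrors the workflow’s smoke test.

```python
import json

# Protocol version taken from the workflow's stdio smoke test.
PROTOCOL_VERSION = "2024-11-05"


def initialize_request(request_id="init"):
    """Build the MCP 'initialize' handshake message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": PROTOCOL_VERSION,
            "clientInfo": {"name": "terraform-ci", "version": "1.0.0"},
        },
    }


def tool_call_request(tool_name, arguments, request_id):
    """Build a 'tools/call' message for an MCP tool such as search_modules."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


def encode(message):
    """Serialise a message as one newline-delimited JSON line for stdio."""
    return json.dumps(message) + "\n"
```

In the real script, each encoded line is written to the stdin of `docker run --rm -i hashicorp/terraform-mcp-server:latest` and the matching response is read back from stdout by `id`.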
## How It All Works
### Step 1 – Start the MCP Server
The workflow checks if Docker is available, pulls the HashiCorp MCP server image, and validates that it responds to JSON-RPC calls. If any of this fails, it sets `mcp_available=false` and continues.
### Step 2 – Terraform Validation
Standard Terraform flow: `init` and `validate` to catch basic syntax errors.
### Step 3 – Load Static Plan
Uses a pre-generated `tfplan.json` file containing the infrastructure to validate. In a real workflow, you’d generate this with `terraform plan -out=tfplan.binary && terraform show -json tfplan.binary > tfplan.json`.
### Step 4 – MCP Validation
The Python script:
1. Connects to the MCP server via Docker stdio
2. Initialises a JSON-RPC session
3. Extracts unique providers from the plan
4. Calls `get_latest_provider_version` for each provider
5. Calls `search_modules` to find recommended modules
6. Calls `get_resource_docs` for resource documentation
7. Counts action types (create/update/delete/replace)
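Steps 3 and 7 can be sketched against Terraform’s JSON plan format, which lists changes under `resource_changes` with a `provider_name` and a `change.actions` array. The helpers below are a simplified illustration, not the post’s exact script:

```python
from collections import Counter


def unique_providers(plan):
    """Collect the distinct provider names referenced by resource_changes."""
    return sorted({
        rc["provider_name"]
        for rc in plan.get("resource_changes", [])
        if rc.get("provider_name")
    })


def count_actions(plan):
    """Tally planned actions; a [delete, create] pair counts as a replace."""
    counts = Counter()
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        if "create" in actions and "delete" in actions:
            counts["replace"] += 1
        elif actions and actions != ["no-op"]:
            counts[actions[0]] += 1
    return dict(counts)


if __name__ == "__main__":
    sample = {"resource_changes": [
        {"provider_name": "registry.terraform.io/hashicorp/azurerm",
         "change": {"actions": ["create"]}},
        {"provider_name": "registry.terraform.io/hashicorp/azurerm",
         "change": {"actions": ["delete", "create"]}},
    ]}
    print(unique_providers(sample))
    print(count_actions(sample))  # → {'create': 1, 'replace': 1}
```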
### Step 5 – AI Analysis
Combines MCP data with GitHub Models AI:
1. Builds a comprehensive prompt with provider versions, resource changes, and documentation
2. Calls the GitHub Models API (`gpt-4o` model)
3. Gets security recommendations, best practices, and a risk assessment
4. Saves the analysis to `ai_analysis.txt`
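As a rough sketch of that API call – the endpoint URL, model id, and prompt wording here are my assumptions, so check the current GitHub Models documentation rather than treating this as the post’s exact script:

```python
import os

# Assumed GitHub Models inference endpoint and model id; both have changed
# over time, so verify them against the current GitHub Models docs.
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"
MODEL = "gpt-4o"


def build_payload(instruction, plan_summary):
    """Compose the chat-completions body from the MCP-grounded plan summary."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system",
             "content": "You are a Terraform reviewer. Flag security issues, "
                        "deviations from best practice, and risky changes."},
            {"role": "user", "content": f"{instruction}\n\n{plan_summary}"},
        ],
        "temperature": 0.2,
    }


def call_github_models(instruction, plan_summary):
    """POST the prompt to GitHub Models using the workflow's GITHUB_TOKEN."""
    import requests  # lazy import so the payload builder stays dependency-free

    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                 "Content-Type": "application/json"},
        json=build_payload(instruction, plan_summary),
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```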
### Step 6 – PR Comment
Results go into a PR comment with the full AI analysis, so your team sees validation feedback immediately.
## What This Fixes
Before using MCP, I was:
- Manually checking if I had the latest provider versions
- Guessing which modules might be relevant
- Searching docs.hashicorp.com for resource documentation
- Hoping AI tools had up-to-date information
- Dealing with inconsistent validation across the team
With MCP:
- Provider versions checked automatically against the registry
- Module recommendations come from official HashiCorp data
- Documentation is fetched in real-time
- AI analysis is grounded in current, accurate information
- Everyone gets the same validation, every time
## Combining MCP with GitHub Models
Here’s where it gets really powerful. The script combines MCP’s accurate, real-time data with AI analysis:
```python
# 1. MCP gets current provider version (authoritative)
version = await get_provider_version(process, "hashicorp", "azurerm")

# 2. MCP searches for official modules (from registry)
modules = await search_modules(process, "azurerm")

# 3. MCP fetches resource documentation (latest)
docs = await get_resource_docs(process, "azurerm_storage_account")

# 4. Combine all MCP data into a comprehensive prompt
plan_summary = f"""
Provider Version: {version}
Available Modules: {modules}
Resource Docs: {docs}
Resources in Plan: {json.dumps(resources)}
"""

# 5. AI analyzes everything together (intelligent, contextual)
ai_analysis = call_github_models(
    "Analyze this Terraform plan comprehensively",
    plan_summary
)
```
What this gives you:
- MCP ensures data is current and accurate
- AI provides security analysis, best practices, and actionable recommendations
- Combined you get authoritative validation plus intelligent insights
The AI can reference actual provider versions, suggest specific modules from the registry, and cite real documentation – not outdated training data.
## Example Output
Here’s a small snippet of what the output looks like:

*Example output of the AI analysis – see the PR for the full report.*
## Wrapping Up
The HashiCorp Terraform MCP Server gives you authoritative, real-time Terraform knowledge in GitHub Actions. The validation setup I’ve shown:
- Uses official provider versions from the registry
- Fetches current resource documentation
- Combines accurate data with AI insights
The bottom line: you catch misconfigurations before production, backed by HashiCorp’s actual registry data instead of AI guesses. The MCP server handles the “what’s accurate” part; GitHub Models handles the “what should I do about it” part.
If you’re already using Terraform in GitHub CI/CD, this is a straightforward way to level up your validation game.
Want to see this in action? Fork my repository and create a PR to watch the MCP validation run.