Building a CI/CD Pipeline for Terraform with GitHub Actions
Introduction to CI/CD Pipelines for Terraform
Manual Terraform deployments kill productivity. Every `terraform apply` from your laptop risks configuration drift, credential exposure, and human error. Teams end up spending more time fixing broken deployments than building infrastructure.
A proper CI/CD pipeline eliminates this chaos. Open a pull request, and GitHub Actions automatically runs `terraform plan` and posts the results as comments. Merge the PR, and the changes apply automatically to AWS. No more "works on my machine" problems.
This guide shows you how to build such a CI/CD pipeline for Terraform. You'll configure secure OIDC authentication, set up remote state storage, and create a complete workflow that handles everything from formatting to deployment.
Configuring OpenID Connect (OIDC) for GitHub
The setup requires two things: an AWS IAM role that trusts GitHub's OIDC provider, and a GitHub workflow that assumes that role.
Create the AWS IAM Role
1. First, create an identity provider in AWS IAM for GitHub's OIDC endpoint, for example:

```shell
aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 6938fd4d98bab04faadb97b24396831d3780aea1
```
2. Next, create a trust policy that allows GitHub to assume your role. Replace `YOUR_ORG` and `YOUR_REPO` with your actual values:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::YOUR_ACCOUNT_ID:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:YOUR_ORG/YOUR_REPO:*"
        }
      }
    }
  ]
}
```
3. Create the IAM role using this trust policy:

```shell
aws iam create-role \
  --role-name GitHubActionsTerraformRole \
  --assume-role-policy-document file://trust-policy.json
```
4. Attach the necessary permissions. For Terraform, you typically need broad access, but scope it down for production:

```shell
aws iam attach-role-policy \
  --role-name GitHubActionsTerraformRole \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
```
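For production, a scoped inline policy is safer than `PowerUserAccess`. The actions below are purely illustrative; tailor the list to whatever services your Terraform code actually manages:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TerraformScopedAccess",
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "dynamodb:*",
        "ec2:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
```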
Configure GitHub Workflow Authentication
In your GitHub Actions workflow, configure the OIDC authentication:

```yaml
name: Terraform CI/CD

on:
  pull_request:
    paths: ['**.tf', '**.tfvars']
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read
  pull-requests: write

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::YOUR_ACCOUNT_ID:role/GitHubActionsTerraformRole
          aws-region: us-east-1
```
The `sub` condition in the trust policy controls which repositories and branches can assume the role: use `repo:organization/repository:*` for any branch, or `repo:organization/repository:ref:refs/heads/main` for specific branches only.
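For example, locking the role to the `main` branch only means tightening the `StringLike` condition in the trust policy shown earlier (substitute your own org and repo):

```json
"StringLike": {
  "token.actions.githubusercontent.com:sub": "repo:YOUR_ORG/YOUR_REPO:ref:refs/heads/main"
}
```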
Storing Terraform state remotely
Create the S3 Backend Infrastructure
You need an S3 bucket for state storage and a DynamoDB table for locking. Create these resources first:
```hcl
# backend-setup.tf
resource "aws_s3_bucket" "terraform_state" {
  bucket = "your-company-terraform-state-${random_id.bucket_suffix.hex}"
}

resource "random_id" "bucket_suffix" {
  byte_length = 4
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket                  = aws_s3_bucket.terraform_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```
Run `terraform apply` to create these resources. Note the bucket name from the output; you'll need it for the backend configuration.
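To make the bucket name easy to capture, you can add an output to the bootstrap configuration (the output name here is arbitrary):

```hcl
# outputs.tf
output "state_bucket_name" {
  description = "Name of the S3 bucket holding Terraform state"
  value       = aws_s3_bucket.terraform_state.bucket
}
```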
Configure Remote Backend
Once your backend infrastructure exists, configure your main Terraform code to use it:
```hcl
# backend.tf
terraform {
  backend "s3" {
    bucket         = "your-company-terraform-state-abc123"
    key            = "infrastructure/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
    encrypt        = true
  }
}
```
For multiple environments, organize state files by path:
```hcl
# Production
key = "production/infrastructure/terraform.tfstate"

# Staging
key = "staging/infrastructure/terraform.tfstate"

# Development
key = "development/infrastructure/terraform.tfstate"
```
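Terraform also supports partial backend configuration, where environment-specific values are supplied at `init` time instead of being hardcoded in `backend.tf`. The bucket name below is illustrative:

```shell
terraform init \
  -backend-config="bucket=your-company-terraform-state-abc123" \
  -backend-config="key=staging/infrastructure/terraform.tfstate"
```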
Environment-Specific Configuration
Use environment variables in your GitHub Actions workflow to handle different environments:
```yaml
- name: Setup Terraform Backend
  run: |
    cat > backend.tf << EOF
    terraform {
      backend "s3" {
        bucket         = "${{ vars.TERRAFORM_STATE_BUCKET }}"
        key            = "${{ github.ref_name }}/terraform.tfstate"
        region         = "${{ vars.AWS_REGION }}"
        dynamodb_table = "terraform-state-locks"
        encrypt        = true
      }
    }
    EOF
```
Set `TERRAFORM_STATE_BUCKET` and `AWS_REGION` as repository variables in GitHub. This approach is flexible and keeps environment-specific values out of your code.
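If you use the GitHub CLI, repository variables can also be set from the command line (this assumes a recent `gh` release that includes the `variable` subcommand; the values are placeholders):

```shell
gh variable set TERRAFORM_STATE_BUCKET --body "your-company-terraform-state-abc123"
gh variable set AWS_REGION --body "us-east-1"
```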
Adding PR comments and status badges
PR Comment Automation
The most valuable feedback comes from posting plan output as PR comments. This example configuration adds the plan results directly to your pull request:
```yaml
- name: Terraform Plan
  id: plan
  run: terraform plan -no-color -input=false
  continue-on-error: true

- name: Comment PR
  uses: actions/github-script@v7
  if: github.event_name == 'pull_request'
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      const output = `#### Terraform Plan 📖 \`${{ steps.plan.outcome }}\`
      <details><summary>Show Plan</summary>

      \`\`\`terraform
      ${{ steps.plan.outputs.stdout }}
      \`\`\`

      </details>

      *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;
      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: output
      });
```
`continue-on-error: true` ensures the workflow continues even if the plan fails, so you still get feedback about what went wrong.
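One practical caveat: GitHub caps issue comment bodies at 65,536 characters, and large plans can blow past that. A small helper inside the script can truncate the output before posting; the function name and headroom value below are illustrative:

```javascript
// GitHub rejects issue comments longer than 65,536 characters,
// so cut oversized plan output and flag the truncation.
const MAX_COMMENT = 65000; // leave headroom for the surrounding markdown

function truncatePlan(plan) {
  return plan.length > MAX_COMMENT
    ? plan.slice(0, MAX_COMMENT) + "\n... (output truncated)"
    : plan;
}
```

Wrap the plan output with `truncatePlan(...)` before interpolating it into the comment body.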
Status Badges for Build Health
You can also add status badges to your repository README to show the current deployment status. For more detailed status tracking, create environment-specific badges as well.
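GitHub generates a badge for any workflow; a README snippet like the following would work (the org, repo, and workflow file name are placeholders, and the `branch` query parameter narrows the badge to one branch):

```markdown
![Terraform CI/CD](https://github.com/YOUR_ORG/YOUR_REPO/actions/workflows/terraform.yml/badge.svg)
![Terraform CI/CD](https://github.com/YOUR_ORG/YOUR_REPO/actions/workflows/terraform.yml/badge.svg?branch=main)
```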
Best practices for Terraform automation in CI
- Separate plan and apply permissions. Plan jobs need read-only access; apply jobs need write access. Never give plan operations modification rights:

```yaml
# Plan job - read only
- name: Plan
  env:
    TF_VAR_read_only: true
```
- Use environment promotion patterns. Deploy development → staging → production. Each environment gets its own state file and branch protection:

```
dev/terraform.tfstate
staging/terraform.tfstate
prod/terraform.tfstate
```
- Lock down apply operations. Automatic plans are safe; automatic applies are dangerous. It's best to require manual approval for production:

```yaml
environment:
  name: production  # required reviewers are configured on the environment in repo settings
```
- Validate early and often, and catch errors before expensive operations:

```yaml
- run: terraform fmt -check
- run: terraform validate
- run: tfsec .
```
- Use OIDC for AWS access, Secrets Manager for runtime values. Never commit sensitive data, not even encrypted.
- Plan for rollbacks. Keep previous configurations in version control. Test rollback procedures in non-production. Terraform rollbacks are complex, so prepare before you need them.
- Monitor state drift. Run periodic jobs comparing state files against actual infrastructure. Alert when reality diverges from code.
- Back up state files religiously. S3 versioning plus cross-region replication. Test your recovery procedures regularly. Corrupted state files end careers!
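A drift check can be as simple as a scheduled workflow that runs `terraform plan -detailed-exitcode`, where exit code 2 means changes are pending. A minimal sketch, with the schedule and file name as assumptions:

```yaml
# .github/workflows/drift.yml (illustrative)
name: Drift Detection
on:
  schedule:
    - cron: '0 6 * * *'  # daily at 06:00 UTC
jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -detailed-exitcode  # exit code 2 = drift detected
```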
For more best practices, refer to the Terraform Best Practices For CI/CD Pipelines guide.
Finalizing your YAML workflow
```yaml
name: Terraform CI/CD

on:
  pull_request:
    paths: ['**.tf', '**.tfvars']
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read
  pull-requests: write

jobs:
  terraform:
    runs-on: ubuntu-latest
    environment: ${{ github.ref == 'refs/heads/main' && 'production' || 'development' }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ vars.AWS_ACCOUNT_ID }}:role/GitHubActionsTerraformRole
          aws-region: ${{ vars.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Format
        run: terraform fmt -check

      - name: Terraform Init
        run: terraform init

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        id: plan
        run: terraform plan -no-color -input=false
        continue-on-error: true

      - name: Comment PR
        uses: actions/github-script@v7
        if: github.event_name == 'pull_request'
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `#### Terraform Plan 📖 \`${{ steps.plan.outcome }}\`
            <details><summary>Show Plan</summary>

            \`\`\`terraform
            ${{ steps.plan.outputs.stdout }}
            \`\`\`

            </details>`
            });

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -auto-approve
```
Your team opens pull requests and immediately sees plan output as comments – no more guessing what infrastructure changes will happen. Format and validation errors get caught before human review. When the PR merges to main, changes apply automatically to production.
Replace `${{ vars.AWS_ACCOUNT_ID }}` and `${{ vars.AWS_REGION }}` with your actual values in GitHub repository variables. Add multiple environments by duplicating the workflow with different branch triggers. For staging deployments, change the branch condition to `staging` instead of `main`.
The workflow uses GitHub Environments to control production access. Set up branch protection rules requiring reviews before merge. Add required status checks so broken plans can't reach production.
This foundation scales from simple single-environment setups to complex multi-account deployments. Start here, then add complexity as your infrastructure grows. Keep in mind that you don't need commits or PRs to trigger GitHub Actions workflows; you can also trigger them on demand with `workflow_dispatch`.
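A minimal sketch of a manual trigger, where the `environment` input name and default are illustrative:

```yaml
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment to deploy to'
        required: true
        default: 'development'
```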
Why choose Terrateam
Terrateam eliminates this complexity by providing enterprise-grade Terraform automation out of the box. Instead of maintaining custom GitHub Actions workflows, you get a purpose-built platform that understands Terraform's nuances.
Terrateam automatically implements the security and operational practices covered in this guide. OIDC authentication, proper state management, and approval gates work without much configuration. Plan comments, status tracking, and drift detection are standard features.
When you modify your network configuration, Terrateam automatically determines which downstream applications need replanning. No more manually coordinating changes across multiple directories or forgetting critical dependencies.
Simple branch-based deployments evolve into complex approval matrices as teams grow. Terrateam handles multi-environment promotion, conditional approvals, and integration with external systems like ServiceNow or Jira.
```yaml
# .terrateam.yml
dirs:
  network:
    when_modified: ['*.tf']
  applications:
    when_modified: ['*.tf']
    depends_on: ['dir:network']
```
Policy-as-code enforcement and detailed audit logs come standard. Your hand-built Terraform CI/CD pipeline would need months of additional development to match these capabilities.
The GitHub Actions approach teaches you the fundamentals and works for smaller teams. Terrateam scales those fundamentals to enterprise complexity without the maintenance overhead.