Most Used Terraform Providers in 2025
The Terraform Registry hosts thousands of providers, but just a handful account for the majority of all downloads. This concentration reveals which providers work in production versus those that only work in demos. The AWS provider had reached 5 billion downloads as of this writing, but what about utility providers like Random and Null, which collectively rival major cloud providers in usage? What makes them so popular?
These patterns matter because provider choice determines whether your infrastructure deploys reliably or fails mysteriously. If you choose the wrong provider, you might discover missing features after committing to it, or you might struggle with credential management across environments. Understanding which providers dominate and why helps you build on proven approaches and speed up your infrastructure deployments.
What Terraform providers actually do
Providers bridge your Terraform code and the actual APIs that create infrastructure. Without them, your aws_instance resource block is just text in a file. The provider transforms it into the series of API calls needed to launch an EC2 instance, handling all the messy details like authentication tokens expiring mid-request or the API returning paginated results that need to be assembled.
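For concreteness, that resource block is only a few lines of HCL; the values below are placeholders:

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}

Everything else, from signing the API request to waiting for the instance to report as running, is the provider's job.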
This abstraction is powerful when you realize how different these APIs actually are. Creating an S3 bucket happens almost instantly, while spinning up an RDS cluster might take 20 minutes, during which the provider polls "Is it ready yet?" every 30 seconds. Yet in your Terraform code, both resources look nearly identical. The provider smooths over these differences so you don't have to write custom polling logic or handle each service's quirks.
Beyond API translation, providers maintain your infrastructure's state and detect when reality drifts from your configuration. They track relationships between resources, such as ensuring your load balancer knows about the EC2 instances to which it should route traffic. When you run terraform plan, the provider compares what you've declared with what actually exists, then figures out the minimum set of changes needed to reconcile the difference.
Choosing providers that work at scale
Not all providers are created equal.
The official AWS provider gets weekly updates with bug fixes and new features, while a community provider for your niche monitoring tool might not have been touched in months. Before you commit to a provider, check its GitHub repository. Look for recent commits, how quickly issues get addressed, and whether the maintainers respond to questions. A provider that hasn't been updated since last year probably won't support the latest API features you need.
Release patterns tell you what maintenance burden you're signing up for. Providers that release weekly stay current but might break your code with each update. Others release quarterly and offer more stability, but with slower feature additions. If your team only reviews infrastructure changes once a month, using a provider that ships breaking changes every week means you'll always be playing catch-up.
The most frustrating discovery happens three months into a project when you realize the provider doesn't support that one critical feature you need. Maybe it handles basic S3 operations but not bucket replication, or creates Kubernetes deployments but not custom resources. Before building your architecture around a provider, verify it supports your specific use cases.
The most used Terraform providers
AWS provider
The AWS provider's massive download count reflects a simple reality: most cloud infrastructure runs on AWS, so most Terraform manages AWS resources. Years of battle-testing have taught it to handle edge cases that would break newer providers. It knows how to retry when AWS rate limits you, how to wait for eventually consistent services to catch up, and how to manage the hundreds of AWS services that all behave slightly differently.
Production teams rarely use just one AWS provider configuration. You need different settings for different scenarios:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.52.0" # Patch releases only, no new minor versions
    }
  }
}

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      ManagedBy   = "Terraform"
      Environment = var.environment
    }
  }
}

provider "aws" {
  alias  = "disaster_recovery"
  region = "us-west-2"
}
The default_tags feature automatically applies tags to every resource you create, which sounds minor until you're trying to track cloud costs across hundreds of resources. The aliased provider for disaster recovery lets you create resources in multiple regions from the same configuration.
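Individual resources opt into an aliased provider explicitly. A minimal sketch, with a hypothetical replica bucket for backups:

resource "aws_s3_bucket" "backup_replica" {
  provider = aws.disaster_recovery
  bucket   = "example-backups-replica" # placeholder bucket name
}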
However, the provider's evolution hasn't been smooth. For example, the jump from v3 to v4 broke how S3 buckets handle ACLs, forcing teams to rewrite configurations to use separate aws_s3_bucket_acl resources. Teams who stayed on v3 avoided the immediate headache, but now can't use newer features like default_tags improvements or support for the latest AWS services.
This is the eternal Terraform dilemma: stay current and deal with breaking changes, or stay stable and miss out on improvements.
Kubernetes provider
The Kubernetes Terraform provider does something kubectl can't: it remembers what you deployed and notices when someone changes it. This matters more than you think. When a developer "quickly fixes" something with kubectl apply, Terraform will catch that drift in the next plan. Whether that's helpful or annoying depends on how your team works.
Getting the provider to authenticate properly is where most teams struggle. You can't just point it at a cluster and go. If you're using EKS, the configuration typically looks like this:
data "aws_eks_cluster" "main" {
name = var.cluster_name
}
data "aws_eks_cluster_auth" "main" {
name = var.cluster_name
}
provider "kubernetes" {
host = data.aws_eks_cluster.main.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.main.token
}
This setup dynamically pulls credentials from AWS, so the same IAM role that creates your cluster can also deploy resources. There are no kubeconfig files to manage or certificates to rotate. The token refreshes automatically with each run.
The challenge comes when teams try to use Terraform for everything Kubernetes-related. It works great for cluster infrastructure and namespace configuration, but managing application deployments gets messy. Terraform sees drift every time someone updates a deployment with kubectl or their CI/CD tool. Most successful teams draw a clear line: Terraform manages the cluster and core infrastructure, while ArgoCD or Flux handles application deployments. Trying to do both with one tool usually ends in frustration.
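For the pieces that do belong in Terraform, the resources are simple enough. A minimal sketch of a namespace under Terraform management (the name and labels are illustrative):

resource "kubernetes_namespace" "platform" {
  metadata {
    name = "platform" # illustrative namespace name

    labels = {
      managed-by = "terraform"
    }
  }
}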
Helm provider
The Helm provider manages entire application deployments as single Terraform resources. Instead of defining dozens of Kubernetes resources for an nginx ingress controller, you reference a Helm chart that packages them all together. This sounds convenient until you realize you're now managing two different version systems: the provider version and every chart version you deploy.
resource "helm_release" "ingress" {
name = "nginx-ingress"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
version = "4.7.1" # Chart version, not provider version
set {
name = "controller.service.type"
value = "LoadBalancer"
}
}
That version field is a chart version completely independent of your Helm provider version. You might be using Helm provider v2.11.0 to install nginx chart v4.7.1, and both can break your deployment in different ways. The provider might change how it handles values, while the chart might rename configuration options or require new mandatory settings.
Values management adds another layer of complexity. You can pass values through files, inline YAML, or individual set blocks, and they all override each other in specific ways. When something doesn't work, you're debugging three layers: did Terraform pass the values correctly, did Helm interpret them properly, and did the chart actually use them?
Most teams keep values in separate files and use templatefile() to inject variables, which at least makes the values easier to test outside of Terraform.
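A hedged sketch of that pattern, assuming a hypothetical values.yaml.tpl template next to the configuration:

resource "helm_release" "ingress" {
  name       = "nginx-ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = "4.7.1"

  # Render the template and pass it as the release's values,
  # replacing the individual set blocks shown earlier.
  values = [
    templatefile("${path.module}/values.yaml.tpl", {
      service_type = "LoadBalancer"
    })
  ]
}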
Utility providers
Every Terraform configuration needs utility providers for tasks that cloud APIs don't handle:
- You need random passwords that don't regenerate on every run
- You need to execute shell commands when specific resources change
- You need to generate files based on templates
These providers fill those gaps.
The Random provider solves a deceptively tricky problem: generating consistent values across runs. Here's how most teams handle database passwords:
resource "random_password" "database" {
length = 32
special = true
keepers = {
rotation = var.password_rotation_timestamp
}
}
Without the keepers block, this password never changes, which might fail your security audits. With keepers, you control exactly when passwords rotate by updating the timestamp variable. The password stays stable until you explicitly tell it to change.
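The generated value then gets referenced wherever the real resource needs it. A minimal sketch, assuming an RDS instance elsewhere in the same configuration (names and sizes are placeholders):

resource "aws_db_instance" "main" {
  identifier          = "example-db"
  engine              = "postgres"
  instance_class      = "db.t3.medium"
  allocated_storage   = 20
  username            = "app"
  password            = random_password.database.result
  skip_final_snapshot = true
}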
The Null provider lets you run scripts and commands, which sounds like cheating in Terraform's declarative world, but sometimes you need it:
resource "null_resource" "cluster_config" {
triggers = {
cluster_id = aws_eks_cluster.main.id
}
provisioner "local-exec" {
command = "aws eks update-kubeconfig --name ${aws_eks_cluster.main.name}"
}
}
Triggers determine when provisioners re-run. Without them, the command runs once when you first create the resource and then never again, even if you destroy and recreate the entire cluster. With triggers, the command runs whenever the cluster ID changes. It's a hack that works when you need to bridge Terraform with external tools.
Other providers worth knowing
If you're not using AWS, the Azure (azurerm) and Google Cloud (google) Terraform providers offer similar maturity and features for their respective platforms. Both receive weekly updates and handle the full range of their cloud services, though Azure's provider went through a major breaking change with the shift from v2 to v3 that caught many teams off guard. The Google Beta provider runs alongside the main Google provider for preview features, a pattern unique to GCP.
The Datadog provider leads the pack for monitoring and observability, but New Relic and Grafana providers offer alternatives depending on your monitoring stack. The GitHub provider manages repository infrastructure, while the GitLab and Bitbucket providers serve teams using those platforms.
Where infrastructure tooling gets interesting is with specialized providers:
- The Cloudflare provider has become the standard for DNS and CDN management across clouds
- The Vault provider integrates secret management, though watch for circular dependencies where Terraform needs secrets from the vault it's trying to create
- The PagerDuty and Opsgenie providers manage on-call schedules and escalation policies as code
- For those running on-premises or hybrid clouds, the vSphere provider manages VMware infrastructure, while the Proxmox provider handles the open-source alternative
- Database-specific providers like MongoDB Atlas and PostgreSQL let you manage cloud databases and their schemas through Terraform
The key is matching providers to your actual stack. If you're all-in on Azure, you might need azurerm, azuread for identity, Datadog for monitoring, and GitHub for source control.
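A required_providers block for that kind of stack might look like the following; the registry sources are the commonly used ones, but the version constraints are illustrative, so pin to whatever you've actually tested:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.100" # illustrative constraint
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.50" # illustrative constraint
    }
    datadog = {
      source  = "DataDog/datadog"
      version = "~> 3.40" # illustrative constraint
    }
    github = {
      source  = "integrations/github"
      version = "~> 6.0" # illustrative constraint
    }
  }
}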
Solving common Terraform provider challenges
Authentication without secrets
The old way of handling provider authentication was simple: put your AWS access keys in environment variables and hope they don't leak. This approach falls apart quickly. Credentials get committed to git, rotation becomes a nightmare across dozens of repositories, and eventually someone's laptop gets stolen with access to production.
OIDC changes the game by eliminating permanent credentials entirely. Instead, your CI/CD pipeline proves its identity to AWS and receives temporary credentials that expire when the job finishes:
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::${{ vars.AWS_ACCOUNT_ID }}:role/github-terraform
    aws-region: us-east-1
The magic happens in the IAM trust policy, which only allows your specific repository and branch to assume the role. No more secrets in GitHub, no more rotation schedules, and if someone clones your repo, they get code but no access.
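The trust policy itself is ordinary IAM configuration. A hedged sketch in Terraform, assuming GitHub's OIDC identity provider is already registered in the account and the repository is the hypothetical example-org/infrastructure:

resource "aws_iam_role" "github_terraform" {
  name = "github-terraform"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRoleWithWebIdentity"
      Principal = {
        # ARN of the GitHub OIDC identity provider (account ID is a placeholder)
        Federated = "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      }
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
          # Only the main branch of this specific repository may assume the role
          "token.actions.githubusercontent.com:sub" = "repo:example-org/infrastructure:ref:refs/heads/main"
        }
      }
    }]
  })
}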
You're stuck with API keys for providers that don't support OIDC, but you can contain the risk by centralizing credentials: store them in GitHub Secrets or AWS Secrets Manager, inject them during pipeline runs, and rotate them regularly. It's not perfect, but it's better than hardcoding them.
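One way to wire that up, sketched with a hypothetical Datadog key stored in AWS Secrets Manager (the secret name and JSON keys are assumptions):

data "aws_secretsmanager_secret_version" "datadog" {
  secret_id = "shared/datadog" # hypothetical secret name
}

provider "datadog" {
  api_key = jsondecode(data.aws_secretsmanager_secret_version.datadog.secret_string)["api_key"]
  app_key = jsondecode(data.aws_secretsmanager_secret_version.datadog.secret_string)["app_key"]
}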
Version management strategies
Provider version management is like walking a tightrope. If you update too aggressively, you'll spend your time fixing breaking changes. If you update too conservatively, you'll miss critical bug fixes and new features. The solution is to use different versions for different environments.
In production, pin to exact versions so nothing changes unexpectedly:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "= 5.52.0" # Nothing else accepted
    }
  }
}
In development, allow patch updates to catch issues early:
version = "~> 5.52" # Allows 5.52.1, 5.52.2, blocks 5.53.0
This way, you make your development environment an early warning system for patch releases that might cause problems, while production stays stable until you deliberately update it.
The lock file (.terraform.lock.hcl) is your safety net. It records exactly which provider version Terraform downloaded, down to the checksum.
Commit this file to git. Without it, two developers running terraform init on the same day might get different provider versions if a new patch was released between their runs. Those tiny differences cause bugs that waste entire afternoons.
Multi-provider dependencies
Real infrastructure needs multiple providers working together, and that's where things get interesting. Your Kubernetes provider can't initialize until the EKS cluster exists. Your Helm provider needs the Kubernetes provider configured first. These dependencies create a chain that Terraform has to respect. The trick is using resource outputs to configure dependent providers:
resource "aws_eks_cluster" "main" {
name = var.cluster_name
role_arn = aws_iam_role.cluster.arn
vpc_config {
subnet_ids = var.subnet_ids
}
}
provider "kubernetes" {
host = aws_eks_cluster.main.endpoint
token = data.aws_eks_cluster_auth.main.token
cluster_ca_certificate = base64decode(
aws_eks_cluster.main.certificate_authority[0].data
)
}
Because the Kubernetes provider references the EKS cluster's endpoint, Terraform knows it can't initialize that provider until the cluster exists. This works great for initial creation, but if you need to destroy and recreate the cluster, you might need targeted applies to manage the dependency chain manually. It's not elegant, but it beats trying to untangle provider initialization errors.
Building a sustainable provider strategy
You don't need every provider that might someday be useful, so start small. Most teams build sophisticated infrastructure with just their cloud provider, Kubernetes, Helm, and a handful of utility providers.
Each provider you add is another dependency to manage, another authentication method to configure, and another source of potential breaking changes. Add providers when you actually need them, not when you think you might.
Documentation seems like overhead until you're debugging provider authentication at midnight. Write down which versions you're using and why. Document your authentication flow, especially if it involves multiple steps or service accounts. Create a provider template that new projects can copy, including all the version constraints and provider configurations that took you months to get right. Your future teammates (including future you) will thank you when they don't have to reverse-engineer your provider setup.
Testing provider updates requires discipline. Never update providers directly in production, no matter how minor the version bump looks. Create a dedicated test environment that mirrors your production infrastructure's core components. Run terraform plan with the new provider version and actually read the output. Look for deprecation warnings that signal future breaking changes. Check if resource attributes you depend on are being renamed or removed. Major version upgrades are projects, not routine updates, so budget time accordingly.
Also, monitor authentication failures that might indicate expiring credentials or changed IAM policies. Track API rate limit errors that suggest you need to tune Terraform's parallelism. Watch for provider crashes that could signal compatibility issues with your Terraform version.
These metrics give you early warning before providers cause actual outages. When you see patterns like increasing retry counts or timeout errors, investigate before they become incidents.
Conclusion
The concentration of usage among a handful of providers shows that successful teams stick with proven options rather than experimenting with every new integration.
Getting providers right means thinking beyond features. Version pinning prevents surprises when a provider update breaks production. OIDC authentication eliminates the long-lived credentials and rotation workflows that shouldn't exist in the first place. Clear boundaries between tools (for example, Terraform for infrastructure and ArgoCD for applications) prevent the constant drift battles that exhaust engineering teams.
The path forward is to pick the minimal set of providers you need, pin their versions, automate their authentication, and document what you've built. Then, focus on your actual infrastructure instead of fighting with provider configurations.
For teams managing providers across multiple repositories and environments, good code organization and CI/CD practices turn providers from a source of friction into reliable building blocks.
Tools like Terrateam can handle the provider orchestration patterns described here, but the principles apply regardless of your tooling choices. The goal is infrastructure that deploys predictably, whether you're managing ten resources or ten thousand.