Deploy a Terraform load balancer on AWS
A load balancer is a critical piece of infrastructure that serves a variety of purposes, from tying together multiple services to performing health checks and providing resiliency. When you deploy a Terraform elastic load balancer, selecting the right type and configuring it effectively will help you maximize the performance of your application.
Whether you're deploying a Terraform load balancer for the first time or optimizing existing infrastructure, understanding the differences between load balancer types and how to configure them correctly will help you build more resilient applications.
What is a load balancer?
A load balancer is a piece of infrastructure that serves as a single point of ingress for traffic, routing traffic to different destinations based on certain criteria. In this way, a load balancer is very similar to a reverse proxy, though unlike its more feature-rich counterpart, it typically omits functionality like security controls or caching, focusing instead on traffic distribution and health monitoring.
In addition to routing traffic to different services, a load balancer can perform HTTP redirects, SSL termination, and HTTP-to-HTTPS redirection. Load balancers also support health checks on downstream services to ensure that traffic is only routed to healthy hosts.
Application versus network load balancers
AWS provides two primary kinds of load balancers: the application load balancer (ALB) and the network load balancer (NLB).
It's critical to know the differences between them and when to use each so you can maximize the performance of your application.
The most significant difference between ALB and NLB is the network layer each operates on: NLBs operate on Layer 4 traffic, while ALBs operate on Layer 7. Layer 4 is the transport layer, responsible for end-to-end data transmission, usually over the TCP or UDP protocols.
Because the NLB operates at a low level, routing is based on either IP protocol or port number. The trade-off (lower-level operation means less flexibility) is that NLBs are extremely performant.
ALBs, on the other hand, operate on Layer 7 traffic, the application layer. AWS ALBs support the HTTP, HTTPS, and HTTP/2 protocols.
Because it operates at the application layer, the ALB can route traffic based on request content, enabling far more flexible and intelligent routing.
Instead of being confined to things like port number, an ALB can route traffic based on hostname, header, URL path, HTTP method and more. In addition to forwarding traffic to destinations, ALBs can also modify host headers or paths – features usually reserved for more sophisticated reverse proxies.
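For instance, host-based routing is expressed as a listener rule condition. Here's a minimal sketch, assuming a hypothetical HTTPS listener and an api target group are defined elsewhere:
resource "aws_lb_listener_rule" "api_host" {
  listener_arn = aws_lb_listener.https.arn # hypothetical listener
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.api.arn # hypothetical target group
  }

  condition {
    host_header {
      values = ["api.example.com"]
    }
  }
}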
The following table summarizes the key differences between ALBs and NLBs, so you can determine which load balancer best fits your application requirements:
| Aspect | Application load balancer (ALB) | Network load balancer (NLB) |
|---|---|---|
| Network layer | Layer 7 (Application) | Layer 4 (Transport) |
| Protocols | HTTP, HTTPS, HTTP/2 | TCP, UDP, TLS |
| Routing logic | Path, host, headers, query strings, HTTP methods | IP address and port only |
| Performance | Thousands of requests/sec | Millions of requests/sec, sub-millisecond latency |
| SSL termination | Yes, with ACM integration | Yes, via TLS listener |
| Static IP | No | Yes (Elastic IP supported) |
| Connection type | Terminates and re-establishes | Pass-through |
| Best for | Web apps, APIs, microservices with complex routing | Gaming, streaming, high-performance TCP/UDP, static IP needs |
| Pricing | Higher (LCU-based) | Lower (data processing-based) |
As shown in the comparison, your choice between ALB and NLB typically comes down to whether you need sophisticated HTTP-level routing (favoring ALB) or require maximum performance with minimal latency for TCP/UDP traffic (favoring NLB). Understanding these trade-offs ensures you select the right tool for your specific infrastructure needs.
Why use Terraform to manage load balancers
Just like any infrastructure you manage, using infrastructure as code (IaC) provides numerous benefits, such as auditability, reproducibility, and agility.
Load balancers are particularly well-suited to Terraform management because they require frequent updates as your microservices evolve, meaning you're routinely modifying target groups and routing rules to keep pace with your changing architecture.
Beyond the regular updates, you'll find infrastructure as code essential when creating multiple environments for your applications, as it ensures you can recreate the same resources consistently without discrepancies.
Whether you're spinning up development environments or recovering from a disaster that requires rebuilding production, managing your load balancers programmatically through Terraform eliminates configuration drift and ensures every deployment matches your defined infrastructure state.
What you need to do before deployment
Before deploying your Terraform load balancer on AWS, ensure you have Terraform installed (version 1.0 or higher) and AWS credentials configured with appropriate IAM permissions for Elastic Load Balancing, EC2, and Certificate Manager.
You can verify your setup by running terraform version and aws sts get-caller-identity to confirm everything is properly configured.
The code examples in this guide reference several supporting resources that you'll need to have in place:
- A VPC (aws_vpc.main)
- Subnets spanning multiple availability zones (aws_subnet.public)
- Security groups (aws_security_group.lb_sg)
- SSL/TLS certificates from AWS Certificate Manager (aws_acm_certificate)
If you plan to enable access logs, you'll also need an S3 bucket configured with permissions for the Elastic Load Balancing service account.
These foundational resources ensure your Terraform AWS load balancer can deploy successfully and handle traffic as expected.
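As a starting point, here's a minimal sketch of the Terraform scaffolding the examples in this guide assume; the region and version constraints are illustrative, so adjust them for your environment:
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # illustrative; use your own region
}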
Deploying your load balancers to AWS
Before we deploy our load balancers to AWS, it's important to understand the basic anatomy of a load balancer.
Load balancers on AWS are composed of multiple components. The first is the listener, which serves as the point of ingress and binds to a specific port. For an ALB, you'll typically bind listeners to ports 80 (HTTP) and 443 (HTTPS). When binding a listener to 443 for HTTPS, you'll need to provide the load balancer with an SSL certificate.
The next construct to familiarize yourself with is rules.
Rules are exactly what they sound like: the logic layer for load balancers. Rules route traffic from the listener to a destination. Typically, rules forward traffic to target groups, but they can also perform redirects or return a fixed response, such as a 404.
Rules are applied based on priority. Each rule must be given a unique priority level (1-50,000), and rules are evaluated in ascending order. Once a rule's conditions match, the load balancer stops evaluating and executes that rule's actions. If no prioritized rules match, the default rule applies.
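For example, here's a minimal sketch of a rule that returns a fixed 404 for a retired path; the listener reference and priority are placeholders:
resource "aws_lb_listener_rule" "legacy_404" {
  listener_arn = aws_lb_listener.https.arn # placeholder listener
  priority     = 200

  action {
    type = "fixed-response"

    fixed_response {
      content_type = "text/plain"
      message_body = "Not found"
      status_code  = "404"
    }
  }

  condition {
    path_pattern {
      values = ["/legacy/*"]
    }
  }
}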
The last major components of a load balancer are target groups and targets.
Target groups are the most common destination for rules. A target group is a collection of targets, such as EC2 instances, ECS services, EKS workloads, or even other load balancers. Target groups also define health checks that continuously monitor service availability, automatically removing unhealthy instances from rotation to prevent user-facing errors.
This automated failover means your application stays available even when individual instances fail, without requiring manual intervention or complex monitoring scripts.
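Health checks are declared on the target group itself. Here's a minimal sketch; the resource name, path, and thresholds are illustrative rather than prescriptive:
resource "aws_lb_target_group" "example" {
  name     = "example"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path                = "/healthz" # illustrative health endpoint
    interval            = 30         # seconds between checks
    timeout             = 5          # seconds before a check fails
    healthy_threshold   = 3          # successes before marking healthy
    unhealthy_threshold = 3          # failures before removal from rotation
    matcher             = "200"      # expected HTTP status code
  }
}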
Once you have mapped out your routing, there are a few additional settings to consider. First, you need to determine whether your load balancer will be public-facing.
Depending on your situation, you may choose to expose your load balancer directly to the public. In other situations, you may want to keep it private, so that your services are only reachable inside your VPC unless you expose them through another tool, such as a second load balancer or CloudFront.
In content-heavy applications, you might want to place CloudFront in front of your ALB using a VPC Origin, which ensures all traffic flows through CloudFront and respects caching rules rather than allowing users to bypass the CDN by accessing the ALB endpoint directly.
The other notable setting to be aware of is AWS Web Application Firewall (WAF). Attaching a WAF web ACL lets you filter malicious requests before they reach your services; note that WAF integrates with ALBs but not NLBs. You'll also need to declare security groups and availability zones for your load balancer. Once you've decided on these settings, you're ready to deploy your load balancer.
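Associating an existing web ACL with your ALB takes a single resource. A minimal sketch, assuming the ALB (defined in the next section) and a web ACL already exist; aws_wafv2_web_acl.demo is a placeholder:
resource "aws_wafv2_web_acl_association" "demo" {
  resource_arn = aws_lb.demo.arn            # the ALB to protect
  web_acl_arn  = aws_wafv2_web_acl.demo.arn # placeholder web ACL defined elsewhere
}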
Deploying an ALB with Terraform on AWS
Let's start by building out an ALB using a common scenario: hosting a content-based marketing site with a headless content management system (CMS).
In this scenario, the frontend will be served from the root path, and the CMS will be hosted at /admin. Therefore, we need two target groups to support our two services. Separating these target groups allows us to scale our application layers independently. When your frontend experiences high traffic, you can increase its capacity without over-provisioning the CMS, resulting in more efficient resource utilization and lower costs.
resource "aws_lb_target_group" "frontend" {
name = "frontend"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
}
resource "aws_lb_target_group" "cms" {
name = "cms"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
}
So far, we have only defined the target groups; no services are bound to them yet. Depending on where your services are hosted, you will need to declare the target group attachment. For a simple architecture like this with only two services, ECS provides a straightforward hosting solution.
Target group attachments are configured directly in the ECS service declaration (we're leaving out the majority of the ECS service configuration as it is outside of the scope of this article).
resource "aws_ecs_service" "frontend" {
name = "frontend"
# additional config omitted for brevity
load_balancer {
target_group_arn = aws_lb_target_group.frontend.arn
container_name = "app" # Name of the container in your task definition
container_port = 8080
}
}
resource "aws_ecs_service" "cms" {
name = "cms"
# additional config omitted for brevity
load_balancer {
target_group_arn = aws_lb_target_group.cms.arn
container_name = "admin" # Name of the container in your task definition
container_port = 3000
}
}
Now that we have a service running and a target group, let's define our ALB.
resource "aws_lb" "demo" {
name = "demo-lb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.lb_sg.id]
subnets = [for subnet in aws_subnet.public : subnet.id]
enable_deletion_protection = true
}
In the example above, you can see the critical configuration: the load balancer type, whether it's internal or internet-facing, and the security groups and subnets it attaches to. Once we have a load balancer, we can move on to adding our listeners. We're going to add two: one for HTTPS traffic and one for HTTP. We will route the HTTPS traffic to our services and redirect the HTTP traffic to HTTPS to ensure that our users have a secure connection.
resource "aws_lb_listener" "secure" {
load_balancer_arn = aws_lb.demo.arn
port = "443"
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = aws_acm_certificate.demo.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.frontend.arn
}
}
Note the default action in the listener: this is the destination used when no rules match. Let's proceed with adding a listener for HTTP that redirects traffic to HTTPS.
resource "aws_lb_listener" "insecure" {
load_balancer_arn = aws_lb.demo.arn
port = "80"
protocol = "HTTP"
default_action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
Now that we have our listeners configured and forwarding to the correct default targets, let's add the final missing piece: the rule that forwards traffic from our /admin path to the CMS.
resource "aws_lb_listener_rule" "cms" {
listener_arn = aws_lb_listener.secure.arn
priority = 100
action {
type = "forward"
target_group_arn = aws_lb_target_group.cms.arn
}
condition {
path_pattern {
values = ["/admin/*"]
}
}
}
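With routing in place, it's convenient to output the ALB's DNS name so you can point DNS records (or a quick curl) at it:
output "alb_dns_name" {
  description = "Public DNS name of the demo ALB"
  value       = aws_lb.demo.dns_name
}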
Deploying an NLB with Terraform on AWS
As mentioned previously, NLBs have a narrower use case, as they operate at a lower level than ALBs.
Since a Terraform AWS network load balancer is particularly performant (handling millions of requests per second with sub-millisecond latency), it's frequently used for gaming services, real-time applications, and high-throughput TCP/UDP workloads.
Let's set up a highly available game cluster for our demo, starting with provisioning the load balancer.
resource "aws_lb" "game" {
name = "game-server-lb"
internal = false
load_balancer_type = "network"
security_groups = [aws_security_group.lb_sg.id]
subnets = [for subnet in aws_subnet.public : subnet.id]
enable_deletion_protection = true
}
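If you need the static IPs called out in the comparison table, you can pin an Elastic IP to the NLB in each availability zone by using subnet_mapping blocks in place of the subnets argument. A sketch, assuming aws_subnet.public is a count-based resource spanning two zones:
resource "aws_eip" "game" {
  count  = 2
  domain = "vpc"
}

resource "aws_lb" "game_static" {
  name               = "game-server-lb-static"
  internal           = false
  load_balancer_type = "network"

  # One mapping per availability zone, each pinned to an Elastic IP
  subnet_mapping {
    subnet_id     = aws_subnet.public[0].id
    allocation_id = aws_eip.game[0].id
  }

  subnet_mapping {
    subnet_id     = aws_subnet.public[1].id
    allocation_id = aws_eip.game[1].id
  }
}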
Now that we have a load balancer, we need to create our target group. You'll notice the settings are the same as our ALB target groups, except for the port number and protocol.
resource "aws_lb_target_group" "game_servers" {
name = "game-server"
port = 25565
protocol = "TCP"
vpc_id = aws_vpc.main.id
}
Let's proceed to set up our autoscaling group for our compute layer. To keep this brief, we'll show a minimal example with some configuration excluded.
resource "aws_autoscaling_group" "game_servers" {
name = "game-servers-asg"
desired_capacity = 2
max_size = 4
min_size = 1
target_group_arns = [aws_lb_target_group.game_servers.arn]
}
Now that we have that set up, the only remaining piece of the puzzle is the listener. Once it's in place, we'll have a secure, TLS-terminated endpoint in front of our auto scaling game servers.
resource "aws_lb_listener" "game_listener" {
load_balancer_arn = aws_lb.game.arn
port = "25565"
protocol = "TLS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = aws_acm_certificate.game.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.game_servers.arn
}
}
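Finally, you can output the endpoint that game clients should connect to:
output "game_server_endpoint" {
  description = "Address game clients connect to"
  value       = "${aws_lb.game.dns_name}:25565"
}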
Other load balancers
Just like AWS, other cloud hosting providers have their own first-class load balancers.
Google Cloud offers both layer 4 and layer 7 load balancers to manage traffic based on different needs.
Azure offers both Azure Load Balancer for Layer 4 traffic and Azure Application Gateway for Layer 7 requirements, providing a similar separation to AWS.
However, if you need more complex rules or something more sophisticated than what's provided natively, you might want to supplement with a reverse proxy such as Caddy, HAProxy, Traefik, or Nginx. These products offer different solutions to the problem of routing traffic at the application level.
Nginx is a very robust, high-performance reverse proxy, but its configuration model is largely static, which makes it a great fit for applications whose set of downstream services doesn't change frequently.
The other solutions (Caddy, HAProxy, and Traefik) can be configured dynamically based on their environment, making them better suited to applications with an ever-changing number of microservices.
Conclusion
Load balancers are a critical piece of infrastructure, providing SSL termination and supporting highly available services that can scale.
When you design your systems and applications, make sure to take a moment to consider your traffic ingress and what criteria you need to route it, as this will guide you in choosing the correct load balancer.
Also, it's recommended to use infrastructure as code for the provisioning of your resources. Tools like Terraform ensure that you are creating your resources and environment consistently.
Terrateam can help you supercharge your DevOps teams by bringing visibility and observability to your Terraform provisioning while surfacing infrastructure changes through PRs. Meanwhile, Terrateam's infrastructure cost change estimation can help you avoid expensive mistakes before they happen.
Sign up for Terrateam to try it for yourself.