Terraform Deployment with Layered Architecture

Malcolm Matalka

What happens if you update a VPC route table to direct traffic for a new service but forget to include a route for your existing application servers to connect to the database? Suddenly, your servers can’t reach the database, the application stops working, and users face downtime. This kind of issue happens when network changes aren’t carefully planned and the connections between resources like servers and databases aren’t fully considered. To avoid problems like these, it’s important to plan carefully and manage dependencies the right way.

In this blog, we’ll explain what Terraform deployment is and how it manages dependencies. We’ll look at how Terraform can help make deployments easier by handling resource dependencies and how a layered architecture can simplify your complex infrastructure setup.

What is a Terraform Deployment?

A Terraform deployment involves defining, planning, and applying infrastructure changes using Terraform code. Your entire infrastructure, including servers, databases, networks, and more, is declared in code using HCL (HashiCorp Configuration Language). This code automates the lifecycle of the infrastructure from creation, updating, and management to the destruction of resources.

In many cloud environments, making changes directly in the cloud console can lead to errors, infrastructure inconsistencies, or debugging issues. For example, a team might update a security group to open port 80 for web traffic but forget to allow port 443 for secure connections. This mistake blocks HTTPS traffic, preventing users from accessing the application securely. Similarly, adjusting an ingress load balancer configuration without proper testing might accidentally block traffic to the application, causing downtime with a 503 Service Unavailable error. These kinds of issues are very common when changes are made through the cloud console without proper planning, tracking, or review.

Terraform workflows help prevent these issues with three simple steps: run terraform init to initialize the providers and plugins, run terraform plan to preview the changes that will be made to your infrastructure, and then apply those changes with the terraform apply command. This ensures all changes are reviewed, any configuration drift is identified, and only then are the changes deployed to cloud providers such as AWS, Azure, or GCP.
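On the command line, that workflow looks like this (run from the directory containing your .tf files):

```shell
# Download the providers and plugins the configuration needs
terraform init

# Preview the changes without touching real infrastructure
terraform plan

# Apply the reviewed changes (Terraform asks for confirmation first)
terraform apply
```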

Additionally, Terraform maintains a state file that records the resources Terraform manages and their current settings. This state file is important for understanding what exists within your environment. When you run commands like terraform apply or terraform destroy, Terraform compares this recorded state with the desired state in your configuration files, allowing for accurate deployments and updates.

Dependencies in Terraform

In Terraform, dependency management ensures that resources are created in the right order based on their dependencies. Some resources rely on others to configure or establish a working infrastructure, so they must be created first. Terraform handles these dependencies by default, automatically determining the correct order of resource creation to avoid deployment failures caused by missing or incorrectly ordered resources.

For example, setting up an Amazon EKS cluster requires a VPC with subnets and security groups for networking and security. After the cluster is ready, the worker nodes (EC2 instances) can be launched because they need to connect to the cluster to work. With its dependency management, Terraform ensures the VPC is created first, followed by the EKS cluster, and finally the worker nodes. This helps avoid problems like worker nodes being unable to connect to the cluster or network issues causing communication errors.
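As a rough sketch of how those references create the ordering, consider the following (the IAM roles and subnets referenced here are assumed to be defined elsewhere; all names are illustrative):

```hcl
# The cluster references subnets in the VPC, so the VPC and subnets
# must exist before the cluster is created.
resource "aws_eks_cluster" "prod" {
  name     = "prod-cluster"
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids = [aws_subnet.prod_a.id, aws_subnet.prod_b.id]
  }
}

# The node group references the cluster, so Terraform creates the
# worker nodes last: VPC -> subnets -> cluster -> node group.
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.prod.name
  node_group_name = "prod-workers"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = [aws_subnet.prod_a.id, aws_subnet.prod_b.id]

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}
```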

There are two types of dependencies in Terraform:

  • Implicit dependencies - They are internally managed by Terraform without requiring any explicit mention or argument. For example, in the above EKS example, Terraform understands that the worker nodes rely on the EKS cluster, and the cluster relies on the VPC. Since the worker nodes reference the EKS cluster in one of the resource arguments and, similarly, the cluster references the VPC, Terraform knows how to create these resources in the correct sequence without any additional instructions.
  • Explicit dependencies - These are used when you need to clearly specify the order in which resources should be created using the depends_on argument. For example, when setting up an Amazon EKS cluster, you need to make sure that the IAM role permissions are created first. Specifically, the permissions for the AmazonEKSClusterPolicy and AmazonEKSServicePolicy must be in place. If these permissions are not ready, the EKS cluster may have trouble deleting its related EC2 resources, like Security Groups, later on. By using depends_on, you can tell Terraform to create these IAM role policies before the EKS cluster is created. This ensures that everything is set up correctly for the EKS resources to function properly.

Let’s dive into some practical examples to understand better how Terraform manages resource dependencies. These examples will show how Terraform handles the creation of resources in the right order, either automatically or with explicit argument passing.

Managing Implicit Dependencies

Now, in this example, an EC2 instance is deployed within a VPC. While the EC2 instance can technically operate independently, it is best utilized when deployed alongside several key resources like the VPC, subnet, internet gateway, route table, and security group. These resources help create a secure and efficient network environment for the instance. Terraform automatically manages these implicit dependencies, ensuring that the necessary resources are created in the correct order to facilitate proper network isolation, security controls, and connectivity for the EC2 instance.

Here is the code for that:

resource "aws_vpc" "prod_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "ProdVPC"
  }
}

resource "aws_subnet" "prod_subnet" {
  vpc_id            = aws_vpc.prod_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1c"

  tags = {
    Name = "ProdSubnet"
  }
}

resource "aws_internet_gateway" "prod_igw" {
  vpc_id = aws_vpc.prod_vpc.id
}

resource "aws_route_table" "prod_route_table" {
  vpc_id = aws_vpc.prod_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.prod_igw.id
  }

  tags = {
    Name = "ProdRouteTable"
  }
}

resource "aws_route_table_association" "prod_route_table_association" {
  subnet_id      = aws_subnet.prod_subnet.id
  route_table_id = aws_route_table.prod_route_table.id
}

resource "aws_security_group" "prod_sg" {
  vpc_id = aws_vpc.prod_vpc.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "prod_instance" {
  ami                    = "ami-0e86e20dae9224db8"
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.prod_subnet.id
  vpc_security_group_ids = [aws_security_group.prod_sg.id]

  tags = {
    Name = "ProdInstance"
  }
}

In the above code, the EC2 instance relies on the subnet, which is part of the VPC. Terraform automatically detects this dependency and ensures the VPC, subnet, internet gateway, route table, and security group are created before the EC2 instance.

Now, run terraform init to initialize the configuration and terraform plan to see what Terraform will do before applying any changes. This helps you review and confirm the resources that will be created.

Apply the changes using terraform apply to create the resources. Terraform will follow the dependency order, ensuring the VPC and other network components are created before the EC2 instance.

Managing Explicit Dependencies

Now, let’s look at a case where we need to explicitly define dependencies using depends_on. This helps when there is no implicit dependency or reference argument to create resources, ensuring everything is set up properly.

In this example, we will create an IAM role for an EC2 instance that needs to access an S3 bucket for tasks such as storing logs or retrieving data. The IAM role’s policy must be attached only after the S3 bucket is created because the role needs to have the right permissions to interact with the bucket. Since Terraform doesn’t automatically know that the IAM role depends on the S3 bucket, we use depends_on to ensure that the S3 bucket is created first.

Here is the code for that:

resource "aws_s3_bucket" "prod01_bucket" {
  bucket = "prod-bucket-12345"
  acl    = "private"

  tags = {
    Name = "ProdBucket"
  }
}

resource "aws_iam_role" "prod_role" {
  name = "prod-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })

  tags = {
    Name = "ProdRole"
  }
}

resource "aws_iam_role_policy_attachment" "prod_role_policy_attach" {
  role       = aws_iam_role.prod_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"

  depends_on = [aws_s3_bucket.prod01_bucket]
}

In this code, depends_on ensures that the S3 bucket is created before the policy is attached to the IAM role, so the role’s permissions apply to a bucket that already exists.

Run terraform plan to see what Terraform will do before making any changes, and after reviewing the changes, run terraform apply to apply those changes to your infrastructure.

Terraform will follow the depends_on instruction, ensuring the S3 bucket is created before the IAM role policy is attached.

This example shows how you can use depends_on to control the order of resource creation, making sure everything is set up correctly and avoiding problems like missing or incomplete resources.

What is a Terraform Layered Architecture?

Terraform Layered Architecture takes dependency management further by organizing infrastructure into logical layers. Often referred to as layered workflows or stacked deployments, this approach breaks down infrastructure into logical layers like networking, security, compute, and storage. Each layer is responsible for a specific part of the infrastructure, promoting better organization, modularity, and scalability.

In a layered architecture, each layer has dependencies on the layers beneath it. For example, a networking layer (VPCs, subnets, route tables) must be established before you can deploy a compute layer (EC2 instances, EKS clusters) that relies on it. Similarly, a security layer (IAM roles, security groups) might be defined to ensure proper permissions and access controls before deploying application services.

Terraform manages these dependencies effectively by making sure that each layer is applied in the correct order. This means:

  1. Network Layer: The foundational layer, which includes VPCs, subnets, and gateways.
  2. Security Layer: Defines IAM roles, policies, and security groups.
  3. Compute Layer: Deploys EC2 instances, EKS clusters, or other compute resources.
  4. Application Layer: Includes application services, databases, and any other app-specific resources.

Each layer or workspace has its own state file, which allows teams to collaborate in isolation without conflicts. You can reference outputs from a previous layer; for example, the compute layer might use the subnet IDs from the network layer. Since the layers have separate state files, you can run multiple terraform apply operations simultaneously without worrying about state file locks.

In a nutshell, layers are a practice of splitting huge chunks of code into smaller pieces (but not too small). These layers can be environment-specific, object-oriented, or business-oriented.

Use Case for Using Layers

Now, let’s take a look at some pointers where a Terraform layered architecture is useful when managing infrastructure:

  • Handling high-traffic events: During significant software releases or updates, like the launch of a new version of a popular application, tech companies often experience increased user activity. To handle this surge in demand, a company can implement a layered architecture. This approach allows the development team to create a separate layer for application servers that can scale independently from the database and caching layers. For example, as users start accessing the new features, the company can quickly add more application servers to manage the load without affecting the underlying database or caching configurations. By keeping the application layer separate, the tech team can efficiently respond to user demands, ensuring that new features perform well even under high traffic.
  • Facilitating feature rollouts: A company operates multiple environments for its application, such as development, staging, and production. Each environment requires different configurations and resources. By using a layered architecture, the team can manage these environments separately while ensuring they share common components where necessary. For example, the development team can create a dedicated layer for the development environment, where they can test new features and updates without affecting the staging or production environments. If an issue arises in development, the team can make changes or roll back that specific layer without impacting the other environments. This separation allows for smooth transitions from development to staging and then to production, ensuring that new features are thoroughly tested before they reach users.
  • Managing Multi-Cloud Deployments: Many companies utilize multiple cloud providers to leverage the best services available. For instance, a business might use one cloud provider for storage solutions while employing another for compute resources and applications. Managing different configurations across these cloud platforms can be challenging due to the varying networking infrastructures. A layered architecture can simplify this process by allowing teams to define common infrastructure components, such as Virtual Private Clouds (VPCs), subnets, and security groups, once and apply them consistently across all cloud environments. This approach ensures that the networking setup remains uniform, reduces the likelihood of configuration errors, and streamlines resource management in a multi-cloud setup. By using a layered architecture, companies can maintain consistent security policies and access controls, making it easier to integrate and manage their resources across different cloud platforms.

These examples show how Terraform Layered Architecture helps solve real problems, making it easier to manage cloud infrastructure, scale services, and introduce new features.

Advantages of Terraform Layers

As we’ve seen in the use cases, using a Terraform Layered Architecture allows teams to manage their cloud infrastructure more effectively. This approach also offers several key benefits:

  • Modularity: By dividing your infrastructure into layers, teams can maintain a clear structure. For instance, a SaaS company could have distinct layers for the user interface, backend services, and databases. If the backend services require an update, such as adding a new API endpoint or upgrading a service, the team can implement those changes in the backend layer without impacting the user interface or database layers. This separation helps prevent issues like broken links in the user interface or data retrieval errors in the database, which can occur when multiple components are modified at the same time.
  • Flexible Deployments: Layered architecture allows for flexible deployments because each layer can be updated independently. For example, in a web application, the development team might decide to introduce a new payment processing feature. By placing this new feature in its own layer, the team can add the necessary resources, such as a new payment gateway service, without affecting the existing checkout process or other layers. If any issues arise with the new payment processing feature, the team can quickly roll back just that layer, making sure that the rest of the application remains stable and functional for users.
  • Maintainability: Maintaining an up-to-date infrastructure is more manageable with a layered architecture. For instance, in a cloud application, you might have separate layers for logging, monitoring, and alerting. If the monitoring layer requires enhancements, such as adding new metrics or improving data visualization, those adjustments can be made specifically in the monitoring layer. This means the logging and alerting layers remain unchanged and continue to function as expected. By isolating these components, teams can address issues or introduce improvements in one layer without risking disruptions in others.
  • Collaboration: Collaboration is better in a layered architecture because it lets different teams work on their own parts of the infrastructure without interfering with each other. For example, in a software company, the DevOps team can focus on the infrastructure layers—like networking, storage, and security—while the application development team works on the application layer, which includes the software’s code and features. This separation allows both teams to make progress at the same time. The DevOps team can make important changes to the infrastructure while the application team creates new features. This teamwork leads to a smoother workflow and helps get updates out to users faster.

Now, when you are managing multiple environments and cloud providers, keeping track of dependencies can get very complicated. For example, if an application needs a database on AWS and networking resources on Azure, ensuring that everything is set up in the right order can be challenging. Coordinating these dependencies on your own can easily lead to mistakes and delays, making it hard to keep the project on track.

Layered Runs with Terrateam

Terrateam has a Layered Runs feature, which makes it easier to manage dependencies and execute operations in the correct order. It allows teams to define clear dependencies between layers, so when changes happen in one layer, Terrateam automatically triggers updates in the dependent layers. This helps ensure that everything is deployed correctly and reduces the chances of errors or misconfigurations.

It’s very simple to integrate Terrateam with your GitHub organization to get started with the tool. Once integrated, you can use its Layered Runs feature to effectively manage dependencies across different layers, defining the relationships between your infrastructure components through the depends_on configuration.

Now, in your config.yml, you can define your layers for sequential infrastructure deployment. The config.yml is a key file used to set up and manage the features of Terrateam. This is where you specify how your infrastructure is organized into layers and define the relationships between them.

All the controls for Terrateam’s features are configured in this file. As part of the getting started process, users will need to add the following example to set up layers within Terrateam:

dirs:
  networking-layer:
    when_modified:
      file_patterns: ["**/*.tf"]
  web-server-layer:
    when_modified:
      depends_on: 'dir:networking-layer'
      file_patterns: ["**/*.tf"]

Now, create a new branch for your changes. This keeps your work organized and separate from the main branch.

Inside your new branch, create two folders: networking-layer and web-server-layer. Each of these folders should contain a main.tf file where you define your configurations.

Add your Terraform configuration to networking-layer/main.tf and web-server-layer/main.tf. In networking-layer, you might define your VPC and subnets, while in web-server-layer, you could set up your EC2 instances, create a pull request, and let Terrateam handle the rest of the process for you.
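As a minimal sketch of what those two files might contain (the backend configuration, bucket name, CIDRs, and AMI ID below are assumptions, not required values):

```hcl
# networking-layer/main.tf
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

# Expose the subnet ID so the web-server layer can consume it
output "subnet_id" {
  value = aws_subnet.main.id
}
```

```hcl
# web-server-layer/main.tf
# One common pattern is to read the networking layer's outputs from
# its remote state (the S3 backend details here are hypothetical).
data "terraform_remote_state" "networking" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"
    key    = "networking-layer/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "web" {
  ami           = "ami-0e86e20dae9224db8"
  instance_type = "t2.micro"
  subnet_id     = data.terraform_remote_state.networking.outputs.subnet_id
}
```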

Terrateam will first identify the dependencies in your configurations. It will check which layers depend on each other and determine how many layers are left to apply. In this case, it recognizes that there are 2 layers remaining. Once it identifies these dependencies, it will manage the deployment process accordingly.

Next, it will guide you through the necessary steps, indicating that the deployment will require multiple rounds of the plan and apply operations. When you comment terrateam plan, it will allow you to plan the next layer while ensuring that all dependencies are deployed successfully in the right order. This approach simplifies managing layered runs, making it easier to update your infrastructure and reducing the risk of mistakes during deployment.

Terrateam’s Layered Runs feature also allows you to handle complex dependencies effectively using logical operators like or and and. This means you can set up your infrastructure in such a way that changes in one directory can trigger updates in multiple other directories or use conditionals to control the deployment flow.

For example, in your config.yml, you can include a section that looks like this:

when_modified:
  depends_on: 'dir:network or dir:database'

With this setup, if you make any changes in either the network or database directories, Terrateam will automatically handle the necessary updates for those parts. This means your infrastructure will stay consistent and work properly, as any changes in important areas will trigger the right actions in related layers. Learn more about Layered Runs.
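The and operator follows the same pattern; based on the or example above, a layer can express a dependency on both directories at once (directory names are illustrative):

```yaml
when_modified:
  depends_on: 'dir:network and dir:database'
```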

By using Terrateam, you can handle both sequential infrastructure deployments and much more complex structures with multiple layers. One example of a complex setup is a team that deploys its networking and database layers first, followed by multiple applications. There might be shared resources among these applications, such as AWS IPAM pools for IP addresses, which, when modified, depend on configuration changes in the networking and database layers.

With such powerful capability, Terrateam helps teams focus on building their applications and delivering new features for their infrastructure instead of worrying about the complexities of managing dependencies across multiple layers. This leads to a smoother developer experience and faster deployment cycles.

Conclusion

By now, you should have a clear understanding of how to manage dependencies in your Terraform code for faster deployments. Terraform’s Layered Architecture helps manage infrastructure more effectively by structuring it based on objects, environments, or topology. By organizing resources into logical layers, you can make deployments easier, improve collaboration, and keep everything stable while adapting to changing needs in your cloud setup. You can leverage the dependency management capabilities provided by Terrateam to run error-free deployments in the correct order, irrespective of the complexity of your infrastructure.
