Anthony Uketui
When I started this project, my goal was simple but ambitious. I wanted to build a production-ready infrastructure that could automatically scale based on demand. I also wanted to organize everything in a way that supports multiple environments like dev and prod. I'll walk you through my full process, including the reasoning behind each step, challenges I faced, and how I resolved them.

Step 1: Project Structure and Multi-Environment Setup
I wanted to keep my project modular and clean. To achieve this, I structured my directory in this way:
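Here is a sketch of that layout (the envs/ folder matches the commands in Step 8; directory names other than envs/, main.tf, and terraform.tfvars are illustrative):

```
terraform-project/
├── modules/
│   ├── vpc/
│   ├── alb/
│   ├── ec2/
│   ├── rds/
│   └── monitoring/
└── envs/
    ├── dev/
    │   ├── main.tf
    │   └── terraform.tfvars
    └── prod/
        ├── main.tf
        └── terraform.tfvars
```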

- Modules contain reusable building blocks like VPC, ALB, EC2, RDS, and Monitoring.
- Environments (dev and prod) each have their own main.tf and terraform.tfvars files that reference the modules.
This structure makes it easy to manage different configurations without duplicating code.

Step 2: Defining the VPC
My first step was to create a dedicated VPC with public and private subnets. Public subnets are for load balancers, and private subnets are for EC2 and RDS.
Code Snippet (VPC module):
```hcl
resource "aws_vpc" "this" {
  cidr_block = var.vpc_cidr
  tags       = { Name = "${var.env}-vpc" }
}
```
I chose to keep the database and application instances in private subnets for security reasons.
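The subnets follow the same pattern. A minimal sketch, assuming the module takes CIDR and availability-zone lists as inputs (var.public_subnet_cidrs, var.private_subnet_cidrs, and var.azs are assumed names):

```hcl
# Public subnets host the ALB; instances launched here get public IPs.
resource "aws_subnet" "public" {
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.this.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = true
  tags                    = { Name = "${var.env}-public-${count.index}" }
}

# Private subnets hold the EC2 instances and RDS; no public IPs are assigned.
resource "aws_subnet" "private" {
  count             = length(var.private_subnet_cidrs)
  vpc_id            = aws_vpc.this.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = var.azs[count.index]
  tags              = { Name = "${var.env}-private-${count.index}" }
}
```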

Step 3: Application Load Balancer (ALB)
The ALB distributes traffic across EC2 instances. This ensures high availability.
Code Snippet (ALB module):
```hcl
resource "aws_lb" "this" {
  name               = "${var.env}-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnets
}
```
By using public subnets, the ALB became internet-facing, while the EC2 instances behind it stayed private.
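For the ALB to actually route requests, it needs a target group and a listener; the challenges section below mentions the health checks, so here is a sketch of how those pieces could look (port 80 and the "/" path are assumptions):

```hcl
# Target group the Auto Scaling Group registers instances into.
resource "aws_lb_target_group" "app" {
  name     = "${var.env}-app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id # assumed module input

  # The health check must match what the app serves, or instances never go healthy.
  health_check {
    path    = "/"
    matcher = "200"
  }
}

# Listener forwarding incoming HTTP traffic to the target group.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.this.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```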

Step 4: EC2 Auto Scaling Group
Next, I set up EC2 instances with auto scaling. This way, when CPU usage is high, new instances are launched automatically.
Code Snippet (EC2 module):
```hcl
resource "aws_autoscaling_group" "this" {
  desired_capacity    = var.desired_capacity
  max_size            = var.max_size
  min_size            = var.min_size
  vpc_zone_identifier = var.private_subnets

  # An ASG needs a launch template (or config); see the sketch after this step.
  launch_template {
    id      = aws_launch_template.this.id
    version = "$Latest"
  }
}
```
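The scale-out on high CPU comes from a policy attached to the group; a minimal target-tracking sketch (the 70% target is an assumption, mirroring the alarm threshold in Step 6):

```hcl
# Target-tracking policy: the ASG adds or removes instances to hold average CPU near 70%.
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "${var.env}-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.this.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 70
  }
}
```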
I used Amazon Linux 2 AMI and passed in user data through a template to bootstrap the instances.
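A sketch of that launch template, assuming a user_data.sh.tpl file inside the module (the file name and template variables are illustrative):

```hcl
# Launch template the ASG points at; user data is rendered from a template file.
resource "aws_launch_template" "this" {
  name_prefix   = "${var.env}-app-"
  image_id      = var.ami_id # Amazon Linux 2 AMI ID, assumed input
  instance_type = var.instance_type

  # Launch templates expect base64-encoded user data.
  user_data = base64encode(templatefile("${path.module}/user_data.sh.tpl", {
    env = var.env
  }))
}
```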

Step 5: RDS Database Setup
For the database, I used Amazon RDS with MySQL. One challenge here was how to handle credentials securely.
At first, I hardcoded them, but I quickly realized that wasn't safe. I switched to AWS Secrets Manager combined with a random password generator.
Code Snippet (RDS module):
```hcl
resource "random_password" "db_password" {
  length  = 16
  special = true
}

resource "aws_secretsmanager_secret" "db_pwd" {
  name = "${var.env}/db_password"
}

# Without a secret version, the secret exists but holds no value.
resource "aws_secretsmanager_secret_version" "db_pwd" {
  secret_id     = aws_secretsmanager_secret.db_pwd.id
  secret_string = random_password.db_password.result
}
```
This allowed me to store the password securely and avoid exposing it in my code.
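The database instance then consumes the generated password directly; a sketch with illustrative names and sizes (the subnet group and security group input are assumptions):

```hcl
# Subnet group so RDS lives in the private subnets.
resource "aws_db_subnet_group" "this" {
  name       = "${var.env}-db-subnets"
  subnet_ids = var.private_subnets
}

# MySQL instance; the password comes from the random_password resource above.
resource "aws_db_instance" "this" {
  identifier             = "${var.env}-mysql"
  engine                 = "mysql"
  instance_class         = var.db_instance_class
  allocated_storage      = 20
  username               = "admin" # illustrative
  password               = random_password.db_password.result
  db_subnet_group_name   = aws_db_subnet_group.this.name
  vpc_security_group_ids = [var.db_security_group_id] # assumed input
  skip_final_snapshot    = true # acceptable for dev; reconsider for prod
}
```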

Step 6: Monitoring and Alerts
I didn't want to stop at just provisioning resources. I also wanted monitoring and alerts for key components like EC2, ALB, and RDS.
I used CloudWatch Alarms with SNS topics to send email alerts.
Code Snippet (Monitoring module):
```hcl
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "${var.env}-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  threshold           = 70
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.alerts.arn]
}
```
This way, I get notified if something goes wrong.
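The alerts topic the alarm publishes to is a plain SNS topic with an email subscription; a sketch (var.alert_email is an assumed variable):

```hcl
# Topic the CloudWatch alarms publish to.
resource "aws_sns_topic" "alerts" {
  name = "${var.env}-alerts"
}

# Email subscription; AWS sends a confirmation mail that must be accepted.
resource "aws_sns_topic_subscription" "email" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = var.alert_email
}
```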

Step 7: Multi-Environment Variables
Each environment has its own terraform.tfvars file. For example, my dev environment uses smaller instances while prod uses larger ones.
Dev terraform.tfvars:
```hcl
environment       = "dev"
region            = "us-east-1"
instance_type     = "t3.micro"
desired_capacity  = 1
db_instance_class = "db.t3.micro"
```
Prod terraform.tfvars:
```hcl
environment       = "prod"
region            = "us-east-1"
instance_type     = "t3.medium"
desired_capacity  = 2
db_instance_class = "db.t3.medium"
```
This separation made it easy to spin up lightweight dev infrastructure without affecting production.
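These values flow into each environment's main.tf, which only wires the modules together; a trimmed sketch (module inputs and outputs are illustrative):

```hcl
# envs/dev/main.tf (trimmed): reuse the shared modules with dev-sized values.
module "vpc" {
  source   = "../../modules/vpc"
  env      = var.environment
  vpc_cidr = "10.0.0.0/16" # illustrative
}

module "ec2" {
  source           = "../../modules/ec2"
  env              = var.environment
  instance_type    = var.instance_type
  desired_capacity = var.desired_capacity
  private_subnets  = module.vpc.private_subnets # assumes the vpc module exports this
}
```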


Step 8: Running the Project
Here are the commands I used to apply everything:
```bash
cd envs/dev
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars"

cd ../prod
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars"
```
This workflow gave me a smooth way to deploy separate environments using the same modules.
Challenges and How I Solved Them
- Hardcoding DB credentials: Initially I put the password directly in code. I later switched to using Secrets Manager with random_password to improve security.
- Auto scaling not attaching correctly to ALB: At first, my instances weren't registering as healthy. The fix was to ensure the security groups allowed the right ports and the target group health checks matched the app.
- Multi-environment management: I struggled with duplication until I split logic into modules and kept only environment-specific variables in tfvars.
Conclusion
This project gave me a hands-on understanding of how to structure a Terraform project for scalability, security, and maintainability. By modularizing infrastructure, securing credentials, and setting up monitoring, I ended up with a production-grade setup.