ajithmanmu
Building a Secure 3-Tier Application on AWS
I recently worked on a portfolio project where I built a 3-tier application on AWS. My goal wasn't only to get the app running, but also to design it with security and best practices in mind, and then migrate everything into Terraform so it's reproducible.

Project Overview
The setup follows the classic 3-tier architecture:
- Frontend: A React app served by Nginx on EC2, behind a public ALB.
- Backend: A FastAPI app running with Uvicorn on EC2, behind an internal ALB.
- Database: Amazon RDS PostgreSQL in private subnets.
Only the frontend ALB is public; everything else runs in private subnets. Configuration values like the backend ALB DNS and database connection string are securely injected at runtime using AWS SSM Parameter Store and Secrets Manager.
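Secrets Manager stores the DB credentials as a JSON secret string, so turning it into a connection string at boot is mostly a parsing exercise. A minimal sketch with `jq` (the secret name, key names, and values here are assumptions for illustration, not the project's actual configuration):

```shell
#!/usr/bin/env bash
set -euo pipefail

# In real user data the JSON would come from Secrets Manager, e.g.:
# secret_json="$(aws secretsmanager get-secret-value \
#   --secret-id app/db-credentials --query SecretString --output text)"
# Sample payload in the shape Secrets Manager typically returns:
secret_json='{"username":"appuser","password":"s3cret","host":"db.internal","port":5432,"dbname":"appdb"}'

# Assemble a PostgreSQL connection string from the secret's fields.
user="$(jq -r .username <<<"$secret_json")"
pass="$(jq -r .password <<<"$secret_json")"
host="$(jq -r .host <<<"$secret_json")"
port="$(jq -r .port <<<"$secret_json")"
db="$(jq -r .dbname <<<"$secret_json")"
DATABASE_URL="postgresql://${user}:${pass}@${host}:${port}/${db}"
echo "$DATABASE_URL"
```

On the instance, the resulting value would be written into an env file rather than echoed.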
Security Focus
From the start, I set up the application with least-privilege principles:
- No public IPs on app or DB servers; only the ALB is exposed.
- Security Groups allow traffic only along the intended path (ALB → Frontend → Backend → RDS).
- IAM roles are locked down so instances can only read what they need.
- AMIs are kept generic; user data injects environment-specific config at boot.
This way, the environment is both secure and flexible.
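The chained Security Group rules can be expressed as source-group references rather than CIDR ranges, so each tier only accepts traffic from the tier in front of it. A sketch with the AWS CLI (the sg-IDs are placeholders, and ports 8000/5432 assume the default Uvicorn and PostgreSQL ports):

```shell
# Backend SG: accept HTTP only from the internal ALB's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-backend \
  --protocol tcp --port 8000 \
  --source-group sg-internal-alb

# RDS SG: accept PostgreSQL traffic only from the backend SG.
aws ec2 authorize-security-group-ingress \
  --group-id sg-rds \
  --protocol tcp --port 5432 \
  --source-group sg-backend
```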


Building AMIs with Setup Scripts
A key part of this project was baking AMIs. Instead of installing everything during auto-scaling launches, I ran the setup scripts on temporary builder EC2 instances in public subnets. Once the app was installed and tested, I created an AMI from that instance.
- For the frontend, I launched a temporary EC2, ran the React + Nginx setup script, and created a frontend AMI.
- For the backend, I did the same: launched a builder EC2, installed FastAPI + dependencies, configured systemd, and created a backend AMI.
These AMIs were then used in Launch Templates + Auto Scaling Groups, with user data scripts wiring environment-specific details at boot.
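The bake step itself comes down to a couple of CLI calls against the tested builder instance (the instance ID and AMI name below are placeholders):

```shell
# Create an AMI from the tested builder instance.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "frontend-ami-$(date +%Y%m%d)" \
  --no-reboot

# Terminate the builder once the AMI is available.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```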
Frontend Setup
Code:
sudo dnf update -y
sudo dnf install -y nginx git
sudo systemctl enable nginx
curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 20
git clone https://github.com/ajithmanmu/three-tier-architecture-aws.git
cd three-tier-architecture-aws/app
npm ci && npm run build
sudo rm -rf /usr/share/nginx/html/*
sudo cp -R out/* /usr/share/nginx/html/
Nginx config snippet (/etc/nginx/nginx.conf):
Code:
location /api/ {
    proxy_pass http://__BACKEND_INTERNAL_ALB__;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
Frontend user data script fetches the backend ALB DNS from SSM and rewrites the config at boot.
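That rewrite step is essentially a `sed` substitution over the placeholder baked into the AMI. A minimal sketch, demoed on a temp copy (the SSM parameter name and DNS value are assumptions; on the instance it would target /etc/nginx/nginx.conf and finish with `nginx -t && systemctl reload nginx`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo config containing the placeholder baked into the AMI.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
location /api/ {
    proxy_pass http://__BACKEND_INTERNAL_ALB__;
}
EOF

# In real user data the DNS name comes from SSM (parameter name assumed):
# backend_dns="$(aws ssm get-parameter --name /app/backend-alb-dns \
#   --query Parameter.Value --output text)"
backend_dns="internal-backend-alb-123.us-east-1.elb.amazonaws.com"

# Swap the placeholder for the real internal ALB DNS name.
sed -i "s|__BACKEND_INTERNAL_ALB__|${backend_dns}|g" "$conf"
grep proxy_pass "$conf"
```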
Backend Setup
Code:
sudo dnf update -y
sudo dnf install -y python3.11 python3.11-pip git
sudo mkdir -p /opt/app && sudo chown ec2-user:ec2-user /opt/app
cd /opt/app
python3.11 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
git clone https://github.com/ajithmanmu/three-tier-architecture-aws.git src
cd src/backend
pip install -r requirements.txt
cp -R . /opt/app/
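For reference, the systemd service described below can be sketched as a unit file along these lines (the unit name, app module `main:app`, and port are assumptions based on the layout above):

```ini
# /etc/systemd/system/app.service (assumed name)
[Unit]
Description=FastAPI backend (Uvicorn)
After=network.target

[Service]
User=ec2-user
WorkingDirectory=/opt/app
# /etc/app.env is written by the user data script at boot
EnvironmentFile=/etc/app.env
ExecStart=/opt/app/.venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000
Restart=always

[Install]
WantedBy=multi-user.target
```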
The backend is wired to a systemd service running Uvicorn. At boot, a user data script pulls the DB connection string from SSM and writes it into /etc/app.env before starting the app.

Challenges Along the Way
This wasn't all smooth sailing. A few things I had to troubleshoot:
- Networking: With 12 subnets and multiple route tables, I initially struggled to get NAT and IGW routing right. Debugging outbound access from private subnets was a key learning moment.
- Frontend 404s: The frontend served fine, but API calls failed until I realized Nginx needed the backend ALB DNS injected dynamically.
- Secrets Management: At first I hardcoded DB creds. Moving them into Secrets Manager and pulling them at runtime made the setup much cleaner and safer.
- Terraform Migration: Rebuilding everything as code was tedious, but it forced me to understand the resource dependencies and gave me a reproducible setup.
What's Next
Some natural next steps to build on this project:
- Add ACM + HTTPS for the frontend ALB.
- Configure CloudWatch logs and alarms for monitoring and alerting.
- Use S3 + CloudFront for hosting assets (like images), while continuing to serve the frontend itself from EC2.
