This lab demonstrates a production-style AWS network like you’d design as a Solutions Architect:
- Custom VPC
- Public and private subnets across 2 Availability Zones
- Internet Gateway and NAT Gateway
- Application Load Balancer (ALB) in public subnets
- EC2 web servers in private subnets
- Layered route tables and security groups
Traffic flow: Client → ALB (public subnet) → EC2 (private subnet)
I built this to deepen my understanding of AWS networking, high availability, and exam-style architectures for the AWS Solutions Architect Associate.
Region: us-east-1 (N. Virginia)
VPC CIDR: 10.0.0.0/16
| Subnet name | Type | AZ | CIDR | Purpose |
|---|---|---|---|---|
| `public-a` | Public | us-east-1a | `10.0.1.0/24` | ALB, NAT gateway |
| `public-b` | Public | us-east-1b | `10.0.2.0/24` | ALB |
| `private-a` | Private | us-east-1a | `10.0.11.0/24` | EC2 web server |
| `private-b` | Private | us-east-1b | `10.0.12.0/24` | EC2 web server |
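I built this lab in the Console, but the same layout can be expressed as AWS CLI calls. This is a sketch only; the `vpc-...` ID is a placeholder for the value returned by `create-vpc`:

```shell
# Create the VPC (10.0.0.0/16) and tag it.
aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=sa-lab-vpc}]'

# One create-subnet call per subnet; shown here for public-a.
# Repeat with the CIDR/AZ/Name from the table for public-b, private-a, private-b.
aws ec2 create-subnet --vpc-id vpc-PLACEHOLDER \
  --cidr-block 10.0.1.0/24 --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=public-a}]'
```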
- **Internet Gateway (`sa-lab-igw`)**
  - Attached to the VPC
  - Public route table sends `0.0.0.0/0` → IGW
- **NAT Gateway (`sa-lab-nat-a`)**
  - Lives in `public-a`
  - Private route table sends `0.0.0.0/0` → NAT
  - Gives private EC2 instances outbound-only internet access (e.g., OS updates) while keeping them unreachable from outside
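The IGW/NAT steps above map to a handful of CLI calls. A sketch with placeholder IDs (the real IDs come back from the preceding `create-*`/`allocate-*` calls):

```shell
# Internet gateway: create, then attach to the VPC.
aws ec2 create-internet-gateway \
  --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=sa-lab-igw}]'
aws ec2 attach-internet-gateway --internet-gateway-id igw-PLACEHOLDER --vpc-id vpc-PLACEHOLDER

# NAT gateway: needs an Elastic IP, and must live in a *public* subnet
# so its own default route goes out through the IGW.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC-A --allocation-id eipalloc-PLACEHOLDER \
  --tag-specifications 'ResourceType=natgateway,Tags=[{Key=Name,Value=sa-lab-nat-a}]'
```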
- **Public Route Table (`sa-lab-public-rt`)**
  - Associated with `public-a`, `public-b`
  - Routes:
    - `10.0.0.0/16` → local
    - `0.0.0.0/0` → Internet Gateway
- **Private Route Table (`sa-lab-private-rt`)**
  - Associated with `private-a`, `private-b`
  - Routes:
    - `10.0.0.0/16` → local
    - `0.0.0.0/0` → NAT Gateway
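As a CLI sketch of the routing above (placeholder IDs again; note the `10.0.0.0/16 → local` route is created automatically with every route table, so only the default routes are added by hand):

```shell
# Public route table: default route to the internet gateway.
aws ec2 create-route --route-table-id rtb-PUBLIC \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-PLACEHOLDER

# Private route table: default route to the NAT gateway instead.
aws ec2 create-route --route-table-id rtb-PRIVATE \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-PLACEHOLDER

# Associate each table with its subnets (one call per subnet).
aws ec2 associate-route-table --route-table-id rtb-PUBLIC  --subnet-id subnet-PUBLIC-A
aws ec2 associate-route-table --route-table-id rtb-PRIVATE --subnet-id subnet-PRIVATE-A
```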
- **EC2 Instances**
  - AMI: Amazon Linux
  - Type: `t2.micro` / `t3.micro`
  - Subnets: `sa-lab-web-a` in `private-a`, `sa-lab-web-b` in `private-b`
  - No public IPs
  - Bootstrapped with a user data script to install Apache and serve a simple page: `user-data/webserver.sh`
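A minimal user-data script for this setup looks roughly like the following (a sketch of what `user-data/webserver.sh` does; see the repo for the actual script):

```shell
#!/bin/bash
# Runs as root at first boot on Amazon Linux.
# Install and start Apache, then serve a page that identifies the instance
# so you can see the ALB alternating between the two targets.
dnf -y install httpd || yum -y install httpd
systemctl enable --now httpd
echo "<h1>Hello from $(hostname -f)</h1>" > /var/www/html/index.html
```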
- **Application Load Balancer (`sa-lab-alb`)**
  - Scheme: Internet-facing
  - Subnets: `public-a`, `public-b`
  - Listener: HTTP :80 → target group `sa-lab-tg-web`
  - Target type: Instance
  - Health checks: HTTP `/`
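The same ALB setup as a CLI sketch (the ARNs are captured into variables because `elbv2` commands reference resources by ARN; subnet/SG IDs are placeholders):

```shell
# Target group in the VPC, instance targets, health check on "/".
TG_ARN=$(aws elbv2 create-target-group --name sa-lab-tg-web \
  --protocol HTTP --port 80 --vpc-id vpc-PLACEHOLDER \
  --target-type instance --health-check-path / \
  --query 'TargetGroups[0].TargetGroupArn' --output text)
aws elbv2 register-targets --target-group-arn "$TG_ARN" \
  --targets Id=i-WEB-A Id=i-WEB-B

# Internet-facing ALB in the two public subnets, listener forwarding :80.
ALB_ARN=$(aws elbv2 create-load-balancer --name sa-lab-alb \
  --scheme internet-facing --subnets subnet-PUBLIC-A subnet-PUBLIC-B \
  --security-groups sg-ALB-PLACEHOLDER \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn="$TG_ARN"
```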
- **ALB Security Group (`sa-lab-alb-sg`)**
  - Inbound:
    - HTTP 80 from `0.0.0.0/0` (open to the internet for this lab)
  - Outbound:
    - All traffic (default)
- **Web Server Security Group (`sa-lab-web-sg`)**
  - Inbound:
    - HTTP 80 from `sa-lab-alb-sg` only
  - Outbound:
    - All traffic (default)
This creates a proper layered security model:
- Internet can reach only the ALB
- ALB can reach the web servers
- Web servers are not directly reachable from the internet
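The key trick in this layered model is that the web SG's inbound rule references the ALB's *security group* rather than a CIDR. A CLI sketch with placeholder group IDs:

```shell
# ALB SG: HTTP from anywhere (lab only; lock this down in production).
aws ec2 authorize-security-group-ingress --group-id sg-ALB-PLACEHOLDER \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# Web SG: HTTP only from members of the ALB SG, via --source-group.
# No CIDR rule means no direct internet path to the instances.
aws ec2 authorize-security-group-ingress --group-id sg-WEB-PLACEHOLDER \
  --protocol tcp --port 80 --source-group sg-ALB-PLACEHOLDER
```

Referencing a security group instead of a CIDR keeps the rule correct even as ALB nodes scale and change IPs.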
I created everything using the AWS Console to really see how the pieces fit:
- **VPC**
  - Created `sa-lab-vpc` with CIDR `10.0.0.0/16`
- **Subnets**
  - Created two public and two private subnets across `us-east-1a` and `us-east-1b`
- **Internet Gateway & NAT**
  - Created and attached `sa-lab-igw`
  - Created `sa-lab-nat-a` in `public-a` with an Elastic IP
- **Route Tables**
  - Public RT: associated with public subnets, default route to IGW
  - Private RT: associated with private subnets, default route to NAT
- **EC2 Instances in Private Subnets**
  - Launched `sa-lab-web-a` in `private-a` and `sa-lab-web-b` in `private-b`
  - Disabled public IPs
  - Added user data to install Apache and serve a simple HTML page
- **Security Groups**
  - `sa-lab-alb-sg`: HTTP from internet
  - `sa-lab-web-sg`: HTTP only from `sa-lab-alb-sg`
- **Target Group & ALB**
  - Created `sa-lab-tg-web` (instance target type, HTTP:80, health check `/`)
  - Registered both EC2 instances
  - Created ALB `sa-lab-alb` targeting `sa-lab-tg-web` and mapped it to `public-a` and `public-b`
- Go to EC2 → Load Balancers in the AWS Console.
- Select `sa-lab-alb`.
- Copy the DNS name, e.g. `sa-lab-alb-1234567890.us-east-1.elb.amazonaws.com`.
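With the DNS name in hand, the whole path (client → ALB in a public subnet → EC2 in a private subnet) can be exercised from any machine; the hostname below is the example placeholder, not a live endpoint:

```shell
# Hit the ALB over HTTP; the response comes from one of the private instances.
curl -s http://sa-lab-alb-1234567890.us-east-1.elb.amazonaws.com/

# Repeat a few times: responses should alternate between web-a and web-b
# as the ALB round-robins across healthy targets in both AZs.
```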