This project is a URL shortener service that efficiently generates short URLs from long URLs via a simple web interface. The application is built on a Flask backend that serves the endpoints for URL creation. Key features of this project include:
- Fast URL Resolution: Shortened URLs are stored in a Redis database, ensuring rapid lookup times.
- High Availability & Scalability: The service is deployed with Docker and orchestrated using Kubernetes. This setup uses three replicated instances for the Flask application along with Redis Sentinel for enhanced database redundancy.
- Load Balancing & Auto-Scaling: Kubernetes manages container orchestration, enabling automatic scaling and load balancing to maintain performance during traffic spikes.
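For orientation, the core shorten/resolve flow can be sketched in plain Python, with an in-memory dict standing in for Redis. The function names and the 7-character base62 code scheme below are illustrative assumptions, not the repo's actual implementation:

```python
import hashlib
import string

BASE62 = string.digits + string.ascii_lowercase + string.ascii_uppercase
store = {}  # stands in for Redis: short code -> long URL

def encode_base62(n, length=7):
    """Encode an integer as a fixed-length base62 string."""
    chars = []
    for _ in range(length):
        n, r = divmod(n, 62)
        chars.append(BASE62[r])
    return "".join(chars)

def shorten(long_url):
    """Derive a deterministic short code from the URL's hash and store the mapping."""
    digest = hashlib.sha256(long_url.encode()).hexdigest()
    code = encode_base62(int(digest[:12], 16))
    store[code] = long_url  # with Redis this would be a SET
    return code

def resolve(code):
    """Look up the long URL for a short code (a Redis GET in the real service)."""
    return store.get(code)

code = shorten("https://example.com/some/very/long/path")
print(code, "->", resolve(code))
```

Because the code is derived from the URL's hash, shortening the same URL twice yields the same code; a production service would also have to handle hash collisions.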
## Week 1

Create and activate a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
```

Create a Docker network for Redis:

```bash
docker network create redis
```

Make the setup script executable and run it:

```bash
chmod +x script.sh
./script.sh
```

Pull the Redis image:

```bash
docker pull redis:7.4.2-alpine
```

Build the image (development target):

```bash
docker build --target dev -t snipbal .
```

Or build the default image:

```bash
docker build -t snipbal .
```

Run the container:

```bash
docker run -it -v "${PWD}:/work" -p 5000:5000 \
  --net redis \
  -e REDIS_SENTINELS="sentinel-0:5000,sentinel-1:5000,sentinel-2:5000" \
  -e REDIS_MASTER_NAME="mymaster" \
  -e REDIS_PASSWORD="okok" \
  snipbal
```

Access a Redis container shell:

```bash
docker exec -it redis-0 sh
```

Connect with the Redis CLI:

```bash
redis-cli
auth <password>
KEYS *
GET <key_val>
```

- Replace `<password>` and `<key_val>` with your Redis password and the desired key.
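A hedged sketch of how the app might consume the `REDIS_SENTINELS` variable passed to `docker run` above — this shows only the parsing step, and the helper name is illustrative rather than taken from the repo:

```python
import os

def parse_sentinels(raw):
    """Parse 'host1:port1,host2:port2' into (host, port) tuples."""
    pairs = []
    for entry in raw.split(","):
        host, _, port = entry.strip().partition(":")
        pairs.append((host, int(port)))
    return pairs

# The same value passed via -e REDIS_SENTINELS in the docker run command:
os.environ.setdefault("REDIS_SENTINELS", "sentinel-0:5000,sentinel-1:5000,sentinel-2:5000")
sentinels = parse_sentinels(os.environ["REDIS_SENTINELS"])
print(sentinels)  # [('sentinel-0', 5000), ('sentinel-1', 5000), ('sentinel-2', 5000)]

# With redis-py, these tuples would feed Sentinel(sentinels).master_for(
#     os.environ["REDIS_MASTER_NAME"], password=os.environ["REDIS_PASSWORD"])
```

The list of (host, port) tuples is exactly the shape redis-py's `Sentinel` constructor expects, so the app can discover the current master at startup instead of hard-coding it.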
## Week 2

Create a kind cluster:

```bash
kind create cluster --name redis --image kindest/node:v1.23.5
```

Create a namespace for Redis:

```bash
kubectl create ns redis
```

Navigate to the redis/kubernetes/ directory and apply the configuration files:

```bash
kubectl apply -n redis -f ./redis/redis-configmap.yaml
kubectl apply -n redis -f ./redis/redis-statefulset.yaml
```

Check the status of pods and persistent volumes:

```bash
kubectl -n redis get pods
kubectl -n redis get pv
```

Access the Redis pod shell:

```bash
kubectl -n redis exec -it redis-0 -- sh
```

Connect to the Redis CLI and check replication status:

```bash
redis-cli
auth <your-redis-password>
info replication
```

View logs for the Redis pods:

```bash
kubectl -n redis logs redis-0
kubectl -n redis logs redis-1
kubectl -n redis logs redis-2
```

Apply the Sentinel StatefulSet:

```bash
kubectl apply -n redis -f ./sentinel/sentinel-statefulset.yaml
```

Check Sentinel pods and logs:

```bash
kubectl -n redis get pods
kubectl -n redis get pv
kubectl -n redis logs sentinel-0
```

Navigate to the /redis/kubernetes/app/ directory and deploy the application:

```bash
kubectl apply -n redis -f app-deployment.yaml
kubectl apply -n redis -f app-configmap.yaml
kubectl apply -n redis -f app-secret.yaml
```

Check that the SnipBalancer pods are running:

```bash
kubectl get pods -n redis -l app=snipbal
```

Check deployment and service status:

```bash
kubectl get deployment -n redis snipbal
kubectl get service -n redis snipbal
```

Port-forward the SnipBalancer service to your local machine:

```bash
kubectl port-forward -n redis service/snipbal 5000:5000
```

Get the names of the SnipBalancer pods:

```bash
kubectl get pods -n redis -l app=snipbal
```

Check logs for a specific pod:

```bash
kubectl logs -n redis <pod-name>
```

Access the Redis CLI from a pod:

```bash
kubectl exec -it -n redis redis-0 -- redis-cli
```

Authenticate and interact with Redis:

```bash
auth <your-redis-password>
KEYS *
GET <key_name>
```

To verify Redis Sentinel failover and cluster availability:
1. Check the current Redis master:

   ```bash
   kubectl exec -n redis sentinel-0 -- redis-cli -p 5000 SENTINEL get-master-addr-by-name mymaster
   ```

2. Simulate a master failure:

   ```bash
   kubectl delete pod -n redis redis-0
   ```

3. Monitor the Sentinel logs for failover events:

   ```bash
   kubectl logs -f -n redis sentinel-0
   ```

4. Check pod status and master re-election:

   ```bash
   kubectl -n redis get pods -o wide
   ```

5. Verify the new master:
   - Repeat step 1 to confirm which Redis pod is now the master.
   - You can also check the roles of `redis-0`, `redis-1`, and `redis-2` with `info replication`, as shown earlier.

These steps help ensure your Redis cluster remains available and automatically recovers from node failures.
- Replace `<your-redis-password>` and `<key_name>` with your Redis password and the desired key.
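For reference, the Sentinel StatefulSet applied above typically mounts a configuration along these lines; the hostname, quorum, and timeout values here are illustrative assumptions, not copied from this repo:

```conf
# Sentinel listens on 5000, matching REDIS_SENTINELS and the redis-cli -p 5000 calls above
port 5000
# Monitor the master under the name "mymaster"; failover needs agreement from 2 Sentinels
sentinel monitor mymaster redis-0.redis 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel auth-pass mymaster <your-redis-password>
```

The quorum of 2 means a majority of the three Sentinels must agree the master is down before a replica is promoted, which is what the failover test above exercises.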
## Week 3

Navigate to the /redis/kubernetes/app directory and apply the HPA configuration:

```bash
kubectl apply -f app-hpa.yaml -n redis
```

Navigate to the /redis/kubernetes/metric-server directory and deploy the metrics server:

```bash
kubectl apply -f components.yaml -n redis
```

Start a temporary load-generator pod:

```bash
kubectl run -n redis -it --rm load-generator --image=busybox -- /bin/sh
```

Inside the load-generator pod shell, run the following command to continuously send requests to the SnipBalancer service:

```bash
while true; do wget -q -O- http://snipbal:5000; done
```

You can exit the load generator at any time by pressing Ctrl + C.

In separate terminals, monitor the status of the pods and the HPA:

```bash
kubectl get pods -n redis --watch
kubectl get hpa -n redis --watch
```

Observe as the HPA scales the number of SnipBalancer pods up and down in response to the generated load. After stopping the load generator, the number of replicas decreases once the HPA cooldown period has elapsed (typically 10–15 minutes).

Note: The cooldown period is a standard HPA property and may vary based on your configuration.
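A plausible shape for `app-hpa.yaml` is sketched below; the replica bounds and CPU target are illustrative and may differ from the repo's actual configuration (minReplicas of 3 matches the three Flask replicas described above):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: snipbal
  namespace: redis
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: snipbal
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

With a manifest like this, the HPA adds pods whenever average CPU utilization across the SnipBalancer replicas stays above 50%, which is what the load-generator exercise makes visible. Note that resource-based scaling only works once the metrics server deployed above is running.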
To perform stress testing on the SnipBalancer application, follow these steps:

1. Navigate to the `stress-tests` directory:

   ```bash
   cd stress-tests
   ```

2. Make the test scripts executable:

   ```bash
   chmod +x get-test.sh
   chmod +x post-test.sh
   ```

3. Run the application and start the stress tests. First, port-forward the service:

   ```bash
   kubectl port-forward -n redis service/snipbal 5000:5000
   ```

   Then, in a separate terminal, run a stress test:

   ```bash
   # For GET requests:
   ./get-test.sh
   # Or for POST requests:
   ./post-test.sh
   ```

4. Monitor scaling activity in real time. Open another terminal and watch the scaling behavior:

   ```bash
   kubectl get pods -n redis --watch
   kubectl get hpa -n redis --watch
   ```

These steps will help you observe how the application automatically scales up and down in response to increased load.
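The behavior of a GET stress script can be approximated in plain Python. This sketch fires a burst of requests against a throwaway local server rather than the port-forwarded service at http://localhost:5000, so it runs standalone; the repo's actual scripts may work differently:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    """Tiny stand-in for the port-forwarded SnipBalancer service."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request logging

# Start the stand-in server on an ephemeral port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def burst(n):
    """Send n GET requests and count 200 responses."""
    ok = 0
    for _ in range(n):
        with urllib.request.urlopen(url) as resp:
            ok += resp.status == 200
    return ok

ok = burst(50)
print(f"{ok}/50 requests succeeded")
server.shutdown()
```

Pointing `url` at http://localhost:5000 (with the port-forward active) and raising the request count would reproduce the kind of sustained load that drives the HPA to scale up.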
## Reference

This project setup was adapted from this guide.