This example creates an EKS cluster with Fargate profiles only (no EC2 managed node groups). Pods in kube-system and in the configured application namespace run on AWS Fargate.
Prerequisites:

- Terraform >= 1.6.0
- AWS provider >= 6.0
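The version constraints above would typically be pinned in a `terraform` block; a minimal sketch, assuming the standard `hashicorp/aws` provider source:

```hcl
terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.0"
    }
  }
}
```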
Usage:

1. Configure AWS credentials.
2. Edit `terraform.tfvars` and set `access_entries` (required for kubectl access).
3. Run:

   ```sh
   terraform init
   terraform plan
   terraform apply
   ```
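A minimal `terraform.tfvars` sketch for the `access_entries` step; the principal ARN is a placeholder, and the exact shape of the map depends on how this module declares the variable:

```hcl
# terraform.tfvars -- grant an IAM principal cluster-admin access (sketch)
access_entries = {
  admin = {
    principal_arn = "arn:aws:iam::123456789012:user/you" # placeholder ARN

    policy_associations = {
      admin = {
        # AmazonEKSClusterAdminPolicy is a real EKS access policy ARN
        policy_arn   = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        access_scope = { type = "cluster" }
      }
    }
  }
}
```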
This example creates:

- A VPC (via `cloudbuildlab/vpc/aws`) with public and private subnets and a NAT gateway
- An EKS cluster using the root module with:
  - CoreDNS and vpc-cni addons (CoreDNS uses `computeType = "fargate"`)
  - Fargate profiles for `kube-system` and the namespace from `fargate_namespace` (default `app`)
  - No `eks_managed_node_groups` and no `enable_automode`
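The list above corresponds roughly to a module call like the following sketch; the module path, input names, and addon syntax are assumptions based on the description, not this module's actual interface:

```hcl
module "eks" {
  source = "../.." # root module (assumed path)

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  # CoreDNS must be told to schedule on Fargate; vpc-cni runs as usual.
  cluster_addons = {
    vpc-cni = {}
    coredns = {
      configuration_values = jsonencode({ computeType = "fargate" })
    }
  }

  # Fargate-only: profiles for kube-system and the application namespace.
  fargate_profiles = {
    kube_system = {
      selectors  = [{ namespace = "kube-system" }]
      subnet_ids = module.vpc.private_subnet_ids # assumed VPC module output
    }
    default = {
      selectors  = [{ namespace = var.fargate_namespace }]
      subnet_ids = module.vpc.private_subnet_ids
    }
  }

  # No eks_managed_node_groups; enable_automode is left unset.
  tags = var.tags
}
```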
Variables:

- `cluster_name`: EKS cluster name (default: `eks-fargate`)
- `aws_region`: Region (default: `ap-southeast-2`)
- `cluster_version`: Kubernetes version (default: `1.35`)
- `fargate_namespace`: Namespace whose pods are scheduled on Fargate via the `default` profile (default: `app`)
- `access_entries`: Map of IAM principals for cluster access
- `tags`: Tags for resources
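The inputs above would be declared along these lines; the defaults come from the list, while the types are assumptions:

```hcl
variable "cluster_name" {
  type    = string
  default = "eks-fargate"
}

variable "aws_region" {
  type    = string
  default = "ap-southeast-2"
}

variable "cluster_version" {
  type    = string
  default = "1.35"
}

variable "fargate_namespace" {
  type    = string
  default = "app"
}

variable "access_entries" {
  type    = any # shape depends on the module's access-entry schema
  default = {}
}

variable "tags" {
  type    = map(string)
  default = {}
}
```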
To access the cluster:

```sh
aws eks update-kubeconfig --name $(terraform output -raw cluster_name) --region $(terraform output -raw aws_region)
kubectl get pods -A
```

Notes:

- Fargate workloads need private subnets with outbound internet access (via the NAT gateway). This example uses private subnets for the Fargate profile `subnet_ids`.
- For AWS API access from application pods, use IRSA (IAM Roles for Service Accounts); EKS Pod Identity is not supported on Fargate.
- You can combine Fargate profiles with managed node groups or Auto Mode in the module for hybrid clusters; this example stays Fargate-only.
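As a sketch of the IRSA approach mentioned above: an IAM role trusts the cluster's OIDC provider and is scoped to one service account. The role name, namespace, service-account name, and the `oidc_provider*` module outputs are assumptions, not this module's confirmed interface:

```hcl
# IAM role assumable only by the app/my-app service account via the
# cluster's OIDC provider (IRSA).
resource "aws_iam_role" "app" {
  name = "eks-fargate-app-irsa" # placeholder name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRoleWithWebIdentity"
      Principal = { Federated = module.eks.oidc_provider_arn } # assumed output
      Condition = {
        StringEquals = {
          # restrict the role to a single service account
          "${module.eks.oidc_provider}:sub" = "system:serviceaccount:app:my-app"
        }
      }
    }]
  })
}

# The pod's service account then references the role:
#   metadata.annotations:
#     eks.amazonaws.com/role-arn: <role ARN>
```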