Description
Is your request related to a new offering from AWS?
Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.
- Yes ✅: please list the AWS provider version which introduced this functionality
Is your request related to a problem? Please describe.
- Current EKS cluster version: 1.32
- Node groups under this cluster: "mars" and "venus"
- OS currently running on these node groups: Amazon Linux 2
- EKS module version currently in use: 17.24.0

In this module we found a way to use Amazon Linux 2023 as the base OS (the module configuration we used is provided below):
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.24.0"

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
  cluster_name                    = var.cluster_name
  cluster_version                 = var.cluster_version
  subnets                         = data.aws_subnets.private.ids
  manage_aws_auth                 = false
  cluster_enabled_log_types       = ["api", "audit", "controllerManager", "authenticator", "scheduler"]
  cluster_log_retention_in_days   = 7
  vpc_id                          = data.aws_vpc.vpc.id
  enable_irsa                     = true
  worker_ami_owner_id_windows     = "amazon"

  tags = {
    Environment    = terraform.workspace
    "New-Platform" = true
    "Terraform"    = "true"
  }

  node_groups_defaults = {
    ami_type  = "AL2023_x86_64_STANDARD"
    disk_size = 150
  }

  node_groups = {
    mars = {
      desired_capacity = var.mars_node_group_asg_desired
      max_capacity     = var.mars_node_group_asg_max
      min_capacity     = var.mars_node_group_asg_min
      key_name         = var.EC2_KEY_NAME
      instance_types   = [var.mars_instance_type]
      name             = "mars"
      k8s_labels = { # Applied as labels on the Kubernetes nodes
        Environment    = terraform.workspace
        k8s_labels     = true
        NodeGroup      = "mars"
        "New-Platform" = true
        "Terraform"    = "true"
      }
      tags = {
        Environment    = terraform.workspace
        "New-Platform" = true
        "Terraform"    = "true"
        NodeGroup      = "mars"
      }
      additional_tags = { # Applied to the node group only; not assigned to the EC2 instances
        additional_tags = true
        ExtraTag        = "mars"
        "k8s.io/cluster-autoscaler/${var.cluster_name}" = "owned"
        "k8s.io/cluster-autoscaler/enabled"             = "true"
        "New-Platform"  = true
      }
    }
    venus = {
      desired_capacity = var.venus_node_group_asg_desired
      max_capacity     = var.venus_node_group_asg_max
      min_capacity     = var.venus_node_group_asg_min
      key_name         = var.EC2_KEY_NAME
      instance_types   = [var.venus_instance_type]
      name             = "venus"
      k8s_labels = {
        Environment    = terraform.workspace
        k8s_labels     = true
        NodeGroup      = "venus"
        "New-Platform" = true
        "Terraform"    = "true"
      }
      tags = {
        Environment    = terraform.workspace
        "New-Platform" = true
        "Terraform"    = "true"
        NodeGroup      = "venus"
      }
      additional_tags = {
        additional_tags = true
        ExtraTag        = "venus"
        "k8s.io/cluster-autoscaler/${var.cluster_name}" = "owned"
        "k8s.io/cluster-autoscaler/enabled"             = "true"
        "New-Platform"  = true
      }
    }
  }
}
```
...terraform config...
After modifying the Terraform main.tf and backend.tf files, we also ran the following commands:

```shell
terraform init -upgrade
terraform state replace-provider registry.terraform.io/hashicorp/aws registry.terraform.io/hashicorp/aws
terraform state rm module.eks.aws_iam_role.workers[0]
```
Then we destroyed the old node groups:

```shell
terraform destroy \
  -target='module.eks.module.node_groups.aws_eks_node_group.workers["mars"]' \
  -target='module.eks.module.node_groups.aws_eks_node_group.workers["venus"]' \
  -auto-approve
```
Then we recreated the node groups with the changes:

```shell
terraform apply -auto-approve
```
By doing the above, we managed to bring the worker nodes up with Amazon Linux 2023 as the operating system.
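Not part of the steps above, but a couple of hedged verification checks that may help confirm the migration actually landed. `$CLUSTER_NAME` is a placeholder for the real cluster name, the node-group name comes from the config above, and the exact OS-image string varies by AMI release:

```shell
# Ask EKS which AMI type the managed node group reports; for this
# migration it should come back as AL2023_x86_64_STANDARD
aws eks describe-nodegroup \
  --cluster-name "$CLUSTER_NAME" \
  --nodegroup-name mars \
  --query 'nodegroup.amiType' \
  --output text

# The OS-IMAGE column for AL2023 nodes should read "Amazon Linux 2023..."
kubectl get nodes -o wide
```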
Describe the solution you'd like.
1) Has anyone tried this with EKS module 17.24 (i.e., using Amazon Linux 2023 as the worker node OS)?
2) Is this a recommended approach for bringing AL2023 to our worker nodes?
3) Is there a specific version of the EKS module that we must use in order to make use of AL2023? If so, could you please let us know the version?
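For comparison, and stated as an assumption rather than a tested config: the AL2023 AMI types are what the v20.x series of terraform-aws-modules/eks documents as supported, and the module's inputs were restructured in v18 (e.g. `node_groups` became `eks_managed_node_groups`, `subnets` became `subnet_ids`). A rough sketch of one node group from the config above in v20-style syntax might look like:

```hcl
# Sketch only: v20-series syntax differs substantially from 17.x.
# Variable and data-source names reuse those from the config above.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version
  vpc_id          = data.aws_vpc.vpc.id
  subnet_ids      = data.aws_subnets.private.ids # "subnets" was renamed in v18+

  eks_managed_node_group_defaults = {
    ami_type  = "AL2023_x86_64_STANDARD"
    disk_size = 150
  }

  eks_managed_node_groups = {
    mars = {
      min_size       = var.mars_node_group_asg_min
      max_size       = var.mars_node_group_asg_max
      desired_size   = var.mars_node_group_asg_desired
      instance_types = [var.mars_instance_type]
    }
    # venus would follow the same shape
  }
}
```

Migrating from 17.x to v20 is a breaking upgrade with its own state-move steps, so this is a sketch of the destination, not a drop-in replacement.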
Describe alternatives you've considered.
N/A
Additional context
Aim:
1) Migrate the node group OS from AL2 to AL2023.
2) Looking for a suggestion on whether this can be achieved via 17.24.0.
3) If we stay on EKS module version 17.24.0, do we need to worry about any unexpected behavior in our EKS cluster?
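One hedged note on point 3: module 17.24.0 passes `ami_type` straight through to the `aws_eks_node_group` resource, so whether `AL2023_x86_64_STANDARD` is accepted likely depends mainly on the hashicorp/aws provider version in use (older provider releases validate `ami_type` against a list that predates AL2023). The exact minimum provider version is an assumption we have not verified; pinning a recent provider along these lines is the safer bet:

```hcl
# Assumption: a reasonably recent hashicorp/aws provider is needed for the
# AL2023_* ami_type values; the "~> 5.0" floor below is illustrative, not a
# verified minimum.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```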