Description
Setting `enable_cluster_creator_admin_permissions = true` is intended to add the cluster creator's identity to the `aws-auth` ConfigMap (see the block under "Expected behavior" below). But it appears to do nothing at all.
- [x] ✋ I have searched the open/closed issues and my issue is not listed.
Versions

- Module version [Required]: 21.15.1
- Terraform version: v1.14.4 on darwin_arm64
- Provider version(s):
  - kubernetes v3.0.1
Reproduction Code [Required]
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21"

  endpoint_private_access     = true
  endpoint_public_access      = true
  create_cloudwatch_log_group = false
  enable_irsa                 = true
  create_iam_role             = true

  enable_cluster_creator_admin_permissions = true
  authentication_mode                      = "API_AND_CONFIG_MAP"

  cloudwatch_log_group_class = "INFREQUENT_ACCESS"

  name               = var.cluster_name
  kubernetes_version = var.kubernetes_cluster_version

  vpc_id                   = var.vpc_id
  subnet_ids               = var.private_subnets
  control_plane_subnet_ids = var.private_subnets

  kms_key_administrators = var.kms_key_owners
  kms_key_owners         = var.kms_key_owners
  kms_key_users          = var.kms_key_owners
  kms_key_description    = "eks ${var.cluster_name} cluster encryption key"

  tags = local.tags

  compute_config = {
    enabled = false
  }

  # Required add-ons for basic cluster functionality. Avoid
  # adding unnecessary configuration details. The default settings work well
  # for most use cases.
  addons = {
    coredns = {}
    eks-pod-identity-agent = {
      before_compute = true
    }
    kube-proxy = {}
    vpc-cni = {
      before_compute = true
    }
    aws-ebs-csi-driver = {
      service_account_role_arn = aws_iam_role.AmazonEKS_EBS_CSI_DriverRole.arn
    }
  }

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "smarter: Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0 # FIX NOTE: this is a security vulnerability.
      to_port     = 0 # Ideally this should be narrowed to the IP address of the
                      # nginx ingress controller's load balancer.
      type        = "ingress"
      cidr_blocks = [
        "172.16.0.0/12",
        "192.168.0.0/16",
      ]
    }
    port_8443 = {
      description = "smarter: open port 8443 to vpc"
      protocol    = "-1"
      from_port   = 8443
      to_port     = 8443
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description      = "smarter: Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }

  eks_managed_node_groups = {
    smarter = {
      capacity_type             = "SPOT"
      enable_monitoring         = false
      cluster_enabled_log_types = []
      min_size                  = var.eks_node_group_min_size
      max_size                  = var.eks_node_group_max_size
      desired_size              = var.eks_node_group_min_size
      instance_types            = var.eks_node_group_instance_types
      subnet_ids                = var.private_subnets

      node_repair_config = {
        enabled = true
        update_config = {
          max_unavailable_percentage = 33
        }
      }

      # Configure containerd to transparently redirect Docker Hub to ECR pull-through cache
      # Pods continue using docker.io/image:tag - no manifest changes needed
      pre_bootstrap_user_data = <<-EOT
        #!/bin/bash
        set -e
        # Configure containerd registry mirror for docker.io
        mkdir -p /etc/containerd/certs.d/docker.io
        cat > /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
        server = "https://registry-1.docker.io"
        [host."https://${data.aws_caller_identity.current.account_id}.dkr.ecr.${data.aws_region.current.region}.amazonaws.com/docker-hub"]
        capabilities = ["pull", "resolve"]
        [host."https://registry-1.docker.io"]
        capabilities = ["pull", "resolve"]
        EOF
      EOT

      iam_role_additional_policies = {
        AmazonEKSWorkerNodePolicy          = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        AmazonEKS_CNI_Policy               = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
        AmazonEC2ContainerRegistryReadOnly = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
        # Required by Karpenter
        AmazonSSMManagedInstanceCore = "arn:${local.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore"
        # Required by EBS CSI Add-on
        AmazonEBSCSIDriverPolicy = data.aws_iam_policy.AmazonEBSCSIDriverPolicy.arn
        # Required for ECR pull-through cache
        ECRPullThroughCache = aws_iam_policy.ecr_pull_through_cache.arn
      }

      block_device_mappings = {
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            volume_type           = "gp3"
            volume_size           = 150
            delete_on_termination = true
          }
        }
      }

      tags = merge(
        local.tags,
        # Tag node group resources for Karpenter auto-discovery
        # NOTE - if creating multiple security groups with this module, only tag the
        # security group that Karpenter should utilize with the following tag
        { Name = "eks-${var.shared_resource_identifier}-smarter" },
      )
    }
  }
}
```
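As an aside, the containerd mirror configuration written by `pre_bootstrap_user_data` above can be sketched outside the node. The account ID and region below are placeholders standing in for the `data.aws_caller_identity` / `data.aws_region` interpolations, and the file is rendered into a temp directory instead of `/etc/containerd`:

```shell
ACCOUNT_ID="111122223333"  # placeholder for data.aws_caller_identity.current.account_id
REGION="us-east-1"         # placeholder for data.aws_region.current.region
DIR="$(mktemp -d)/certs.d/docker.io"
mkdir -p "$DIR"

# Same layout the user data writes: ECR pull-through cache first, Docker Hub as fallback
cat > "$DIR/hosts.toml" <<EOF
server = "https://registry-1.docker.io"

[host."https://$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/docker-hub"]
capabilities = ["pull", "resolve"]

[host."https://registry-1.docker.io"]
capabilities = ["pull", "resolve"]
EOF

grep -c '^\[host' "$DIR/hosts.toml"  # → 2 (two mirror hosts configured)
```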
Steps to reproduce the behavior:
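With the configuration above saved alongside values for the `var.*` inputs it references, and valid AWS credentials in the environment, the reproduction is just the standard apply loop:

```shell
# Initialize providers and modules, then create the cluster; no further
# changes to the configuration should be needed
terraform init
terraform apply
```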
Expected behavior
The `aws-auth` ConfigMap should be modified as follows:
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::090511222473:role/smarter-eks-node-group-20260212230619390100000005
      groups:
        - system:bootstrappers
        - system:nodes
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::YOUR-AWS-ACCOUNT-NUMBER:user/IAM-USERNAME
      username: IAM-USERNAME
      groups:
        - system:masters
kind: ConfigMap
```
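One way to compare the expected block above against the live cluster, assuming `kubectl` and the AWS CLI are configured for this cluster (`$CLUSTER_NAME` is a placeholder for `var.cluster_name`):

```shell
# Dump the live aws-auth ConfigMap for comparison with the expected block above
kubectl -n kube-system get configmap aws-auth -o yaml

# Because authentication_mode is API_AND_CONFIG_MAP, the creator's admin access
# may instead be granted via an EKS access entry; list them to check
aws eks list-access-entries --cluster-name "$CLUSTER_NAME"
```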
Actual behavior

The `aws-auth` ConfigMap is left unchanged.

Terminal Output Screenshot(s)

[SEE SCREEN SHOT ABOVE]