Description
On `terraform apply`, the following error occurs when providing a value of `false` for `create_kms_key`:

```text
Failed to execute "terraform apply" in ./.terragrunt-cache/4u5oChu2CIU5aUdSanJhOzfir_U/3jRKubbgivTxQaDN_rDYXmfCJcM
╷
│ Error: Missing required argument
│
│   with module.eks.aws_eks_cluster.this[0],
│   on .terraform/modules/eks/main.tf line 36, in resource "aws_eks_cluster" "this":
│   36: resource "aws_eks_cluster" "this" {
│
│ The argument "encryption_config.0.provider.0.key_arn" is required, but no
│ definition was found.
╵
exit status 1
```
This should not happen. With `create_kms_key = false`, no KMS resources should be created at all, and the cluster should not require a KMS key ARN.
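As a possible workaround, secrets encryption can be disabled explicitly, or a pre-existing key can be supplied. This is a hedged sketch, assuming the v21 variable name `encryption_config` (the un-prefixed successor to `cluster_encryption_config`) and its `provider_key_arn` attribute; the `aws_kms_key.existing` resource is hypothetical and an empty object is assumed to disable envelope encryption:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21"

  # Option 1: disable envelope encryption entirely so no key_arn is required.
  create_kms_key    = false
  encryption_config = {}

  # Option 2 (alternative): bring your own key instead of a module-managed one.
  # create_kms_key    = false
  # encryption_config = {
  #   provider_key_arn = aws_kms_key.existing.arn # hypothetical pre-existing key
  #   resources        = ["secrets"]
  # }
}
```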
⚠️ Note
Before you submit an issue, please perform the following first:
- Remove the local `.terraform` directory (ONLY if state is stored remotely, which hopefully you are following as a best practice!): `rm -rf .terraform/`
- Re-initialize the project root to pull down modules: `terraform init`
- Re-attempt your `terraform plan` or `terraform apply` and check if the issue still persists
Versions
- Module version [Required]: 21.15.1
- Terraform version: Terraform v1.14.4 on darwin_arm64
- Provider version(s):
  - provider registry.terraform.io/hashicorp/aws v6.32.0
  - provider registry.terraform.io/hashicorp/cloudinit v2.3.7
  - provider registry.terraform.io/hashicorp/helm v3.1.1
  - provider registry.terraform.io/hashicorp/kubernetes v3.0.1
  - provider registry.terraform.io/hashicorp/local v2.6.2
  - provider registry.terraform.io/hashicorp/null v3.2.4
  - provider registry.terraform.io/hashicorp/random v3.8.1
  - provider registry.terraform.io/hashicorp/time v0.13.1
  - provider registry.terraform.io/hashicorp/tls v4.2.1
Reproduction Code [Required]
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21"

  endpoint_private_access     = true
  endpoint_public_access      = true
  create_cloudwatch_log_group = false
  enable_irsa                 = true
  create_iam_role             = true

  enable_cluster_creator_admin_permissions = true
  authentication_mode                      = "API_AND_CONFIG_MAP"
  cloudwatch_log_group_class               = "INFREQUENT_ACCESS"

  name               = var.namespace
  kubernetes_version = var.kubernetes_cluster_version

  vpc_id                   = var.vpc_id
  subnet_ids               = var.private_subnets
  control_plane_subnet_ids = var.private_subnets

  create_kms_key = false
  kms_key_owners = var.kms_key_owners

  tags = local.tags

  compute_config = {
    enabled = false
  }

  addons = {
    coredns = {}
    eks-pod-identity-agent = {
      before_compute = true
    }
    kube-proxy = {}
    vpc-cni = {
      before_compute = true
    }
    aws-ebs-csi-driver = {
      service_account_role_arn = aws_iam_role.AmazonEKS_EBS_CSI_DriverRole.arn
    }
  }

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "smarter: Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0 # FIX NOTE: this is a security vulnerability.
      to_port     = 0 # Ideally this should be narrowed to the IP address of the
      # nginx ingress controller's load balancer.
      type        = "ingress"
      cidr_blocks = [
        "172.16.0.0/12",
        "192.168.0.0/16",
      ]
    }
    port_8443 = {
      description = "smarter: open port 8443 to vpc"
      protocol    = "-1"
      from_port   = 8443
      to_port     = 8443
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description      = "smarter: Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }

  eks_managed_node_groups = {
    smarter = {
      capacity_type             = "SPOT"
      enable_monitoring         = false
      cluster_enabled_log_types = []

      min_size     = var.eks_node_group_min_size
      max_size     = var.eks_node_group_max_size
      desired_size = var.eks_node_group_min_size

      node_repair_config = {
        enabled = true
        update_config = {
          max_unavailable_percentage = 33
        }
      }

      instance_types = var.eks_node_group_instance_types
      subnet_ids     = var.private_subnets

      pre_bootstrap_user_data = <<-EOT
        #!/bin/bash
        set -e
        # Configure containerd registry mirror for docker.io
        mkdir -p /etc/containerd/certs.d/docker.io
        cat > /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
        server = "https://registry-1.docker.io"
        [host."https://${data.aws_caller_identity.current.account_id}.dkr.ecr.${data.aws_region.current.region}.amazonaws.com/docker-hub"]
          capabilities = ["pull", "resolve"]
        [host."https://registry-1.docker.io"]
          capabilities = ["pull", "resolve"]
        EOF
      EOT

      iam_role_additional_policies = {
        AmazonEKSWorkerNodePolicy          = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        AmazonEKS_CNI_Policy               = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
        AmazonEC2ContainerRegistryReadOnly = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
        # Required by Karpenter
        AmazonSSMManagedInstanceCore = "arn:${local.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore"
        # Required by EBS CSI Add-on
        AmazonEBSCSIDriverPolicy = data.aws_iam_policy.AmazonEBSCSIDriverPolicy.arn
        # Required for ECR pull-through cache
        ECRPullThroughCache = aws_iam_policy.ecr_pull_through_cache.arn
      }

      block_device_mappings = {
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            volume_type           = "gp3"
            volume_size           = 150
            delete_on_termination = true
          }
        }
      }

      tags = merge(
        local.tags,
        # Tag node group resources for Karpenter auto-discovery
        # NOTE - if creating multiple security groups with this module, only tag the
        # security group that Karpenter should utilize with the following tag
        { Name = "eks-${var.shared_resource_identifier}-smarter" },
      )
    }
  }
}
```
Steps to reproduce the behavior:

```shell
terraform init
terraform apply
```
Expected behavior
Terraform should create the EKS cluster without creating or requiring a KMS key, since `create_kms_key = false`.
Actual behavior
```text
08:42:30.690 ERROR error occurred:

Failed to execute "terraform apply" in ./.terragrunt-cache/4u5oChu2CIU5aUdSanJhOzfir_U/3jRKubbgivTxQaDN_rDYXmfCJcM
╷
│ Error: Missing required argument
│
│   with module.eks.aws_eks_cluster.this[0],
│   on .terraform/modules/eks/main.tf line 36, in resource "aws_eks_cluster" "this":
│   36: resource "aws_eks_cluster" "this" {
│
│ The argument "encryption_config.0.provider.0.key_arn" is required, but no
│ definition was found.
╵
exit status 1
```
Terminal Output Screenshot(s)
Additional context
Following is the complete console output: