Description
Current Terraform Version
v0.15.5
Use-cases
I'm one of the maintainers of the Kubernetes provider, and I'm trying to solve an issue we see often: users want to pass credentials from other providers into the Kubernetes and Helm providers. This lets them fetch their cluster credentials from EKS, AKS, GKE, etc., and pass them into the Kubernetes provider. The problem is that when they do this in a single apply alongside the cluster build, they hit the limitation that an unknown value cannot be passed to a provider configuration block, because this is not supported in Terraform Core. To quote the docs:
> You can use expressions in the values of these configuration arguments, but can only reference values that are known before the configuration is applied.
TL;DR: I would like to be able to pass unknown values into the provider block via data sources that are refreshed after provider initialization but before each CRUD operation. This would keep the credentials "fresh" and mitigate credential expiration.
Attempted Solutions
If the configuration is set up with the right dependencies, I can successfully build an EKS cluster and put Kubernetes resources on top of it in a single apply, despite not knowing the EKS credentials prior to apply time. However, since this pattern is unsupported, it does not work reliably in subsequent applies when the credentials change.
As a work-around, I tried removing the Kubernetes provider's resources from state (with `terraform state rm`). This did solve the problem in many cases, but the manual intervention is not intuitive to users, and it sometimes has unwanted consequences such as orphaned infrastructure.
Many other work-arounds were tried in an attempt to accommodate users who prefer the unsupported single-apply configuration, but in the end they all place too much operational burden on the user. The supported alternative (two applies, or two Terraform states) requires the user to maintain a separate Terraform state for the cluster infrastructure, which becomes burdensome when the user needs different environments for Dev/Stage/Prod: the number of managed states goes from 3 to 6 (dev/stage/prod for the cluster infrastructure resources and dev/stage/prod for the Kubernetes resources). They would need to separate out their databases similarly, since the database providers also read credentials from the cluster infrastructure. Users are understandably burdened by this.
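For illustration, the supported two-state approach typically wires the cluster state's outputs into the second configuration via the `terraform_remote_state` data source. A minimal sketch, where the backend settings and output names are assumptions, not taken from any real configuration:

```hcl
# Second configuration: reads cluster credentials from the first state.
# The S3 backend settings and output names below are hypothetical.
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "example-terraform-states"
    key    = "eks-cluster/dev.tfstate"
    region = "us-east-1"
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.cluster.outputs.endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.ca_cert)
  token                  = data.terraform_remote_state.cluster.outputs.token
}
```

This works because the remote state outputs are known at plan time, but it is exactly the state-multiplication burden described above.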
Proposal
I'm hoping Core can consider refreshing the data sources a provider configuration depends on before using that provider. For example, in this provider block, I would want data.aws_eks_cluster and data.aws_eks_cluster_auth to be refreshed prior to running any CRUD operations with the Kubernetes provider:
```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}
```
The token needs to be refreshed before any CRUD operation takes place because the token expires every 15 minutes, and because it may have changed during apply.
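For context, the data sources referenced in the provider block above would typically be declared as follows. The cluster name here is a hypothetical placeholder; in a single-apply configuration it would usually come from an `aws_eks_cluster` resource or module output, which is precisely what makes the values unknown at plan time:

```hcl
# "example-cluster" is a hypothetical name; in the single-apply case this
# would reference the managed cluster, e.g. aws_eks_cluster.this.name.
data "aws_eks_cluster" "default" {
  name = "example-cluster"
}

data "aws_eks_cluster_auth" "default" {
  name = data.aws_eks_cluster.default.name
}
```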
References
- Removing state of dependent providers (partial fix for progressive apply) #27728
- Partial/Progressive Configuration Changes #4149
- AKS - dial tcp [::1]:80: connect: connection refused on all plans modifying the azurerm_kubernetes_cluster resource terraform-provider-kubernetes#1307 (comment)
- Update/replace resource when a dependency is changed #8099
- Configuring one provider with a dynamic attribute from another (was: depends_on for providers) #2430 (comment)
- connection refused error caused by missing 'Authorization' header in request to Kubernetes API terraform-provider-kubernetes#1152
- Plan stalls due to failed tiller during helm_release state refresh terraform-provider-helm#315 (comment)
- v2.0.1 Authentication failures with token retrieved via aws_eks_cluster_auth terraform-provider-kubernetes#1131
- Unauthorized on resource deletion in EKS terraform-provider-kubernetes#1113
- terraform refresh attempts to dial localhost terraform-provider-kubernetes#546 (comment)
- Kubernetes provider does not detect if cluster is recreated at runtime terraform-provider-kubernetes#545
- Kubernetes Provider tries to reach localhost:80/api when targeting azurerm resources terraform-provider-kubernetes#405 (comment)
- Kubernetes provider does not re-create resources on new GKE cluster. terraform-provider-kubernetes#688
- Token not being set in provider when trying to upgrade the cluster terraform-provider-kubernetes#1095
- Getting x509-certificate-signed-by-unknown-authority terraform-provider-kubernetes#1154