This module provides a quick and easy way to launch the Lifecycle application on different cloud providers, with support for multiple DNS providers. It is ideal for testing, demos, and learning. Do not use it in production!
- Multi-cloud support: AWS EKS, GCP GKE, and OpenStack Magnum
- Multi-DNS support: Cloudflare, AWS Route 53, GCP Cloud DNS
- Self-contained logic: No external OpenTofu/Terraform modules; everything is implemented in internal submodules
- Minimal infrastructure: Creates just the necessary resources for the Lifecycle app, optimizing cost
| Cloud Provider | Module Parameter | Status |
|---|---|---|
| Amazon EKS | `cluster_provider = "eks"` | Stable |
| Google GKE | `cluster_provider = "gke"` | Stable |
| OpenStack Magnum | `cluster_provider = "magnum"` | Beta |
| DNS Provider | Parameter Key | Status |
|---|---|---|
| Cloudflare | `dns_provider = "cloudflare"` | Stable |
| AWS Route 53 | `dns_provider = "route53"` | Stable |
| GCP Cloud DNS | `dns_provider = "cloud-dns"` | Stable |
You can mix and match, e.g. AWS EKS with Cloudflare DNS, or GKE with Route 53.
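For example, an EKS cluster with Cloudflare DNS can be pinned in your tfvars. A minimal sketch (the `cluster_provider` and `dns_provider` variable names come from the inputs documented below; writing `secrets.auto.tfvars` directly is just one way to set them):

```shell
# Sketch: pin one valid combo in secrets.auto.tfvars (any row from each
# of the two tables above is a valid choice)
cat > secrets.auto.tfvars <<'EOF'
cluster_provider = "eks"
dns_provider     = "cloudflare"
EOF
cat secrets.auto.tfvars
```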
- OpenTofu CLI installed and initialized. OpenTofu is an open-source infrastructure-as-code tool and is fully compatible with Terraform v0.13+ syntax.
- A cloud account for your chosen provider:
  - AWS: free tier available
  - GCP: $300 free-trial credit
  - OpenStack: self-hosted (see OpenStack Detailed Requirements for setup details)
- A DNS account for your chosen DNS provider:
  - Cloudflare: free tier includes 1 zone
  - Route 53: pay-as-you-go
  - Cloud DNS: pay-as-you-go
> **Warning:** Using this module with OpenStack requires advanced knowledge of OpenStack administration. Many environment-specific nuances (networking, storage backends, security groups) are not covered by this setup. It is assumed the user is an experienced administrator or has access to a correctly pre-configured environment.
Your OpenStack environment must have the following services enabled and functional:
| Service | Component | Purpose |
|---|---|---|
| Keystone | Identity | Authentication and service catalog |
| Nova | Compute | Provisioning worker and master nodes |
| Neutron | Network | Managing SDN, subnets, and security groups |
| Glance | Image | Storing Fedora CoreOS images |
| Cinder/v3 | Volume | Persistent storage for Kubernetes PVs |
| Heat | Orchestration | Magnum uses Heat templates to deploy clusters |
| Barbican | Key Manager | Certificate and secret management for Magnum |
| Octavia | Load Balancer | Handling K8s API and Service LoadBalancers |
| Magnum | Container Infra | The core service for K8s lifecycle management |
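A quick way to sanity-check that these services are registered, as a sketch: the catalog query runs only if an authenticated `openstack` CLI is available, and `cinderv3` as the catalog name for Cinder v3 is an assumption:

```shell
# Required service catalog entries for this module (names from the table above)
required="keystone nova neutron glance cinderv3 heat barbican octavia magnum"
count=$(echo "$required" | wc -w)
echo "checking $count required services"
# Only query the catalog if an authenticated openstack CLI is available
if command -v openstack >/dev/null 2>&1; then
  for svc in $required; do
    # prints the name if registered, flags it otherwise
    openstack service show "$svc" -f value -c name 2>/dev/null || echo "MISSING: $svc"
  done
fi
```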
To run this module, you need a dedicated project and a user with specific roles. Notably, Barbican requires the creator role to allow Magnum to manage certificates.
Example Setup Commands:

```shell
# 1. Create Project and User
openstack project create --domain default lifecycle-project
openstack user create --domain default --project lifecycle-project --password YOUR_PASS lifecycle-user

# 2. Assign Basic Roles
openstack role add --project lifecycle-project --user lifecycle-user member

# 3. Assign Barbican (Key Manager) Roles
# This is CRITICAL for Magnum to store cluster certificates
openstack role add --project lifecycle-project --user lifecycle-user creator
```
The following flavors must be created in your system:
| Name | Specs | Usage |
|---|---|---|
| m1.small | 1 vCPU, 2.00 GiB RAM | Small worker nodes / Testing |
| m1.medium | 2 vCPU, 4.00 GiB RAM | Master nodes / Standard workers |
| amphora | System specific | Required for Octavia Load Balancer instances |
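Creating the flavors above might look like this. A sketch, not part of the module: the `--disk` sizes are assumptions, and the wrapper echoes commands unless you set `APPLY=1`:

```shell
# Dry-run wrapper: echoes the command unless APPLY=1 is set
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# Specs from the flavors table; the --disk sizes are assumptions
run openstack flavor create m1.small --vcpus 1 --ram 2048 --disk 20
run openstack flavor create m1.medium --vcpus 2 --ram 4096 --disk 40
```

The amphora flavor is normally created by your Octavia deployment tooling, so it is not scripted here.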
The cluster is configured to use Kubernetes v1.32.5.
- Requirement: You must pre-upload the Fedora-CoreOS-41 image to Glance.
- Ensure the image has the property `os_distro='fedora-coreos'`, as Magnum uses this to identify the bootstrap logic.
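The upload might be scripted along these lines (a sketch: the local qcow2 file name is an assumption, and the command is only printed here so you can review it first):

```shell
# Upload command stored in a variable for inspection; run it with: eval "$upload_cmd"
# The local qcow2 file name is an assumption
upload_cmd="openstack image create Fedora-CoreOS-41 \
  --disk-format qcow2 --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-41.qcow2"
echo "$upload_cmd"
```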
This configuration has been tested on an OpenStack environment deployed via Kolla-Ansible with the following parameters:
- OpenStack Release: `2025.1` (Epoxy)
- Base Distro: `rocky`
- Network: Neutron ML2/OVS (`neutron_plugin_agent: "openvswitch"`)
- Storage: Cinder with LVM/Ceph backend
```shell
git clone https://github.com/GoodRxOSS/lifecycle-opentofu.git
cd lifecycle-opentofu
```
- Create an IAM user with a programmatic key (for testing only!).
- Attach the AdministratorAccess policy (or fine-grained DNS, EKS, and VPC permissions).
- Configure an AWS CLI profile:
  ```shell
  aws configure --profile lifecycle-oss-eks
  ```
- Optional: Create a DNS zone and delegate NS records at your registrar.
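Before planning, it can help to confirm the profile resolves to the right identity. A sketch (the profile name matches the example above; the call is skipped if the AWS CLI is absent):

```shell
profile="lifecycle-oss-eks"  # profile name from the step above
echo "verifying AWS profile: $profile"
# Prints the account ID and ARN the profile authenticates as
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity --profile "$profile" || echo "profile not configured yet"
fi
```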
- Create a Google Cloud project and enable the Kubernetes Engine API.
- Install and authenticate the `gcloud` CLI.
- Get credentials:
  ```shell
  gcloud config set project lifecycle-oss-123456
  gcloud auth application-default login
  ```
- Optional: Create a DNS zone and delegate NS records at your registrar.
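A quick check that the Kubernetes Engine API is actually enabled, as a sketch (the project ID is the example used above; the query is skipped if `gcloud` is absent):

```shell
project="lifecycle-oss-123456"  # example project ID from the step above
echo "verifying GCP project: $project"
# Look for the Kubernetes Engine API among the enabled services
if command -v gcloud >/dev/null 2>&1; then
  gcloud services list --enabled 2>/dev/null | grep container.googleapis.com \
    || echo "Kubernetes Engine API not enabled yet"
fi
```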
- Identity Setup: Ensure you have a Project, User, and the necessary roles (see below).
- Environment Variables: The OpenStack provider uses standard credentials. You must define the following variable in your `secrets.auto.tfvars`:
  ```hcl
  openstack_auth = {
    user_name = "your-username"
    password  = "your-password"
    auth_url  = "https://your-openstack-auth-url:5000/v3"
  }
  ```
- CLI Authentication: Source your OpenStack RC file or ensure your environment is configured to interact with the API:
  ```shell
  export OS_CLOUD=lifecycle-project
  # or
  source project-openrc.sh
  ```
- Sign up for Cloudflare and add your domain.
- Create a DNS zone, delegate NS records at your registrar.
- Create an API token with Zone.Zone, Zone.DNS:Edit permissions.
- Save the token securely (e.g., in a secret manager).
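The token can be checked against Cloudflare's token-verification endpoint before use. A sketch, assuming the token is exported as `CF_API_TOKEN` (a name chosen here, not required by the module):

```shell
if [ -z "${CF_API_TOKEN:-}" ]; then
  msg="export CF_API_TOKEN first"
  echo "$msg"
else
  # A valid token returns JSON containing "status": "active"
  curl -s -H "Authorization: Bearer $CF_API_TOKEN" \
    https://api.cloudflare.com/client/v4/user/tokens/verify
fi
```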
```shell
cp example.auto.tfvars secrets.auto.tfvars
# Edit secrets.auto.tfvars with your values
tofu init
tofu plan
tofu apply
```

Note: Sometimes, running `tofu apply` once is not enough to fully provision all resources. This can happen due to eventual consistency in cloud APIs or delays in external systems.
Common reasons why multiple `tofu apply` runs may be needed:

- DNS Propagation: Some cloud resources depend on DNS names that may not resolve immediately after creation. Dependent resources may fail on the first run.
- Service Readiness: If a resource (e.g., Load Balancer, DB instance) needs time to become fully ready, another resource depending on it might fail during the same apply.
- IAM Permissions Delay: Recently updated roles or policies might not be fully propagated across the provider's infrastructure.
- Rate Limits / API Race Conditions: Some providers impose soft throttling or transient errors during rapid provisioning.

Solution: Just run `tofu apply` again. OpenTofu is designed to pick up from the current state and continue applying the remaining changes. This is normal behavior when working with eventually consistent cloud environments.
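The retry advice above can be wrapped in a small helper, as a sketch (`retry` is a hypothetical shell function, not part of the module):

```shell
# retry N CMD...: run CMD up to N times, pausing RETRY_DELAY seconds (default 30)
retry() {
  n=$1; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" && return 0
    echo "attempt $i failed; retrying" >&2
    sleep "${RETRY_DELAY:-30}"
    i=$((i + 1))
  done
  return 1
}

# Usage: retry 3 tofu apply -auto-approve
```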
After running `tofu apply`, you should see a cheatsheet like this:

- Amazon EKS

  ```text
  help = <<EOT
  Quick help of usage [eks]:
  - Update `kubeconfig` file:
    $ aws eks update-kubeconfig --name lifecycle-oss --region us-west-2 --profile lifecycle-oss-eks
  - Check cluster permissions, e.g. List Pods in `lifecycle-app` namespace
    $ kubectl -n lifecycle-app get pods
  - Check public endpoint, DNS, certificates etc.
    $ curl -v https://kuard.example.com
  EOT
  ```

- Google GKE

  ```text
  help = <<EOT
  Quick help of usage [gke]:
  - Update `kubeconfig` file:
    $ gcloud container clusters get-credentials lifecycle-oss --zone us-central1-b --project lifecycle-oss-123456
  - Check cluster permissions, e.g. List Pods in `lifecycle-app` namespace
    $ kubectl -n lifecycle-app get pods
  - Check public endpoint, DNS, certificates etc.
    $ curl -v https://kuard.example.com
  EOT
  ```

- OpenStack Magnum

  ```text
  help = <<EOT
  Quick help of usage [magnum]:
  - Update `kubeconfig` file:
    $ tofu output -raw kubeconfig > ~/.kube/config-magnum-lfc && \
      export KUBECONFIG=~/.kube/config:$HOME/.kube/config-magnum-lfc && \
      kubectl config view --flatten > ~/.kube/config_temp && \
      mv ~/.kube/config_temp ~/.kube/config && \
      rm ~/.kube/config-magnum-lfc
  - Check cluster permissions, e.g. List Pods in `lifecycle-app` namespace
    $ kubectl -n lifecycle-app get pods
  - Check public endpoint, DNS, certificates etc.
    $ curl -v https://kuard.example.com
  EOT
  ```

This is an autogenerated cheatsheet: copy, paste, and run!
```shell
tofu destroy
```

Note: Sometimes, when you run `tofu destroy`, not all resources are removed in a single attempt, and that's expected in some cases. Network delays, expired credentials, or external system propagation (like DNS or IAM updates) can temporarily block proper cleanup. Just run `tofu destroy` again after a short wait.
Before resorting to manual intervention:

- Run `tofu destroy` multiple times to give the system time to resolve dependencies and update state.
- If a resource still cannot be destroyed automatically (due to external constraints or provider API limitations), only then consider manual deletion, and document the action carefully to avoid state drift.

Manually removing resources can lead to:

- Inconsistent state files
- Broken dependencies on future deployments
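If you do delete something by hand, reconcile the state file afterwards so later plans stay consistent. A sketch using the `tofu state` subcommands (the resource address is a hypothetical example; the command is printed here, not executed):

```shell
# Address of the manually deleted resource (hypothetical example)
addr="module.eks.aws_lb.example"
# Find the exact address first with: tofu state list
# Then drop the orphaned entry so the next plan stops tracking it:
echo "tofu state rm '$addr'"
```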
| Name | Version |
|---|---|
| aws | ~> 5.0 |
| cloudflare | ~> 5.0 |
| google | ~> 6.0 |
| helm | ~> 2.0 |
| kubectl | ~> 1.0 |
| kubernetes | ~> 2.0 |
| openstack | ~> 3.0 |
| random | ~> 3.0 |
| tls | ~> 4.0 |
| Name | Version |
|---|---|
| cloudflare | 5.12.0 |
| helm | 2.17.0 |
| kubectl | 1.19.0 |
| kubernetes | 2.38.0 |
| random | 3.7.2 |
| time | 0.13.1 |
| Name | Source | Version |
|---|---|---|
| cloud_dns | ./modules/gcp-cloud-dns | n/a |
| cloudflare | ./modules/cloudflare-dns | n/a |
| cloudflare_tunnel | ./modules/cloudflare-dns | n/a |
| eks | ./modules/aws-eks | n/a |
| gke | ./modules/gcp-gke | n/a |
| magnum | ./modules/openstack-magnum | n/a |
| route53 | ./modules/aws-route53 | n/a |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| app_buildkit_enabled | Toggle to control whether BuildKit is deployed (e.g., for image builds). | `bool` | `false` | no |
| app_distribution_enabled | Toggle to enable or disable the distribution module (e.g., API, frontend). | `bool` | `false` | no |
| app_distribution_subdomain | Subdomain used to expose the distribution module. | `string` | `"distribution"` | no |
| app_domain | n/a | `string` | `"example.com"` | no |
| app_enabled | Global toggle to enable or disable the entire application deployment. | `bool` | `true` | no |
| app_lifecycle_enabled | Toggle to control whether the Lifecycle application is deployed. | `bool` | `true` | no |
| app_lifecycle_keycloak | Toggle to control whether a Keycloak instance for Lifecycle is deployed. | `bool` | `false` | no |
| app_lifecycle_ui | Toggle to control whether the Lifecycle UI is deployed. | `bool` | `false` | no |
| app_namespace | n/a | `string` | `"application-env"` | no |
| app_postgres_database | Name of the PostgreSQL database to create and use. | `string` | `"lifecycle"` | no |
| app_postgres_enabled | Toggle to control whether PostgreSQL is deployed. | `bool` | `false` | no |
| app_postgres_port | Port used to connect to the PostgreSQL service. | `number` | `5432` | no |
| app_postgres_username | Username for accessing the PostgreSQL database. | `string` | `"lifecycle"` | no |
| app_redis_enabled | Toggle to control whether Redis is deployed. | `bool` | `false` | no |
| app_redis_port | Port used to connect to the Redis service. | `number` | `6379` | no |
| app_subdomain | Subdomain used to expose the Application module. | `string` | `"app"` | no |
| aws_profile | The AWS CLI profile name to use for authentication when interacting with AWS services. The profile should be configured in your AWS credentials file (usually `~/.aws/credentials`), contain only alphanumeric characters, underscores, hyphens, and dots, and start and end with an alphanumeric character (e.g. `default`, `lifecycle-oss-eks`, `my_profile-1`). Make sure the profile exists and has the necessary permissions. | `string` | `"default"` | no |
| aws_region | The AWS region where the EKS cluster and related resources will be deployed (e.g. `"us-east-1"`, `"eu-west-1"`, `"us-west-2"`). | `string` | `"us-west-2"` | no |
| cloudflare_api_token | n/a | `string` | `null` | no |
| cloudflare_tunnel_domain | The domain name for the tunnel's ingress rules. If null, `var.app_domain` is used as a fallback. | `string` | `null` | no |
| cloudflare_tunnel_enabled | Controls whether to create and deploy the Cloudflare Tunnel resources. | `bool` | `false` | no |
| cloudflare_tunnel_name | The display name of the Cloudflare Tunnel, used to identify the tunnel in the Zero Trust dashboard. | `string` | `"lifecycle"` | no |
| cluster_name | The name of the Kubernetes cluster. Must consist of alphanumeric characters and dashes, and be 1-100 characters long. | `string` | `"k8s-cluster"` | no |
| cluster_provider | n/a | `string` | `"eks"` | no |
| dns_provider | n/a | `string` | `"route53"` | no |
| gcp_credentials_file | n/a | `string` | `null` | no |
| gcp_project | The Google Cloud project ID to use for creating and managing resources. If not provided (`null`), some modules might attempt to infer the project from your environment or credentials. Must be 6-30 characters of lowercase letters, digits, and hyphens, starting with a lowercase letter and not ending with a hyphen. | `string` | `null` | no |
| gcp_region | The Google Cloud region or zone where the GKE cluster is deployed (e.g. `"us-central1"` or `"us-central1-b"`). | `string` | `"us-central1-b"` | no |
| keycloak_operator_enabled | Toggle to control whether the Keycloak Operator is deployed. | `bool` | `true` | no |
| openstack_auth | OpenStack authentication credentials. Includes `user_name`, `password`, and `auth_url`. | `object({...})` | `{...}` | no |
| openstack_project | The name of the OpenStack project (tenant). Must be URL-safe and follow corporate naming conventions. | `string` | `null` | no |
| openstack_region | The name of the OpenStack region. The standard default is `RegionOne`. | `string` | `"RegionOne"` | no |
| pbkdf2_passphrase | n/a | `string` | n/a | yes |
| private_registries | Configuration for private registries (Helm charts and container images). If empty, no registry blocks will be created. | `list(object({...}))` | `[]` | no |
| ssh_public_key | The content of the SSH public key (e.g., contents of `~/.ssh/id_rsa.pub`). Supports RSA, ECDSA, and ED25519 formats. | `string` | `null` | no |
| ssh_public_key_path | The local filesystem path to the SSH public key file. | `string` | `"~/.ssh/id_rsa.pub"` | no |
| Name | Description |
|---|---|
| help | Quick help of usage |
| kubeconfig | The Kubernetes configuration file (kubeconfig) for the Magnum cluster. This configuration is primarily generated based on the client certificates, private keys, and CA data provided by the OpenStack Magnum API. It provides 'kubectl' with the necessary credentials to authenticate and manage the cluster with administrative privileges. |
This module is provided as-is for demonstration and testing purposes. Do not use it in production environments without proper security review and adaptation.
Contributions, issues, and feature requests are welcome! Please open an issue or submit a pull request on the GitHub repository.
This project is licensed under the Apache License 2.0. See LICENSE for details.