
OpenTofu Lifecycle Module

This module provides a quick and easy way to launch the Lifecycle application on different cloud providers with support for multiple DNS providers. It is ideal for testing, demos, and learning. Do not use it in production!


🚀 Features

  • Multi-cloud support: AWS EKS, GCP GKE, and OpenStack Magnum
  • Multi-DNS support: Cloudflare, AWS Route 53, GCP Cloud DNS
  • Self-contained logic: No external OpenTofu/Terraform modules; everything is implemented in internal submodules
  • Minimal infrastructure: Creates just the necessary resources for the Lifecycle app, optimizing cost

📋 Supported Providers

Cloud Provider     Module Parameter              Status
Amazon EKS         cluster_provider = "eks"      Stable
Google GKE         cluster_provider = "gke"      Stable
OpenStack Magnum   cluster_provider = "magnum"   Beta

DNS Provider       Parameter Key                 Status
Cloudflare         dns_provider = "cloudflare"   Stable
AWS Route 53       dns_provider = "route53"      Stable
GCP Cloud DNS      dns_provider = "cloud-dns"    Stable

You can mix and match, e.g. AWS EKS with Cloudflare DNS or GKE with Route 53.
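For example, a secrets.auto.tfvars pairing EKS with Cloudflare DNS might look like this (a sketch with placeholder values; the variable names come from the Inputs table below):

```hcl
# secrets.auto.tfvars (placeholder values)
cluster_provider     = "eks"
dns_provider         = "cloudflare"
app_domain           = "example.com"
cloudflare_api_token = "YOUR_TOKEN" # keep this file out of version control
```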


βš™οΈ Prerequisites

  1. OpenTofu CLI installed and initialized. OpenTofu is an open-source infrastructure-as-code tool and is fully compatible with Terraform v0.13+ syntax.

  2. A cloud account for your chosen provider (AWS, GCP, or OpenStack).

  3. DNS account for your chosen DNS provider:

    • Cloudflare: free tier includes 1 zone
    • Route 53: pay-as-you-go
    • Cloud DNS: pay-as-you-go

πŸ—οΈ OpenStack Detailed Requirements

Warning

Using this module with OpenStack requires advanced knowledge of OpenStack administration. Many environment-specific nuances (networking, storage backends, security groups) are not covered by this setup. It is assumed the user is an experienced administrator or has access to a correctly pre-configured environment.

🧩 Required Components

Your OpenStack environment must have the following services enabled and functional:

Service Component Purpose
Keystone Identity Authentication and service catalog
Nova Compute Provisioning worker and master nodes
Neutron Network Managing SDN, subnets, and security groups
Glance Image Storing Fedora CoreOS images
Cinder/v3 Volume Persistent storage for Kubernetes PVs
Heat Orchestration Magnum uses Heat templates to deploy clusters
Barbican Key Manager Certificate and secret management for Magnum
Octavia Load Balancer Handling K8s API and Service LoadBalancers
Magnum Container Infra The core service for K8s lifecycle management
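As a quick sanity check, the required components can be compared against the Keystone catalog. The sketch below is a pure function so it can be tried offline; the service names (e.g. `cinderv3`) are typical defaults and may differ in your deployment:

```shell
# Compare a list of Keystone service names against the components required above.
check_services() {
  missing=""
  for svc in keystone nova neutron glance cinderv3 heat barbican octavia magnum; do
    printf '%s\n' "$1" | grep -qx "$svc" || missing="$missing$svc "
  done
  if [ -z "$missing" ]; then
    echo "all required services present"
  else
    echo "missing: $missing"
  fi
}

# Against a live cloud (assumes an authenticated openstack CLI session):
# check_services "$(openstack service list -f value -c Name)"
```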

πŸ› οΈ Identity & Permissions Setup

To run this module, you need a dedicated project and a user with specific roles. Notably, Barbican requires the creator role to allow Magnum to manage certificates.

Example Setup Commands:

# 1. Create Project and User
openstack project create --domain default lifecycle-project
openstack user create --domain default --project lifecycle-project --password YOUR_PASS lifecycle-user

# 2. Assign Basic Roles
openstack role add --project lifecycle-project --user lifecycle-user member

# 3. Assign Barbican (Key Manager) Roles
# This is CRITICAL for Magnum to store cluster certificates
openstack role add --project lifecycle-project --user lifecycle-user creator

🖥️ Resource Requirements

Flavors

The following flavors must be created in your system:

Name Specs Usage
m1.small 1 vCPU, 2.00 GiB RAM Small worker nodes / Testing
m1.medium 2 vCPU, 4.00 GiB RAM Master nodes / Standard workers
amphora System specific Required for Octavia Load Balancer instances

Images

The cluster is configured to use Kubernetes v1.32.5.

  • Requirement: You must pre-upload the Fedora-CoreOS-41 image to Glance.
  • Ensure the image has the property os_distro='fedora-coreos', as Magnum uses this to identify the bootstrap logic.
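The upload can be sketched with the standard Glance CLI; `--property os_distro=fedora-coreos` matches the requirement above, while the local filename is a placeholder for whichever Fedora CoreOS 41 QCOW2 build you downloaded:

```shell
# Sketch of the Glance upload (adjust IMG to your downloaded file).
IMG=fedora-coreos-41.qcow2
if command -v openstack >/dev/null 2>&1 && [ -f "$IMG" ]; then
  openstack image create \
    --disk-format qcow2 \
    --container-format bare \
    --property os_distro=fedora-coreos \
    --file "$IMG" \
    Fedora-CoreOS-41
else
  echo "openstack CLI or $IMG not found; fetch the image and run this against your cloud"
fi
```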

🧪 Verified Environment

This configuration has been tested on an OpenStack environment deployed via Kolla-Ansible with the following parameters:

  • OpenStack Release: 2025.1 (Epoxy)
  • Base Distro: rocky
  • Network: Neutron ML2/OVS (neutron_plugin_agent: "openvswitch")
  • Storage: Cinder with LVM/Ceph backend

πŸ› οΈ Quick Start

1. Clone the Repository

git clone https://github.com/GoodRxOSS/lifecycle-opentofu.git
cd lifecycle-opentofu

2. Configure Cloud CLI

AWS EKS

  1. Create an IAM user with programmatic access keys (for testing only!).

  2. Attach AdministratorAccess policy (or fine-grained DNS, EKS, VPC permissions).

  3. Configure AWS CLI profile:

    aws configure --profile lifecycle-oss-eks
  4. Optional: Create a DNS zone, delegate NS records at your registrar.
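A quick local check that the profile from step 3 exists (the profile name is the example one; adjust to yours):

```shell
# Look for the profile section in the AWS credentials file.
profile=lifecycle-oss-eks
if grep -qs "^\[$profile\]" "${HOME}/.aws/credentials"; then
  echo "profile $profile is configured"
else
  echo "profile $profile not found in ~/.aws/credentials"
fi
# Then confirm the credentials actually work end to end:
# aws sts get-caller-identity --profile "$profile"
```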

GCP GKE

  1. Create a Google Cloud project and enable the Kubernetes Engine API.

  2. Install and authenticate gcloud CLI.

  3. Get credentials:

    gcloud config set project lifecycle-oss-123456
    gcloud auth application-default login
  4. Optional: Create a DNS zone, delegate NS records at your registrar.
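To confirm step 3 took effect, you can check for the Application Default Credentials file (this is the default path gcloud writes to):

```shell
# Check whether Application Default Credentials are in place.
adc="${HOME}/.config/gcloud/application_default_credentials.json"
if [ -f "$adc" ]; then
  echo "ADC found: $adc"
else
  echo "ADC missing: run 'gcloud auth application-default login'"
fi
```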

OpenStack Magnum

  1. Identity Setup: Ensure you have a Project, User, and the necessary roles (see the OpenStack requirements above).
  2. Environment Variables: The OpenStack provider uses standard credentials. You must define the following variable in your secrets.auto.tfvars:
openstack_auth = {
  user_name = "your-username"
  password  = "your-password"
  auth_url  = "https://your-openstack-auth-url:5000/v3"
}
  3. CLI Authentication: Source your OpenStack RC file or ensure your environment is configured to interact with the API:
export OS_CLOUD=lifecycle-project
# or
source project-openrc.sh
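Before planning, it helps to confirm some form of OpenStack authentication is present in the environment (a sketch; it only inspects the standard OS_* variables):

```shell
# Detect whether OpenStack credentials are available in the environment.
check_os_auth() {
  if [ -n "${OS_CLOUD:-}" ] || [ -n "${OS_AUTH_URL:-}" ]; then
    echo "OpenStack auth environment detected"
  else
    echo "no OpenStack credentials found: set OS_CLOUD or source your openrc file"
  fi
}
check_os_auth
```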

Cloudflare DNS

  1. Sign up for Cloudflare and add your domain.
  2. Create a DNS zone, delegate NS records at your registrar.
  3. Create an API token with Zone.Zone, Zone.DNS:Edit permissions.
  4. Save the token securely (e.g., in a secret manager).
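The token can be sanity-checked against Cloudflare's token-verify endpoint (a sketch; it assumes the token is exported as CLOUDFLARE_API_TOKEN, and a valid token returns "status": "active"):

```shell
# Verify a Cloudflare API token via the /user/tokens/verify endpoint.
verify_cf_token() {
  if [ -z "${CLOUDFLARE_API_TOKEN:-}" ]; then
    echo "CLOUDFLARE_API_TOKEN is not set"
    return 1
  fi
  curl -s -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
    "https://api.cloudflare.com/client/v4/user/tokens/verify"
}
verify_cf_token || true
```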

3. Copy Example Variables

cp example.auto.tfvars secrets.auto.tfvars
# Edit secrets.auto.tfvars with your values

4. Initialize & Apply

tofu init
tofu plan
tofu apply

* Sometimes, running tofu apply once is not enough to fully provision all resources. This can happen due to eventual consistency in cloud APIs or delays in external systems.

Common reasons why multiple tofu apply runs may be needed:

  1. DNS Propagation: Some cloud resources depend on DNS names that may not resolve immediately after being created. Dependent resources may fail on the first run.

  2. Service Readiness: If a resource (e.g., Load Balancer, DB instance) needs time to become fully ready, another resource depending on it might fail during the same apply.

  3. IAM Permissions Delay: Recently updated roles or policies might not be fully propagated across the provider's infrastructure.

  4. Rate Limits / API Race Conditions: Some providers impose soft throttling or transient errors during rapid provisioning.

✅ Solution: Just run tofu apply again. OpenTofu picks up from the current state and continues applying the remaining changes. This is normal behavior when working with eventually consistent cloud environments.
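The "run it again" loop can be sketched as a small retry wrapper (a sketch; it assumes tofu is on PATH, and the RETRY_DELAY variable is a hypothetical knob, defaulting to 30 seconds):

```shell
# Retry a command up to N times, pausing between attempts.
retry() {
  attempts=$1; shift
  i=1
  while :; do
    "$@" && return 0
    [ "$i" -ge "$attempts" ] && return 1
    echo "attempt $i of $attempts failed, retrying in ${RETRY_DELAY:-30}s..." >&2
    i=$((i + 1))
    sleep "${RETRY_DELAY:-30}"
  done
}

# Usage (assumes an initialized working directory):
# retry 3 tofu apply -auto-approve
```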

After running tofu apply, you should see a cheatsheet like this:

  • Amazon EKS

    help = <<EOT
    Quick help of usage [eks]:
    - Update `kubeconfig` file:
          $ aws eks update-kubeconfig --name lifecycle-oss --region us-west-2 --profile lifecycle-oss-eks
    
    - Check cluster permissions, e.g. List Pods in `lifecycle-app` namespace
          $ kubectl -n lifecycle-app get pods
    
    - Check public endpoint, DNS, certificates etc.
          $ curl -v https://kuard.example.com
    
    EOT
  • Google GKE

    help = <<EOT
    Quick help of usage [gke]:
    - Update `kubeconfig` file:
          $ gcloud container clusters get-credentials lifecycle-oss --zone us-central1-b --project lifecycle-oss-123456
    
    - Check cluster permissions, e.g. List Pods in `lifecycle-app` namespace
          $ kubectl -n lifecycle-app get pods
    
    - Check public endpoint, DNS, certificates etc.
          $ curl -v https://kuard.example.com
    
    EOT
  • OpenStack Magnum

    help = <<EOT
    Quick help of usage [magnum]:
    - Update `kubeconfig` file:
          $ tofu output -raw kubeconfig > ~/.kube/config-magnum-lfc && export KUBECONFIG=~/.kube/config:$HOME/.kube/config-magnum-lfc && kubectl config view --flatten > ~/.kube/config_temp && mv ~/.kube/config_temp ~/.kube/config && rm ~/.kube/config-magnum-lfc
    
    - Check cluster permissions, e.g. List Pods in `lifecycle-app` namespace
          $ kubectl -n lifecycle-app get pods
    
    - Check public endpoint, DNS, certificates etc.
          $ curl -v https://kuard.example.com
    EOT

This is an autogenerated cheatsheet: copy, paste, and run 🚀

5. Cleaning up

tofu destroy

* Sometimes, when you run tofu destroy, not all resources are removed in a single attempt, and that's expected in some cases. Network delays, expired credentials, or external system propagation (like DNS or IAM updates) can temporarily block proper cleanup. Just run tofu destroy again after a short wait.

⚠️ Avoid Manual Deletion Unless Absolutely Necessary. Manually deleting resources that were provisioned programmatically (via OpenTofu) is not recommended unless you are fully aware of the consequences.

Before resorting to manual intervention:

  1. Run tofu destroy multiple times to give the system time to resolve dependencies and update states.

  2. If a resource still cannot be destroyed automatically (due to external constraints or provider API limitations), only then consider manual deletion, and document the action carefully to avoid state drift.

Manually removing resources can lead to:

  1. Inconsistent state files

  2. Broken dependencies on future deployments


Requirements

Name Version
aws ~> 5.0
cloudflare ~> 5.0
google ~> 6.0
helm ~> 2.0
kubectl ~> 1.0
kubernetes ~> 2.0
openstack ~> 3.0
random ~> 3.0
tls ~> 4.0

Providers

Name Version
cloudflare 5.12.0
helm 2.17.0
kubectl 1.19.0
kubernetes 2.38.0
random 3.7.2
time 0.13.1

Modules

Name Source Version
cloud_dns ./modules/gcp-cloud-dns n/a
cloudflare ./modules/cloudflare-dns n/a
cloudflare_tunnel ./modules/cloudflare-dns n/a
eks ./modules/aws-eks n/a
gke ./modules/gcp-gke n/a
magnum ./modules/openstack-magnum n/a
route53 ./modules/aws-route53 n/a

Resources

Name Type
cloudflare_zero_trust_tunnel_cloudflared.this resource
helm_release.app_buildkit resource
helm_release.app_distribution resource
helm_release.app_lifecycle resource
helm_release.app_lifecycle_keycloak resource
helm_release.app_postgres resource
helm_release.app_redis resource
helm_release.cert_manager resource
helm_release.cloudflare_tunnel resource
helm_release.cluster_autoscaler resource
helm_release.ingress_nginx_controller resource
helm_release.keycloak_operator resource
helm_release.lifecycle_ui resource
kubectl_manifest.letsencrypt_clusterissuer resource
kubectl_manifest.letsencrypt_dns_certificate resource
kubectl_manifest.letsencrypt_dns_clusterissuer resource
kubectl_manifest.letsencrypt_dns_credentials_secret resource
kubectl_manifest.wildcard_certificate_secret resource
kubernetes_deployment.this resource
kubernetes_ingress_v1.cloudflare_tunnel resource
kubernetes_ingress_v1.this resource
kubernetes_namespace_v1.app resource
kubernetes_secret.image_pull_secret resource
kubernetes_secret_v1.app_postgres resource
kubernetes_secret_v1.app_redis resource
kubernetes_service.this resource
kubernetes_storage_class.aws_gp3 resource
kubernetes_storage_class.openstack_ssd resource
random_password.app_postgres resource
random_password.app_redis resource
random_password.cloudflare_tunnel resource
time_sleep.ingress_nginx_controller resource
cloudflare_zone.this data source
kubernetes_service.ingress_nginx_controller data source

Inputs

Name Description Type Default Required
app_buildkit_enabled Toggle to control whether BuildKit is deployed (e.g., for image builds). bool false no
app_distribution_enabled Toggle to enable or disable the distribution module (e.g., API, frontend). bool false no
app_distribution_subdomain Subdomain used to expose the distribution module. string "distribution" no
app_domain n/a string "example.com" no
app_enabled Global toggle to enable or disable the entire application deployment. bool true no
app_lifecycle_enabled Toggle to control whether the Lifecycle application is deployed. bool true no
app_lifecycle_keycloak Toggle to control whether Keycloak instance for Lifecycle is deployed. bool false no
app_lifecycle_ui Toggle to control whether Lifecycle UI is deployed. bool false no
app_namespace n/a string "application-env" no
app_postgres_database Name of the PostgreSQL database to create and use. string "lifecycle" no
app_postgres_enabled Toggle to control whether PostgreSQL is deployed. bool false no
app_postgres_port Port used to connect to the PostgreSQL service. number 5432 no
app_postgres_username Username for accessing the PostgreSQL database. string "lifecycle" no
app_redis_enabled Toggle to control whether Redis is deployed. bool false no
app_redis_port Port used to connect to the Redis service. number 6379 no
app_subdomain Subdomain used to expose the Application module. string "app" no
aws_profile The AWS CLI profile name to use for authentication and authorization
when interacting with AWS services. This profile should be configured
in your AWS credentials file (usually located at ~/.aws/credentials).

The profile name must:
- Be a non-empty string
- Contain only alphanumeric characters, underscores (_), hyphens (-), and dots (.)
- Start and end with an alphanumeric character

Example valid profile names:
- default
- lifecycle-oss-eks
- my_profile-1

Note: Make sure the profile exists and has the necessary permissions.
string "default" no
aws_region The AWS region where the EKS cluster and related resources will be deployed.
Example: "us-east-1", "eu-west-1", "us-west-2"
string "us-west-2" no
cloudflare_api_token n/a string null no
cloudflare_tunnel_domain The domain name for the tunnel's ingress rules.
If null, 'var.app_domain' will be used as a fallback.
string null no
cloudflare_tunnel_enabled Controls whether to create and deploy the Cloudflare Tunnel resources. bool false no
cloudflare_tunnel_name The display name of the Cloudflare Tunnel.
Used to identify the tunnel in the Zero Trust dashboard.
string "lifecycle" no
cluster_name The name of the Kubernetes cluster.
Must consist of alphanumeric characters, dashes, and be 1–100 characters long.
string "k8s-cluster" no
cluster_provider n/a string "eks" no
dns_provider n/a string "route53" no
gcp_credentials_file n/a string null no
gcp_project The Google Cloud Project ID to use for creating and managing resources.
This should be the unique identifier of your GCP project.
If not provided (null), some modules might attempt to infer the project from
your environment or credentials.

Format requirements:
- Length between 6 and 30 characters
- Lowercase letters, digits, and hyphens only
- Must start with a lowercase letter
- Cannot end with a hyphen
string null no
gcp_region The Google Cloud region or zone where the GKE cluster is deployed.
Example: "us-central1" or "us-central1-b"
string "us-central1-b" no
keycloak_operator_enabled Toggle to control whether Keycloak Operator is deployed. bool true no
openstack_auth OpenStack authentication credentials.
Includes user_name, password, and auth_url.
object({
user_name = optional(string)
password = optional(string)
auth_url = optional(string)
})
{
"auth_url": null,
"password": null,
"user_name": null
}
no
openstack_project The name of the OpenStack project (tenant).
Must be URL-safe and follow corporate naming conventions.
string null no
openstack_region The name of the OpenStack region. Standard default is RegionOne. string "RegionOne" no
pbkdf2_passphrase n/a string n/a yes
private_registries Configuration for private registries (Helm charts and Container images).
If empty, no registry blocks will be created.
list(object({
url = string
username = string
password = string
usage = list(string) # ["charts", "images"]
}))
[] no
ssh_public_key The content of the SSH public key (e.g., contents of ~/.ssh/id_rsa.pub).
Supports RSA, ECDSA, and ED25519 formats.
string null no
ssh_public_key_path The local filesystem path to the SSH public key file. string "~/.ssh/id_rsa.pub" no

Outputs

Name Description
help Quick help of usage
kubeconfig The Kubernetes configuration file (kubeconfig) for the Magnum cluster.
This configuration is primarily generated based on the client certificates,
private keys, and CA data provided by the OpenStack Magnum API.
It provides 'kubectl' with the necessary credentials to authenticate
and manage the cluster with administrative privileges.

⚠️ Disclaimer

This module is provided as-is for demonstration and testing purposes. Do not use it in production environments without proper security review and adaptation.


🔄 Contributing

Contributions, issues, and feature requests are welcome! Please open an issue or submit a pull request on the GitHub repository.


📜 License

This project is licensed under the Apache License 2.0. See LICENSE for details.
