CI is managed through the OpenShift CI system (Prow + ci-operator). The job configuration lives in openshift/release.
| Job | Schedule | Description |
|---|---|---|
| check-docs | Pre-submit | Checks markdown formatting with Prettier |
| terraform-validate | Pre-submit | Runs terraform validate on all root modules |
| helm-lint | Pre-submit | Lints Helm charts |
| check-rendered-files | Pre-submit | Verifies rendered deploy files are up to date |
| on-demand-e2e | Pre-submit (manual) | End-to-end: provisions an ephemeral environment using the PR's rosa-regional-platform branch, runs tests, tears down. Trigger with /test on-demand-e2e on a PR |
| nightly-ephemeral | Daily at 07:00 UTC | End-to-end: provisions an ephemeral environment using the main rosa-regional-platform branch, runs tests, tears down |
| nightly-integration | Daily at 07:00 UTC | Runs e2e tests against a standing integration environment |
| ephemeral-resources-janitor | Weekly (Sunday 12:00 UTC) | Purges leaked AWS resources using aws-nuke |
The CI image is built from ci/Containerfile and includes all required tools (Terraform, Helm, AWS CLI, Python/uv, etc.).
The ci/ephemeral-provider/main.py script manages ephemeral environments for CI testing. It supports three modes — provision, teardown (--teardown), and resync (--resync) — designed to run as separate CI steps with tests in between.
- Creates a CI-owned git branch from the source repo/branch
- Bootstraps the pipeline-provisioner pointing at the CI branch
- Pushes rendered deploy files to trigger pipelines via GitOps
- Waits for RC/MC pipelines to provision infrastructure
- (Separate CI step) Runs the testing suite against the provisioned environment
- Tears down infrastructure via GitOps (`delete: true` in `config.yaml`)
- Destroys the pipeline-provisioner
- The CI branch is retained for post-run troubleshooting (delete it manually via `git push ci --delete <branch>`)
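The provision/test/teardown split above can be sketched as three shell steps. This is an illustration, not the actual job definition: the `--teardown` and `--resync` flags come from the text, but the exact `uv run` invocation and any additional arguments are assumptions; check `ci/ephemeral-provider/main.py --help` for the real interface.

```shell
# Illustrative only: a dry-run wrapper that prints each step instead of running it.
# Drop DRY_RUN=1 to execute for real (requires the repo checkout and CI credentials).
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

DRY_RUN=1

run uv run ci/ephemeral-provider/main.py             # step 1: provision
# ... the testing suite runs here as its own CI step ...
run uv run ci/ephemeral-provider/main.py --teardown  # step 3: teardown

# If the CI branch drifts from its source between steps, resync it:
run uv run ci/ephemeral-provider/main.py --resync
```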
The recommended way to run ephemeral environments locally is via Make targets. These handle container builds, Vault credential fetching, and state tracking automatically. Credentials are fetched from Vault via OIDC and passed as environment variables to the container — they never touch disk.
```shell
make ephemeral-provision   # Interactive remote/branch picker, provisions environment
make ephemeral-teardown    # Interactive picker or BUILD_ID=<id>, tears down environment
make ephemeral-resync      # Interactive picker or BUILD_ID=<id>, rebases CI branch onto latest source
make ephemeral-list        # List tracked environments with state
```

Prerequisites: fzf, vault, git, python3, uv, and podman or docker.
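A quick preflight (a local convenience, not shipped in the repo) can confirm the prerequisites listed above are installed before a make target fails partway through:

```shell
# Print one status line per prerequisite so a missing tool is obvious up front.
for tool in fzf vault git python3 uv; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done

# Either container runtime is fine; check for at least one of the two.
if command -v podman >/dev/null 2>&1 || command -v docker >/dev/null 2>&1; then
  echo "container runtime: found"
else
  echo "container runtime: MISSING (need podman or docker)"
fi
```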
- Obtain an API token by visiting https://oauth-openshift.apps.ci.l2s4.p1.openshiftapps.com/oauth/token/request
- Log in with `oc login`
- Start the job:
```shell
# Trigger nightly-ephemeral
curl -X POST \
  -H "Authorization: Bearer $(oc whoami -t)" \
  'https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions/' \
  -d '{"job_name": "periodic-ci-openshift-online-rosa-regional-platform-main-nightly-ephemeral", "job_execution_type": "1"}'

# Trigger nightly-integration
curl -X POST \
  -H "Authorization: Bearer $(oc whoami -t)" \
  'https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions/' \
  -d '{"job_name": "periodic-ci-openshift-online-rosa-regional-platform-main-nightly-integration", "job_execution_type": "1"}'
```

- Copy the `id` from the response and check the execution to get the Prow URL:
```shell
curl -X GET \
  -H "Authorization: Bearer $(oc whoami -t)" \
  'https://gangway-ci.apps.ci.l2s4.p1.openshiftapps.com/v1/executions/<id>'
```

Open the `job_url` from the response to watch the job in Prow.
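To avoid eyeballing the JSON, the `job_url` field can be pulled out with a one-liner. The response body below is a stand-in with a fake URL; in practice you would pipe the GET response from above straight into the extractor:

```shell
# Stand-in response; the real one comes from the GET /v1/executions/<id> call above.
RESPONSE='{"id": "abc123", "job_url": "https://prow.example.com/view/abc123"}'

# Extract job_url with Python's stdlib JSON parser (no jq dependency).
echo "$RESPONSE" | python3 -c 'import json, sys; print(json.load(sys.stdin)["job_url"])'
```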
When a Prow job is running (e.g. on-demand-e2e), you can watch its logs in real time:
- Open the Prow job page (e.g. from the PR status check link or the job history; see the jobs table above).
- In the build log output, look for a line like:
  ```
  INFO[2026-03-10T11:41:49Z] Using namespace https://console.xxxxx.ci.openshift.org/k8s/cluster/projects/ci-op-XXXXXXXX
  ```
- Click the namespace link to open the OpenShift console for the CI cluster where the job pods are running. From there you can inspect pod logs, events, and resources in real time.
Note: Access to the namespace is restricted to the person who triggered the job (i.e. the PR author for pre-submit jobs). There is no configuration option to grant access to additional users.
The e2e jobs use credentials mounted at /var/run/rosa-credentials/. Credentials are managed in Vault. Two credential secrets are used:
- `rosa-regional-platform-ephemeral-creds` — grants access to the AWS accounts used to spin up an ephemeral environment. Used by `nightly-ephemeral`, `on-demand-e2e`, and `ephemeral-resources-janitor`.
- `rosa-regional-platform-integration-creds` — grants access to AWS credentials for testing against the API gateway in the regional integration account. Used by `nightly-integration`.
The ephemeral tests create AWS resources across multiple accounts. Teardown relies on terraform destroy, which can fail and leak resources. The ephemeral-resources-janitor job is a weekly fallback that uses aws-nuke to purge everything except the resources we need to keep between tests.
See ./ci/aws-nuke-config.yaml.
```shell
# Dry-run (list only, no deletions)
./ci/janitor/purge-aws-account.sh

# Live run (actually delete resources)
./ci/janitor/purge-aws-account.sh --no-dry-run
```

The script uses whatever AWS credentials are active in your environment. The account must be in the allowlist in purge-aws-account.sh.