OCPBUGS-76381,OCPBUGS-76383,OCPBUGS-78211,OCPBUGS-78455: Synchronize From Upstream Repositories (#682)
Conversation
Change GetDeploymentConfig() to return (*DeploymentConfig, error) instead of map[string]any, eliminating the intermediate convertToDeploymentConfig() function in provider.go. The caller was immediately converting the map to a DeploymentConfig anyway, so this simplifies the API and removes unnecessary indirection. Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
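The shape of this refactor can be sketched as follows. Everything here is illustrative: the `DeploymentConfig` fields, the `getDeploymentConfigMap` helper, and the field names are hypothetical stand-ins, not the actual provider.go code.

```go
package main

import (
	"errors"
	"fmt"
)

// DeploymentConfig is a stand-in for the real type; its fields are
// hypothetical and only illustrate the shape of the refactor.
type DeploymentConfig struct {
	Replicas int
	Image    string
}

// Before the change, callers received an untyped map and immediately
// ran it through a convertToDeploymentConfig-style helper themselves.
func getDeploymentConfigMap() (map[string]any, error) {
	return map[string]any{"replicas": 3, "image": "example:latest"}, nil
}

// After the change, GetDeploymentConfig returns the typed struct
// directly, so the intermediate conversion step disappears from the API.
func GetDeploymentConfig() (*DeploymentConfig, error) {
	m, err := getDeploymentConfigMap()
	if err != nil {
		return nil, err
	}
	replicas, ok := m["replicas"].(int)
	if !ok {
		return nil, errors.New("replicas: expected int")
	}
	image, ok := m["image"].(string)
	if !ok {
		return nil, errors.New("image: expected string")
	}
	return &DeploymentConfig{Replicas: replicas, Image: image}, nil
}

func main() {
	cfg, err := GetDeploymentConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Replicas, cfg.Image)
}
```

Pushing the conversion behind the typed signature keeps the map handling in one place and lets callers work with a concrete type.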
…after upgrade (#2578)
Bumps [marocchino/sticky-pull-request-comment](https://github.com/marocchino/sticky-pull-request-comment) from 2 to 3.
- [Release notes](https://github.com/marocchino/sticky-pull-request-comment/releases)
- [Commits](marocchino/sticky-pull-request-comment@v2...v3)

---
updated-dependencies:
- dependency-name: marocchino/sticky-pull-request-comment
  dependency-version: '3'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Externalize CER phase objects into Secret refs

Add support for storing ClusterExtensionRevision phase objects in content-addressable immutable Secrets instead of inline in the CER spec. This removes the etcd object size limit as a constraint on bundle size.

API changes:
- Add ObjectSourceRef type with name, namespace, and key fields
- Make ClusterExtensionRevisionObject.Object optional (omitzero)
- Add optional Ref field with XValidation ensuring exactly one is set
- Add RefResolutionFailed condition reason
- Add RevisionNameKey label for ref Secret association

Applier (boxcutter.go):
- Add SecretPacker to bin-pack serialized objects into Secrets with gzip compression for objects exceeding 800KiB
- Add createExternalizedRevision with crash-safe three-step sequence: create Secrets, create CER with refs, patch ownerReferences
- Externalize desiredRevision before SSA comparison so the patch compares refs-vs-refs instead of inline-vs-refs
- Add ensureSecretOwnerReferences for crash recovery
- Pass SystemNamespace to Boxcutter from main.go

CER controller:
- Add resolveObjectRef to fetch and decompress objects from Secrets
- Handle ref resolution in buildBoxcutterPhases
- Add RBAC for Secret get/list/watch

E2e tests:
- Add scenario verifying refs, immutability, labels, and ownerRefs
- Add step definitions for ref Secret validation
- Fix listExtensionRevisionResources and ClusterExtensionRevisionObjectsNotFoundOrNotOwned to resolve refs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Address PR #2595 review feedback

- Fix duplicate key size inflation in SecretPacker by only incrementing size for new content hash keys
- Add io.LimitReader (10 MiB cap) for gzip decompression to prevent gzip bombs in controller and e2e helpers
- Add doc comment clarifying ObjectSourceRef.Namespace defaults to OLM system namespace during ref resolution
- Fix docs: orphan cleanup uses ownerReference GC, ref resolution failures are retried (not terminal)
- Remove unused ClusterExtensionRevisionReasonRefResolutionFailed constant
- Add default error branch in e2e listExtensionRevisionResources for objects missing both ref and inline content

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Change gzipThreshold from 800 KiB to 900 KiB

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Adds a new large-crd-operator test bundle containing a ~1MB CRD to verify that the Boxcutter runtime correctly handles large bundle installations. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
… (#2589) Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
@tmshort: This pull request explicitly references no Jira issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough
Renames the ClusterExtensionRevision API and runtime surfaces to ClusterObjectSet, and adds support for externalizing large bundle objects into Secrets via a new ref source that is mutually exclusive with the inline object field.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
@tmshort: This pull request references Jira Issue OCPBUGS-78455, which is valid. 3 validation(s) were run on this bug.
Requesting review from QA contact. The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: tmshort
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
/lgtm
/label qe-approved
Signed-off-by: Todd Short <tshort@redhat.com>
…t in OTE tests

Update all remaining references to ClusterExtensionRevision in openshift/tests-extension to use ClusterObjectSet, matching the upstream rename in operator-framework/operator-controller#2589.

Files updated:
- test/qe/specs/olmv1_ce.go: RBAC resource names and comments
- test/olmv1-preflight.go: scenario constants, test names, RBAC rules
- .openshift-tests-extension/openshift_payload_olmv1.json: test name
- pkg/bindata/qe/bindata.go: embedded RBAC templates
- test/qe/testdata/olm/sa-nginx-limited-boxcutter.yaml: RBAC resources
- test/qe/testdata/olm/sa-nginx-insufficient-operand-rbac-boxcutter.yaml: RBAC resources

Signed-off-by: Camila Macedo <cmacedo@redhat.com>
Made-with: Cursor
…s ClusterObjectSet The upstream rename of ClusterExtensionRevision to ClusterObjectSet (operator-framework/operator-controller#2589) breaks the incompatible operator detection in cluster-olm-operator. The cluster-olm-operator binary still reads ClusterExtensionRevision resources to find operators with olm.maxOpenShiftVersion, so after the rename it never detects incompatible operators and InstalledOLMOperatorsUpgradeable stays True. Skip this test when NewOLMBoxCutterRuntime feature gate is enabled until cluster-olm-operator is updated to read ClusterObjectSet. Signed-off-by: Camila Macedo <cmacedo@redhat.com> Made-with: Cursor
Actionable comments posted: 4
♻️ Duplicate comments (2)
test/e2e/features/install.feature (1)
528-528: ⚠️ Potential issue | 🟠 Major
Don't pin the ref-Secret check to `olmv1-system`.
Line 528 still hard-codes the controller namespace. That makes this scenario environment-specific and will fail on installs that run the controller in `openshift-operator-controller`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/features/install.feature` at line 528, The step "And ClusterObjectSet \"${NAME}-1\" ref Secrets exist in \"olmv1-system\" namespace" hard-codes the controller namespace; change it to use the dynamic controller/operator namespace variable used elsewhere (e.g., ${CONTROLLER_NAMESPACE} or ${OPERATOR_NAMESPACE}) or the scenario context value instead of "olmv1-system". Update the Gherkin step invocation for ClusterObjectSet "${NAME}-1" ref Secrets exist to reference that variable so the check runs against the actual controller namespace in each environment.

internal/operator-controller/controllers/clusterobjectset_controller.go (1)
539-543: ⚠️ Potential issue | 🟠 Major
Honor the default namespace contract for Secret refs.
`ObjectSourceRef.Namespace` is documented as optional, but this lookup uses it verbatim. A ref that omits `namespace` will result in a lookup against an empty namespace, which will fail. Add a fallback to the OLM system namespace when `ref.Namespace` is empty.
🔧 Proposed fix
 func (c *ClusterObjectSetReconciler) resolveObjectRef(ctx context.Context, ref ocv1.ObjectSourceRef) (*unstructured.Unstructured, error) {
 	secret := &corev1.Secret{}
-	key := client.ObjectKey{Name: ref.Name, Namespace: ref.Namespace}
+	ns := ref.Namespace
+	if ns == "" {
+		ns = c.SystemNamespace
+	}
+	key := client.ObjectKey{Name: ref.Name, Namespace: ns}
 	if err := c.Client.Get(ctx, key, secret); err != nil {
-		return nil, fmt.Errorf("getting Secret %s/%s: %w", ref.Namespace, ref.Name, err)
+		return nil, fmt.Errorf("getting Secret %s/%s: %w", ns, ref.Name, err)
 	}
Note: The `ClusterObjectSetReconciler` struct would need a `SystemNamespace` field added if not already present.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/operator-controller/controllers/clusterobjectset_controller.go` around lines 539 - 543, resolveObjectRef currently uses ref.Namespace verbatim which fails when Namespace is omitted; change it to fallback to the operator/OLM system namespace by using ref.Namespace if non-empty otherwise c.SystemNamespace (add a SystemNamespace field to ClusterObjectSetReconciler if it doesn’t exist). Update the key construction (client.ObjectKey{Name: ref.Name, Namespace: effectiveNamespace}) and keep the existing Get call and error wrapping; ensure the behavior is identical when a namespace is provided but uses c.SystemNamespace when ref.Namespace == "".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/v1/clusterobjectset_types_test.go`:
- Around line 154-209: The phase-related test cases in
clusterobjectset_types_test.go are using incomplete/zero-value specs so Create
validation rejects them before phase rules run; update each failing-case fixture
to include a valid CollisionProtection and otherwise-valid fields
(LifecycleState, Revision) and for the 21-phases case populate Phases with 21
entries that have valid non-empty names (e.g., "phase-1"...) so the "no more
than 20" rule triggers, for the 51-objects case populate a single phase with 51
valid ClusterObjectSetObject entries (fill required fields like
Group/Version/Kind/Name) so the "no more than 50 objects" rule triggers, and for
the name-related cases ensure CollisionProtection and other fields are valid and
that other validators (like object count) are within limits so the intended name
validation in the ClusterObjectSetSpec/ClusterObjectSetPhase validators runs.
In `@docs/api-reference/olmv1-api-reference.md`:
- Around line 382-384: The docs were partially updated to refer to
ClusterObjectSet but the API reference was not regenerated and is missing the
ClusterObjectSet resource and related types; regenerate the OpenAPI/swagger
reference/page so the documentation includes the ClusterObjectSet resource and
add explicit entries for ClusterObjectSet, ClusterObjectSetObject.ref and
ObjectSourceRef (search for symbols ClusterObjectSet,
ClusterObjectSetObject.ref, and ObjectSourceRef in the source/spec generator
output), ensuring their schemas, field descriptions, and links are emitted and
then update the sections that currently mention activeRevisions/ClusterExtension
to point to the newly generated ClusterObjectSet API docs.
In `@test/e2e/steps/steps.go`:
- Around line 840-875: ClusterObjectSetRefSecretsHaveOwnerRef does a single read
and can fail spuriously; change it to poll until ownerReferences appear (or
timeout) instead of asserting once: wrap the current logic that lists secrets
(listRefSecrets) and checks each Secret's OwnerReferences in a retry loop (e.g.,
using wait.PollImmediate or a context-aware retry) that re-lists secrets and
re-checks owner refs until success or deadline, honoring the provided ctx for
timeout/cancellation and still returning the same error messages when the final
attempt fails; keep getResource(cos) usage to obtain cosUID once and then poll
for the secrets' owner refs.
- Around line 1613-1649: resolveObjectRef currently always uses ref.Namespace
verbatim when calling k8sClient and in error messages, which breaks refs that
omit namespace; update resolveObjectRef to compute a resolvedNamespace =
ref.Namespace; if empty set it to the OLM system/test default namespace used
elsewhere in the suite, then use resolvedNamespace in the k8sClient(...) call
and in all error messages and Secret lookups (keep references to
ObjectSourceRef.Namespace but replace uses with resolvedNamespace) so ref-less
refs correctly resolve to the OLM system namespace.
---
Duplicate comments:
In `@internal/operator-controller/controllers/clusterobjectset_controller.go`:
- Around line 539-543: resolveObjectRef currently uses ref.Namespace verbatim
which fails when Namespace is omitted; change it to fallback to the operator/OLM
system namespace by using ref.Namespace if non-empty otherwise c.SystemNamespace
(add a SystemNamespace field to ClusterObjectSetReconciler if it doesn’t exist).
Update the key construction (client.ObjectKey{Name: ref.Name, Namespace:
effectiveNamespace}) and keep the existing Get call and error wrapping; ensure
the behavior is identical when a namespace is provided but uses
c.SystemNamespace when ref.Namespace == "".
In `@test/e2e/features/install.feature`:
- Line 528: The step "And ClusterObjectSet \"${NAME}-1\" ref Secrets exist in
\"olmv1-system\" namespace" hard-codes the controller namespace; change it to
use the dynamic controller/operator namespace variable used elsewhere (e.g.,
${CONTROLLER_NAMESPACE} or ${OPERATOR_NAMESPACE}) or the scenario context value
instead of "olmv1-system". Update the Gherkin step invocation for
ClusterObjectSet "${NAME}-1" ref Secrets exist to reference that variable so the
check runs against the actual controller namespace in each environment.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c46397ef-74cf-42a9-91c8-185443e43058
⛔ Files ignored due to path filters (5)
- go.sum is excluded by !**/*.sum
- openshift/tests-extension/vendor/github.com/operator-framework/operator-controller/api/v1/clusterextension_types.go is excluded by !**/vendor/**
- openshift/tests-extension/vendor/github.com/operator-framework/operator-controller/api/v1/clusterobjectset_types.go is excluded by !**/vendor/**
- openshift/tests-extension/vendor/github.com/operator-framework/operator-controller/api/v1/zz_generated.deepcopy.go is excluded by !**/vendor/**
- vendor/modules.txt is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (112)
- .claude/commands/list-e2e-steps.md
- AGENTS.md
- OWNERS_ALIASES
- api/v1/clusterextension_types.go
- api/v1/clusterextensionrevision_types_test.go
- api/v1/clusterobjectset_types.go
- api/v1/clusterobjectset_types_test.go
- api/v1/validation_test.go
- api/v1/zz_generated.deepcopy.go
- applyconfigurations/api/v1/clusterextensionrevisionstatus.go
- applyconfigurations/api/v1/clusterextensionstatus.go
- applyconfigurations/api/v1/clusterobjectset.go
- applyconfigurations/api/v1/clusterobjectsetobject.go
- applyconfigurations/api/v1/clusterobjectsetphase.go
- applyconfigurations/api/v1/clusterobjectsetspec.go
- applyconfigurations/api/v1/clusterobjectsetstatus.go
- applyconfigurations/api/v1/objectsourceref.go
- applyconfigurations/api/v1/revisionstatus.go
- applyconfigurations/utils.go
- cmd/operator-controller/main.go
- commitchecker.yaml
- config/samples/olm_v1_clusterextension.yaml
- docs/api-reference/crd-ref-docs-gen-config.yaml
- docs/api-reference/olmv1-api-reference.md
- docs/concepts/large-bundle-support.md
- go.mod
- hack/tools/update-crds.sh
- helm/olmv1/base/operator-controller/crd/experimental/olm.operatorframework.io_clusterextensions.yaml
- helm/olmv1/base/operator-controller/crd/experimental/olm.operatorframework.io_clusterobjectsets.yaml
- helm/olmv1/templates/crds/customresourcedefinition-clusterobjectsets.olm.operatorframework.io.yml
- helm/olmv1/templates/rbac/clusterrole-operator-controller-manager-role.yml
- internal/operator-controller/applier/boxcutter.go
- internal/operator-controller/applier/boxcutter_test.go
- internal/operator-controller/applier/externalize_test.go
- internal/operator-controller/applier/phase.go
- internal/operator-controller/applier/phase_test.go
- internal/operator-controller/applier/provider.go
- internal/operator-controller/applier/provider_test.go
- internal/operator-controller/applier/secretpacker.go
- internal/operator-controller/applier/secretpacker_test.go
- internal/operator-controller/authorization/rbac.go
- internal/operator-controller/authorization/rbac_test.go
- internal/operator-controller/config/config.go
- internal/operator-controller/config/config_test.go
- internal/operator-controller/controllers/boxcutter_reconcile_steps.go
- internal/operator-controller/controllers/clusterextension_reconcile_steps.go
- internal/operator-controller/controllers/clusterobjectset_controller.go
- internal/operator-controller/controllers/clusterobjectset_controller_internal_test.go
- internal/operator-controller/controllers/clusterobjectset_controller_test.go
- internal/operator-controller/controllers/common_controller.go
- internal/operator-controller/controllers/common_controller_test.go
- internal/operator-controller/controllers/resolve_ref_test.go
- internal/operator-controller/controllers/revision_engine_factory.go
- internal/operator-controller/labels/labels.go
- internal/operator-controller/rukpak/render/registryv1/generators/generators.go
- internal/operator-controller/rukpak/render/registryv1/generators/generators_test.go
- internal/shared/util/test/artifacts.go
- manifests/experimental-e2e.yaml
- manifests/experimental.yaml
- openshift/operator-controller/manifests-experimental.yaml
- requirements.txt
- test/e2e/README.md
- test/e2e/features/install.feature
- test/e2e/features/revision.feature
- test/e2e/features/status.feature
- test/e2e/features/update.feature
- test/e2e/steps/hooks.go
- test/e2e/steps/steps.go
- test/e2e/steps/testdata/olm-sa-boxcutter-no-create-rbac-template.yaml
- test/e2e/steps/testdata/olm-sa-boxcutter-rbac-template.yaml
- test/e2e/steps/testdata/pvc-probe-sa-boxcutter-rbac-template.yaml
- test/e2e/steps/testdata/rbac-template-for-all-extensions.yaml
- test/regression/convert/generate-manifests.go
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/00_clusterrole_argocd-operator-metrics-reader.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/01_clusterrole_argocd-operator.v0-1dhiybrldl1gyksid1dk2dqjsc72psdybc7iyvse5gpx.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/02_clusterrole_argocd-operator.v0.-3gkm3u8zfarktdile5wekso69zs9bgzb988mhjm0y6p.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/03_clusterrolebinding_argocd-operator.v0-1dhiybrldl1gyksid1dk2dqjsc72psdybc7iyvse5gpx.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/04_clusterrolebinding_argocd-operator.v0.-3gkm3u8zfarktdile5wekso69zs9bgzb988mhjm0y6p.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/05_configmap_argocd-operator-manager-config.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/06_customresourcedefinition_applications.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/07_customresourcedefinition_applicationsets.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/08_customresourcedefinition_appprojects.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/09_customresourcedefinition_argocdexports.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/10_customresourcedefinition_argocds.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/11_deployment_argocd-operator-controller-manager.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/12_service_argocd-operator-controller-manager-metrics-service.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/13_serviceaccount_argocd-operator-controller-manager.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/00_clusterrole_argocd-operator-metrics-reader.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/01_clusterrole_argocd-operator.v0-1dhiybrldl1gyksid1dk2dqjsc72psdybc7iyvse5gpx.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/02_clusterrole_argocd-operator.v0.-3gkm3u8zfarktdile5wekso69zs9bgzb988mhjm0y6p.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/03_clusterrolebinding_argocd-operator.v0-1dhiybrldl1gyksid1dk2dqjsc72psdybc7iyvse5gpx.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/04_clusterrolebinding_argocd-operator.v0.-3gkm3u8zfarktdile5wekso69zs9bgzb988mhjm0y6p.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/05_configmap_argocd-operator-manager-config.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/06_customresourcedefinition_applications.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/07_customresourcedefinition_applicationsets.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/08_customresourcedefinition_appprojects.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/09_customresourcedefinition_argocdexports.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/10_customresourcedefinition_argocds.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/11_deployment_argocd-operator-controller-manager.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/12_service_argocd-operator-controller-manager-metrics-service.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/13_serviceaccount_argocd-operator-controller-manager.yaml
- testdata/images/bundles/large-crd-operator/v1.0.0/manifests/largecrd.operatorframework.io_largecrdtests.yaml
- testdata/images/bundles/large-crd-operator/v1.0.0/manifests/largecrdoperator.clusterserviceversion.yaml
- testdata/images/bundles/large-crd-operator/v1.0.0/manifests/script.configmap.yaml
- testdata/images/bundles/large-crd-operator/v1.0.0/metadata/annotations.yaml
- testdata/images/bundles/own-namespace-operator/v1.0.0/manifests/ownnamespaceoperator.clusterserviceversion.yaml
- testdata/images/bundles/single-namespace-operator/v1.0.0/manifests/singlenamespaceoperator.clusterserviceversion.yaml
- testdata/images/bundles/test-operator/v1.0.0/manifests/testoperator.clusterserviceversion.yaml
- testdata/images/bundles/test-operator/v1.0.2/manifests/testoperator.clusterserviceversion.yaml
- testdata/images/bundles/test-operator/v1.0.3/manifests/testoperator.clusterserviceversion.yaml
- testdata/images/bundles/test-operator/v1.2.0/manifests/testoperator.clusterserviceversion.yaml
- testdata/images/catalogs/test-catalog/v1/configs/catalog.yaml
💤 Files with no reviewable changes (3)
- commitchecker.yaml
- applyconfigurations/api/v1/clusterextensionrevisionstatus.go
- api/v1/clusterextensionrevision_types_test.go
✅ Files skipped from review due to trivial changes (41)
- OWNERS_ALIASES
- test/e2e/README.md
- requirements.txt
- internal/operator-controller/controllers/clusterextension_reconcile_steps.go
- testdata/images/bundles/test-operator/v1.0.3/manifests/testoperator.clusterserviceversion.yaml
- testdata/images/bundles/own-namespace-operator/v1.0.0/manifests/ownnamespaceoperator.clusterserviceversion.yaml
- test/e2e/features/status.feature
- testdata/images/bundles/single-namespace-operator/v1.0.0/manifests/singlenamespaceoperator.clusterserviceversion.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/04_clusterrolebinding_argocd-operator.v0.-3gkm3u8zfarktdile5wekso69zs9bgzb988mhjm0y6p.yaml
- AGENTS.md
- testdata/images/bundles/large-crd-operator/v1.0.0/metadata/annotations.yaml
- applyconfigurations/api/v1/revisionstatus.go
- applyconfigurations/api/v1/clusterextensionstatus.go
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/05_configmap_argocd-operator-manager-config.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/00_clusterrole_argocd-operator-metrics-reader.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/13_serviceaccount_argocd-operator-controller-manager.yaml
- test/e2e/steps/hooks.go
- go.mod
- testdata/images/bundles/large-crd-operator/v1.0.0/manifests/script.configmap.yaml
- helm/olmv1/templates/crds/customresourcedefinition-clusterobjectsets.olm.operatorframework.io.yml
- test/e2e/steps/testdata/pvc-probe-sa-boxcutter-rbac-template.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/12_service_argocd-operator-controller-manager-metrics-service.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/04_clusterrolebinding_argocd-operator.v0.-3gkm3u8zfarktdile5wekso69zs9bgzb988mhjm0y6p.yaml
- api/v1/clusterextension_types.go
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/12_service_argocd-operator-controller-manager-metrics-service.yaml
- helm/olmv1/base/operator-controller/crd/experimental/olm.operatorframework.io_clusterextensions.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/00_clusterrole_argocd-operator-metrics-reader.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/03_clusterrolebinding_argocd-operator.v0-1dhiybrldl1gyksid1dk2dqjsc72psdybc7iyvse5gpx.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/02_clusterrole_argocd-operator.v0.-3gkm3u8zfarktdile5wekso69zs9bgzb988mhjm0y6p.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/09_customresourcedefinition_argocdexports.argoproj.io.yaml
- applyconfigurations/api/v1/clusterobjectsetstatus.go
- testdata/images/catalogs/test-catalog/v1/configs/catalog.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/08_customresourcedefinition_appprojects.argoproj.io.yaml
- internal/operator-controller/applier/secretpacker_test.go
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/09_customresourcedefinition_argocdexports.argoproj.io.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/05_configmap_argocd-operator-manager-config.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/13_serviceaccount_argocd-operator-controller-manager.yaml
- applyconfigurations/api/v1/objectsourceref.go
- internal/operator-controller/applier/externalize_test.go
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity/02_clusterrole_argocd-operator.v0.-3gkm3u8zfarktdile5wekso69zs9bgzb988mhjm0y6p.yaml
- test/regression/convert/testdata/expected-manifests/argocd-operator.v0.6.0/with-empty-affinity-subtype/03_clusterrolebinding_argocd-operator.v0-1dhiybrldl1gyksid1dk2dqjsc72psdybc7iyvse5gpx.yaml
🚧 Files skipped from review as they are similar to previous changes (21)
- internal/operator-controller/applier/provider_test.go
- test/e2e/steps/testdata/olm-sa-boxcutter-rbac-template.yaml
- internal/shared/util/test/artifacts.go
- internal/operator-controller/authorization/rbac.go
- test/e2e/steps/testdata/rbac-template-for-all-extensions.yaml
- internal/operator-controller/labels/labels.go
- applyconfigurations/utils.go
- docs/api-reference/crd-ref-docs-gen-config.yaml
- test/e2e/features/revision.feature
- internal/operator-controller/authorization/rbac_test.go
- internal/operator-controller/config/config_test.go
- helm/olmv1/templates/rbac/clusterrole-operator-controller-manager-role.yml
- api/v1/validation_test.go
- openshift/operator-controller/manifests-experimental.yaml
- internal/operator-controller/controllers/boxcutter_reconcile_steps.go
- testdata/images/bundles/large-crd-operator/v1.0.0/manifests/largecrdoperator.clusterserviceversion.yaml
- hack/tools/update-crds.sh
- .claude/commands/list-e2e-steps.md
- applyconfigurations/api/v1/clusterobjectsetphase.go
- internal/operator-controller/controllers/revision_engine_factory.go
- internal/operator-controller/applier/provider.go
| "phases must have no more than 20 phases": { | ||
| spec: ClusterObjectSetSpec{ | ||
| LifecycleState: ClusterObjectSetLifecycleStateActive, | ||
| Revision: 1, | ||
| Phases: make([]ClusterObjectSetPhase, 21), | ||
| }, | ||
| valid: false, | ||
| }, | ||
| "phases entries must have no more than 50 objects": { | ||
| spec: ClusterObjectSetSpec{ | ||
| LifecycleState: ClusterObjectSetLifecycleStateActive, | ||
| Revision: 1, | ||
| Phases: []ClusterObjectSetPhase{ | ||
| { | ||
| Name: "too-many-objects", | ||
| Objects: make([]ClusterObjectSetObject, 51), | ||
| }, | ||
| }, | ||
| }, | ||
| valid: false, | ||
| }, | ||
| "phases entry names cannot be empty": { | ||
| spec: ClusterObjectSetSpec{ | ||
| LifecycleState: ClusterObjectSetLifecycleStateActive, | ||
| Revision: 1, | ||
| Phases: []ClusterObjectSetPhase{ | ||
| { | ||
| Name: "", | ||
| }, | ||
| }, | ||
| }, | ||
| valid: false, | ||
| }, | ||
| "phases entry names cannot start with symbols": { | ||
| spec: ClusterObjectSetSpec{ | ||
| LifecycleState: ClusterObjectSetLifecycleStateActive, | ||
| Revision: 1, | ||
| Phases: []ClusterObjectSetPhase{ | ||
| { | ||
| Name: "-invalid", | ||
| }, | ||
| }, | ||
| }, | ||
| valid: false, | ||
| }, | ||
| "phases entry names cannot start with numeric characters": { | ||
| spec: ClusterObjectSetSpec{ | ||
| LifecycleState: ClusterObjectSetLifecycleStateActive, | ||
| Revision: 1, | ||
| Phases: []ClusterObjectSetPhase{ | ||
| { | ||
| Name: "1-invalid", | ||
| }, | ||
| }, | ||
| }, | ||
| valid: false, |
Make the phase-validation cases fail for the intended reason.
Lines 154-209 don't start from a valid spec: CollisionProtection is missing, the 21-phase case uses empty phase names, and the 51-object case uses zero-value objects. Create can be rejected before the 20-phase/50-object/name validators run, so these tests won't catch regressions in those rules.
🧪 Example of a tighter fixture setup
+ baseValidSpec := ClusterObjectSetSpec{
+ LifecycleState: ClusterObjectSetLifecycleStateActive,
+ Revision: 1,
+ CollisionProtection: CollisionProtectionPrevent,
+ }
+
"phases must have no more than 20 phases": {
- spec: ClusterObjectSetSpec{
- LifecycleState: ClusterObjectSetLifecycleStateActive,
- Revision: 1,
- Phases: make([]ClusterObjectSetPhase, 21),
- },
+ spec: func() ClusterObjectSetSpec {
+ spec := baseValidSpec
+ spec.Phases = make([]ClusterObjectSetPhase, 21)
+ for i := range spec.Phases {
+ spec.Phases[i] = ClusterObjectSetPhase{Name: fmt.Sprintf("phase-%d", i)}
+ }
+ return spec
+ }(),
valid: false,
},
"phases entries must have no more than 50 objects": {
- spec: ClusterObjectSetSpec{
- LifecycleState: ClusterObjectSetLifecycleStateActive,
- Revision: 1,
- Phases: []ClusterObjectSetPhase{
- {
- Name: "too-many-objects",
- Objects: make([]ClusterObjectSetObject, 51),
- },
- },
- },
+ spec: func() ClusterObjectSetSpec {
+ spec := baseValidSpec
+ objs := make([]ClusterObjectSetObject, 51)
+ for i := range objs {
+ objs[i] = ClusterObjectSetObject{
+ Ref: ObjectSourceRef{Name: "my-secret", Key: fmt.Sprintf("obj-%d", i)},
+ }
+ }
+ spec.Phases = []ClusterObjectSetPhase{{
+ Name: "too-many-objects",
+ Objects: objs,
+ }}
+ return spec
+ }(),
valid: false,
},📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "phases must have no more than 20 phases": { | |
| spec: ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| Phases: make([]ClusterObjectSetPhase, 21), | |
| }, | |
| valid: false, | |
| }, | |
| "phases entries must have no more than 50 objects": { | |
| spec: ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| Phases: []ClusterObjectSetPhase{ | |
| { | |
| Name: "too-many-objects", | |
| Objects: make([]ClusterObjectSetObject, 51), | |
| }, | |
| }, | |
| }, | |
| valid: false, | |
| }, | |
| "phases entry names cannot be empty": { | |
| spec: ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| Phases: []ClusterObjectSetPhase{ | |
| { | |
| Name: "", | |
| }, | |
| }, | |
| }, | |
| valid: false, | |
| }, | |
| "phases entry names cannot start with symbols": { | |
| spec: ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| Phases: []ClusterObjectSetPhase{ | |
| { | |
| Name: "-invalid", | |
| }, | |
| }, | |
| }, | |
| valid: false, | |
| }, | |
| "phases entry names cannot start with numeric characters": { | |
| spec: ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| Phases: []ClusterObjectSetPhase{ | |
| { | |
| Name: "1-invalid", | |
| }, | |
| }, | |
| }, | |
| valid: false, | |
| baseValidSpec := ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| CollisionProtection: CollisionProtectionPrevent, | |
| } | |
| "phases must have no more than 20 phases": { | |
| spec: func() ClusterObjectSetSpec { | |
| spec := baseValidSpec | |
| spec.Phases = make([]ClusterObjectSetPhase, 21) | |
| for i := range spec.Phases { | |
| spec.Phases[i] = ClusterObjectSetPhase{Name: fmt.Sprintf("phase-%d", i)} | |
| } | |
| return spec | |
| }(), | |
| valid: false, | |
| }, | |
| "phases entries must have no more than 50 objects": { | |
| spec: func() ClusterObjectSetSpec { | |
| spec := baseValidSpec | |
| objs := make([]ClusterObjectSetObject, 51) | |
| for i := range objs { | |
| objs[i] = ClusterObjectSetObject{ | |
| Ref: ObjectSourceRef{Name: "my-secret", Key: fmt.Sprintf("obj-%d", i)}, | |
| } | |
| } | |
| spec.Phases = []ClusterObjectSetPhase{{ | |
| Name: "too-many-objects", | |
| Objects: objs, | |
| }} | |
| return spec | |
| }(), | |
| valid: false, | |
| }, | |
| "phases entry names cannot be empty": { | |
| spec: ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| Phases: []ClusterObjectSetPhase{ | |
| { | |
| Name: "", | |
| }, | |
| }, | |
| }, | |
| valid: false, | |
| }, | |
| "phases entry names cannot start with symbols": { | |
| spec: ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| Phases: []ClusterObjectSetPhase{ | |
| { | |
| Name: "-invalid", | |
| }, | |
| }, | |
| }, | |
| valid: false, | |
| }, | |
| "phases entry names cannot start with numeric characters": { | |
| spec: ClusterObjectSetSpec{ | |
| LifecycleState: ClusterObjectSetLifecycleStateActive, | |
| Revision: 1, | |
| Phases: []ClusterObjectSetPhase{ | |
| { | |
| Name: "1-invalid", | |
| }, | |
| }, | |
| }, | |
| valid: false, |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/v1/clusterobjectset_types_test.go` around lines 154 - 209, The
phase-related test cases in clusterobjectset_types_test.go are using
incomplete/zero-value specs so Create validation rejects them before phase rules
run; update each failing-case fixture to include a valid CollisionProtection and
otherwise-valid fields (LifecycleState, Revision) and for the 21-phases case
populate Phases with 21 entries that have valid non-empty names (e.g.,
"phase-1"...) so the "no more than 20" rule triggers, for the 51-objects case
populate a single phase with 51 valid ClusterObjectSetObject entries (fill
required fields like Group/Version/Kind/Name) so the "no more than 50 objects"
rule triggers, and for the name-related cases ensure CollisionProtection and
other fields are valid and that other validators (like object count) are within
limits so the intended name validation in the
ClusterObjectSetSpec/ClusterObjectSetPhase validators runs.
| `conditions` _[Condition](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#condition-v1-meta) array_ | conditions represents the current state of the ClusterExtension.<br />The set of condition types which apply to all spec.source variations are Installed and Progressing.<br />The Installed condition represents whether the bundle has been installed for this ClusterExtension:<br /> - When Installed is True and the Reason is Succeeded, the bundle has been successfully installed.<br /> - When Installed is False and the Reason is Failed, the bundle has failed to install.<br />The Progressing condition represents whether or not the ClusterExtension is advancing towards a new state.<br />When Progressing is True and the Reason is Succeeded, the ClusterExtension is making progress towards a new state.<br />When Progressing is True and the Reason is Retrying, the ClusterExtension has encountered an error that could be resolved on subsequent reconciliation attempts.<br />When Progressing is False and the Reason is Blocked, the ClusterExtension has encountered an error that requires manual intervention for recovery.<br /><opcon:experimental:description><br />When Progressing is True and Reason is RollingOut, the ClusterExtension has one or more ClusterObjectSets in active roll out.<br /></opcon:experimental:description><br />When the ClusterExtension is sourced from a catalog, it surfaces deprecation conditions based on catalog metadata.<br />These are indications from a package owner to guide users away from a particular package, channel, or bundle:<br /> - BundleDeprecated is True if the installed bundle is marked deprecated, False if not deprecated, or Unknown if no bundle is installed yet or if catalog data is unavailable.<br /> - ChannelDeprecated is True if any requested channel is marked deprecated, False if not deprecated, or Unknown if catalog data is unavailable.<br /> - PackageDeprecated is True if the requested package is marked deprecated, False if not deprecated, or Unknown if catalog data is unavailable.<br /> - Deprecated is a rollup condition that is True when any deprecation exists, False when none exist, or Unknown when catalog data is unavailable. | | Optional: \{\} <br /> |
| `install` _[ClusterExtensionInstallStatus](#clusterextensioninstallstatus)_ | install is a representation of the current installation status for this ClusterExtension. | | Optional: \{\} <br /> |
| `activeRevisions` _[RevisionStatus](#revisionstatus) array_ | activeRevisions holds a list of currently active (non-archived) ClusterExtensionRevisions,<br />including both installed and rolling out revisions.<br /><opcon:experimental> | | Optional: \{\} <br /> |
| `activeRevisions` _[RevisionStatus](#revisionstatus) array_ | activeRevisions holds a list of currently active (non-archived) ClusterObjectSets,<br />including both installed and rolling out revisions.<br /><opcon:experimental> | | Optional: \{\} <br /> |
Regenerate this reference for the new ClusterObjectSet API.
These edits switch the narrative to ClusterObjectSet, but the page still never documents the ClusterObjectSet resource, ClusterObjectSetObject.ref, or ObjectSourceRef. The public reference stays incomplete.
Also applies to: 554-563
🧰 Tools
🪛 LanguageTool
[style] ~382-~382: Consider shortening this phrase to just ‘whether’, unless you mean ‘regardless of whether’.
Context: ... />The Progressing condition represents whether or not the ClusterExtension is advancing towar...
(WHETHER)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/api-reference/olmv1-api-reference.md` around lines 382 - 384, The docs
were partially updated to refer to ClusterObjectSet but the API reference was
not regenerated and is missing the ClusterObjectSet resource and related types;
regenerate the OpenAPI/swagger reference/page so the documentation includes the
ClusterObjectSet resource and add explicit entries for ClusterObjectSet,
ClusterObjectSetObject.ref and ObjectSourceRef (search for symbols
ClusterObjectSet, ClusterObjectSetObject.ref, and ObjectSourceRef in the
source/spec generator output), ensuring their schemas, field descriptions, and
links are emitted and then update the sections that currently mention
activeRevisions/ClusterExtension to point to the newly generated
ClusterObjectSet API docs.
```go
// ClusterObjectSetRefSecretsHaveOwnerRef verifies that all ref Secrets for the named
// ClusterObjectSet have an ownerReference pointing to the ClusterObjectSet with controller=true.
func ClusterObjectSetRefSecretsHaveOwnerRef(ctx context.Context, revisionName string) error {
	sc := scenarioCtx(ctx)
	revisionName = substituteScenarioVars(strings.TrimSpace(revisionName), sc)

	cosObj, err := getResource("clusterobjectset", revisionName, "")
	if err != nil {
		return fmt.Errorf("getting ClusterObjectSet %q: %w", revisionName, err)
	}
	cosUID := cosObj.GetUID()

	secrets, err := listRefSecrets(ctx, revisionName)
	if err != nil {
		return err
	}
	if len(secrets) == 0 {
		return fmt.Errorf("no ref Secrets found for revision %q", revisionName)
	}

	for _, s := range secrets {
		found := false
		for _, ref := range s.OwnerReferences {
			if ref.Kind == ocv1.ClusterObjectSetKind && ref.Name == revisionName && ref.UID == cosUID {
				if ref.Controller == nil || !*ref.Controller {
					return fmt.Errorf("secret %s/%s has ownerReference to ClusterObjectSet but controller is not true", s.Namespace, s.Name)
				}
				found = true
				break
			}
		}
		if !found {
			return fmt.Errorf("secret %s/%s does not have ownerReference to ClusterObjectSet %q (uid %s)", s.Namespace, s.Name, revisionName, cosUID)
		}
	}
	return nil
}
```
Poll for Secret ownerReferences instead of asserting once.
This helper says it “polls with timeout”, but it does a single read and returns immediately. Since ref Secret ownerReferences are attached after the ClusterObjectSet exists, this step can fail spuriously right after rollout.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/e2e/steps/steps.go` around lines 840 - 875,
ClusterObjectSetRefSecretsHaveOwnerRef does a single read and can fail
spuriously; change it to poll until ownerReferences appear (or timeout) instead
of asserting once: wrap the current logic that lists secrets (listRefSecrets)
and checks each Secret's OwnerReferences in a retry loop (e.g., using
wait.PollImmediate or a context-aware retry) that re-lists secrets and re-checks
owner refs until success or deadline, honoring the provided ctx for
timeout/cancellation and still returning the same error messages when the final
attempt fails; keep getResource(cos) usage to obtain cosUID once and then poll
for the secrets' owner refs.
```go
// resolveObjectRef fetches an object from a Secret ref using kubectl.
func resolveObjectRef(ref ocv1.ObjectSourceRef) (*unstructured.Unstructured, error) {
	out, err := k8sClient("get", "secret", ref.Name, "-n", ref.Namespace, "-o", "json")
	if err != nil {
		return nil, fmt.Errorf("getting Secret %s/%s: %w", ref.Namespace, ref.Name, err)
	}
	var secret corev1.Secret
	if err := json.Unmarshal([]byte(out), &secret); err != nil {
		return nil, fmt.Errorf("unmarshaling Secret %s/%s: %w", ref.Namespace, ref.Name, err)
	}
	data, ok := secret.Data[ref.Key]
	if !ok {
		return nil, fmt.Errorf("key %q not found in Secret %s/%s", ref.Key, ref.Namespace, ref.Name)
	}
	// Auto-detect gzip compression (magic bytes 0x1f 0x8b)
	if len(data) >= 2 && data[0] == 0x1f && data[1] == 0x8b {
		reader, err := gzip.NewReader(bytes.NewReader(data))
		if err != nil {
			return nil, fmt.Errorf("creating gzip reader for key %q in Secret %s/%s: %w", ref.Key, ref.Namespace, ref.Name, err)
		}
		defer reader.Close()
		const maxDecompressedSize = 10 * 1024 * 1024 // 10 MiB
		limited := io.LimitReader(reader, maxDecompressedSize+1)
		decompressed, err := io.ReadAll(limited)
		if err != nil {
			return nil, fmt.Errorf("decompressing key %q in Secret %s/%s: %w", ref.Key, ref.Namespace, ref.Name, err)
		}
		if len(decompressed) > maxDecompressedSize {
			return nil, fmt.Errorf("decompressed data for key %q in Secret %s/%s exceeds maximum size (%d bytes)", ref.Key, ref.Namespace, ref.Name, maxDecompressedSize)
		}
		data = decompressed
	}
	obj := &unstructured.Unstructured{}
	if err := json.Unmarshal(data, &obj.Object); err != nil {
		return nil, fmt.Errorf("unmarshaling object from key %q in Secret %s/%s: %w", ref.Key, ref.Namespace, ref.Name, err)
	}
	return obj, nil
}
```
Honor the default namespace for ref.
ObjectSourceRef.namespace is optional, but this helper always resolves Secrets with -n <ref.Namespace>. A valid ref that omits namespace will resolve against "" here instead of the OLM system namespace and break the new ref-backed paths.
Suggested fix
func resolveObjectRef(ref ocv1.ObjectSourceRef) (*unstructured.Unstructured, error) {
- out, err := k8sClient("get", "secret", ref.Name, "-n", ref.Namespace, "-o", "json")
+ namespace := ref.Namespace
+ if namespace == "" {
+ namespace = olmNamespace
+ }
+ out, err := k8sClient("get", "secret", ref.Name, "-n", namespace, "-o", "json")
if err != nil {
- return nil, fmt.Errorf("getting Secret %s/%s: %w", ref.Namespace, ref.Name, err)
+ return nil, fmt.Errorf("getting Secret %s/%s: %w", namespace, ref.Name, err)
}
var secret corev1.Secret
if err := json.Unmarshal([]byte(out), &secret); err != nil {
- return nil, fmt.Errorf("unmarshaling Secret %s/%s: %w", ref.Namespace, ref.Name, err)
+ return nil, fmt.Errorf("unmarshaling Secret %s/%s: %w", namespace, ref.Name, err)
}
data, ok := secret.Data[ref.Key]
if !ok {
- return nil, fmt.Errorf("key %q not found in Secret %s/%s", ref.Key, ref.Namespace, ref.Name)
+ return nil, fmt.Errorf("key %q not found in Secret %s/%s", ref.Key, namespace, ref.Name)
}
// Auto-detect gzip compression (magic bytes 0x1f 0x8b)
if len(data) >= 2 && data[0] == 0x1f && data[1] == 0x8b {
reader, err := gzip.NewReader(bytes.NewReader(data))
if err != nil {
- return nil, fmt.Errorf("creating gzip reader for key %q in Secret %s/%s: %w", ref.Key, ref.Namespace, ref.Name, err)
+ return nil, fmt.Errorf("creating gzip reader for key %q in Secret %s/%s: %w", ref.Key, namespace, ref.Name, err)
}
defer reader.Close()
const maxDecompressedSize = 10 * 1024 * 1024 // 10 MiB
limited := io.LimitReader(reader, maxDecompressedSize+1)
decompressed, err := io.ReadAll(limited)
if err != nil {
- return nil, fmt.Errorf("decompressing key %q in Secret %s/%s: %w", ref.Key, ref.Namespace, ref.Name, err)
+ return nil, fmt.Errorf("decompressing key %q in Secret %s/%s: %w", ref.Key, namespace, ref.Name, err)
}
if len(decompressed) > maxDecompressedSize {
- return nil, fmt.Errorf("decompressed data for key %q in Secret %s/%s exceeds maximum size (%d bytes)", ref.Key, ref.Namespace, ref.Name, maxDecompressedSize)
+ return nil, fmt.Errorf("decompressed data for key %q in Secret %s/%s exceeds maximum size (%d bytes)", ref.Key, namespace, ref.Name, maxDecompressedSize)
}
data = decompressed
}
obj := &unstructured.Unstructured{}
if err := json.Unmarshal(data, &obj.Object); err != nil {
- return nil, fmt.Errorf("unmarshaling object from key %q in Secret %s/%s: %w", ref.Key, ref.Namespace, ref.Name, err)
+ return nil, fmt.Errorf("unmarshaling object from key %q in Secret %s/%s: %w", ref.Key, namespace, ref.Name, err)
}
return obj, nil
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/e2e/steps/steps.go` around lines 1613 - 1649, resolveObjectRef currently
always uses ref.Namespace verbatim when calling k8sClient and in error messages,
which breaks refs that omit namespace; update resolveObjectRef to compute a
resolvedNamespace = ref.Namespace; if empty set it to the OLM system/test
default namespace used elsewhere in the suite, then use resolvedNamespace in the
k8sClient(...) call and in all error messages and Secret lookups (keep
references to ObjectSourceRef.Namespace but replace uses with resolvedNamespace)
so ref-less refs correctly resolve to the OLM system namespace.
Updated with the extra commits from #687

/lgtm
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@openshift/tests-extension/test/olmv1-preflight.go`:
- Around line 188-190: The Role binding currently restricts finalizer updates to
a single resource name (ResourceNames: []string{ceName}) but Boxcutter
finalizers are revisioned (e.g., "<ceName>-1"), causing the permission to miss
the actual object; update the Role definition that sets Resources:
[]string{"clusterobjectsets/finalizers"} / ResourceNames to either remove the
ResourceNames restriction so it applies to all finalizers, or populate
ResourceNames with the revisioned name(s) dynamically (e.g., use the revisioned
ceName format), ensuring the code that constructs ResourceNames references the
ceName revisioning logic rather than the base ceName.
📒 Files selected for processing (7)
- openshift/tests-extension/.openshift-tests-extension/openshift_payload_olmv1.json
- openshift/tests-extension/pkg/bindata/qe/bindata.go
- openshift/tests-extension/test/olmv1-incompatible.go
- openshift/tests-extension/test/olmv1-preflight.go
- openshift/tests-extension/test/qe/specs/olmv1_ce.go
- openshift/tests-extension/test/qe/testdata/olm/sa-nginx-insufficient-operand-rbac-boxcutter.yaml
- openshift/tests-extension/test/qe/testdata/olm/sa-nginx-limited-boxcutter.yaml

✅ Files skipped from review due to trivial changes (2)
- openshift/tests-extension/test/qe/testdata/olm/sa-nginx-insufficient-operand-rbac-boxcutter.yaml
- openshift/tests-extension/.openshift-tests-extension/openshift_payload_olmv1.json
```go
Resources:     []string{"clusterobjectsets/finalizers"},
Verbs:         []string{"update"},
ResourceNames: []string{ceName},
```
Boxcutter finalizer ResourceNames likely targets the wrong object name format.
Line 190 still scopes to ceName, but boxcutter finalizer permissions are revisioned (<ceName>-1). This can make the base role invalid in all boxcutter scenarios, causing failures for the wrong reason.
💡 Proposed fix
{
APIGroups: []string{"olm.operatorframework.io"},
Resources: []string{"clusterobjectsets/finalizers"},
Verbs: []string{"update"},
- ResourceNames: []string{ceName},
+ ResourceNames: []string{fmt.Sprintf("%s-1", ceName)},
},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@openshift/tests-extension/test/olmv1-preflight.go` around lines 188 - 190,
The Role binding currently restricts finalizer updates to a single resource name
(ResourceNames: []string{ceName}) but Boxcutter finalizers are revisioned (e.g.,
"<ceName>-1"), causing the permission to miss the actual object; update the Role
definition that sets Resources: []string{"clusterobjectsets/finalizers"} /
ResourceNames to either remove the ResourceNames restriction so it applies to
all finalizers, or populate ResourceNames with the revisioned name(s)
dynamically (e.g., use the revisioned ceName format), ensuring the code that
constructs ResourceNames references the ceName revisioning logic rather than the
base ceName.
/label qe-approved
@bandrade: This PR has been marked as verified.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Merged 0d486fc into openshift:main
@tmshort: Jira Issue OCPBUGS-78211: Some pull requests linked via external trackers have merged; the following pull request, linked via external tracker, has not merged. All associated pull requests must be merged or unlinked from the Jira bug in order for it to move to the next state. Once unlinked, request a bug refresh with `/jira refresh`. Jira Issue OCPBUGS-78211 has not been moved to the MODIFIED state. This PR is marked as verified. If the remaining PRs listed above are marked as verified before merging, the issue will automatically be moved to VERIFIED after all of the changes from the PRs are available in an accepted nightly payload.

Jira Issue OCPBUGS-78455: Some pull requests linked via external trackers have merged; the following pull request, linked via external tracker, has not merged. All associated pull requests must be merged or unlinked from the Jira bug in order for it to move to the next state. Once unlinked, request a bug refresh with `/jira refresh`. Jira Issue OCPBUGS-78455 has not been moved to the MODIFIED state. This PR is marked as verified. If the remaining PRs listed above are marked as verified before merging, the issue will automatically be moved to VERIFIED after all of the changes from the PRs are available in an accepted nightly payload.
/jira refresh

@kuiwang02: Jira Issue OCPBUGS-78211 is in an unrecognized state (MODIFIED) and will not be moved to the MODIFIED state. Jira Issue Verification Checks: Jira Issue OCPBUGS-78455. Jira Issue OCPBUGS-78455 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload.
/jira refresh

@kuiwang02: Jira Issue Verification Checks: Jira Issue OCPBUGS-78211. Jira Issue OCPBUGS-78211 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. Jira Issue OCPBUGS-78455 is in an unrecognized state (MODIFIED) and will not be moved to the MODIFIED state.
/retitle OCPBUGS-76381,OCPBUGS-76383,OCPBUGS-78211,OCPBUGS-78455: Synchronize From Upstream Repositories

@tmshort: Jira Issue Verification Checks: Jira Issue OCPBUGS-76381. Jira Issue OCPBUGS-76381 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. Jira Issue Verification Checks: Jira Issue OCPBUGS-76383. Jira Issue OCPBUGS-76383 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. Jira Issue OCPBUGS-78211 is in an unrecognized state (ON_QA) and will not be moved to the MODIFIED state. Jira Issue OCPBUGS-78455 is in an unrecognized state (ON_QA) and will not be moved to the MODIFIED state.
The downstream repository has been updated with the following upstream commits:

The `vendor/` directory has been updated and the following commits were carried: @catalogd-update

This pull request is expected to merge without any human intervention. If tests are failing here, changes must land upstream to fix any issues so that future downstreaming efforts succeed.
/label tide/merge-method-merge