This directory contains Kubernetes admission policies for enforcing supply chain security of NVSentinel container images in your cluster. These policies ensure that only verified NVSentinel images with valid SLSA Build Provenance attestations can be deployed.
- **Issue:** NVSentinel attestations are created by the GitHub Actions `actions/attest-build-provenance@v3` action, which generates Sigstore bundle format v0.3
- **Limitation:** Sigstore Policy Controller 0.10.5 (the latest release at the time of writing) cannot read bundle format v0.3; it only supports v0.1 and v0.2
- **Impact:** While attestations exist and are valid (verifiable manually with the cosign CLI), the Policy Controller cannot validate them in-cluster
- **Current Configuration:** The policy runs in `mode: warn`, which logs validation warnings but allows all images to deploy
- **Future Plan:** The policy will be switched to `mode: enforce` once Policy Controller adds support for bundle format v0.3
To track Policy Controller v0.3 support, see: sigstore/policy-controller
**Important:** These policies are designed to be used only in the `nvsentinel` namespace and only apply to official NVSentinel images from `ghcr.io/nvidia/nvsentinel/**`.
- ✅ **Verified:** Images matching `ghcr.io/nvidia/nvsentinel/**` with valid attestations
- ✅ **Allowed:** All other images (third-party dependencies, sidecar containers, etc.)
- ✅ **Allowed:** Development images (e.g., `localhost:5001/*`)
- ❌ **Blocked:** NVSentinel images without valid SLSA attestations
This ensures the policy doesn't interfere with other workloads in the namespace while still protecting NVSentinel deployments.
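Concretely, this scoping comes from the policy's image glob: only images matching the glob are evaluated, and everything else falls through to the controller's `no-match-policy` setting. A sketch of the relevant fragment:

```yaml
spec:
  images:
    # Only NVSentinel images are matched by this policy;
    # all other images are governed by no-match-policy
    - glob: "ghcr.io/nvidia/nvsentinel/**"
```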
These policies require the Sigstore Policy Controller to be installed in your cluster. The Policy Controller version is managed centrally in `.versions.yaml` at the repository root.
```shell
# Install Policy Controller using the latest release
kubectl apply -f https://github.com/sigstore/policy-controller/releases/latest/download/policy-controller.yaml

# Verify installation
kubectl -n cosign-system get pods
```

Alternatively, you can install using Helm (recommended for production):
```shell
helm repo add sigstore https://sigstore.github.io/helm-charts
helm repo update

# Get the version from .versions.yaml
POLICY_CONTROLLER_VERSION=$(yq eval '.cluster.policy_controller' .versions.yaml)

# Install specific version
helm install policy-controller sigstore/policy-controller \
  -n cosign-system \
  --create-namespace \
  --version "${POLICY_CONTROLLER_VERSION}"
```

By default, Policy Controller operates in opt-in mode. Label the `nvsentinel` namespace to enforce policies:
```shell
# Enable policy enforcement for the nvsentinel namespace
kubectl label namespace nvsentinel policy.sigstore.dev/include=true
```

**Important Configuration:**

To ensure only NVSentinel images are subject to verification (allowing third-party images like databases, monitoring tools, etc.), configure the `no-match-policy`:
```shell
kubectl create configmap config-policy-controller \
  -n cosign-system \
  --from-literal=no-match-policy=allow \
  --dry-run=client -o yaml | kubectl apply -f -
```

This allows images that don't match any ClusterImagePolicy pattern to run without verification.
Two ClusterImagePolicy resources are provided to enforce different levels of image verification:

The `must-have-slsa.yaml` file verifies that NVSentinel container images have valid SLSA Build Provenance attestations:
- **Scope:** All pods using `ghcr.io/nvidia/nvsentinel/**` images
- **Verification:**
  - Checks for SLSA v1 provenance attestations
  - Validates attestations are signed by the official GitHub Actions workflow
  - Ensures images are built from the official NVIDIA/NVSentinel repository
  - Uses keyless signing with Sigstore (GitHub Actions OIDC tokens via Fulcio)
  - Verifies signatures in the Rekor transparency log
- **Policy Language:** Uses CUE for attestation validation
- **Current Mode:** Running in `mode: warn` (see "Current Status" section above)
Key Features:
- **Keyless Verification:** Uses GitHub Actions OIDC identity without managing keys
- **Transparency:** All signatures recorded in the Rekor public transparency log
- **SLSA Provenance:** Validates build metadata including repository, workflow, and build parameters
- **Bundle Format:** Attestations stored in Sigstore bundle format v0.3 and pushed to the registry alongside the image
- **Regex Matching:** Supports both branch refs (`refs/heads/*`) and tag refs (`refs/tags/*`)
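A ClusterImagePolicy implementing these checks might look roughly like the following. This is a sketch, not the exact contents of `must-have-slsa.yaml`: the Fulcio/Rekor URLs are the public Sigstore defaults, and the identity regexp is taken from the manual verification command shown later in this document.

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: verify-nvsentinel-image-attestation
spec:
  mode: warn
  images:
    - glob: "ghcr.io/nvidia/nvsentinel/**"
  authorities:
    - keyless:
        url: https://fulcio.sigstore.dev
        identities:
          # Only the official publish workflow, from branch or tag refs
          - issuer: https://token.actions.githubusercontent.com
            subjectRegExp: "^https://github\\.com/NVIDIA/NVSentinel/\\.github/workflows/publish\\.yml@refs/(heads|tags)/"
      ctlog:
        url: https://rekor.sigstore.dev
      attestations:
        - name: must-have-slsa
          predicateType: https://slsa.dev/provenance/v1
          policy:
            type: cue
            data: |
              predicateType: "https://slsa.dev/provenance/v1"
```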
The `must-have-sbom.yaml` file verifies that NVSentinel container images have both:
- SLSA Build Provenance attestations (as above)
- SBOM (Software Bill of Materials) attestations in CycloneDX format
This policy provides additional supply chain security by ensuring all image components are documented.
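As a sketch, the SBOM requirement could be expressed as an additional entry in the policy's attestations list; the exact `predicateType` URL is an assumption and depends on how the CycloneDX attestation was produced.

```yaml
attestations:
  # Hypothetical additional attestation requirement for SBOMs;
  # the predicateType must match the one used at attestation time
  - name: must-have-sbom
    predicateType: https://cyclonedx.org/bom
    policy:
      type: cue
      data: |
        predicateType: "https://cyclonedx.org/bom"
```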
Multi-platform Support:
- Both policies support multi-platform images (linux/amd64, linux/arm64)
- Each platform has its own attestations
- Policy Controller automatically verifies the platform-specific digest matching the node architecture
Apply one of the policies to your cluster:
```shell
# Apply SLSA-only policy
kubectl apply -f must-have-slsa.yaml

# OR apply SLSA + SBOM policy (more restrictive)
kubectl apply -f must-have-sbom.yaml
```

Verify the policy is active:
```shell
kubectl get clusterimagepolicy
kubectl describe clusterimagepolicy verify-nvsentinel-image-attestation
```

To verify any NVSentinel image manually using the Cosign CLI:
```shell
export IMAGE="ghcr.io/nvidia/nvsentinel/fault-quarantine"
export DIGEST="sha256:850e8fd35bc6b9436fc9441c055ba0f7e656fb438320e933b086a34d35d09fd6"

cosign verify-attestation "${IMAGE}@${DIGEST}" \
  --type https://slsa.dev/provenance/v1 \
  --certificate-identity-regexp '^https://github\.com/NVIDIA/NVSentinel/\.github/workflows/publish\.yml@refs/(heads|tags)/' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  | jq -r '.payload' | base64 -d | jq .
```

SLSA attestations generated by the GitHub Actions `actions/attest-build-provenance@v3` action are stored with the image in the OCI registry as Sigstore bundle format v0.3. The command above verifies:
- ✅ Issuer: https://token.actions.githubusercontent.com
- ✅ Subject: The GitHub Actions workflow identity (NVIDIA/NVSentinel/.github/workflows/publish.yml)
- ✅ Transparency log: Uses Rekor for verification
- ✅ SLSA predicate: Validates the attestation content matches SLSA Provenance v1 format
- ✅ Build metadata: Verifies the build came from NVIDIA/NVSentinel repository
- ✅ Bundle format: Attestations stored in Sigstore bundle format v0.3 (readable by cosign CLI)
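The trailing `jq -r '.payload' | base64 -d | jq .` stage of the cosign pipeline above simply base64-decodes the envelope's `payload` field so the in-toto statement becomes readable JSON. A minimal standalone illustration of that decoding step, using a dummy payload (no registry or cosign needed):

```shell
# The in-toto statement travels base64-encoded inside the envelope's
# "payload" field; recovering it is a plain base64 round-trip.
payload=$(printf '%s' '{"predicateType":"https://slsa.dev/provenance/v1"}' | base64)
printf '%s' "$payload" | base64 -d
# prints {"predicateType":"https://slsa.dev/provenance/v1"}
```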
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nvsentinel-valid
  namespace: nvsentinel  # Must be labeled with policy.sigstore.dev/include=true
spec:
  containers:
    - name: fault-quarantine
      image: ghcr.io/nvidia/nvsentinel/fault-quarantine@sha256:850e8fd35bc6b9436fc9441c055ba0f7e656fb438320e933b086a34d35d09fd6
```

This should be allowed if the image has valid attestations signed by the official workflow.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nvsentinel-invalid
  namespace: nvsentinel
spec:
  containers:
    - name: fault-quarantine
      image: ghcr.io/nvidia/nvsentinel/fault-quarantine:latest
```

In enforce mode, this would be blocked with an error message about missing or invalid attestations; in the current warn mode it deploys, but a warning is logged.
The policy currently runs in warn mode due to bundle format v0.3 incompatibility (see "Current Status" section above). Images that fail verification are still deployed, but warnings are logged:
```yaml
spec:
  mode: warn  # Current configuration
  images:
    - glob: "ghcr.io/nvidia/nvsentinel/**"
```

In warn mode:
- Images that fail verification are still deployed
- Warning events are logged and visible in pod events
- Provides visibility into which images would be validated once v0.3 support is added
- Useful for monitoring without blocking deployments
Once Policy Controller adds support for bundle format v0.3, the policy will be switched to enforce mode to block any images that fail verification:
```yaml
spec:
  # mode: enforce  # Will be enabled when v0.3 support is added
  images:
    - glob: "ghcr.io/nvidia/nvsentinel/**"
  # ... rest of the policy
```

In enforce mode:
- Images without valid attestations are blocked
- Deployment attempts fail with detailed error messages
- Recommended for production environments once compatibility is resolved
Configure what happens when an image doesn't match any policy using the `config-policy-controller` ConfigMap:
```shell
kubectl create configmap config-policy-controller -n cosign-system \
  --from-literal=no-match-policy=deny
```

Options:

- `deny` (recommended): Block images that don't match any policy
- `warn`: Allow but log warnings for unmatched images
- `allow`: Allow all unmatched images (not recommended for production)
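The same setting can be kept in source control as a declarative manifest instead of the imperative command above (a sketch; the `data` key mirrors the `--from-literal` flag):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-policy-controller
  namespace: cosign-system
data:
  # Images matching no ClusterImagePolicy are rejected
  no-match-policy: deny
```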
Enable verbose logging in Policy Controller:
```shell
kubectl set env deployment/policy-controller -n cosign-system POLICY_CONTROLLER_LOG_LEVEL=debug
```

View detailed logs:

```shell
kubectl logs -n cosign-system deployment/policy-controller -f
```

When you need to test without blocking images:
- **Switch the policy to warn mode temporarily:** Edit the ClusterImagePolicy and add `mode: warn`
- **Remove the namespace label to disable enforcement:** `kubectl label namespace nvsentinel policy.sigstore.dev/include-`
- **Use a separate test namespace:** Create a namespace without the enforcement label
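For the third option, a namespace manifest without the enforcement label might look like this (the name `dev-sandbox` is a hypothetical example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  # No policy.sigstore.dev/include=true label, so with the default
  # opt-in mode the Policy Controller ignores workloads here
  name: dev-sandbox
```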