Sub-Module 2.5: The Immutable Fortress — Zero-CVE with ACS Enforcement

Sub-Module Overview

Duration: ~25 minutes
Prerequisites: Completion of Sub-Module 2.4 (SELinux Policy CI/CD); ACS installed per Appendix B
Target Audience: Application Developers, DevSecOps Engineers, CI/CD Architects
Learning Objectives:

  • Understand how unused OS packages in legacy base images create an unsustainable vulnerability treadmill

  • Deploy a deliberately vulnerable Python REST API to experience the anti-pattern firsthand

  • Rewrite the application using a Hummingbird distroless multi-stage build with strict JSON exec form

  • Inspect cryptographic SBOMs and SLSA provenance attached to Hummingbird artifacts

  • Author declarative ACS policies that enforce zero fixable CVEs and mandatory SBOM presence

  • Prove the guardrails by observing ACS admission control block a non-compliant deployment

Introduction

Every organization running containers on Kubernetes faces the same operational burden: a constant stream of CVE advisories against base image packages that the application never calls. Patching apt, systemd, or glibc inside a Python runtime image is not security — it is technical debt theater. You are patching code your application does not use, on a schedule dictated by upstream maintainers you do not control.

This sub-module introduces a fundamentally different posture. Instead of reactively patching an ever-growing attack surface, you proactively eradicate the attack surface by switching to Project Hummingbird distroless images that ship only what the application actually needs. You then lock that posture in place using Red Hat Advanced Cluster Security (ACS) admission controllers so that no legacy, bloated image can re-enter your cluster.

What You Will Build

Phase 1: Deploy legacy Python API (python:3.11-buster)
  -> ACS scan reveals dozens of CVEs from unused OS packages

Phase 2: Rebuild with Hummingbird distroless multi-stage
  -> ACS scan reveals ZERO fixable CVEs

Phase 3: Author ACS enforcement policies
  -> Cluster rejects any image with fixable CVEs > 0
  -> Cluster rejects any image without a verifiable SBOM

Phase 4: Attempt to redeploy the legacy image
  -> ACS admission controller BLOCKS the deployment

Prerequisites

  • OpenShift 4.21 cluster with ACS (RHACS) operator installed and Central accessible (see Appendix B: ACS Setup)

  • oc CLI authenticated to the cluster (provided by the Showroom terminal)

  • skopeo and curl available (provided by the Showroom terminal)

  • roxctl CLI (installed to ~/bin in Exercise 1 Step 3 below)

  • syft and cosign (installed to ~/bin in Exercise 5 below)

Exercise 1: Prepare the Environment

Step 1: Create the Lab Namespace

oc new-project hummingbird-acs-lab
Expected output:
Now using project "hummingbird-acs-lab" on server "https://...".

Step 2: Set Registry Variables

Look up the on-cluster Quay registry route and configure your registry credentials. These variables are used throughout this module for building, pushing, and deploying images:

export WORKSHOP_REGISTRY="{quay_hostname}"
export REGISTRY_USER="{quay_user}"
export REGISTRY_PASSWORD="{quay_password}"

export REGISTRY_AUTH_FILE=$HOME/.config/containers/auth.json
mkdir -p $(dirname $REGISTRY_AUTH_FILE)

echo "WORKSHOP_REGISTRY=${WORKSHOP_REGISTRY}"
echo "REGISTRY_USER=${REGISTRY_USER}"

skopeo login -u "${REGISTRY_USER}" -p "${REGISTRY_PASSWORD}" "${WORKSHOP_REGISTRY}" --tls-verify=false
Expected output:
WORKSHOP_REGISTRY=<quay-hostname>
REGISTRY_USER=<quay-user>
Login Succeeded!

Step 3: Install the roxctl CLI

The roxctl CLI is used to interact with ACS Central from the command line. Download it directly from your ACS installation:

export ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')
ACS_PASSWORD=$(oc get secret central-htpasswd -n stackrox -o jsonpath='{.data.password}' | base64 -d)

curl -sk -u "admin:${ACS_PASSWORD}" \
  "https://${ACS_ROUTE}/api/cli/download/roxctl-linux" -o /tmp/roxctl

chmod +x /tmp/roxctl
mkdir -p $HOME/bin
mv /tmp/roxctl $HOME/bin/roxctl
export PATH="$HOME/bin:$PATH"
roxctl version
Expected output:
roxctl: 4.x.x

The download endpoint requires authentication. We use the ACS admin password stored in the central-htpasswd secret. The binary is placed in ~/bin, which is added to your PATH.

Step 4: Configure roxctl CLI

Set the ACS Central endpoint and API token so that roxctl can communicate with your ACS installation:

export ROX_ENDPOINT=$(oc get route central -n stackrox -o jsonpath='{.spec.host}'):443
ACS_PASSWORD=$(oc get secret central-htpasswd -n stackrox -o jsonpath='{.data.password}' | base64 -d)

export ROX_API_TOKEN=$(curl -sk -u "admin:${ACS_PASSWORD}" \
  "https://${ROX_ENDPOINT}/v1/apitokens/generate" \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"name":"workshop-token-'$(date +%s)'","role":"Admin"}' \
  | jq -r '.token')

echo "ROX_ENDPOINT=${ROX_ENDPOINT}"
echo "ROX_API_TOKEN=${ROX_API_TOKEN:0:20}..."

Verify connectivity:

roxctl central whoami --insecure-skip-tls-verify
Expected output:
User:
  auth-provider: ...
  Roles: Admin
  Access: ...

If you open a new terminal or your session disconnects, all exported variables (WORKSHOP_REGISTRY, REGISTRY_USER, REGISTRY_PASSWORD, REGISTRY_AUTH_FILE, ROX_ENDPOINT, ROX_API_TOKEN) are lost. Re-run Step 2 and Step 4 before continuing, and re-export your PATH: export PATH="$HOME/bin:$PATH". Quick diagnostics:

  • roxctl complains about localhost:8443 -> ROX_ENDPOINT is not set; re-run Step 4

  • roxctl: command not found -> re-export the PATH

  • skopeo fails with permission denied -> re-export REGISTRY_AUTH_FILE

Step 5: Download Helper Scripts

Several exercises use helper scripts to parse JSON output from roxctl and the ACS REST API. Create them now so you can reference them with simple one-line commands later.

Script 1 — Scan summary (parses roxctl image scan JSON output):

mkdir -p ~/acs-fortress-lab && cd ~/acs-fortress-lab

cat > acs-scan-summary.py << 'PYEOF'
#!/usr/bin/env python3
import json, sys
data = json.load(sys.stdin)
summary = data.get("result", {}).get("summary", {})
vulns = data.get("result", {}).get("vulnerabilities", [])
fixable = sum(1 for v in vulns if v.get("componentFixedVersion"))
if "--brief" in sys.argv:
    print(f"  Components: {summary.get('TOTAL-COMPONENTS', 'N/A')}"
          f"  |  Vulnerabilities: {summary.get('TOTAL-VULNERABILITIES', 0)}"
          f"  |  Fixable: {fixable}")
else:
    print(f"Total components:       {summary.get('TOTAL-COMPONENTS', 'N/A')}")
    print(f"Total vulnerabilities:  {summary.get('TOTAL-VULNERABILITIES', 0)}")
    print(f"Critical:               {summary.get('CRITICAL', 0)}")
    print(f"Important:              {summary.get('IMPORTANT', 0)}")
    print(f"Moderate:               {summary.get('MODERATE', 0)}")
    print(f"Low:                    {summary.get('LOW', 0)}")
    print(f"Fixable vulnerabilities: {fixable}")
PYEOF

Script 2 — Admission controller check (reads ACS /v1/clusters API):

cat > acs-check-admission.py << 'PYEOF'
#!/usr/bin/env python3
import json, sys
data = json.load(sys.stdin)
cluster = data["clusters"][0]
ac = cluster.get("dynamicConfig", {}).get("admissionControllerConfig", {})
print(f"Cluster:              {cluster['name']}")
print(f"Admission controller: {ac.get('enabled', False)}")
print(f"Scan inline:          {ac.get('scanInline', False)}")
print(f"Enforce on updates:   {ac.get('enforceOnUpdates', False)}")
PYEOF

Script 3 — Provenance inspector (reads skopeo inspect --raw manifest):

cat > acs-inspect-provenance.py << 'PYEOF'
#!/usr/bin/env python3
import json, sys
try:
    manifest = json.load(sys.stdin)
except json.JSONDecodeError as e:
    print(f"Error: could not parse manifest JSON ({e})")
    sys.exit(1)
if manifest.get("mediaType") == "application/vnd.oci.image.index.v1+json":
    found = False
    for m in manifest.get("manifests", []):
        ann = m.get("annotations", {})
        artifact_type = m.get("artifactType", "")
        if "vnd.sigstore" in json.dumps(ann) or "sbom" in artifact_type.lower():
            found = True
            print(f"  Artifact: {artifact_type or 'unknown'}")
            print(f"  Digest:   {m.get('digest', 'unknown')}")
            print()
    if not found:
        print("No SBOM or sigstore attestations found in the manifest index.")
else:
    print("Single-arch manifest (inspect child manifests for attestations)")
PYEOF

Script 4 — Policy creation (creates enforcement policies via ACS API):

cat > acs-create-policies.sh << 'SHEOF'
#!/usr/bin/env bash
set -euo pipefail
: "${ROX_API_TOKEN:?ROX_API_TOKEN is not set}"
: "${ACS_ROUTE:?ACS_ROUTE is not set}"
API="https://${ACS_ROUTE}/v1/policies"
AUTH="Authorization: Bearer ${ROX_API_TOKEN}"
create_policy() {
  local json_file="$1"
  local name
  name=$(python3 -c "import json; print(json.load(open('${json_file}'))['name'])")
  existing=$(curl -sk -H "${AUTH}" "${API}" | \
    python3 -c "import json,sys; ids=[p['id'] for p in json.load(sys.stdin).get('policies',[]) if p['name']=='${name}']; print(ids[0] if ids else '')" 2>/dev/null)
  if [ -n "${existing}" ]; then
    echo "Policy '${name}' already exists (id: ${existing}). Ensuring enforcement..."
    curl -sk -H "${AUTH}" "${API}/${existing}" | \
      python3 -c "
import json, sys
p = json.load(sys.stdin)
desired = ['SCALE_TO_ZERO_ENFORCEMENT']
if p.get('enforcementActions') == desired:
    print('  Enforcement already set.')
else:
    p['enforcementActions'] = desired
    json.dump(p, open('/tmp/policy-update.json','w'))
    print('  Updating enforcement...')
" 2>/dev/null
    if [ -f /tmp/policy-update.json ]; then
      curl -sk -X PUT -H "${AUTH}" -H "Content-Type: application/json" \
        "${API}/${existing}" -d @/tmp/policy-update.json > /dev/null
      rm -f /tmp/policy-update.json
      echo "  Enforcement enabled."
    fi
    return 0
  fi
  response=$(curl -sk -X POST -H "${AUTH}" -H "Content-Type: application/json" "${API}" -d @"${json_file}")
  echo "${response}" | python3 -c "
import json, sys
d = json.load(sys.stdin)
if 'id' in d:
    print(f\"Created policy: {d.get('name', 'unknown')} (id: {d['id']})\")
else:
    print(f\"Error: {d.get('message', d)}\")
"
}
cat > /tmp/acs-policy-zero-cve.json << 'EOF'
{"name":"Zero Fixable CVEs Required","description":"Reject deployments where the image has any fixable CVE.","severity":"CRITICAL_SEVERITY","categories":["Vulnerability Management"],"lifecycleStages":["DEPLOY"],"enforcementActions":["SCALE_TO_ZERO_ENFORCEMENT"],"eventSource":"NOT_APPLICABLE","disabled":false,"scope":[{"namespace":"hummingbird-acs-lab"}],"policySections":[{"sectionName":"Fixable CVE threshold","policyGroups":[{"fieldName":"Fixed By","booleanOperator":"OR","negate":false,"values":[{"value":".*"}]},{"fieldName":"CVSS","booleanOperator":"OR","negate":false,"values":[{"value":">= 0.000000"}]}]}]}
EOF
echo "=== Creating policy: Zero Fixable CVEs Required ==="
create_policy /tmp/acs-policy-zero-cve.json
cat > /tmp/acs-policy-scan-required.json << 'EOF'
{"name":"Image Scan Required","description":"Reject deployments where the image has not been scanned by ACS.","severity":"HIGH_SEVERITY","categories":["DevOps Best Practices"],"lifecycleStages":["DEPLOY"],"enforcementActions":["SCALE_TO_ZERO_ENFORCEMENT"],"eventSource":"NOT_APPLICABLE","disabled":false,"scope":[{"namespace":"hummingbird-acs-lab"}],"policySections":[{"sectionName":"Scan status","policyGroups":[{"fieldName":"Unscanned Image","booleanOperator":"OR","negate":false,"values":[{"value":"true"}]}]}]}
EOF
echo "=== Creating policy: Image Scan Required ==="
create_policy /tmp/acs-policy-scan-required.json
echo ""
echo "Done. Verify in ACS UI: Platform Configuration -> Policy Management"
SHEOF

Verify the scripts were created:

ls -1 acs-*
Expected output:
acs-check-admission.py
acs-create-policies.sh
acs-inspect-provenance.py
acs-scan-summary.py

These scripts avoid multi-line copy-paste problems when running commands from the browser. Each uses a quoted heredoc (<< 'EOF') so shell variables are not expanded and the content is written exactly as shown.

Exercise 2: Deploy the Anti-Pattern (Legacy Python API)

This exercise deliberately deploys a vulnerable application. The goal is to make the problem visceral before introducing the solution.

Step 1: Create the Python REST API

mkdir -p ~/acs-fortress-lab && cd ~/acs-fortress-lab

cat > app.py << 'PYEOF'
from http.server import HTTPServer, BaseHTTPRequestHandler
import json, os

class APIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            self._respond(200, {"status": "healthy"})
        elif self.path == "/api/v1/greeting":
            self._respond(200, {"message": "Hello from the Immutable Fortress lab"})
        else:
            self._respond(404, {"error": "not found"})

    def _respond(self, code, body):
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    server = HTTPServer(("0.0.0.0", port), APIHandler)
    print(f"Listening on :{port}")
    server.serve_forever()
PYEOF

Step 2: Write the Legacy Containerfile

This Containerfile uses python:3.11-buster, a full Debian-based image packed with OS-level packages the application will never use:

cat > Containerfile.legacy << 'EOF'
FROM quay.io/takinosh/python3.11-buster:latest
WORKDIR /app
COPY app.py .
RUN chmod 644 app.py && chown 1001:0 app.py
EXPOSE 8080
USER 1001
CMD ["python3", "app.py"]
EOF

OpenShift SCC Compatibility: The chmod 644 and chown 1001:0 commands ensure app.py stays readable when OpenShift’s restricted-v2 Security Context Constraint remaps the container UID to the namespace’s allocated range (e.g., 1000890000). The runtime user always carries GID 0, so a group-0-owned, group-readable file survives the remap. Without this, the Python interpreter cannot read app.py and the container will crash with Permission denied.
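The permission logic can be sketched in a few lines of Python (illustrative only, not a lab step): under restricted-v2 the container runs as an arbitrary high UID but always with GID 0, so readability reduces to an owner-or-group-0 check.

```python
# Illustrative model of restricted-v2 file access (not part of the lab):
# OpenShift assigns an arbitrary high UID from the namespace range, but the
# process always carries GID 0, so a file owned by <uid>:0 stays readable
# as long as the group read bit is set.
def readable_under_scc(mode: int, owner_uid: int, runtime_uid: int) -> bool:
    """True if a file (owned by owner_uid, group 0) is readable by a
    process running as runtime_uid with GID 0."""
    if owner_uid == runtime_uid:
        return bool(mode & 0o400)  # owner read bit
    return bool(mode & 0o040)      # group read bit (file group is 0)

# chmod 644 + chown 1001:0 survives the UID remap to 1000890000:
assert readable_under_scc(0o644, owner_uid=1001, runtime_uid=1000890000)
# chmod 600 would not:
assert not readable_under_scc(0o600, owner_uid=1001, runtime_uid=1000890000)
```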

Step 3: Create Registry Secret

The Quay repository is private. OpenShift needs credentials both to push images during builds and to pull images at deployment time:

oc create secret docker-registry quay-pull-secret \
  --docker-server=${WORKSHOP_REGISTRY} \
  --docker-username=${REGISTRY_USER} \
  --docker-password=${REGISTRY_PASSWORD} \
  -n hummingbird-acs-lab 2>/dev/null || echo "Secret already exists"

oc secrets link default quay-pull-secret --for=pull -n hummingbird-acs-lab
oc secrets link builder quay-pull-secret -n hummingbird-acs-lab

Step 4: Build and Push the Legacy Image

The Showroom terminal does not include podman, so we use OpenShift’s built-in Binary Build strategy. This uploads your local files and builds the Containerfile on-cluster using a builder pod, then pushes directly to Quay.

oc new-build --strategy=docker --binary=true \
  --name=legacy-python-api \
  --to-docker --to="${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable" \
  -n hummingbird-acs-lab &>/dev/null || true

oc set build-secret --push bc/legacy-python-api quay-pull-secret -n hummingbird-acs-lab 2>/dev/null || true

cp Containerfile.legacy Dockerfile
oc start-build legacy-python-api --from-dir=. --follow --wait -n hummingbird-acs-lab
rm -f Dockerfile

oc new-build --strategy=docker --binary creates a BuildConfig that accepts uploaded source. The --to-docker flag directs the output to an external Docker registry (Quay) instead of the internal registry. The cp Containerfile.legacy Dockerfile step is needed because OpenShift’s Docker strategy looks for a file named Dockerfile by default.

Verify the image appears in Quay with its security scan:

Quay Repository Tags for legacy-python-api

Click on the security scan column to view the CVE details. The python:3.11-buster base image carries multiple fixable vulnerabilities:

Clair Security Scan showing 10 fixable vulnerabilities in legacy-python-api

Step 5: Deploy to OpenShift

cat << EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-python-api
  namespace: hummingbird-acs-lab
  labels:
    app: legacy-python-api
    tier: legacy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-python-api
  template:
    metadata:
      labels:
        app: legacy-python-api
        tier: legacy
    spec:
      containers:
      - name: api
        image: ${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
---
apiVersion: v1
kind: Service
metadata:
  name: legacy-python-api
  namespace: hummingbird-acs-lab
spec:
  selector:
    app: legacy-python-api
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: legacy-python-api
  namespace: hummingbird-acs-lab
spec:
  to:
    kind: Service
    name: legacy-python-api
  port:
    targetPort: 8080
  tls:
    termination: edge
EOF

Step 6: Verify the Legacy Deployment

Wait for the deployment to become ready:

oc rollout status deployment/legacy-python-api -n hummingbird-acs-lab --timeout=120s

Get the application route:

LEGACY_ROUTE=$(oc get route legacy-python-api -n hummingbird-acs-lab -o jsonpath='{.spec.host}')
echo "Legacy API URL: https://${LEGACY_ROUTE}"

Query the API endpoints:

echo "=== Greeting Endpoint ==="
curl -sk "https://${LEGACY_ROUTE}/api/v1/greeting" | python3 -m json.tool

echo ""
echo "=== Health Check ==="
curl -sk "https://${LEGACY_ROUTE}/healthz" | python3 -m json.tool

echo ""
echo "=== 404 Handler (unknown path) ==="
curl -sk "https://${LEGACY_ROUTE}/api/v1/nonexistent" | python3 -m json.tool
Expected output:
=== Greeting Endpoint ===
{
    "message": "Hello from the Immutable Fortress lab"
}

=== Health Check ===
{
    "status": "healthy"
}

=== 404 Handler (unknown path) ===
{
    "error": "not found"
}

You can also open the API endpoints in your browser.

The API works perfectly. Now let’s see what is lurking beneath the surface.

Exercise 3: Scan with ACS — Exposing Hidden Technical Debt

Step 1: Scan the Legacy Image with roxctl

roxctl image scan \
  --image=${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable \
  --insecure-skip-tls-verify \
  --output=table
Expected output (abbreviated — your counts will vary):
COMPONENT     VERSION   CVE              SEVERITY   FIXED BY
apt           1.8.2.3   CVE-2024-XXXXX   CRITICAL   ...
libsystemd0   241-7     CVE-2023-XXXXX   HIGH       ...
glibc         2.28-10   CVE-2024-XXXXX   HIGH       ...
openssl       1.1.1n    CVE-2024-XXXXX   HIGH       ...
...

TOTAL: XX critical, XX high, XX medium, XX low

The key insight: you will typically see 50 to 200+ CVEs in this scan. Every single one of them comes from Debian Buster OS packages — apt, dpkg, systemd, glibc, openssl, bash, coreutils, and dozens more. Your Python API uses none of them. It needs only the Python interpreter and the standard library.

You are carrying the security burden of an entire general-purpose operating system for an application that requires only a language runtime.
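One way to see this burden concretely is to tally an SBOM by package type: in a full-OS image the "deb" entries dwarf the language runtime. A minimal sketch against syft's JSON shape (the "artifacts"/"type" field names are an assumption to verify against your syft version):

```python
# Sketch: count SBOM packages by ecosystem to show OS packages dominate.
# Feed it parsed `syft <image> -o json` output.
from collections import Counter

def package_types(sbom: dict) -> Counter:
    return Counter(a.get("type", "unknown") for a in sbom.get("artifacts", []))

# Tiny stand-in for real syft output:
sample = {"artifacts": [
    {"name": "apt",   "type": "deb"},
    {"name": "glibc", "type": "deb"},
    {"name": "pip",   "type": "python"},
]}
print(package_types(sample).most_common())  # deb entries outnumber python
```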

Step 2: Check the Image in ACS Dashboard

Open the RHACS Central dashboard in your browser. First, retrieve the URL and admin credentials:

export ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')
ACS_PASSWORD=$(oc get secret central-htpasswd -n stackrox -o jsonpath='{.data.password}' | base64 -d)

echo "ACS Dashboard: https://${ACS_ROUTE}"
echo "Username:      admin"
echo "Password:      ${ACS_PASSWORD}"

Log in to the ACS Central dashboard:

  1. Open the URL printed above in your browser

  2. Accept the self-signed certificate warning if prompted

  3. Select Login with username/password and log in with username admin and the password shown above

ACS Central Login Page

After logging in, you will see the ACS Dashboard showing cluster-wide security metrics — policy violations, images at most risk, and deployments at most risk:

ACS Central Dashboard showing security overview

Now navigate to the legacy image vulnerabilities:

  1. Click Vulnerability Management -> Results in the left sidebar

  2. Search for legacy-python-api in the Image Name filter

  3. You will see roughly 60 CVEs across the image (your count will vary with the scan date), including critical and high severity findings:

ACS Vulnerability findings showing 60 CVEs in legacy-python-api

Filter to show only fixable Critical and Important CVEs to see the most actionable findings:

ACS filtered view showing 3 fixable Critical/Important CVEs

All of these CVEs come from OS-level packages in the python:3.11-buster base image — not from your application code. This is the security debt you carry by using a full OS-based container image.

Step 3: Count the Fixable CVEs

roxctl image scan \
  --image=${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable \
  --insecure-skip-tls-verify \
  --output=json | python3 acs-scan-summary.py
Expected output (your counts will vary based on the image scan date):
Total components:       <N>
Total vulnerabilities:  <N>
Critical:               <N>
Important:              <N>
Moderate:               <N>
Low:                    <N>
Fixable vulnerabilities: <N>

The exact numbers depend on when the base image was last updated and the current state of the CVE databases. The key takeaway is that there are multiple fixable vulnerabilities — all from OS-level packages your application never uses.

Remember these numbers. You will compare them against the Hummingbird image shortly.

Exercise 4: Build the Immutable Fortress (Hummingbird Multi-Stage)

Step 1: Write the Distroless Containerfile

cd ~/acs-fortress-lab

cat > Containerfile.fortress << 'EOF'
# Stage 1: Builder -- install dependencies in a full image
FROM quay.io/takinosh/python3.11-slim:latest AS builder
WORKDIR /build
COPY app.py .
RUN chmod 644 app.py

# Stage 2: Runtime -- copy only what the app needs into Hummingbird
FROM quay.io/hummingbird-hatchling/python:latest
COPY --from=builder --chmod=644 /build/app.py /app/app.py
WORKDIR /app
EXPOSE 8080
USER 1001
ENTRYPOINT ["python3", "/app/app.py"]
EOF

Why JSON array syntax for ENTRYPOINT/CMD?

Hummingbird images are distroless — they contain no shell (/bin/sh, /bin/bash). If you write:

CMD python3 /app/app.py

Docker/Podman interprets this as sh -c "python3 /app/app.py", which fails because /bin/sh does not exist. You must use the JSON exec form:

ENTRYPOINT ["python3", "/app/app.py"]

This invokes the Python interpreter directly via the kernel’s execve syscall, bypassing the need for a shell entirely.
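You can lint for this mistake mechanically: in the image config (e.g. `skopeo inspect --config docker://<image>`), shell form shows up as a literal ["/bin/sh", "-c", ...] prefix. A hedged sketch (the config/Cmd/Entrypoint key names follow the OCI image config and should be checked against your tooling's output):

```python
# Hypothetical lint (not a lab step): flag shell-form CMD/ENTRYPOINT in an
# OCI image config, where shell form is stored as ["/bin/sh", "-c", ...] --
# a guaranteed crash in a distroless image with no /bin/sh.
def shell_form_entries(image_config: dict):
    """Yield (key, value) for any Cmd/Entrypoint that needs /bin/sh."""
    for key in ("Entrypoint", "Cmd"):
        value = (image_config.get("config") or {}).get(key) or []
        if value[:2] == ["/bin/sh", "-c"]:
            yield key, value

bad = {"config": {"Cmd": ["/bin/sh", "-c", "python3 /app/app.py"]}}  # shell form
good = {"config": {"Entrypoint": ["python3", "/app/app.py"]}}        # exec form
print(list(shell_form_entries(bad)))   # flags Cmd
print(list(shell_form_entries(good)))  # nothing flagged
```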

Step 2: Build and Push the Hummingbird Image

oc new-build --strategy=docker --binary=true \
  --name=fortress-python-api \
  --to-docker --to="${WORKSHOP_REGISTRY}/${REGISTRY_USER}/fortress-python-api:zero-cve" \
  -n hummingbird-acs-lab &>/dev/null || true

oc set build-secret --push bc/fortress-python-api quay-pull-secret -n hummingbird-acs-lab 2>/dev/null || true

cp Containerfile.fortress Dockerfile
oc start-build fortress-python-api --from-dir=. --follow --wait -n hummingbird-acs-lab
rm -f Dockerfile

Verify the fortress image in Quay. Notice the zero-cve tag with a Passed security scan and a dramatically smaller size (38 MB vs 339 MB):

Quay Repository Tags for fortress-python-api showing Passed security scan

Click the security scan to confirm — zero vulnerabilities detected:

Clair Security Scanner showing no vulnerabilities in fortress-python-api

Step 3: Compare Image Sizes

Since images were built on-cluster (not locally), we query the registry for size information:

echo "=== Image Size Comparison ==="
echo ""
echo "Legacy (python:3.11-buster):"
skopeo inspect --tls-verify=false \
  --creds="${REGISTRY_USER}:${REGISTRY_PASSWORD}" \
  docker://${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable 2>/dev/null | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    layers = d.get('Layers', d.get('LayersData', []))
    print(f'  Layers: {len(layers)}')
    print(f'  Created: {d.get(\"Created\", \"N/A\")[:19]}')
except (json.JSONDecodeError, KeyError):
    print('  (inspect failed -- check registry credentials)')
"

echo ""
echo "Fortress (Hummingbird distroless):"
skopeo inspect --tls-verify=false \
  --creds="${REGISTRY_USER}:${REGISTRY_PASSWORD}" \
  docker://${WORKSHOP_REGISTRY}/${REGISTRY_USER}/fortress-python-api:zero-cve 2>/dev/null | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    layers = d.get('Layers', d.get('LayersData', []))
    print(f'  Layers: {len(layers)}')
    print(f'  Created: {d.get(\"Created\", \"N/A\")[:19]}')
except (json.JSONDecodeError, KeyError):
    print('  (inspect failed -- check registry credentials)')
"
Expected output (layer counts will vary):
=== Image Size Comparison ===

Legacy (python:3.11-buster):
  Layers: ~10+
  Created: <timestamp>

Fortress (Hummingbird distroless):
  Layers: ~3-5
  Created: <timestamp>

The Hummingbird image is roughly 9x smaller because it contains only the Python interpreter, shared libraries it actually links against, and TLS root certificates — nothing else. No package manager, no shell, no coreutils, no man pages. The exact sizes depend on the base image versions, but the order-of-magnitude difference is consistent.
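If you want actual byte counts rather than layer counts, newer skopeo releases include per-layer sizes in the same inspect output. A sketch under that assumption (the LayersData/Size fields may be absent on older skopeo versions, in which case this returns 0):

```python
# Sketch: total compressed image size from `skopeo inspect` JSON.
# Assumes the "LayersData" field (newer skopeo) with per-layer "Size" bytes.
def compressed_size_mb(inspect_json: dict) -> float:
    layers = inspect_json.get("LayersData") or []
    return sum(layer.get("Size", 0) for layer in layers) / (1024 * 1024)

# Stand-in for real skopeo output:
sample = {"LayersData": [
    {"Digest": "sha256:aaa", "Size": 30 * 1024 * 1024},
    {"Digest": "sha256:bbb", "Size": 8 * 1024 * 1024},
]}
print(f"{compressed_size_mb(sample):.0f} MB")  # 38 MB
```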

Exercise 5: Inspect SBOM and Provenance

Hummingbird images built through the Konflux software factory ship with cryptographically signed SBOMs and SLSA provenance metadata. Even for images built outside that factory, such as the ones in this lab, you can generate and compare SBOMs yourself.

Step 0: Install syft

The syft CLI generates SBOMs (Software Bills of Materials) from container images. Install it to your local ~/bin:

curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b $HOME/bin
export PATH="$HOME/bin:$PATH"
syft version

Step 1: Generate SBOMs for Both Images

Since images were built on-cluster, we scan them directly from the Quay registry. The SYFT_REGISTRY_INSECURE_SKIP_TLS_VERIFY variable handles the self-signed certificate:

export SYFT_REGISTRY_INSECURE_SKIP_TLS_VERIFY=true

echo "=== Legacy image SBOM ==="
syft registry:${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable -o table | head -30
echo ""
LEGACY_COUNT=$(syft registry:${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable -o json 2>/dev/null | python3 -c "import json,sys; print(len(json.load(sys.stdin).get('artifacts',[])))")
echo "Total packages in legacy image: $LEGACY_COUNT"
echo "=== Hummingbird image SBOM ==="
syft registry:${WORKSHOP_REGISTRY}/${REGISTRY_USER}/fortress-python-api:zero-cve -o table | head -30
echo ""
FORTRESS_COUNT=$(syft registry:${WORKSHOP_REGISTRY}/${REGISTRY_USER}/fortress-python-api:zero-cve -o json 2>/dev/null | python3 -c "import json,sys; print(len(json.load(sys.stdin).get('artifacts',[])))")
echo "Total packages in Hummingbird image: $FORTRESS_COUNT"
Expected comparison:
Legacy image:      ~430 packages (Debian Buster + Python)
Hummingbird image: ~15-25 packages (Python runtime only)
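Beyond raw counts, the two SBOMs can be diffed to enumerate exactly which packages the distroless build eliminated. A minimal sketch against syft's JSON shape (the "artifacts"/"name" fields are an assumption to check against your syft version):

```python
# Sketch: list packages present only in the legacy SBOM -- the attack
# surface the Hummingbird rebuild removed. Feed parsed `syft -o json` output.
def package_names(sbom: dict) -> set:
    return {a["name"] for a in sbom.get("artifacts", []) if "name" in a}

# Tiny stand-ins for the real SBOMs:
legacy   = {"artifacts": [{"name": "apt"}, {"name": "bash"}, {"name": "python"}]}
fortress = {"artifacts": [{"name": "python"}]}

removed = sorted(package_names(legacy) - package_names(fortress))
print(f"{len(removed)} packages removed: {removed}")
```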

Step 2: Install cosign and Inspect Hummingbird Upstream Provenance

Hummingbird images published by Red Hat through the Konflux pipeline include attached SBOM attestations, signatures, and SLSA provenance. First, install the cosign CLI, then inspect the upstream image supply chain.

Install cosign from the RHTAS cli-server (if available), otherwise from GitHub:

mkdir -p $HOME/bin

DOWNLOADS_URL=$(oc get route -n trusted-artifact-signer \
  -l app.kubernetes.io/name=cli-server \
  -o jsonpath='{.items[0].spec.host}' 2>/dev/null || \
  oc get route -n trusted-artifact-signer \
  -l app=trusted-artifact-signer-clientserver \
  -o jsonpath='{.items[0].spec.host}' 2>/dev/null)

if [ -n "$DOWNLOADS_URL" ]; then
  echo "Downloading cosign from RHTAS cli-server..."
  curl -sSL "https://${DOWNLOADS_URL}/clients/linux/cosign-amd64.gz" | gunzip > $HOME/bin/cosign
else
  echo "Downloading cosign from GitHub..."
  COSIGN_VERSION=v2.4.1
  curl -sL "https://github.com/sigstore/cosign/releases/download/${COSIGN_VERSION}/cosign-linux-amd64" \
    -o $HOME/bin/cosign
fi

chmod +x $HOME/bin/cosign
export PATH="$HOME/bin:$PATH"
cosign version

Now inspect the Hummingbird image supply chain:

cosign tree quay.io/hummingbird-hatchling/python:latest
Expected output:
📦 Supply Chain Security Related artifacts for an image: quay.io/hummingbird-hatchling/python:latest
└── 💾 Attestations for an image tag: ...python:sha256-<digest>.att
   └── 🍒 sha256:<attestation-digest>
└── 🔐 Signatures for an image tag: ...python:sha256-<digest>.sig
   └── 🍒 sha256:<signature-digest>
└── 📦 SBOMs for an image tag: ...python:sha256-<digest>.sbom
   └── 🍒 sha256:<sbom-digest>

This shows three types of supply chain artifacts attached to the image:

  • Attestations (.att) — SLSA build provenance recording the source commit, build system, and builder identity

  • Signatures (.sig) — Cosign keyless signatures verifying the image was built by a trusted pipeline

  • SBOMs (.sbom) — Software Bill of Materials listing every component in the image
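By default, cosign stores these artifacts as ordinary tags in the same repository, derived from the image digest (sha256:<hex> becomes sha256-<hex>.sig / .att / .sbom). A small helper sketch based on that convention, handy for locating the artifacts with skopeo:

```python
# Sketch of cosign's default artifact tag naming: for an image digest
# sha256:<hex>, the signature/attestation/SBOM live in the same repository
# under tags sha256-<hex>.sig, .att, and .sbom respectively.
def cosign_artifact_tags(digest: str) -> dict:
    algo, hexpart = digest.split(":", 1)
    return {kind: f"{algo}-{hexpart}.{kind}" for kind in ("sig", "att", "sbom")}

tags = cosign_artifact_tags("sha256:abc123")
print(tags["att"])  # sha256-abc123.att
```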

If cosign tree is not available, you can fall back to skopeo:

skopeo inspect --raw docker://quay.io/hummingbird-hatchling/python:latest | \
  python3 acs-inspect-provenance.py

Note that skopeo only inspects the top-level manifest index. Cosign-style attestations are stored as separate tagged artifacts and may not appear in the raw manifest.

SLSA Provenance records exactly which source commit, build system, and builder identity produced the image. Combined with the SBOM, this provides a complete, cryptographically verifiable chain from source code to deployed artifact. The Konflux software factory automates this for every Hummingbird release.
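Once you decode an attestation payload (for example via cosign download attestation, whose output wraps a base64 DSSE payload), the in-toto statement inside can be summarized with a few dictionary lookups. A sketch against the SLSA v0.2 predicate layout; the exact field paths depend on the predicateType your attestation actually reports:

```python
# Illustrative parser for an in-toto statement carrying SLSA v0.2 provenance.
# Field paths (predicate.builder.id, predicate.invocation.configSource.uri)
# follow the SLSA v0.2 schema and are assumptions to verify per attestation.
def summarize_provenance(statement: dict) -> dict:
    predicate = statement.get("predicate", {})
    return {
        "predicateType": statement.get("predicateType", "unknown"),
        "builder": predicate.get("builder", {}).get("id", "unknown"),
        "source": predicate.get("invocation", {})
                           .get("configSource", {})
                           .get("uri", "unknown"),
    }

sample = {
    "predicateType": "https://slsa.dev/provenance/v0.2",
    "predicate": {
        "builder": {"id": "https://tekton.dev/chains/v2"},
        "invocation": {"configSource": {"uri": "git+https://example.com/app"}},
    },
}
print(summarize_provenance(sample))
```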

Exercise 6: The Zero-CVE Moment

Step 1: Deploy the Hummingbird Image

cat << EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortress-python-api
  namespace: hummingbird-acs-lab
  labels:
    app: fortress-python-api
    tier: fortress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortress-python-api
  template:
    metadata:
      labels:
        app: fortress-python-api
        tier: fortress
    spec:
      containers:
      - name: api
        image: ${WORKSHOP_REGISTRY}/${REGISTRY_USER}/fortress-python-api:zero-cve
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          capabilities:
            drop: ["ALL"]
          seccompProfile:
            type: RuntimeDefault
---
apiVersion: v1
kind: Service
metadata:
  name: fortress-python-api
  namespace: hummingbird-acs-lab
spec:
  selector:
    app: fortress-python-api
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: fortress-python-api
  namespace: hummingbird-acs-lab
spec:
  to:
    kind: Service
    name: fortress-python-api
  port:
    targetPort: 8080
  tls:
    termination: edge
EOF

oc rollout status deployment/fortress-python-api -n hummingbird-acs-lab --timeout=120s

Step 2: Verify the Application

ROUTE=$(oc get route fortress-python-api -n hummingbird-acs-lab -o jsonpath='{.spec.host}')
curl -sk "https://${ROUTE}/api/v1/greeting"
Expected output:
{"message": "Hello from the Immutable Fortress lab"}

Identical functionality. Radically different security posture.

Step 3: Scan the Hummingbird Image with ACS

roxctl image scan \
  --image=${WORKSHOP_REGISTRY}/${REGISTRY_USER}/fortress-python-api:zero-cve \
  --insecure-skip-tls-verify \
  --output=table
Expected output:
Scan results for image: .../fortress-python-api:zero-cve
(TOTAL-COMPONENTS: 0, TOTAL-VULNERABILITIES: 0, LOW: 0, MODERATE: 0, IMPORTANT: 0, CRITICAL: 0)

+-----------+---------+-----+----------+------+------+---------------+----------+---------------+
| COMPONENT | VERSION | CVE | SEVERITY | CVSS | LINK | FIXED VERSION | ADVISORY | ADVISORY LINK |
+-----------+---------+-----+----------+------+------+---------------+----------+---------------+

Zero components. Zero vulnerabilities. The attack surface has been eradicated, not managed.

Verify in the ACS dashboard — search for fortress-python-api:zero-cve under Vulnerability Management → Results. You should see 0 CVEs, 0 Images affected:

ACS Vulnerability Management showing zero CVEs for fortress-python-api:zero-cve
Now compare the two images side by side:

echo "=== Side-by-Side Comparison ==="
echo ""
echo "Legacy (python:3.11-buster):"
roxctl image scan \
  --image=${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable \
  --insecure-skip-tls-verify \
  --output=json 2>/dev/null | python3 acs-scan-summary.py --brief

echo ""
echo "Fortress (Hummingbird distroless):"
roxctl image scan \
  --image=${WORKSHOP_REGISTRY}/${REGISTRY_USER}/fortress-python-api:zero-cve \
  --insecure-skip-tls-verify \
  --output=json 2>/dev/null | python3 acs-scan-summary.py --brief

This is the paradigm shift. The vulnerability count did not drop because you patched faster — it dropped because the vulnerable code no longer exists in the image. There is nothing to patch. The attack surface has been eradicated, not managed.
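
The acs-scan-summary.py helper piped above is not reproduced in this module. A minimal sketch of what such a helper could do is shown below; it deliberately walks the roxctl JSON generically, counting any object that carries a cve key, so it does not depend on one specific roxctl output schema (which varies between versions):

```python
#!/usr/bin/env python3
"""Hypothetical sketch of an acs-scan-summary.py-style helper.

Counts vulnerabilities in `roxctl image scan --output=json` output by
walking the JSON generically (any dict carrying a 'cve' key counts as
one vulnerability), so it tolerates schema differences across roxctl
versions.
"""
from collections import Counter


def walk(node, counts):
    """Recursively tally vulnerability entries and fixable ones."""
    if isinstance(node, dict):
        if "cve" in node:
            counts["vulns"] += 1
            if node.get("fixedBy"):
                counts["fixable"] += 1
        for value in node.values():
            walk(value, counts)
    elif isinstance(node, list):
        for item in node:
            walk(item, counts)


def summarize(scan_doc):
    counts = Counter()
    walk(scan_doc, counts)
    return counts


# Demo on a tiny scan document (shape illustrative only); the real
# helper would read json.load(sys.stdin) from the roxctl pipe instead.
sample = {"scan": {"components": [{"name": "pip", "vulns": [
    {"cve": "CVE-2023-5752", "fixedBy": "23.3"},
    {"cve": "CVE-0000-00000"},
]}]}}
c = summarize(sample)
print(f"vulnerabilities={c['vulns']} fixable={c['fixable']}")
# vulnerabilities=2 fixable=1
```

The schema-agnostic walk is a pragmatic choice for a workshop helper: it keeps the summary working even when the scanner's JSON layout shifts between releases.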

Exercise 7: Enforce the Posture with ACS Policies

A zero-CVE scan is only as valuable as the policy that prevents regression. In this exercise you create two ACS policies that make the zero-CVE posture mandatory at the admission control level.

Step 1: Verify ACS Admission Controller

Check that the admission controller webhook is active:

oc get ValidatingWebhookConfiguration -l app.kubernetes.io/name=stackrox
Expected output:
NAME       WEBHOOKS   AGE
stackrox   2          ...

Verify the admission controller configuration via the ACS API:

ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')

curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ACS_ROUTE}/v1/clusters" | python3 acs-check-admission.py
Expected output:
Cluster:              workshop-cluster
Admission controller: True
Scan inline:          True
Enforce on updates:   True

If the admission controller shows False, ask your workshop instructor to enable it via the ACS Central dashboard: Platform Configuration → Clusters → select your cluster → enable Admission Controller settings.
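
The acs-check-admission.py helper above is not reproduced here; a minimal sketch follows. The dynamicConfig.admissionControllerConfig field names are assumptions drawn from the StackRox cluster schema — verify them against your ACS version's API reference:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of an acs-check-admission.py-style helper.

Formats the admission controller settings from a /v1/clusters response.
The dynamicConfig.admissionControllerConfig field names are assumed
from the StackRox cluster schema and may differ across ACS versions.
"""


def report(clusters_doc):
    lines = []
    for cluster in clusters_doc.get("clusters", []):
        cfg = cluster.get("dynamicConfig", {}).get("admissionControllerConfig", {})
        lines.append(f"Cluster:              {cluster.get('name', '?')}")
        lines.append(f"Admission controller: {cfg.get('enabled', False)}")
        lines.append(f"Scan inline:          {cfg.get('scanInline', False)}")
        lines.append(f"Enforce on updates:   {cfg.get('enforceOnUpdates', False)}")
    return "\n".join(lines)


# Demo on a minimal response; in the pipeline above the document would
# come from stdin instead: print(report(json.load(sys.stdin)))
sample = {"clusters": [{"name": "workshop-cluster", "dynamicConfig": {
    "admissionControllerConfig": {
        "enabled": True, "scanInline": True, "enforceOnUpdates": True}}}]}
print(report(sample))
```

Because every field access uses .get() with a default, a cluster without admission control configured simply reports False rather than crashing the pipeline.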

Steps 2-3: Create Enforcement Policies

This script creates two policies via the ACS REST API:

  • Zero Fixable CVEs Required (Critical) — blocks any deployment where the image has fixable CVEs

  • Image Scan Required (High) — blocks any deployment where the image has not been scanned

Run the helper script:

ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')
export ACS_ROUTE ROX_API_TOKEN

bash acs-create-policies.sh
Expected output (first run):
=== Creating policy: Zero Fixable CVEs Required ===
Created policy: Zero Fixable CVEs Required (id: <uuid>)
=== Creating policy: Image Scan Required ===
Created policy: Image Scan Required (id: <uuid>)

Done. Verify in ACS UI: Platform Configuration -> Policy Management
Expected output (if policies already exist):
=== Creating policy: Zero Fixable CVEs Required ===
Policy 'Zero Fixable CVEs Required' already exists (id: <uuid>). Ensuring enforcement...
  Enforcement enabled.
=== Creating policy: Image Scan Required ===
Policy 'Image Scan Required' already exists (id: <uuid>). Ensuring enforcement...
  Enforcement enabled.

Done. Verify in ACS UI: Platform Configuration -> Policy Management

If the policies already exist, the script ensures enforcement is enabled (SCALE_TO_ZERO_ENFORCEMENT). This handles the case where built-in ACS policies exist but default to INFORM mode without enforcement.

Verify in the ACS dashboard under Platform Configuration → Policy Management. Search for Zero Fixable CVEs Required:

ACS Policy Management showing Zero Fixable CVEs Required policy with Critical severity

Click the policy to see its details — severity, description, lifecycle stage, and enforcement action:

ACS Policy detail view for Zero Fixable CVEs Required showing Critical severity and Deploy lifecycle

Enforcement Action: Both policies use SCALE_TO_ZERO_ENFORCEMENT for deploy-time enforcement. This is the enforcement type the ACS admission controller webhook recognises for blocking deployments: the webhook denies the Kubernetes API request before the pod is ever created.
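
For orientation, the request body a helper like acs-create-policies.sh might POST to /v1/policies can be sketched as a Python dict. The field names (severity, lifecycleStages, enforcementActions, policySections) follow the StackRox policy schema as commonly documented, but treat this as a sketch and verify against your ACS version's API reference:

```python
def build_policy(name, severity, field_name, field_value, namespace):
    """Assemble a deploy-stage ACS policy body with scale-to-zero
    enforcement. Field names are assumed from the StackRox policy
    schema -- verify against your ACS version before use."""
    return {
        "name": name,
        "severity": severity,                    # e.g. "CRITICAL_SEVERITY"
        "categories": ["Vulnerability Management"],
        "lifecycleStages": ["DEPLOY"],
        "enforcementActions": ["SCALE_TO_ZERO_ENFORCEMENT"],
        "scope": [{"namespace": namespace}],
        "policySections": [{
            "sectionName": "Criteria",
            "policyGroups": [{
                "fieldName": field_name,         # e.g. "Fixed By"
                "values": [{"value": field_value}],
            }],
        }],
    }


zero_cve = build_policy("Zero Fixable CVEs Required", "CRITICAL_SEVERITY",
                        "Fixed By", ".*", "hummingbird-acs-lab")
print(zero_cve["enforcementActions"][0])
# SCALE_TO_ZERO_ENFORCEMENT
```

A body like this would then be sent with curl -X POST against /v1/policies, as the helper script does.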

UI alternative: You can also create these policies manually in the ACS dashboard:

Navigate to Platform Configuration → Policy Management → Create Policy:

Policy 1:

  • Name: Zero Fixable CVEs Required

  • Severity: Critical

  • Lifecycle: Deploy

  • Response: Inform and enforce

  • Policy Criteria: Fixed By matches .* AND CVSS >= 0

  • Policy Scope: namespace hummingbird-acs-lab

Policy 2:

  • Name: Image Scan Required

  • Severity: High

  • Lifecycle: Deploy

  • Response: Inform and enforce

  • Policy Criteria: Unscanned Image = true

  • Policy Scope: namespace hummingbird-acs-lab

Step 4: Verify Policies Are Active

ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')

for POLICY_NAME in "Zero Fixable CVEs Required" "Image Scan Required"; do
  POLICY_ID=$(curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
    "https://${ACS_ROUTE}/v1/policies" | \
    python3 -c "import json,sys; [print(p['id']) for p in json.load(sys.stdin).get('policies',[]) if p['name']=='${POLICY_NAME}']" 2>/dev/null)
  curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
    "https://${ACS_ROUTE}/v1/policies/${POLICY_ID}" | \
    python3 -c "
import json, sys
p = json.load(sys.stdin)
enf = [e for e in p.get('enforcementActions',[]) if e != 'UNSET_ENFORCEMENT']
print(f'{p[\"name\"]:<35} {p[\"severity\"]:<20} {\", \".join(enf) or \"INFORM\"}')"
done
Expected output:
Zero Fixable CVEs Required          CRITICAL_SEVERITY    SCALE_TO_ZERO_ENFORCEMENT
Image Scan Required                 HIGH_SEVERITY        SCALE_TO_ZERO_ENFORCEMENT

Both policies are now active and will be evaluated by the admission controller on every deployment create or update in the hummingbird-acs-lab namespace.

You can also verify in the ACS UI: navigate to Platform Configuration → Policy Management and search for Zero Fixable or Image Scan Required.

Exercise 8: Prove the Guardrails (Simulated Pipeline Failure)

This is the definitive test. You will attempt to redeploy the original vulnerable image. The ACS admission controller must block it.

Step 1: Attempt to Update an Existing Deployment

Try swapping the fortress image for the vulnerable legacy image. The admission controller should block it:

echo "=== Attempting to deploy vulnerable legacy image ==="
echo "This SHOULD fail if ACS policies are enforced correctly."
echo ""

oc set image deployment/fortress-python-api \
  api=${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable \
  -n hummingbird-acs-lab 2>&1
RC=$?

echo ""
echo "Exit code: ${RC}"
Expected output:
=== Attempting to deploy vulnerable legacy image ===
This SHOULD fail if ACS policies are enforced correctly.

error: failed to patch image update to pod template: admission webhook
"policyeval.stackrox.io" denied the request:
The attempted operation violated 1 enforced policy, described below:

Policy: Zero Fixable CVEs Required
- Description:
    ↳ Reject deployments where the image has any fixable CVE. Enforces proactive
      attack surface eradication over reactive patching.
- Violations:
    - Fixable CVE-2024-6345 (CVSS 8.8) (severity Important) found in component
      'setuptools' (version 65.5.1) in container 'api', resolved by version 70.0.0
    - Fixable CVE-2023-5752 (CVSS 5.5) (severity Moderate) found in component
      'pip' (version 23.1.2) in container 'api', resolved by version 23.3
    ... (additional CVEs listed) ...

Step 2: Attempt to Create a New Deployment

Also try creating an entirely new deployment with the vulnerable image. This confirms the policy blocks both updates and new deployments:

cat << EOF | oc apply -f - 2>&1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-regression-test
  namespace: hummingbird-acs-lab
  labels:
    app: legacy-regression-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-regression-test
  template:
    metadata:
      labels:
        app: legacy-regression-test
    spec:
      containers:
      - name: api
        image: ${WORKSHOP_REGISTRY}/${REGISTRY_USER}/legacy-python-api:vulnerable
        ports:
        - containerPort: 8080
EOF

RC=$?
echo ""
echo "Exit code: ${RC}"
Expected output:
Error from server (Failed currently enforced policies from RHACS):
error when creating "STDIN": admission webhook "policyeval.stackrox.io"
denied the request:
The attempted operation violated 1 enforced policy, described below:

Policy: Zero Fixable CVEs Required
- Description:
    ↳ Reject deployments where the image has any fixable CVE. Enforces proactive
      attack surface eradication over reactive patching.
- Violations:
    - Fixable CVE-2024-6345 (CVSS 8.8) (severity Important) found in component
      'setuptools' (version 65.5.1) in container 'api', resolved by version 70.0.0
    - Fixable CVE-2023-5752 (CVSS 5.5) (severity Moderate) found in component
      'pip' (version 23.1.2) in container 'api', resolved by version 23.3
    ... (additional CVEs listed) ...

In case of emergency, add the annotation
{"admission.stackrox.io/break-glass": "ticket-1234"} to your deployment

Both steps should be blocked. The specific CVEs listed will vary depending on the image scan date, but you should see multiple fixable vulnerabilities from components like pip, setuptools, wheel, and mercurial in the python:3.11-buster base image.

This is the proof. The admission webhook denied both the image update and the new deployment. In a real CI/CD pipeline, this blocks the deployment step, fails the pipeline, and surfaces the policy violation in your CI dashboard. The vulnerable image never reaches the cluster.

The operational model has shifted:

  • Before: Deploy first, scan later, patch reactively, repeat forever

  • After: Build distroless, verify at build time, enforce at admission, never patch unused code

Step 3: Verify the Fortress Remains Intact

Confirm that your Hummingbird deployment is still running and serving traffic:

oc get deployment fortress-python-api -n hummingbird-acs-lab

ROUTE=$(oc get route fortress-python-api -n hummingbird-acs-lab -o jsonpath='{.spec.host}')
curl -sk "https://${ROUTE}/api/v1/greeting"
Expected output:
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
fortress-python-api   1/1     1            1           ...

{"message": "Hello from the Immutable Fortress lab"}

Step 4: View Policy Violations

Query the policy violations via the ACS REST API:

ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')

curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ACS_ROUTE}/v1/alerts?query=Namespace:hummingbird-acs-lab" | \
  python3 -c "
import json, sys
data = json.load(sys.stdin)
for a in data.get('alerts', []):
    policy = a['policy']['name']
    state = a['state']
    enf = a.get('enforcementAction', 'none')
    dep = a.get('deployment', {}).get('name', '?')
    print(f'Policy: {policy}')
    print(f'  State: {state}  |  Enforcement: {enf}  |  Deployment: {dep}')
    print()
"
Expected output:
Policy: Zero Fixable CVEs Required
  State: ATTEMPTED  |  Enforcement: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT  |  Deployment: legacy-regression-test

Policy: Zero Fixable CVEs Required
  State: ATTEMPTED  |  Enforcement: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT  |  Deployment: fortress-python-api

Policy: Zero Fixable CVEs Required
  State: ACTIVE  |  Enforcement: SCALE_TO_ZERO_ENFORCEMENT  |  Deployment: legacy-python-api
  • ATTEMPTED means the admission controller blocked the deployment before it was created

  • ACTIVE means the policy violation exists on a running deployment

  • You should see violations for both the oc set image attempt (update) and the legacy-regression-test creation attempt
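
If you want to tally these states in a script (for example, as a CI health check on the namespace), the same /v1/alerts fields used in the inline query above can be grouped with a few lines of Python — a sketch:

```python
from collections import Counter


def tally_alert_states(alerts_doc):
    """Count alerts per (policy name, state) in a /v1/alerts response,
    using the same fields as the inline query in this module."""
    counts = Counter()
    for alert in alerts_doc.get("alerts", []):
        counts[(alert["policy"]["name"], alert["state"])] += 1
    return counts


# Demo on a minimal response; in practice the document would come from
# the /v1/alerts query shown above.
sample = {"alerts": [
    {"policy": {"name": "Zero Fixable CVEs Required"}, "state": "ATTEMPTED"},
    {"policy": {"name": "Zero Fixable CVEs Required"}, "state": "ATTEMPTED"},
    {"policy": {"name": "Zero Fixable CVEs Required"}, "state": "ACTIVE"},
]}
for (policy, state), n in sorted(tally_alert_states(sample).items()):
    print(f"{policy}: {state} x{n}")
```

A rising ATTEMPTED count for a policy is a useful signal that teams are repeatedly trying to ship non-compliant images.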

You can also view violations visually in the ACS Central UI. Navigate to Violations in the left sidebar:

ACS Violations list showing 108 results including Zero Fixable CVEs Required enforced on legacy-python-api

Click on the Zero Fixable CVEs Required violation to see the individual CVEs that triggered it:

Violation detail showing fixable CVEs in mercurial pip and setuptools components

Switch to the Enforcement tab to see the enforcement action taken:

Enforcement tab showing deployment scaled to 0 replicas in response to policy violation

Why is legacy-python-api still running?

You may notice the original legacy-python-api deployment is still active even though the policy is enforced. This is expected behaviour:

  • Admission control evaluates policies when deployments are created or updated — it does not retroactively terminate existing pods that were deployed before the policy was enabled

  • The SCALE_TO_ZERO_ENFORCEMENT action scales the deployment to 0 replicas as a background enforcement, but if the deployment was created before the policy existed, the timing depends on ACS reassessment cycles

  • What the policy definitively prevents: any new deployment or update that references a vulnerable image is blocked at the Kubernetes API level before a pod is ever scheduled

This is the correct security model. Admission control is a gate, not a kill switch. For existing workloads, use the Violations dashboard to identify and remediate them through your normal change management process.

Production Alert Integration: Configure ACS to send violation alerts to your SIEM, Slack, or PagerDuty using the API or UI:

# Example: Create Slack notifier via API
ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')
curl -sk -X POST \
  -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  -H "Content-Type: application/json" \
  "https://${ACS_ROUTE}/v1/notifiers" \
  -d '{
    "name": "Slack Violations",
    "type": "slack",
    "labelKey": "violations",
    "slack": {
      "webhook": "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
    }
  }'

This closes the loop: build → verify → enforce → alert.

For detailed notifier configuration options, see the ACS API documentation.

Cleanup

Remove the lab resources when you are finished:

oc delete project hummingbird-acs-lab

ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')

for POLICY_NAME in "Zero Fixable CVEs Required" "Image Scan Required"; do
  POLICY_ID=$(curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
    "https://${ACS_ROUTE}/v1/policies" | \
    python3 -c "import json,sys; [print(p['id']) for p in json.load(sys.stdin).get('policies',[]) if p['name']=='${POLICY_NAME}']" 2>/dev/null)
  if [ -n "${POLICY_ID}" ]; then
    curl -sk -X DELETE -H "Authorization: Bearer ${ROX_API_TOKEN}" \
      "https://${ACS_ROUTE}/v1/policies/${POLICY_ID}"
    echo "Deleted policy: ${POLICY_NAME}"
  fi
done

Summary

Congratulations! You have completed Sub-Module 2.5 — The Immutable Fortress.

What You Accomplished


✓ Deployed a legacy Python API built on python:3.11-buster
✓ Scanned with ACS and observed 50-200+ CVEs from unused OS packages
✓ Rewrote the application using a Hummingbird distroless multi-stage build
✓ Learned strict JSON exec form for ENTRYPOINT/CMD in shell-less images
✓ Compared SBOM package counts: ~430 (legacy) vs ~20 (Hummingbird)
✓ Inspected SLSA provenance and cryptographic SBOM attestations
✓ Achieved a zero fixable CVE scan with the Hummingbird image
✓ Authored ACS policies requiring zero fixable CVEs and mandatory image scanning
✓ Proved the guardrails by observing ACS block a non-compliant deployment
✓ Verified the Hummingbird deployment remained unaffected

Key Takeaways

The Vulnerability Treadmill Is Optional:

Legacy base images force you onto a never-ending patch cycle for code your application does not use. Hummingbird distroless images break this cycle by eliminating the unused code entirely.

Defence in Depth with ACS:

  • Build time: Multi-stage builds produce minimal images

  • Scan time: ACS/roxctl validates the zero-CVE posture

  • Deploy time: Admission controllers enforce the posture as policy

  • Runtime: Distroless images resist exploitation (no shell, no package manager, no utilities)

From Reactive to Proactive:

The operational mindset shifts from "how fast can we patch?" to "there is nothing to patch." This is not incremental improvement — it is a fundamental change in how container security is practised.

Troubleshooting

Issue: roxctl cannot reach ACS Central

Confirm that Central answers on the API endpoint and that the route exists:

curl -sk https://$ROX_ENDPOINT/v1/metadata
oc get route -n stackrox

Issue: Admission controller not blocking deployments

Inspect the webhook configuration and the admission-control pod logs for errors:

oc get ValidatingWebhookConfiguration -l app.kubernetes.io/name=stackrox -o yaml

oc logs -n stackrox deploy/admission-control

Issue: Policy not triggering on deployment

Check whether the policy exists and is enabled, then review recent namespace events:

ACS_ROUTE=$(oc get route central -n stackrox -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ACS_ROUTE}/v1/policies" | \
  python3 -c "import json,sys; [print(p['name'], p['disabled']) for p in json.load(sys.stdin).get('policies',[]) if 'zero fixable' in p['name'].lower()]"

oc get events -n hummingbird-acs-lab --sort-by=.metadata.creationTimestamp | tail -20

Issue: Hummingbird image fails to start (exec format error)

Ensure you are using JSON array syntax for ENTRYPOINT/CMD. Shell form will not work in distroless images:

# Correct (exec form)
ENTRYPOINT ["python3", "/app/app.py"]

# Wrong (shell form -- requires /bin/sh)
ENTRYPOINT python3 /app/app.py