Lab setup and introduction

Module goals

  • Access all of the applications in the lab environment

  • Deploy our default insecure applications

  • Ensure OpenShift Pipelines is properly set up

Accessing the workshop

Let’s start by ensuring you have access to all the necessary resources to complete this lab.

Always open URLs in a new tab. For example, on a Mac, Cmd+click the link.

Access the Red Hat® OpenShift Container Platform (OCP) web console

First, make sure you can access the Red Hat® OCP web console.

Procedure

  1. Log into the OCP console at {web_console_url}

  2. Click the rhsso option

If the variables on the screen are unavailable, please inform your workshop administrator.
OpenShift console
  3. Enter the OCP credentials

User:

{openshift_admin_user}

Password:

{openshift_admin_password}

OpenShift console login

Access the Red Hat® Advanced Cluster Security (RHACS) web console

Red Hat Advanced Cluster Security for Kubernetes is a Kubernetes-native security platform that equips you to build, deploy, and run cloud-native applications with more security. The solution helps protect containerized Kubernetes workloads in all major clouds and hybrid platforms, including Red Hat OpenShift, Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).


Let’s ensure that you have access to the RHACS web console.

Procedure

  1. Log into the RHACS console at {acs_route}

  2. Click the "Advanced" button in your browser

RHACS login not private
  1. Click "Proceed to {acs_route}"

RHACS login proceed
  4. Enter the RHACS credentials

RHACS Console Username:

{acs_portal_username}

RHACS Console Password:

{acs_portal_password}

RHACS console

OpenShift admin access verification

OpenShift admin access verification involves ensuring that users have the appropriate permissions and roles assigned to them for managing the OpenShift cluster. This can be done by checking the user roles and bindings within the cluster. You’ll be verifying your permissions using the oc command-line tool.
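
For example, a minimal sketch of that kind of check using standard oc commands (output will vary by user and cluster):

# Can the current user perform every action on every resource, cluster-wide?
oc auth can-i '*' '*' --all-namespaces

# List the cluster role bindings that grant those roles
oc get clusterrolebindings | head -n 10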

There are TWO clusters we need to verify access to.

Verify access to the EKS cluster

oc config use-context eks-admin
[lab-user@bastion ~]$ oc config use-context eks-admin
Switched to context "eks-admin".
oc whoami
oc get nodes -A
[lab-user@bastion ~]$ oc whoami
oc get nodes -A
Error from server (NotFound): the server could not find the requested resource (get users.user.openshift.io ~)
NAME                                            STATUS   ROLES    AGE    VERSION
ip-<IP_ADDRESS>.us-east-2.compute.internal   Ready    <none>   163m   v1.28.8-eks-ae9a62a
ip-<IP_ADDRESS>.us-east-2.compute.internal   Ready    <none>   163m   v1.28.8-eks-ae9a62a
ip-<IP_ADDRESS>.us-east-2.compute.internal   Ready    <none>   163m   v1.28.8-eks-ae9a62a
The oc whoami command fails because users.user.openshift.io is an OpenShift-specific resource that does not exist on EKS. However, oc is built on kubectl, so standard Kubernetes commands still work, and you can see the EKS nodes and their information.
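
If you ever need to double-check which cluster a context points to, the following standard commands work against both clusters (a quick sketch; output varies by environment):

oc config get-contexts                            # lists all contexts; the active one is marked
oc api-resources --api-group=route.openshift.io   # empty on EKS, populated on OpenShift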

Verify access to the OpenShift cluster

Next, let’s switch to the OpenShift cluster and do our work (for now) there.

Procedure

  1. Run the following command

oc config use-context admin
[lab-user@bastion ~]$ oc config use-context admin
Switched to context "admin".
  2. Verify access to the OpenShift cluster and the available nodes

oc whoami
oc get nodes -A
[lab-user@bastion ~]$ oc whoami
oc get nodes -A
system:admin
NAME                                        STATUS   ROLES                  AGE    VERSION
<OCP_IP>.us-east-2.compute.internal    Ready    worker                 4h1m   v1.27.11+749fe1d
<OCP_IP>.us-east-2.compute.internal    Ready    control-plane,master   4h7m   v1.27.11+749fe1d
<OCP_IP>.us-east-2.compute.internal    Ready    control-plane,master   4h7m   v1.27.11+749fe1d
<OCP_IP>.us-east-2.compute.internal    Ready    worker                 4h2m   v1.27.11+749fe1d
<OCP_IP>.us-east-2.compute.internal    Ready    control-plane,master   4h7m   v1.27.11+749fe1d
<OCP_IP>.us-east-2.compute.internal    Ready    worker                 4h2m   v1.27.11+749fe1d

You will now see the OpenShift role (system:admin) from the oc command, as you are currently working on the OpenShift cluster.

We will be working with the OpenShift cluster in all modules unless otherwise specified.

roxctl CLI verification

Next, verify that we have access to the RHACS Central Service.

Procedure

  1. Run the following command.

This command uses variables saved in the ~/.bashrc file to authenticate with the RHACS Central Service.
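
If you would like to confirm those variables first, here is a quick optional check (a minimal sketch; ROX_API_TOKEN is the variable roxctl conventionally reads for authentication, but the exact names this lab saves beyond ROX_CENTRAL_ADDRESS are an assumption):

grep ROX ~/.bashrc          # show the RHACS-related variables saved by the lab
echo $ROX_CENTRAL_ADDRESS   # the Central endpoint the next command targets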

roxctl --insecure-skip-tls-verify -e "$ROX_CENTRAL_ADDRESS:443" central whoami
UserID:
	auth-token:718744a9-9548-488b-a8b9-07b2c59ea5e6
User name:
	anonymous bearer token "pipelines-ci-token" with roles [Admin] (jti: 718744a9-9548-488b-a8b9-07b2c59ea5e6, expires: 2025-04-03T15:15:06Z)
Roles:
	- Admin
Access:
	rw Access
	rw Administration
	rw Alert
	rw CVE
	rw Cluster
	rw Compliance
	rw Deployment
	rw DeploymentExtension
	rw Detection
	rw Image
	rw Integration
	rw K8sRole
	rw K8sRoleBinding
	rw K8sSubject
	rw Namespace
	rw NetworkGraph
	rw NetworkPolicy
	rw Node
	rw Secret
	rw ServiceAccount
	rw VulnerabilityManagementApprovals
	rw VulnerabilityManagementRequests
	rw WatchedImage
	rw WorkflowAdministration
This output shows that you have unrestricted access to the RHACS product. These permissions can be seen in the RHACS Access Control tab, which we will review later.
RHACS access control

Set up our workshop applications

Great job!

You now have access to the core apps. Next, you’ll deploy insecure apps into the OpenShift cluster, including Quay. These apps will be the main use cases we look at during the workshop.

Build a container image

In this section, we will download the "Java app", give it a new tag, and push the image to Quay. Later, we’ll deploy the image to the OpenShift cluster and use it in future modules.

Let’s export a few variables to make things easier. These variables will stay in the .bashrc file so they’re saved in case you need to refresh the terminal.

With the variables saved in the ~/.bashrc file you will not have to declare them again in the future.
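
Note that variables added to ~/.bashrc are only loaded automatically by new shells. In an existing session, you can reload them with:

source ~/.bashrc   # reload the saved variables into the current shell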

Procedure

  1. Run the following command.

echo export QUAY_USER={quay_admin_username} >> ~/.bashrc
QUAY_USER={quay_admin_username}
  2. Set the Quay URL variable

echo export QUAY_URL=$(oc -n quay-enterprise get route quay-quay -o jsonpath='{.spec.host}') >> ~/.bashrc
QUAY_URL=$(oc -n quay-enterprise get route quay-quay -o jsonpath='{.spec.host}')
Verify that the variables are correct
echo $QUAY_USER
echo $QUAY_URL
[lab-user@bastion ~]$ echo $QUAY_USER
echo $QUAY_URL
quayadmin
quay-bq65l.apps.cluster-bq65l.bq65l.sandbox209.opentlc.com
  3. Using the terminal on the bastion host, log in to Quay using the Podman CLI as shown below:

podman login $QUAY_URL
Use the Quay admin credentials to sign in.

Quay Console Username:

{quay_admin_username}

Quay Console Password:

{quay_admin_password}

Username: quayadmin
Password:
Login Succeeded!
  4. Pull the Java container image with the following CLI command:

podman pull quay.io/jechoisec/ctf-web-to-system-01
Trying to pull quay.io/jechoisec/ctf-web-to-system-01:latest...
Getting image source signatures
Copying blob 37aaf24cf781 done
...
...
Copying config 1cbb2b7908 done
Writing manifest to image destination
1cbb2b79086961e34d06f301b2fa15d2a7e359e49cfe67c06b6227f6f0005149
  5. Now that you have a local copy of the Java container image, tag it before pushing it to Quay.

podman tag quay.io/jechoisec/ctf-web-to-system-01 $QUAY_URL/$QUAY_USER/ctf-web-to-system:1.0
Because you have admin access, Quay will automatically create a private repository to store the Java application. To deploy the app, you’ll need to make the repository public so the image can be pulled without credentials.
  6. The last step is to push the image to Quay.

podman push $QUAY_URL/$QUAY_USER/ctf-web-to-system:1.0 --remove-signatures
Copying blob 3113fb957b33 done
...
...
Copying config 1cbb2b7908 done
Writing manifest to image destination

Perfect!

Red Hat Quay

Red Hat Quay is an enterprise-quality registry for building, securing, and serving container images. It provides secure storage, distribution, and governance of containers and cloud-native artifacts on any infrastructure.

To get started, make sure that you are logged in to Red Hat Quay and have access to the newly created quayadmin/ctf-web-to-system repository.

Red Hat® Quay web console

In the next steps, you’ll go through a basic overview of Quay, review the Java app, and make the container image repository public. This way, you can successfully deploy the container image into the OpenShift cluster.

Procedure

  1. Log into the Quay console at {quay_console_url}

  2. Enter the Quay credentials.

Quay Console Username:

{quay_admin_username}

Quay Console Password:

{quay_admin_password}

quay login
quay console

Browse the registry

So far in the setup module, we downloaded, tagged, and pushed an insecure Java application called ctf-web-to-system. Now it’s time to deploy it to the OpenShift cluster. To do this, we will need to make the repository we created public.

Procedure

Let’s take a look at our application in the registry.

  1. First, ensure that the ctf-web-to-system repository is available. If there are no repositories available, please redo the Build a container image section to ensure that the image was correctly pushed to the repository.

quay login
  2. Next, click on the ctf-web-to-system repository.

quay repo

On the left-hand side of the window, you should see the following icons, labeled in order from top to bottom:

  • Information

  • Tags

  • Tag History

  • Usage Logs

  • Settings

quay sidebar

The Information tab shows you information such as:

  • Podman and Docker commands

  • Repository activity

  • The repository description

quay information
  3. Click on the Tags icon.

quay tags

This tab displays all of the images and tags that have been uploaded, providing information such as fixable vulnerabilities and image size, and allowing bulk changes to images based on their security posture.

quay tags security
  4. Click on the Tag History icon. This tab simply displays the container image’s history over time.

quay tags history
  5. Click on the Usage Logs icon.

This tab displays usage over time, along with details about who pushed images to the repository and how.

quay usage logs
You can see that you, the "quayadmin", pushed an image tagged 1.0 to the repository today.
  6. Lastly, click on the Settings icon.

In this tab you can add/remove users and update permissions, alter the privacy of the repository, and even schedule alerts based on found vulnerabilities.

quay settings
  7. Make your repository public before deploying our application in the next step by clicking the Make Public button under Repository Visibility.

quay make public
Make sure to make the repository public. Otherwise we will not be able to deploy the application in the next step.
  8. Click OK.

quay make public ok

You should be able to see the repository icon lose its red lock symbol now that it is public. This will allow you to deploy the image directly from the local Quay repository.
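
As an aside, the same visibility change can be made programmatically through Quay’s REST API. This is a hypothetical sketch, assuming you have created an OAuth access token in the Quay UI and exported it as QUAY_TOKEN (not part of this lab’s setup):

# Hypothetical: requires a Quay OAuth token with repository admin scope
curl -k -X POST \
  -H "Authorization: Bearer $QUAY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"visibility": "public"}' \
  "https://$QUAY_URL/api/v1/repository/$QUAY_USER/ctf-web-to-system/changevisibility"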

Vulnerability Scanning with Quay

Red Hat Quay can also help secure our environments by performing a security scan on any image added to the registry and advising which vulnerabilities are potentially fixable.

Use the following procedure to check the security scan results for the Java container image you uploaded.

Procedure

  1. Click on the Tags icon on the left side of the screen, as before.

You may need to click the checkbox near the image you would like more information on, but the Security Scan column should populate.
quay tags
  2. By default, the security scan color-codes the vulnerabilities; you can hover over the security scan for more information.

The Java container image we are using in this lab shows 21 vulnerabilities, with 5 critical vulnerabilities. These numbers will change over time and will differ between container scanners for a variety of reasons, such as reporting mechanisms, vulnerability feeds, and operating system support.
quay scan hover
  3. Click on the list of vulnerabilities to see a more detailed view.

quay security detailed
  4. Click on a vulnerable package in the left menu to get more information about the vulnerability and see what you have to do to fix the issue.

quay vuln detailed
Toggling between fixable and unfixable vulnerabilities is an excellent way for developers to understand what is within their responsibility to fix. For example, since we are using an older version of Java, many fixes are available for these common issues.
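
If you would rather pull these scan results programmatically, Quay also exposes them through its API. A hedged sketch, again assuming a Quay OAuth token exported as QUAY_TOKEN (not created in this lab):

# Look up the image digest locally, then query Quay's security endpoint.
# Note: the local digest may differ from the one Quay stores after a push
# that strips signatures; adjust if the lookup returns a 404.
DIGEST=$(podman inspect --format '{{ .Digest }}' $QUAY_URL/$QUAY_USER/ctf-web-to-system:1.0)
curl -k -H "Authorization: Bearer $QUAY_TOKEN" \
  "https://$QUAY_URL/api/v1/repository/$QUAY_USER/ctf-web-to-system/manifest/$DIGEST/security?vulnerabilities=true"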

Congratulations, you now know how to examine images in your registry for potential vulnerabilities before deploying into your environment.

Deploy the workshop applications

In the final part of this module, you’ll deploy several insecure applications to the OpenShift cluster. You’ll scan a few of these containers using the roxctl CLI to understand what you’re deploying and what to expect when you dive into RHACS.

Make sure the variables are set before running the following commands. If not, go back to the Quay section to redo the previous commands.

Run the following in the terminal and ensure you get the correct outputs.

echo $QUAY_USER
echo $QUAY_URL

Our insecure demo applications come from a variety of public GitHub repositories and sources, including the Java application that you just pushed to Quay. Let’s deploy them into our cluster.

Procedure

  1. Run the following commands in the terminal, one after the other.

This command downloads a bunch of Kubernetes manifests to deploy to OpenShift.

git clone https://github.com/mfosterrox/demo-apps.git demo-apps
export TUTORIAL_HOME="$(pwd)/demo-apps"

This command updates the ctf-w2s.yml file with your local Quay repository information. The updated Quay URL means OpenShift will pull from your local Quay instance instead of the previous Quay.io location.

sed -i "s|CHANGEME|$QUAY_URL/$QUAY_USER/ctf-web-to-system:1.0|g" $TUTORIAL_HOME/kubernetes-manifests/ctf-web-to-system/ctf-w2s.yml

This command applies the manifests to OpenShift. It includes both OpenShift Pipelines manifests and typical application manifests.

oc apply -f $TUTORIAL_HOME/kubernetes-manifests/ --recursive
oc apply -f $TUTORIAL_HOME/openshift-pipelines/ --recursive
You should see warnings such as: Warning: would violate PodSecurity "restricted:latest": unrestricted capabilities (container "Java" must set securityContext.capabilities.drop=["ALL"]). This is because we are deploying flawed container configurations and vulnerable container applications into the OpenShift cluster.
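
If you are curious what one of these flawed configurations looks like, you can read a deployment’s container securityContext directly once the manifests are applied (a read-only sketch; the ctf-web-to-system deployment lands in the default namespace, as the verification step below shows):

oc -n default get deployment ctf-web-to-system \
  -o jsonpath='{.spec.template.spec.containers[0].securityContext}{"\n"}'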

The following command triggers a vulnerability scan by RHACS, updates the vulnerability results, and filters them into a table showing only the critical vulnerabilities.

roxctl --insecure-skip-tls-verify -e "$ROX_CENTRAL_ADDRESS:443" image scan --image=$QUAY_URL/$QUAY_USER/ctf-web-to-system:1.0 --force -o table --severity=CRITICAL
The output format can be configured using flags. You can choose different outputs (table, CSV, JSON, and SARIF) and filter for specific severities.
[lab-user@bastion ~]$ roxctl --insecure-skip-tls-verify -e "$ROX_CENTRAL_ADDRESS:443" image scan --image=$QUAY_URL/$QUAY_USER/ctf-web-to-system:1.0 --force -o table --severity=CRITICAL
Scan results for image: quay-8h2gf.apps.cluster-8h2gf.sandbox182.opentlc.com/quayadmin/ctf-web-to-system:1.0
(TOTAL-COMPONENTS: 9, TOTAL-VULNERABILITIES: 17, LOW: 0, MODERATE: 0, IMPORTANT: 0, CRITICAL: 17)

-------------------------------------------------------------------------------------------------------------------------------------------------+
|  COMPONENT  | VERSION |       CVE        | SEVERITY |                                      LINK                                      | FIXED VERSION |
-------------------------------------------------------------------------------------------------------------------------------------------------+
|     ejs     |  2.7.4  |  CVE-2022-29078  | CRITICAL |                https://nvd.nist.gov/vuln/detail/CVE-2022-29078                 |     3.1.7     |
...
WARN:   A total of 17 unique vulnerabilities were found in 9 components
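
For example, a sketch of capturing machine-readable output for scripting (the JSON schema is not documented in this lab, so treat the jq path as an assumption to adapt, and jq itself may need to be installed):

roxctl --insecure-skip-tls-verify -e "$ROX_CENTRAL_ADDRESS:443" \
  image scan --image=$QUAY_URL/$QUAY_USER/ctf-web-to-system:1.0 --force -o json > scan.json
jq '.result.summary' scan.json   # illustrative path; adjust to the actual structure
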
  2. For the last verification step, run the following command to ensure that the applications are up and running.

oc get deployments -l demo=roadshow -A
[lab-user@bastion ~]$ oc get deployments -l demo=roadshow -A
NAMESPACE    NAME                READY   UP-TO-DATE   AVAILABLE   AGE
backend      api-server          1/1     1            1           4m54s
default      api-server          1/1     1            1           4m19s
default      ctf-web-to-system   1/1     1            1           5m
default      frontend            1/1     1            1           4m13s
default      juice-shop          1/1     1            1           4m57s
default      rce                 1/1     1            1           4m16s
default      reporting           1/1     1            1           4m22s
frontend     asset-cache         1/1     1            1           4m47s
medical      reporting           1/1     1            1           4m37s
operations   jump-host           1/1     1            1           4m30s
payments     visa-processor      1/1     1            1           4m26s
Please ensure the demo applications are deployed to your cluster before moving on to the next module, especially the ctf-web-to-system application. A quick way to wait on it is sketched below.
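
A minimal sketch for waiting until that deployment is fully rolled out (names match the table above):

oc -n default rollout status deployment/ctf-web-to-system --timeout=120s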

A task to complete on your own.

Here is your mission

giphy

Should you choose to accept it

Run roxctl against a few of your favorite container images. Try pulling from Docker Hub or quay.io, and try modifying the command below to include your image of choice.

For example:

[lab-user@bastion ~]$ MYIMAGE=docker.io/ubuntu
[lab-user@bastion ~]$ roxctl --insecure-skip-tls-verify -e "$ROX_CENTRAL_ADDRESS:443" image scan --image=$MYIMAGE --force -o table --severity=CRITICAL
Scan results for image: docker.io/ubuntu
(TOTAL-COMPONENTS: 0, TOTAL-VULNERABILITIES: 0, LOW: 0, MODERATE: 0, IMPORTANT: 0, CRITICAL: 0)

--------------------------------------------------------+
| COMPONENT | VERSION | CVE | SEVERITY | LINK | FIXED VERSION |
--------------------------------------------------------+
--------------------------------------------------------+

This shows that the latest version of Ubuntu from docker.io has 0 critical vulnerabilities.

Your turn

MYIMAGE=
roxctl --insecure-skip-tls-verify -e "$ROX_CENTRAL_ADDRESS:443" image scan --image $MYIMAGE --force -o table --severity=CRITICAL

Summary

giphy

Beautiful!

In this module, you got access to all of the lab UIs and interfaces, including the Showroom lab environment (where you are reading this sentence). You downloaded and deployed some very insecure applications and set up a lab full of examples to dive into.

On to Visibility and Navigation!