Environment details and setup
Presenter note: Review this section before your demo to familiarize yourself with the environment components and access details. Not all components are used in every section — check which section(s) you are presenting.
Demo personas
This demo uses two SSO (Keycloak) personas. Both use the common password {common_password}. Use these credentials for the OpenShift console, GitLab, Dev Spaces, and Developer Hub.
- dev1 — Developer persona ("Dave Developer"). Use this when demonstrating the developer experience: Dev Spaces, writing code, pushing commits, viewing pipeline results.
- pe1 — Platform engineer persona ("Paul Platform Engineer"). Use this when demonstrating platform configuration: the Dev Spaces operator, Service Mesh, production deployments.
When signing in to the OpenShift console, you will first be prompted to select a login type. Use the developers identity provider for the developer and platform engineer personas and enter one of the usernames above with the common password. If you need to log in as the cluster admin, use the built-in htpasswd option with the admin credentials (admin/{openshift_cluster_admin_password}).
|
To avoid constant sign-out/sign-in noise during the demo, keep each persona logged in using a separate browser. For example, use Chrome for the developer persona and Firefox for the platform engineer persona (or use an incognito/private window for the second persona). This lets you switch between personas instantly by switching browser windows. |
Other services (SonarQube, Argo CD, Vault) require their own admin credentials — see the access details below.
Running oc commands
In some places, it’s easiest to use oc to fetch dynamic data like passwords or URLs. You can run oc commands via the OpenShift console’s built-in terminal. To open the console terminal, click the ">_" icon in the top right of the console header. This opens a terminal pane with a command prompt that has oc already configured and logged in as your user.
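Most of the credential lookups later in this guide are one-liners of the form oc get secret … -o jsonpath=… | base64 -d: the JSONPath expression selects a single base64-encoded field from the Secret, and base64 -d decodes it. The decoding half can be tried without any cluster access. A minimal sketch — the encoded string here is a made-up stand-in, not a real credential:

```shell
# Final step of a command like:
#   oc get secret my-secret -n my-ns -o jsonpath='{.data.password}' | base64 -d
# Decode the base64-encoded Secret field.
# 'aHVudGVyMg==' is a made-up stand-in for the value JSONPath would return.
echo -n 'aHVudGVyMg==' | base64 -d   # prints: hunter2
```

If a retrieval command prints garbage, check that the JSONPath matched a field at all — an empty selection piped into base64 -d produces empty output rather than an error.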
Environment namespaces
Throughout the modules you’ll work on applications belonging to a fictitious company named Parasol. The following lists explain each application namespace and why it exists. Notice that we use new namespaces in each module to avoid conflicts and start from a clean slate, or to separate personal developer environments from shared environments.
Section 1: Foundational application platform
- parasol-insurance-dev — Development environment for the Parasol application. Receives deployments from the CI/CD pipeline on every code push.
- parasol-insurance-prod — Production environment. Deployments are controlled through GitOps (Argo CD) with manual approval gates via GitLab merge requests.
- parasol-insurance-build — Contains Tekton pipeline definitions, EventListeners, and build tasks. Pipeline runs execute here, not in the dev/prod namespaces.
Section 2: Secured supply chain
- parasol-insurance-secured-dev — Development environment with the full secure supply chain: ACS vulnerability scanning, image signing (TAS), SBOM generation, and SLSA attestation via Tekton Chains.
- parasol-insurance-secured-prod — Secured production environment. Promotion requires two manual gates: a code merge request approval plus tag creation, and a GitOps merge request approval.
- parasol-insurance-secured-build — Secured build pipeline namespace. Contains the main EventListener and pipelines for both feature branch builds and production tag promotions.
Per-user namespaces
- <username>-devspaces — Auto-provisioned by Dev Spaces for each developer’s cloud IDE workspaces and associated PVCs.
- parasol-insurance-secured-<username> — Created via the Developer Hub "Parasol Insurance Secured Development" golden path template. Each developer gets an isolated namespace with its own pipeline and Argo CD application for their feature branch.
Environment components
Section 1: Foundational application platform
- Red Hat OpenShift Container Platform 4.21 - The foundational platform
- Migration Toolkit for Applications 7.2 - Application modernization and migration analysis (discussed in Module 1, not shown live)
- Red Hat OpenShift Dev Spaces 3.26 - Cloud development environments with AI code assistance
- Red Hat build of Quarkus 3.27 - Cloud-native Java runtime (featured application)
- Red Hat OpenShift Pipelines 1.21 - Tekton-based CI/CD pipelines
- SonarQube - Static analysis and code quality enforcement
- Red Hat OpenShift GitOps 1.19 - Argo CD for GitOps delivery
- Red Hat OpenShift Service Mesh 3.2 - Istio-based traffic management and security
- Kiali - Service mesh observability console (the OpenShift console refers to this as the "Service Mesh" menu)
- Red Hat OpenShift monitoring stack 4.21 - Prometheus, Grafana, and Alertmanager
- HashiCorp Vault - External secrets management
- External Secrets Operator - Syncs secrets from Vault into Kubernetes
Section 2: Advanced developer services
- Red Hat Developer Hub 1.8 - Developer portal with catalog, templates, and self-service
- Developer Lightspeed - AI-assisted development within Developer Hub
- Red Hat Advanced Cluster Security for Kubernetes 4.8 - Vulnerability scanning and policy enforcement
- Red Hat Trusted Artifact Signer 1.3.2 - Image signing and verification
- Tekton Chains 0.24 - Automated image signing, SBOM generation, and SLSA attestation
- Red Hat Trusted Profile Analyzer 1.1 - SBOM management and vulnerability tracking
- Dependency Analytics - IDE plugin for real-time dependency vulnerability scanning
- Apache Kafka - Event streaming for business data (pre-existing in the demo environment; also used in Section 3)
Section 3: Intelligent applications
- LLM endpoint — Hosted on the Red Hat Demo Platform’s LiteMaaS instance (https://litellm-prod-frontend.apps.maas.redhatworkshops.io/models). In production, organizations can use Red Hat AI on-premises or any hosted model provider
- Apache Kafka - Event streaming for business data (pre-existing in the demo environment)
Access details
OpenShift console
- URL: {openshift_cluster_console_url}[{openshift_cluster_console_url}^]
- Developer persona: dev1/{common_password} (use the "developers" login option)
- Platform engineer persona: pe1/{common_password} (use the "developers" login option)
- Cluster admin: admin/{openshift_cluster_admin_password} (use the "htpasswd" login option)
GitLab (Git server)
- URL: https://gitlab-gitlab.{openshift_cluster_ingress_domain}
- Login as dev1/{common_password} (developer persona) or pe1/{common_password} (platform engineer persona)
SonarQube
- URL: https://sonarqube-sonarqube.{openshift_cluster_ingress_domain}
- Login as admin
- Password: retrieve from the sonarqube-admin secret in the sonarqube namespace: oc get secret sonarqube-admin -n sonarqube -o jsonpath='{.data.password}' | base64 -d
Argo CD
There are two Argo CD instances in this environment:
rhdh-gitops (application delivery — use this for the demo)
- URL: https://rhdh-gitops-server-rhdh-gitops.{openshift_cluster_ingress_domain}
- This is where the Parasol application is deployed through GitOps. Demoers will use this instance when showing continuous deployment in Modules 2 and 3.
- Login as admin
- Password: retrieve from the rhdh-gitops-cluster secret in the rhdh-gitops namespace: oc get secret rhdh-gitops-cluster -n rhdh-gitops -o jsonpath='{.data.admin\.password}' | base64 -d
openshift-gitops (bootstrap — for debugging the demo environment only)
- URL: https://openshift-gitops-server-openshift-gitops.{openshift_cluster_ingress_domain}
- This instance manages the environment itself (app-of-apps pattern). Useful for troubleshooting if demo components are not deploying correctly. Not shown during the demo.
- Login as admin
- Password: retrieve from the openshift-gitops-cluster secret in the openshift-gitops namespace: oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d
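Note the backslash in the Argo CD JSONPath expressions: the secret key is literally admin.password, so the dot must be escaped to stop JSONPath from treating it as a path separator. You can reproduce the extraction offline with jq, which uses bracket syntax instead of an escape. The JSON below is a made-up stand-in for the secret, not real data:

```shell
# Stand-in for 'oc get secret ... -o json' output; NOT a real secret.
# The encoded value decodes to "supersecret".
secret='{"data":{"admin.password":"c3VwZXJzZWNyZXQ="}}'

# jq addresses a key containing a dot with bracket syntax, where
# kubectl/oc JSONPath needs the backslash escape 'admin\.password':
echo "$secret" | jq -r '.data["admin.password"]' | base64 -d   # prints: supersecret
```

If you forget the escape in JSONPath, oc returns nothing (it looks for a nested .data.admin.password field), so an empty result from the one-liner usually means the dot was not escaped.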
Service Mesh console (Kiali)
- Accessible through the Service Mesh top-level menu in the OpenShift console
- Log in to the OpenShift console as pe1/{common_password} (platform engineer persona) or as cluster admin
Tekton pipelines
- Accessible through the OpenShift console under Pipelines in the parasol-insurance-build namespace (and their secured counterparts in Section 2)
- Login as dev1/{common_password} or pe1/{common_password}
Red Hat Developer Hub (Sections 2-3)
- URL: https://backstage-developer-hub-rhdh.{openshift_cluster_ingress_domain}
- Login as dev1/{common_password} (developer persona) or pe1/{common_password} (platform engineer persona)
Trusted Profile Analyzer (Section 2)
- URL: https://server-trusted-profile-analyzer.{openshift_cluster_ingress_domain}
- Login with OpenShift credentials
Vault (Section 1)
- Secrets browser: https://vault-vault.{openshift_cluster_ingress_domain}/ui/vault/secrets/kv/list/secrets/
- Login with the root token. Retrieve it from the vault-token secret in the vault namespace: oc get secret vault-token -n vault -o jsonpath='{.data.token}' | base64 -d
Demo application
The demo application is the Parasol Insurance web application, a Quarkus-based microservices application that handles policy management, claims processing, and customer interactions. Parasol migrated this application from a legacy Java EE platform to Quarkus on OpenShift using the Migration Toolkit for Applications (MTA).
In section 1 (foundational), the application has two environments:
- Dev: parasol-insurance-dev namespace — parasol-insurance-parasol-insurance-dev.{openshift_cluster_ingress_domain}
- Prod: parasol-insurance-prod namespace — parasol-insurance-parasol-insurance-prod.{openshift_cluster_ingress_domain}
The Tekton pipeline and build resources are in the parasol-insurance-build namespace.
In Section 2, the application and all its components are registered in the Red Hat Developer Hub catalog as a "system" with linked components (frontend, backend, Kafka, database). They land in the parasol-insurance-secured-dev and parasol-insurance-secured-prod namespaces, with a secure build pipeline that includes ACS, TAS, and Tekton Chains. You don’t necessarily need to visit these namespaces, as RHDH acts as the "single pane of glass" for visualizing topologies, pipelines, Argo CD applications, and more.
In Section 3, the application is extended with an AI-enhanced component that consumes the LLM endpoint and processes business data from Kafka.
Pre-demo checklist
|
Resurrected clusters: If you stopped a running demo cluster and later restarted it, the Istio ambient service mesh may fail to re-establish. The commands will take about 1-2 minutes to complete; wait for all rollouts to finish before proceeding with the demo. |
Section 1
- Log into the OpenShift console at {openshift_cluster_console_url}[{openshift_cluster_console_url}^] as dev1/{common_password}
- Navigate to Workloads → Topology and select the parasol-insurance-dev namespace. Confirm you see the Parasol application topology
- Open Dev Spaces by clicking the "Edit Source Code" link on the Parasol application — confirm the dashboard loads
- Verify the AI code assistant works: once Dev Spaces opens, wait for the Continue plugin to initialize (this may take a few minutes on first launch), then open it and give it a simple "Hello" prompt
- Open SonarQube at https://sonarqube-sonarqube.{openshift_cluster_ingress_domain} and log in as admin (retrieve the password from the sonarqube-admin secret in the sonarqube namespace)
- Open the rhdh-gitops Argo CD at https://rhdh-gitops-server-rhdh-gitops.{openshift_cluster_ingress_domain} and log in as admin (retrieve the password from the rhdh-gitops-cluster secret in the rhdh-gitops namespace, as explained above). It is expected that the -secured Argo CD applications may not be fully healthy at this point — they will resolve once the secured pipeline runs for the first time in Section 2
- Verify the Tekton pipeline is visible in the OpenShift console: navigate to Pipelines in the parasol-insurance-build namespace
- Log into the OpenShift console as pe1/{common_password} and open Service Mesh → Overview to confirm it loads, then check Service Mesh → Traffic Graph to confirm the traffic graph renders (you’ll need to select the {parasol-dev_ns} and kafka namespaces and click "Apply" to see the Parasol services). Turn on Traffic Animation in the Display menu for added visual effect (since nothing is currently happening in the app, it won’t show much)
- Verify Vault is accessible at https://vault-vault.{openshift_cluster_ingress_domain}:
  - Retrieve the root token with oc get secret vault-token -n vault -o jsonpath='{.data.token}' | base64 -d
  - Log in with the root token and verify the Parasol secrets are configured at https://vault-vault.{openshift_cluster_ingress_domain}/ui/vault/secrets/kv/list/secrets/
Section 2 (in addition to Section 1)
- Open Red Hat Developer Hub at https://backstage-developer-hub-rhdh.{openshift_cluster_ingress_domain} and log in as dev1/{common_password} — confirm the catalog loads
- Verify the Parasol system and its components are registered in the catalog
- Confirm software templates are available in the "Create" section (look for the Parasol Insurance Secured Development template)
- Open TPA at https://server-trusted-profile-analyzer.{openshift_cluster_ingress_domain} — the SBOMs tab will be empty at this point, which is expected since no pipelines have run yet. Instead, check the Vulnerabilities tab to confirm vulnerability data has been downloaded, and the Importers tab to confirm the osv-github importer is running (> 0% progress)
- Open GitLab at https://gitlab-gitlab.{openshift_cluster_ingress_domain} and log in as dev1/{common_password} — confirm repository access
Section 3 (in addition to Sections 1-2)
- Verify the LLM endpoint is serving by running the following command in the OpenShift console terminal (if your code assistant in Dev Spaces worked above, you can skip this):
  curl -s -H "Authorization: Bearer {litellm_virtual_key}" {litellm_api_base_url}/models | jq .
  You should see a JSON object with a list of available models if the LLM endpoint is working correctly.
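If you want just the model names rather than the full JSON, the same jq used in the check above can filter them. The response shape here is an assumption based on the OpenAI-style /models endpoint that LiteLLM typically exposes (a data array of objects with an id field), and the model IDs are made up — verify against your endpoint:

```shell
# Made-up stand-in for the /models response; real output will differ.
response='{"data":[{"id":"granite-3-8b-instruct"},{"id":"mistral-7b-instruct"}]}'

# Extract just the model IDs, one per line:
echo "$response" | jq -r '.data[].id'
# prints:
#   granite-3-8b-instruct
#   mistral-7b-instruct
```

The -r flag prints raw strings instead of JSON-quoted values, which makes the output easy to eyeball or pipe into grep.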
|
Presenter tip: Open all the URLs in separate browser tabs before starting. This avoids waiting for pages to load during the live demo. Retrieve all passwords (SonarQube, Argo CD, Vault) ahead of time and keep them in a text file. For Section 2, consider pre-loading the pipeline view. For Section 3, verify the LLM endpoint is responding before starting. |