Module 1: Exploring the platform

You have just joined the Parasol Insurance development team. Your manager has given you access to the company’s OpenShift-based application platform and asked you to get up to speed on the tools and workflows. Before you start writing code, you need to understand what has been set up for you: your namespaces, the running application, the CI/CD pipeline, and where your source code lives.

In this module you will log into OpenShift, explore the environment that has been provisioned for you, and get familiar with the tools you will use throughout the lab.

Learning objectives

By the end of this module, you will be able to:

  • Navigate the OpenShift web console and switch between Administrator and Developer perspectives

  • Identify your build, dev, and prod namespaces and understand their purpose

  • Access the running Parasol Insurance application and observe its current behavior

  • Explain how ExternalSecrets sync credentials from Vault without manual management

  • Locate your CI/CD pipelines and understand their stages

  • Find your forked source code and deployment manifests in GitLab

  • Use Argo CD to observe and trigger GitOps-driven deployments

Exercise 1: Log into OpenShift and find your namespaces

Your first task is to log into the OpenShift web console and verify that your environment is ready.

  1. Open the OpenShift Console using the tab on the right side of this page, or navigate to {openshift_console_url}

  2. Click rhsso as the login provider

  3. Enter your credentials:

    • Username: {user_name}

    • Password: {user_password}

  4. Navigate to Home > Projects in the OpenShift Console

    The various web consoles used throughout this lab have menus on the left. These menus are expanded in the screenshots shown in this lab guide. Depending on the size of your screen, these menus might be collapsed. Use the icon in the top-left of each UI to expand them; for example, the OpenShift Console has an icon composed of three horizontal lines that opens and closes the side menu.
  5. You should see three projects:

    • {user_name}-build — where your CI/CD pipelines run

    • {user_name}-dev — your development environment

    • {user_name}-prod — your production environment

Your list should resemble the following screenshot:

User namespaces as seen by a lab user
Figure 1. Your three namespaces in the OpenShift console

Your access is governed by a custom tenant-user ClusterRole, i.e. Kubernetes RBAC. This gives you admin-like permissions in your namespaces with one important restriction: you cannot view Secrets or read their values. This is by design. Secrets such as database credentials, LLM API keys, and Kafka connection strings are stored in HashiCorp Vault, and the platform team makes them available through ExternalSecrets.
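For intuition, a ClusterRole with this restriction could look roughly like the following sketch (the real tenant-user role is maintained by the platform team; all rules shown here are illustrative):

```yaml
# Hypothetical tenant-user ClusterRole sketch: broad access to common
# namespaced resources, with Secrets deliberately left out of the rules.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tenant-user
rules:
  - apiGroups: ["", "apps", "route.openshift.io"]
    resources: ["pods", "services", "deployments", "routes"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # No rule grants any verb on "secrets", so Secret values stay hidden
  # even though you have admin-like access to everything else.
```

Because Kubernetes RBAC is additive, omitting secrets from every rule is enough to deny access to them.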

You’re using a set of user-specific namespaces created for this lab experience. In reality, an application like Parasol Insurance would have namespaces similar to parasol-build, parasol-dev, and parasol-prod that multiple developers can view.

Verify

Confirm the following before moving on:

  • You are logged into the OpenShift web console

  • You can see all three projects: {user_name}-build, {user_name}-dev, and {user_name}-prod

Exercise 2: Explore the running application

The Parasol Insurance application is already deployed in your dev namespace. Let’s take a look at what it does today.

  1. Click on the {user_name}-dev project to open it

  2. Switch to the Topology view using the Workloads > Topology link on the left

    Topology view for the dev namespace
    Figure 2. Topology view of the dev namespace
  3. Click the small route icon (the arrow in the top-right corner of the parasol-insurance application circle) to open the Parasol Insurance application in a new tab

  4. In the Parasol application, navigate to the Inbox page

    The Parasol Insurance application’s Inbox page
    Figure 3. Parasol Insurance inbox with keyword-based classification
  5. Observe the emails in the inbox. Notice that some emails are classified as REVIEW REQUIRED. This is because the current implementation uses simple keyword matching. Any email the keywords cannot confidently classify gets flagged for manual review. You will fix this in Module 3.
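To make the idea concrete, keyword-based classification can be sketched like this (a hypothetical illustration in plain Java; the actual implementation in the source repository differs, and the keywords and department names here are assumptions):

```java
// Hypothetical sketch of keyword-based email classification.
// Keywords and department names are illustrative, not the lab's real values.
import java.util.Locale;
import java.util.Map;

public class KeywordClassifier {
    // Map each known keyword to a destination department
    private static final Map<String, String> KEYWORDS = Map.of(
            "claim", "CLAIMS",
            "policy", "POLICY_SERVICES",
            "invoice", "BILLING");

    public static String classify(String body) {
        String text = body.toLowerCase(Locale.ROOT);
        for (Map.Entry<String, String> entry : KEYWORDS.entrySet()) {
            if (text.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        // No keyword matched confidently: flag for manual review
        return "REVIEW REQUIRED";
    }

    public static void main(String[] args) {
        System.out.println(classify("My claim was denied"));            // CLAIMS
        System.out.println(classify("Hello, do you have a moment?"));   // REVIEW REQUIRED
    }
}
```

Anything the keyword list cannot match falls through to REVIEW REQUIRED, which is exactly the behavior you observed in the inbox.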

You’ve accessed the OpenShift Console, found the Topology view, and learned how to access an application’s public endpoint. OpenShift provides a Route resource to define and manage public endpoints.
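A minimal Route manifest looks roughly like the following (the names and port are illustrative, not the exact manifest used in this lab):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: parasol-insurance          # illustrative name
spec:
  to:
    kind: Service
    name: parasol-insurance        # the Service that receives the traffic
  port:
    targetPort: http               # which Service port to expose
  tls:
    termination: edge              # terminate TLS at the OpenShift router
```

OpenShift assigns the Route a public hostname under the cluster's ingress domain, which is the URL you just opened via the route icon.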

Verify

Confirm the following:

  • The Topology view shows the Parasol Insurance application components in the {user_name}-dev namespace

  • You can open the Parasol Insurance application via the route icon

  • The Inbox page displays emails, and some are flagged as REVIEW REQUIRED

Exercise 3: Examine ExternalSecrets

The Parasol Insurance application connects to Apache Kafka and consumes messages from an email Topic, routing each email to the appropriate department. This requires credentials to connect to the Kafka cluster. The platform team at Parasol has standardized on HashiCorp Vault for secret management, and the ExternalSecrets Operator connects to the Vault instance and distributes those secrets to applications.

  1. Navigate to Home > Search

  2. In the Resources dropdown, type ExternalSecret and select it

  3. Make sure you are viewing the {user_name}-dev namespace

  4. You should see two ExternalSecrets:

    • litellm-credentials — LLM API endpoint and key, synced from Vault

    • kafka-credentials — Kafka connection details, synced from Vault

The ExternalSecrets for Parasol Insurance’s dev namespace
Figure 4. ExternalSecrets in the dev namespace

These ExternalSecrets automatically create Kubernetes Secrets in your namespace by pulling values from HashiCorp Vault. The platform team configured this so that sensitive credentials never need to be committed to source code or manually managed.
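In outline, an ExternalSecret such as kafka-credentials could look like this (the store name, Vault path, and keys are illustrative; see the es- files in the manifests repository for the real definitions):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: kafka-credentials
spec:
  refreshInterval: 1h                 # how often to re-sync from Vault
  secretStoreRef:
    name: vault-secret-store          # illustrative store name
    kind: ClusterSecretStore
  target:
    name: kafka-credentials           # the Kubernetes Secret to create
  data:
    - secretKey: password             # key in the created Secret
      remoteRef:
        key: parasol/kafka            # illustrative Vault path
        property: password
```

The operator reads the value at the remoteRef path in Vault and writes it into the target Secret, re-syncing on the refresh interval so rotated credentials propagate automatically.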

  1. Click on one of the ExternalSecrets and note the Status showing it is synced. This means that the kafka-credentials Secret requested by the ExternalSecret of the same name will be available for the Parasol Insurance application to consume and use to connect to the Apache Kafka cluster.

If you’re wondering how these resources are created, how they’re managed, and where they come from, that will be answered soon. Right now, we’re just taking a tour and getting familiar with OpenShift and the application you’ll work on during this lab.

Verify

Confirm the following:

  • Two ExternalSecrets are visible in the {user_name}-dev namespace: litellm-credentials and kafka-credentials

  • The Status of each ExternalSecret shows it is synced

Exercise 4: Review the CI/CD pipelines

Your build namespace contains Tekton pipelines that build, test, scan, and deploy your application. An initial pipeline run has already completed to pre-build your application image.

  1. Switch to the {user_name}-build namespace using the project dropdown at the top

  2. Click Pipelines > Pipelines in the left sidebar. You should see push and tag-promote pipelines.

    OpenShift Pipelines View
    Figure 5. Tekton pipelines in the build namespace
  3. A push PipelineRun has already completed. This was triggered automatically when your environment was provisioned to pre-build and pre-deploy the Parasol Insurance application. You viewed this in the {user_name}-dev environment earlier.

  4. Click on the link in the Last run column to view the PipelineRun details.

    Completed Push PipelineRun
    Figure 6. Completed push PipelineRun with all stages passed
  5. Examine the pipeline stages, known as Tasks:

    • clone — clones your source code from GitLab

    • maven-build — compiles the Quarkus application and runs tests

    • sonar-scan — runs SonarQube static analysis for code quality

    • build-and-push — builds the container image and pushes it to Quay

    • rollout-restart — triggers a rolling restart of the deployment in the dev namespace
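In Tekton terms, these stages are tasks chained with runAfter inside a Pipeline. An abbreviated sketch (task references are illustrative, not the lab's exact definition) looks like:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: push
spec:
  tasks:
    - name: clone
      taskRef:
        name: git-clone              # illustrative task references throughout
    - name: maven-build
      runAfter: [clone]              # runs only after clone succeeds
      taskRef:
        name: maven
    - name: sonar-scan
      runAfter: [maven-build]
      taskRef:
        name: sonarqube-scanner
    - name: build-and-push
      runAfter: [sonar-scan]
      taskRef:
        name: buildah
    - name: rollout-restart
      runAfter: [build-and-push]
      taskRef:
        name: openshift-client
```

The runAfter ordering is what produces the left-to-right chain of green stages you see in the PipelineRun view.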

At this point you’ve observed that OpenShift provides multiple application-platform capabilities, including the ability to run CI/CD natively on the platform itself.

As an optional exercise, return to the Home > Search page and view the ExternalSecrets defined in your {user_name}-build namespace. You’ll notice that multiple Secrets required to securely access code, push/pull images, and interact with SonarQube are defined. Platform administrators can rotate these credentials in Vault at any time, and the ExternalSecrets will automatically synchronize and make sure the latest values are available to your applications and CI/CD pipelines.

Verify

Confirm the following:

  • Two pipelines are visible in the {user_name}-build namespace: push and tag-promote

  • A completed PipelineRun is visible for the push pipeline

  • You can click into the PipelineRun and see 5 stages: clone, maven-build, sonar-scan, build-and-push, rollout-restart

  • All stages show a successful (green) status

Exercise 5: Find your source code in GitLab

Your source code is hosted in GitLab. Each lab participant gets their own fork of the Parasol Insurance repositories.

  1. Open the GitLab tab on the right side of this page, or navigate to https://gitlab-gitlab.{openshift_cluster_ingress_domain}/

  2. Log in with your credentials:

    • Username: {user_name}

    • Password: {user_password}

  3. After logging in, you should see two repositories forked into your Projects list:

    • parasol-insurance — the Parasol Insurance application’s Quarkus source code

    • parasol-insurance-manifests — the build and deployment manifests: Helm charts containing the Kubernetes CRs that OpenShift GitOps deploys and manages

Review the deployment manifests

Take a look at the manifests first. These define the build and deployment environments, using an Infrastructure as Code (IaC) approach.

  1. Click the parasol-insurance-manifests repository. A directory listing is shown.

  2. Open the app/templates/ folder. Here you can see the ExternalSecrets (YAML files with es- prefix) and other CRs.

    GitLab Parasol Manifests
    Figure 7. Deployment manifests in GitLab
  3. Click deployment.yaml and locate the DEV_KAFKA_PASSWORD. Notice that it references kafka-credentials defined by the ExternalSecret.
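The reference you just located typically takes the form of a secretKeyRef (the key name inside the Secret is an assumption here):

```yaml
# Sketch of a container env entry in deployment.yaml; the Secret named
# here is the one created by the kafka-credentials ExternalSecret.
env:
  - name: DEV_KAFKA_PASSWORD
    valueFrom:
      secretKeyRef:
        name: kafka-credentials
        key: password              # illustrative key name
```

The manifest only names the Secret and key; the actual value is injected into the container at runtime and never appears in Git.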

The manifests are deployed to your namespaces on OpenShift using OpenShift GitOps. You’ll see this in-depth in the next exercise.

Review the application source code

  1. Click the GitLab icon in the top-left of the GitLab UI to return to your Projects list.

  2. Click into the parasol-insurance repository and explore the project structure. This is a standard Quarkus application with src/main/java for source code and src/main/resources for configuration.

  3. Open src/main/resources/application.properties and take note of the DEV_KAFKA_PASSWORD environment variable. Recall that this is set by the Deployment in deployment.yaml, which consumes the Secret created by the es-kafka.yaml ExternalSecret. At no point is the value of the credential stored in plaintext.

  4. You’ll also notice that mp.messaging.incoming.emails-in.topic in application.properties defines the Kafka Topic that the application consumes emails from.

    Parasol Insurance application.properties file in GitLab
    Figure 8. Application properties showing Kafka and credential configuration
  5. Take a look at the code in src/main/java/com/parasol/EmailRouter.java and you’ll see the consume method references the emails-in message channel that was defined in application.properties.
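Putting steps 3 to 5 together, the relevant application.properties entries look roughly like this (the topic value, username variable, and SASL property are assumptions; only the property names quoted in the steps above come from the lab):

```properties
# Channel "emails-in" maps to a Kafka topic (topic value is illustrative)
mp.messaging.incoming.emails-in.topic=emails

# The credential is resolved from the DEV_KAFKA_PASSWORD environment
# variable at runtime; the property name and JAAS module are illustrative.
mp.messaging.incoming.emails-in.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="${DEV_KAFKA_USER}" password="${DEV_KAFKA_PASSWORD}";
```

MicroProfile Config resolves the ${...} placeholders from environment variables at startup, so the repository only ever contains references, never values.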

Now you know how the application source code and the build and deployment manifests come together to form a complete application deployment on OpenShift. The application is designed with the Twelve-Factor App principles in mind.

Verify

Confirm the following:

  • You can log into GitLab with your credentials

  • Two repositories are visible in your GitLab namespace: parasol-insurance and parasol-insurance-manifests

  • The deployment.yaml in the manifests repo references the kafka-credentials ExternalSecret

  • The application.properties in the source repo references the same credential via environment variables

Exercise 6: Explore OpenShift GitOps

The manifests you explored in the parasol-insurance-manifests repository are used to manage all aspects of the application build and deployment on OpenShift via OpenShift GitOps (Argo CD). Argo CD watches your manifest repository for changes and continuously reconciles the state of the manifests on the cluster with those in the repository. This prevents configuration drift and ensures applications are in a known state.

View your applications

Argo CD uses Application CRs to manage manifests on your OpenShift cluster.
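An Application CR looks roughly like this (all names, the repository URL, and the namespaces are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: parasol-insurance-dev        # illustrative name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/user/parasol-insurance-manifests.git  # illustrative
    path: app                        # folder of the Helm chart to render
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: user-dev              # illustrative target namespace
  syncPolicy:
    automated:                       # auto-sync, as you will observe below
      selfHeal: true                 # revert manual drift on the cluster
```

Each Application pairs one source (a path in a Git repository) with one destination namespace, which is why you see one tile per environment.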

  1. Use the Argo CD tab, or visit the Argo CD UI in a new tab.

    Do not click the Log in via OpenShift button.

  2. Enter your username and password to log in:

    • Username: {user_name}

    • Password: {user_password}

  3. You should see three Application tiles displayed. These represent your build, development, and production environments.

    User Argo CD Applications
    Figure 9. Your three Argo CD Applications

These applications were created for you when your lab environment was provisioned. Each one corresponds to one of your three namespaces.

Update a manifest in GitLab

Let’s see GitOps in action by making a small change to a manifest and watching Argo CD reconcile it.

  1. Return to your parasol-insurance-manifests repository in GitLab.

  2. Open the app/templates/deployment.yaml and select Edit > Edit single file.

  3. In the metadata.labels section, add the following label:

    app.openshift.io/runtime: quarkus
  4. Scroll down. Enter a commit message:

    fix: show the quarkus icon in the openshift topology view
  5. The result will be similar to the following image:

    GitLab Deployment Manifest Update
    Figure 10. Adding a label to the deployment manifest in GitLab
  6. Click Commit changes.

Synchronize the application in Argo CD

  1. Return to the {user_name}-dev Application in Argo CD, and click Refresh.

    Clicking Refresh is not strictly required. Argo CD polls for changes every 3 minutes by default, and webhooks can be used to eliminate polling entirely.

  2. Once the refresh completes, you’ll briefly see an OutOfSync message.

    Argo CD OutOfSync
    Figure 11. Argo CD detecting an out-of-sync state
  3. The OutOfSync status will quickly change to Syncing since auto-sync is enabled.

  4. Wait for the application to report Synced.

Return to the Workloads > Topology view in the OpenShift Console. Make sure the {user_name}-dev Project is selected. You should see that the Quarkus icon is now displayed on the Parasol Insurance deployment.

Updated Topology View
Figure 12. Topology view with the Quarkus icon after GitOps sync

Verify

Confirm the following:

  • You can log into Argo CD and see three Applications (build, dev, prod)

  • After committing the label change in GitLab, Argo CD detects the change and syncs

  • The Topology view in OpenShift now shows the Quarkus icon on the Parasol Insurance deployment

Learning outcomes

By completing this module, you should now understand:

  • How OpenShift organizes workloads into namespaces with role-based access control

  • How the tenant-user role provides developer access while keeping secrets protected

  • How ExternalSecrets and Vault automate credential management without developer intervention

  • How Tekton pipelines can enforce a consistent build, test, scan, and deploy workflow

  • How application source code and deployment manifests work together following Twelve-Factor principles

  • How Argo CD continuously reconciles cluster state with Git, preventing configuration drift

Module summary

You have explored the platform and familiarized yourself with the tools you will use throughout this lab.

What you accomplished:

  • Logged into OpenShift and identified your build, dev, and prod namespaces

  • Opened the running Parasol Insurance application and observed the keyword-based email classification

  • Examined ExternalSecrets that sync credentials from Vault

  • Reviewed the CI/CD pipelines and a completed PipelineRun with five stages

  • Traced how credentials flow from Vault through ExternalSecrets into application configuration

  • Made a manifest change in GitLab and watched Argo CD reconcile it to the cluster

Next steps:

In Module 2 you will launch a cloud development environment, make a code change with an AI coding assistant, and watch it flow through the automated CI/CD pipeline.