3.1 Setup & Pipeline Import

In this section, you will connect to the workshop environment and import the PDF processing pipeline that powers the event-driven demo. This is a one-time setup before you can trigger the automated workflow.

Getting Connected

For this walkthrough, you will need access to both the Red Hat OpenShift AI Dashboard and the OpenShift Container Platform Console. Each attendee has a unique user account.

Environment Information

If you are viewing these instructions in the deployed lab environment, the values below will be correctly rendered for you. If viewing from a static source like GitHub, placeholder values will appear.

Table 1. Lab Environment Access

  Username: userX
  Password: openshift

Login Procedure

  1. Click the Login with OpenShift button on the OpenShift AI Dashboard login page.

    The main login screen for Red Hat OpenShift AI
  2. Enter the user credentials provided above (userX and openshift).

    Your browser might display a security warning. It is safe to ignore this message for the lab environment.

  3. After you authenticate, you will land on the Red Hat OpenShift AI dashboard.

    The main dashboard of Red Hat OpenShift AI after logging in

Congratulations, you are now connected!

Accessing the Terminal

This workshop provides a standalone Terminal for running oc commands against the cluster. You can access it from the tabs on the right-hand side of this workshop guide.

  1. Select the Terminal tab at the top of the right-hand panel.

    The standalone Terminal tab in the workshop guide
  2. Log in to the OpenShift Container Platform cluster with your provided credentials:

    oc login -u userX -p openshift https://api.MYCLUSTER.com:6443

    When prompted, answer y to accept the insecure connection:

    The server uses a certificate signed by an unknown authority.
    You can bypass the certificate check, but any data you send to the server could be intercepted by others.
    Use insecure connections? (y/n): y
    
    WARNING: Using insecure TLS client config. Setting this option is not supported!
    
    Login successful.
    
    You have one project on this server: "userX"
    
    Using project "userX".
    Welcome! See 'oc help' to get started.
  3. Make sure you are working in the userX namespace, which has been pre-created for you.

    oc project userX

    You should see an output similar to the following:

    Now using project "userX" on server "https://api.MYCLUSTER.com:6443".
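If you want to script the terminal steps above, the sketch below shows a hypothetical helper that assembles the login command for a given user account. The MYCLUSTER placeholder and the shared openshift password come from this lab guide; substitute your environment's actual API endpoint before running the printed command.

```shell
# Hypothetical convenience helper for this lab: build the oc login command
# for a given user account. MYCLUSTER is a placeholder from the lab guide;
# replace it with your cluster's real API endpoint before running the command.
build_login_cmd() {
  user="$1"
  echo "oc login -u ${user} -p openshift https://api.MYCLUSTER.com:6443"
}

# Example: print the command for the user3 account.
build_login_cmd user3
```

After logging in, `oc whoami` and `oc project -q` should echo back your userX account name and namespace, confirming the session is active.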

Import the PDF Processing Pipeline

Before you can trigger a pipeline run, you need to import the PDF processing pipeline into your Data Science Project. This is a one-time setup step.

Download the Pipeline File from Your Workbench

  1. In the Red Hat OpenShift AI Dashboard, navigate to your project and launch your Workbench (named "My Workbench"), or switch to it if it is already open.

    Launching the Jupyter Workbench from the OpenShift AI dashboard
  2. Once JupyterLab opens, use the file browser on the left to navigate to:

    hello-chris-rag-pipeline/lab-content/2.1/
  3. Inside this directory, you will find the file simple-pdf-processing-pipeline.yaml. Right-click the file and select Download from the context menu. This will save the file to your local computer.

  4. You can now close the JupyterLab browser tab.

Upload the Pipeline to OpenShift AI

  1. In the Red Hat OpenShift AI Dashboard, navigate to the Pipelines section within your Data Science Project.

  2. Click the Import pipeline button.

    The Import pipeline button in the Pipelines section
  3. Give your pipeline this exact name:

    simple-pdf-processing-pipeline

    The name must match exactly. The event-driven trigger service looks up the pipeline by this display name.

    The pipeline name field filled in with simple-pdf-processing-pipeline
  4. Click the Upload box and select the simple-pdf-processing-pipeline.yaml file that you just downloaded.

    The pipeline upload dialog with the file selected
  5. Click Import pipeline to finish.

Once imported, you are ready to trigger the event-driven flow in the next section.
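Because the trigger service resolves the pipeline by an exact string match on its display name, even a small variation in case or spacing will cause the lookup to fail. The shell sketch below illustrates that comparison; the variant candidate names are made up purely for demonstration.

```shell
# Illustration of the exact-match lookup: only the literal display name
# matches; case or spacing variants (examples only) do not.
expected="simple-pdf-processing-pipeline"

for candidate in \
  "simple-pdf-processing-pipeline" \
  "Simple-PDF-Processing-Pipeline" \
  "simple pdf processing pipeline"
do
  if [ "$candidate" = "$expected" ]; then
    echo "MATCH:    $candidate"
  else
    echo "NO MATCH: $candidate"
  fi
done
```

Only the first candidate matches, which is why the import dialog must receive the name exactly as shown above.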

Summary

  • Connected to the Red Hat OpenShift AI Dashboard, the OpenShift Container Platform Console, and the cluster command line via the standalone Terminal

  • Imported a pre-compiled Kubeflow Pipeline definition into your Data Science Project’s pipeline server

  • Registered the pipeline by display name so the event-driven trigger can discover and launch it automatically