Module 1: The agentic app and why observability matters

Agentic AI apps don’t fail silently; they fail in distributed ways. In this module, you’ll explore the pre-deployed mortgage-ai multi-agent application and understand why traditional monitoring approaches fall short for these complex systems.

But first, what exactly is an AI agent? An AI agent is a system that uses a Large Language Model (LLM) to reason about a task, decide which tools to call, and take autonomous actions, going beyond simple request/response patterns like traditional APIs or basic chatbots. When multiple agents collaborate, each with its own tools and responsibilities, you get a multi-agent system, which is capable but significantly harder to observe and debug.
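The reason-decide-act loop described above can be sketched in a few lines. This is an illustrative toy, not the workshop application's code: the "LLM" is a keyword-based stub standing in for a real model call, and the tool names are made up for the example.

```python
# Toy agent loop: an "LLM" (stubbed here) reasons about the task,
# picks a tool, and the agent executes it autonomously.

def fake_llm_decide(question: str) -> str:
    """Stand-in for an LLM choosing which tool to call."""
    if "rate" in question.lower():
        return "get_current_rates"
    return "answer_directly"

TOOLS = {
    "get_current_rates": lambda: "30-year fixed: 6.5% (sample data)",
    "answer_directly": lambda: "Let me explain that directly.",
}

def run_agent(question: str) -> str:
    tool_name = fake_llm_decide(question)  # 1. reason about the task
    result = TOOLS[tool_name]()            # 2. take an action (tool call)
    return f"[{tool_name}] {result}"       # 3. respond

print(run_agent("What is the current mortgage rate?"))
```

A multi-agent system is this loop multiplied: several such agents, each with its own tool set, handing work to one another, which is exactly what makes observing a single request's path difficult.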

Fed Aura Capital needs end-to-end visibility into their multi-agent workflows, and you’ve been tasked with evaluating how observability can help. Before implementing solutions, you need to understand what you’re working with.

Learning objectives

By the end of this module, you’ll be able to:

  • Describe the architecture of the multi-agent mortgage lending system

  • Identify the 5 distinct agent personas and their responsibilities

  • Explain why distributed AI systems require specialized observability approaches

  • Recognize the key failure modes in multi-agent architectures

Exercise 1: Access the application UI

You can access the mortgage-ai application directly from the Mortgage AI App tab at the top of this page, without opening a separate browser window.
  1. First, log in to the OpenShift cluster from the terminal:

    oc login --insecure-skip-tls-verify $(oc whoami --show-server) -u user1 -p openshift
  2. Open the OpenShift Console using the OCP Console tab at the top of this page, or navigate directly to https://console-openshift-console.apps.cluster.example.com. Log in with the same credentials you just used: username user1, password openshift:

    OpenShift SSO login page
  3. Once logged in, verify the project selector (top-left) shows wksp-user1. Navigate to Workloads > Pods to see the mortgage-ai components running in your namespace:

    OpenShift Console Pods view showing mortgage-ai components running

    You will explore these components in Exercise 2.

  4. Notice the application launcher (waffle icon) in the top-right corner. This provides quick access to integrated services like MLflow, Red Hat OpenShift AI, and Grafana. You will use it in Modules 4, 5, and 6.

  5. The left sidebar also contains an Observe section with Logs, Metrics, and Alerts. You will use Observe > Logs in Module 3 to investigate application logs.

Throughout this workshop, you can switch between CLI and Console freely. Some tasks are faster in the terminal; others are easier to understand visually in the Console.
  1. Get the UI route:

    MORTGAGE_UI=$(oc get route mortgage-ai-ui-route -n wksp-user1 -o jsonpath='{.spec.host}')
    echo "Mortgage AI UI: https://${MORTGAGE_UI}"

    You can also find this route in the OpenShift Console you explored earlier. Navigate to Networking > Routes in the wksp-user1 project and filter by Name mortgage-ai-ui:

OpenShift console Routes page showing mortgage-ai-ui-route in the wksp-user1 project
  2. Open the URL in your browser. You’ll land on the Fed Aura Capital mortgage-ai interface:

    Fed Aura Capital mortgage-ai application interface
  3. Try a sample conversation by clicking the Explore Products button:

    Explore Products button on the Fed Aura Capital landing page

The Prospect Agent should respond with mortgage product options:

Mortgage-AI chat interface with Prospect Agent responding with loan options
The LLM backend has rate limiting and may occasionally time out. If the agent does not respond or you see an error, reload the page and ask the question again.
Because the application uses generative AI, responses are non-deterministic. The exact wording, formatting, and details may differ from the screenshots shown in this workshop.

Exercise 2: Explore the multi-agent architecture

Now that you’ve seen the application in action, let’s explore what’s running under the hood. The mortgage-ai system serves 5 distinct agent personas through a single API deployment.

The deployed components

| Component | Purpose |
| --- | --- |
| mortgage-ai-api | FastAPI backend serving all 5 agent personas via WebSocket endpoints |
| mortgage-ai-ui | React frontend for interacting with agents |
| mortgage-ai-db | PostgreSQL with pgvector for document storage and vector embeddings |
| keycloak | Identity provider (disabled for this workshop) |
| minio | S3-compatible object storage for documents |

Mortgage AI system architecture showing React frontend connecting via WebSocket to FastAPI backend with 5 agent personas and PostgreSQL with pgvector and MinIO object storage
  1. View the pods and their status:

    oc get pods -n wksp-user1
  2. Check the API service health:

    MORTGAGE_HEALTH=$(oc get route mortgage-ai-api-health-route -n wksp-user1 -o jsonpath='{.spec.host}')
    curl -sk https://${MORTGAGE_HEALTH}/health/ | jq -r .

    Expected output:

    [
      {
        "name": "API",
        "status": "healthy",
        "message": "API is running",
        "version": "0.1.0",
        "start_time": "2026-04-07T22:21:27.657181+00:00"
      },
      {
        "name": "Database",
        "status": "healthy",
        "message": "PostgreSQL connection successful",
        "version": "0.1.0",
        "start_time": "2026-04-07T22:21:19.877102+00:00"
      }
    ]
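Instead of eyeballing the JSON, you could check component health programmatically. A minimal sketch, using the sample payload shown above (in practice you would fetch it from the health route):

```python
# Parse the /health/ response and flag any non-healthy components.
import json

health_json = """
[
  {"name": "API", "status": "healthy", "message": "API is running"},
  {"name": "Database", "status": "healthy",
   "message": "PostgreSQL connection successful"}
]
"""

components = json.loads(health_json)
unhealthy = [c["name"] for c in components if c["status"] != "healthy"]
print("all healthy" if not unhealthy else f"unhealthy: {unhealthy}")
# prints "all healthy"
```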
  3. Each persona has its own chat endpoint:

    /api/prospect/chat    - Prospect Agent (initial inquiries)
    /api/borrower/chat    - Borrower Agent (application intake)
    /api/loanofficer/chat - Loan Officer Agent (pipeline management)
    /api/underwriter/chat - Underwriter Agent (risk assessment)
    /api/ceo/chat         - Executive Agent (analytics)
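A client needs to pick the right endpoint for the active persona. The sketch below is hypothetical (the `chat_url` helper and the persona keys are illustrative, not the application's actual code); only the endpoint paths come from the list above:

```python
# Map a persona to its chat endpoint. Paths are from the API;
# the helper function itself is an illustration.
CHAT_ENDPOINTS = {
    "prospect":     "/api/prospect/chat",
    "borrower":     "/api/borrower/chat",
    "loan_officer": "/api/loanofficer/chat",
    "underwriter":  "/api/underwriter/chat",
    "ceo":          "/api/ceo/chat",
}

def chat_url(base: str, persona: str) -> str:
    try:
        return base + CHAT_ENDPOINTS[persona]
    except KeyError:
        raise ValueError(f"unknown persona: {persona}")

print(chat_url("wss://example.invalid", "ceo"))
# prints "wss://example.invalid/api/ceo/chat"
```

Note that all five endpoints live in a single FastAPI deployment; the separation is logical (per persona), not physical (per service).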

The 5 agent personas

| Persona | Role | Agent | Key Capabilities |
| --- | --- | --- | --- |
| Prospect | Unauthenticated | Public Assistant | Product info, affordability estimates |
| Borrower | borrower | Borrower Assistant | Application intake, document upload, status tracking, condition response |
| Loan Officer | loan_officer | LO Assistant | Pipeline management, application review, communication drafting, knowledge base search |
| Underwriter | underwriter | Underwriter Assistant | Risk assessment, compliance checks, condition management, decisions |
| CEO | ceo | CEO Assistant | Pipeline analytics, audit trail, decision trace, model monitoring |

The 5 agent personas showing Prospect and Borrower and Loan Officer and Underwriter and CEO with their roles and capabilities routing into a single FastAPI deployment via WebSocket

Key AI patterns

This application demonstrates production-ready AI patterns for regulated industries:

  • Multi-agent orchestration - 5 LangGraph (a framework for building stateful, multi-agent AI workflows) agents with role-scoped tools and RBAC (Role-Based Access Control) enforcement

  • Compliance knowledge base - Retrieval-Augmented Generation (RAG) using pgvector (a PostgreSQL extension for vector similarity search), with tiered boosting (federal regulations > agency guidelines > internal policies)

  • Model routing - Complexity-based routing between fast and capable LLM tiers

  • Comprehensive audit trail - Hash-chained, append-only audit events with MLflow trace correlation

  • PII masking - Middleware-based masking for executive roles (SSN, DOB, account numbers)

  • Safety shields - Input and output content filters with escalation pattern detection
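To make the hash-chaining idea concrete, here is a minimal sketch of an append-only audit trail where each event embeds the hash of the previous one, so tampering with any historical event breaks the chain. Field names and event payloads are illustrative, not the application's actual schema:

```python
# Hash-chained audit trail: each event stores the previous event's
# hash, so the chain can be verified end-to-end.
import hashlib
import json

def append_event(chain: list, payload: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for event in chain:
        body = json.dumps({"payload": event["payload"], "prev": prev_hash},
                          sort_keys=True)
        if (event["prev"] != prev_hash
                or event["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = event["hash"]
    return True

chain = []
append_event(chain, {"actor": "underwriter", "action": "approve_loan"})
append_event(chain, {"actor": "ceo", "action": "view_pipeline"})
print(verify_chain(chain))                   # True
chain[0]["payload"]["action"] = "deny_loan"  # tamper with history
print(verify_chain(chain))                   # False
```

In a regulated lending context, this property matters because an auditor can detect after-the-fact edits to decision records without trusting the database administrator.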

Exercise 3: Experience the observability gap

Now that you understand the 5 agent personas and how they collaborate, let’s experience firsthand the observability challenge that comes with distributed AI systems.

You’ll interact with the application as a CEO and ask the assistant a question. The response will look correct, but you’ll have no way of knowing what happened behind the scenes.

  1. From the application landing page, click Sign In. In the sign-in dialog, use the Persona Demo Login section at the bottom and select the CEO persona:

    Sign in dialog with Persona Demo Login showing CEO selection
  2. After signing in, you’ll land on the Executive Dashboard, the CEO persona’s view of portfolio health and operations:

    CEO Executive Dashboard showing Pipeline Overview and Denial Analysis
  3. In the Your Assistant chat panel on the right, type the following question:

    What's the portfolio health?
  4. The assistant responds with a detailed portfolio health overview: active applications, stage breakdown, pull-through rate, and average days to close:

    CEO Assistant responding with portfolio health overview

Build a conversation for later analysis

Before we move on, let’s generate a richer session with multiple turns. This will give us more data to explore when we get to tracing in Module 4.

  1. In the same chat, ask a follow-up question:

    What are the denial trends?
  2. Then ask one more:

    What is the Loan Officer Performance?

Each question triggers a different tool call behind the scenes. By the time you finish, your CEO session will have 3 turns spanning portfolio health, denial analysis, and officer performance. In Module 4, you’ll trace every step of this conversation end-to-end.
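To illustrate "each question triggers a different tool call", here is a hypothetical keyword-based dispatcher routing the three CEO questions above to distinct (stubbed) analytics tools. The tool names are assumptions for the example; the real application lets the LLM choose tools rather than matching keywords:

```python
# Hypothetical router: map each CEO question to an analytics tool.
TOOL_KEYWORDS = {
    "portfolio": "get_portfolio_health",
    "denial":    "get_denial_trends",
    "officer":   "get_officer_performance",
}

def route_question(question: str) -> str:
    q = question.lower()
    for keyword, tool in TOOL_KEYWORDS.items():
        if keyword in q:
            return tool
    return "answer_without_tools"

for q in ["What's the portfolio health?",
          "What are the denial trends?",
          "What is the Loan Officer Performance?"]:
    print(q, "->", route_question(q))
```

The point for observability: three near-identical chat turns can take three entirely different execution paths, and nothing in the UI tells you which path was taken.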

The observability challenge

The response looks great. But stop and think about what just happened behind the scenes:

  • Which agent processed your question? Was it a single agent, or did multiple agents collaborate?

  • Which tools did the agent invoke to gather pipeline data, denial rates, and performance metrics?

  • How long did each step take? Was the LLM call fast, or did a tool call add latency?

  • What if the response was wrong? How would you trace back to the root cause?

  • What if a tool call failed silently? Would you even know?

Without proper observability instrumentation, these questions are unanswerable. The application gives you a polished response, but the entire decision-making process (the agent routing, tool invocations, LLM calls, and data retrieval) remains a black box.

Module summary

What you accomplished:

  • Explored the 5 agent personas and the multi-agent architecture

  • Interacted with agents via the application UI

  • Experienced the observability gap in distributed AI systems

Key takeaways:

  • Multi-agent systems distribute decision-making across multiple components

  • A single API can serve multiple agent personas with different capabilities

  • Specialized observability approaches are required for AgentOps

Next steps:

Module 2 will introduce the 3 pillars of observability (metrics, logs, and traces) and how different personas (SRE/Platform Engineering vs. AI Developer/Engineer) approach monitoring these systems.