AgentOps Observability with Red Hat AI

Welcome to the AgentOps Observability with Red Hat AI workshop!

What you’ll learn

In this workshop, you will:

  • Monitor the Stack: Use Red Hat AI’s out-of-the-box observability stack to track key metrics and logs from your agentic app

  • Trace Multi-Agent Executions: Track requests across multi-agent frameworks and MCP tools to understand the complete decision-making path

  • Run LLM Evaluations: Combine tracing with LLM evaluations (evals) in MLflow to verify that your agents maintain high-quality outputs

  • Move from Development to Production: Replace manual notebook evaluations with automated AI pipelines that run continuously on the platform

Who this is for

This workshop is designed for SREs, Platform Engineers, and AI Developers/Engineers who want to implement end-to-end observability for multi-agent AI systems.

Prerequisites

This workshop assumes basic familiarity with Kubernetes and AI/ML concepts.

Workshop environment

You will work with a multi-agent mortgage lending application built with LangGraph and integrated with Model Context Protocol (MCP) tools.

Log into OpenShift with the following credentials:

  • Username: user1

  • Password: openshift

All environment details and credentials are available in the lab interface.
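If you prefer the command line, you can log in with the same credentials using the `oc` CLI. This is a minimal sketch: the API server URL below is a placeholder, since the real address is environment-specific and shown in the lab interface.

```shell
# Log in to the workshop cluster with the credentials above.
# Replace the API URL with the one listed in your lab interface.
oc login -u user1 -p openshift https://api.<cluster-domain>:6443

# Confirm you are logged in as the expected user.
oc whoami
```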

Estimated time

This workshop will take approximately 1 hour 30 minutes to complete.

You can work at your own pace.

Let’s get started!

Click on Workshop Overview in the navigation to begin your learning journey.