AgentOps Observability with Red Hat AI
Welcome to the AgentOps Observability with Red Hat AI workshop!
What you’ll learn
In this workshop, you will:
- Monitor the Stack: Use Red Hat AI’s out-of-the-box observability stack to track key metrics and logs of your agentic application
- Trace Multi-Agent Executions: Follow requests across multi-agent frameworks and MCP tools to understand the complete decision-making path
- Run LLM Evaluations: Combine tracing with LLM evaluations (Evals) in MLflow to ensure your agents maintain high-quality outputs
- Move from Development to Production: Replace manual notebook evaluations with automated AI pipelines that run continuously on the platform
Who this is for
This workshop is designed for SREs, Platform Engineers, and AI Developers/Engineers who want to implement end-to-end observability for multi-agent AI systems. See the Workshop Overview for detailed prerequisites and target audience.
Workshop environment
You will work with a multi-agent mortgage lending application built with LangGraph and integrated with Model Context Protocol (MCP) tools.
You will have access to:
- OpenShift cluster console: https://console-openshift-console.apps.cluster.example.com
- Username: user1
- Password: openshift
Note: All environment details and credentials are available in the lab interface.
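As an illustration, you can log in to the cluster from a terminal using the OpenShift CLI (`oc`). The API endpoint shown below is an assumption based on the console URL above (OpenShift API servers conventionally live at `api.<cluster-domain>:6443`); confirm the exact endpoint and credentials in your lab interface.

```shell
# Log in to the workshop cluster with the OpenShift CLI.
# NOTE: the API URL is inferred from the console URL on this page
# and may differ in your environment -- check the lab interface.
oc login https://api.cluster.example.com:6443 \
  --username=user1 \
  --password=openshift

# Confirm who you are logged in as and which projects you can see
oc whoami
oc get projects
```

Alternatively, you can copy a login command (including a token) from the "Copy login command" option in the OpenShift web console.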
Estimated time
This workshop takes approximately 1 hour 30 minutes to complete. An abbreviated 45-minute path is also available, covering Modules 1, 3, and 4.
You can work at your own pace.