# Conclusion
You started this workshop as a senior SRE at Meridian Financial, drowning in failed Ansible jobs and compliance deadlines. You end it with a fully operational AI incident response team — built on open source models, running on OpenShift, extensible without writing code, and auditable enough for regulators.
## What you accomplished
| Module | What you did |
|---|---|
| 1 — The Problem Domain | Explored AAP2, Kira, and Rocket.Chat; traced an AI-generated ticket back to the failed job that caused it; understood the Deep Agents pipeline |
| 2 — Your First Failure | Launched a job yourself, watched it fail, and examined the full pipeline from webhook to ticket to notification |
| 3 — Your Agentic AIOps Team | Built and deployed a new specialist agent |
| 4 — Open Source Models | Swapped every specialist agent from frontier to open source models |
| 5 — Full OSS Control & Autonomy | Switched the orchestrator to an open source model and explored the autonomy spectrum |
## OpenShift is the natural home for Agentic AI
Throughout this workshop, you used OpenShift primitives to manage an AI system:
- **ConfigMaps** to define and update agent configurations — `oc edit` or `oc apply` to add a new specialist
- **PersistentVolumeClaims** to store and extend skills — `oc cp` to add domain knowledge
- **Environment variables** to switch models — `oc set env` to go from frontier to OSS in one command
- **Rolling restarts** to deploy changes — `oc rollout restart` for zero-downtime updates
- **Resource limits and quotas** to control infrastructure costs
- **Routes and Services** for secure internal communication between agents and downstream systems
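Put together, that lifecycle is a handful of `oc` commands. This is a hedged sketch: the deployment, ConfigMap, and pod names (`deep-agents`, `deep-agents-config.yaml`, `deep-agents-0`) and the environment variable `MODEL_NAME` are placeholders, not the workshop's actual object names.

```shell
# Update an agent's configuration (hypothetical ConfigMap file)
oc apply -f deep-agents-config.yaml

# Copy a new skill into the skills PVC through a running pod (hypothetical pod name)
oc cp skills/sre_package_management.md deep-agents-0:/skills/

# Switch the model with an environment variable (hypothetical variable name)
oc set env deployment/deep-agents MODEL_NAME=qwen3-coder

# Roll the change out with zero downtime
oc rollout restart deployment/deep-agents
```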
These are the same tools your team uses for any application on OpenShift. AI agents aren’t special infrastructure — they’re applications that happen to call LLMs. Treating them as such makes them manageable, observable, and governable.
This applies beyond AIOps. The same pattern works for any agentic AI workload: customer service agents, code review assistants, compliance monitors, data pipeline orchestrators. If it runs on OpenShift, it inherits the operational maturity of the platform.
## Red Hat OpenShift AI MaaS — why it matters
You used MaaS throughout this workshop without installing, configuring, or managing a single GPU. Here’s what that gave you:
- **Model portfolio** — access to both frontier (Claude, Gemini) and open source (Qwen, LLaMA Scout, Minimax) models through a single API endpoint
- **No per-token billing** — OSS models run on Red Hat's infrastructure at a fixed cost, so there are no usage surprises at scale
- **Data sovereignty** — your job logs, error messages, and system topology never left the platform, which is critical for regulated environments like Meridian Financial
- **Security** — no external API keys to manage or risk leaking; all inference traffic stays internal
- **Model flexibility** — switch models per agent, per task, or per risk level with a single configuration change
- **GPU abstraction** — models run across NVIDIA and Intel GPUs using Red Hat OpenShift AI and vLLM; you consume an API, not hardware
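Because vLLM serves models behind an OpenAI-compatible HTTP API, "a single API endpoint" means any agent can talk to any model by changing one request field. The URL and token below are placeholders, not the workshop's real endpoint:

```shell
# Hypothetical MaaS endpoint and token; vLLM exposes an OpenAI-compatible API
curl -s https://maas.example.com/v1/chat/completions \
  -H "Authorization: Bearer $MAAS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-coder",
       "messages": [{"role": "user", "content": "Summarize this failed Ansible job log."}]}'
```

Swapping `"model"` for another name in the portfolio is the entire migration from one model to another.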
The combination of MaaS + OpenShift means you can run a production AI operations team where the models, the agents, and the data all stay within your controlled environment.
## Key concepts to carry forward

### The harness matters more than the model
A well-designed agentic harness like Deep Agents — with focused context, specialist delegation, structured skills, and quality review — gets excellent results from open source models. The framework compensates for the capability gap.
### Skills are institutional knowledge made actionable
The sre_package_management skill you wrote in Module 3 encoded Meridian’s specific Satellite content view architecture and CRB/EPEL requirements. That knowledge was previously trapped in a runbook, a wiki page, or someone’s head. Now it’s part of the agent’s reasoning — and it works with any model.
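The workshop's actual skill file isn't reproduced here, but as a hypothetical sketch, a skill is just structured markdown the agent loads into its reasoning context:

```markdown
# sre_package_management (hypothetical reconstruction)

When a package installation fails on a Meridian RHEL host:

1. Check whether the package is published in a Satellite content view
   the host actually subscribes to.
2. If the package lives in CRB or EPEL, verify those repositories are
   enabled in that content view — a missing CRB/EPEL entry is the most
   common cause.
3. Never enable upstream repositories directly on the host; fix the
   content view and republish instead.
```

Because it is plain markdown on a PVC, the same file works unchanged whether a frontier or an open source model is doing the reasoning.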
### Autonomy is a spectrum, not a switch
Human-in, human-on, human-out — these are distinct patterns for distinct risk levels. The engineering work is building the pipeline. The governance work is deciding how much autonomy to grant at each risk level.
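One way to encode that governance decision as configuration — the field names here are hypothetical, not the workshop's actual schema:

```yaml
# Hypothetical policy fragment: autonomy level per risk tier
autonomy:
  low_risk: human-out     # agent remediates and reports afterwards
  medium_risk: human-on   # agent acts; a human watches and can interrupt
  high_risk: human-in     # a human approves every action before it runs
```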
### Configuration, not code

You added a new specialist agent, switched models twice, and deployed institutional knowledge — all without writing a line of Python. Just YAML, markdown, and `oc` commands. That's the extensibility story.
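Adding a specialist might look like the ConfigMap fragment below — a sketch under assumed names (`deep-agents-config`, `agents.yaml`, and the field layout are all hypothetical), not the workshop's actual schema:

```yaml
# Hypothetical ConfigMap fragment: a new specialist defined in YAML, no code
apiVersion: v1
kind: ConfigMap
metadata:
  name: deep-agents-config   # name is an assumption
data:
  agents.yaml: |
    specialists:
      - name: network-diagnostics
        model: qwen3-coder        # any model in the MaaS portfolio
        skills_dir: /skills/network
```

Apply it with `oc apply -f`, restart with `oc rollout restart`, and the orchestrator can delegate to the new specialist.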
## Thank you
- Tony Kay, Red Hat — AI Lead, Consulting Technical Marketing Manager
- Burr Sutter, Red Hat — Senior Director, Developer Experience
- Ragu Banda, Red Hat — Associate Principal AI Specialist SA
- Ishu Verma, Red Hat — Senior Principal Technical Marketing Manager
LB 2645 — Red Hat Summit 2026
Built with: LangChain Deep Agents, Red Hat OpenShift AI MaaS, Ansible Automation Platform 2.6, OpenShift 4.20, Kira, Rocket.Chat