AI/ML Lifecycle Automation at the Edge
Welcome to the AI/ML Lifecycle Automation at the Edge workshop!
Explore how AI and edge computing work together to solve real-world industrial automation challenges with Red Hat’s edge platforms and AI/ML technologies.
About this workshop
This hands-on workshop demonstrates an end-to-end AI solution for battery health monitoring in autonomous transportation robots operating in industrial facilities. You’ll explore the following technologies working together to enable intelligent automation at the edge:
- Red Hat Device Edge (MicroShift) - Running on an autonomous transportation robot
- Single Node OpenShift (SNO) - Plant edge server for model training
- Red Hat OpenShift AI - Complete AI/ML platform on SNO
The lab covers GitOps deployment, AI inference, model serving, and automated model training via pipelines. You’ll see how AI models can run at the edge while continuously improving without manual intervention.
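To give a flavor of the inference side before you start, here is a minimal sketch of what a request to a served battery health model might look like, using a KServe v2-style JSON payload. All specifics (the input name `battery_features`, the feature layout, and the endpoint path in the comment) are illustrative assumptions, not the workshop's actual API:

```python
import json

# Hypothetical telemetry sample from the robot's battery (placeholder values).
telemetry = {"voltage": 48.1, "current": 12.4, "temperature": 37.5, "cycles": 312}

# KServe v2-style inference payload: one input tensor holding the features.
payload = {
    "inputs": [
        {
            "name": "battery_features",        # hypothetical input tensor name
            "shape": [1, len(telemetry)],      # one sample, four features
            "datatype": "FP32",
            "data": [list(telemetry.values())],
        }
    ]
}

# In the lab you would POST this to the model server, for example:
#   requests.post(f"{endpoint}/v2/models/<model-name>/infer", json=payload)
print(json.dumps(payload, indent=2))
```

The exact request shape depends on the serving runtime you deploy; the lab's model serving modules show the real endpoints.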
> **Note:** Part of the infrastructure is pre-configured. You’ll review configurations and execute hands-on tasks to understand the complete AI/ML lifecycle.
Who this is for
This workshop is designed for platform engineers, ML engineers, DevOps teams, and data scientists who want to:
- Deploy AI/ML workloads on edge platforms with limited resources
- Implement automated MLOps workflows for continuous model improvement
- Work with Kubernetes-based edge infrastructure (MicroShift, Single Node OpenShift)
- Use Red Hat OpenShift AI for the complete ML lifecycle
- Build solutions for IoT, industrial automation, or edge AI use cases
What you’ll learn
By the end of this workshop, you will understand how to:
- Deploy AI/ML workloads on edge platforms (MicroShift, SNO)
- Use Red Hat OpenShift AI components (workbenches, model serving, pipelines)
- Implement automated MLOps workflows for continuous model improvement
- Configure model serving architectures for edge inference
- Enable pipeline automation for hands-off model retraining
- Configure alerting and monitoring for critical edge components
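As a taste of the alerting outcome above, the core idea is simple threshold checks on battery telemetry. The thresholds and field names below are made-up placeholders; the lab's battery monitoring system (BMS) defines its own rules:

```python
# Hypothetical alert thresholds - the workshop's BMS uses its own values.
THRESHOLDS = {"min_voltage": 44.0, "max_temperature": 60.0}

def battery_alerts(sample):
    """Return a list of alert strings for one telemetry sample."""
    alerts = []
    if sample["voltage"] < THRESHOLDS["min_voltage"]:
        alerts.append(f"LOW_VOLTAGE: {sample['voltage']} V")
    if sample["temperature"] > THRESHOLDS["max_temperature"]:
        alerts.append(f"HIGH_TEMPERATURE: {sample['temperature']} C")
    return alerts

# A sample below the voltage threshold triggers one alert.
print(battery_alerts({"voltage": 43.2, "temperature": 38.0}))
```

In the workshop, equivalent logic runs as part of the deployed monitoring stack rather than as a standalone script.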
Prerequisites
Before starting this workshop, you should have:
- Familiarity with basic Kubernetes/OpenShift concepts (pods, deployments, services)
- Basic knowledge of Python and Jupyter notebooks
- Familiarity with basic machine learning concepts (training, inference, model validation)
- Experience with command-line tools (ssh, oc)
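Before starting, you can sanity-check that the command-line tools listed above are on your PATH. A minimal sketch using only the standard library (`python3` is included here as an assumption, since the lab uses Jupyter notebooks):

```python
import shutil

def check_tools(tools=("ssh", "oc", "python3")):
    """Return {tool: True/False} for whether each CLI is on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

for tool, found in check_tools().items():
    print(f"{tool}: {'found' if found else 'missing'}")
```

If `oc` is missing, install it from your OpenShift cluster's command-line tools download page before the lab.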
Lab modules
This workshop consists of the following hands-on modules:

- Introduction - Business scenario and solution architecture
- Technical Details - Environment specs and lab access
- Robot infrastructure setup
- MinIO storage deployment
- Model serving deployment
- Battery monitoring system deployment
- Review Red Hat OpenShift AI installation
- Review data science project
- Review data connection to robot storage
- Review training workbench
- Import training notebooks
- Train battery health models
- Deploy stress detection model server
- Deploy time-to-failure model server
- Query inference endpoints
- Review pipeline server
- Execute automated retraining pipeline
- Schedule continuous automation
- Review BMS dashboard
- Test battery health alerts
- Conclusion - Summary and next steps
Let’s get started!
Ready to explore AI/ML automation at the edge?
Navigate to Introduction to begin the workshop.