User Provisioning

Per-order setup — runs every time someone orders the lab

For: Lab developers writing AgV common.yaml and AgnosticD workload roles

On this page
  1. What is user provisioning?
  2. Understanding the three layers
  3. Approach A: Scheduler-Only
  4. Approach B: OCP Sandbox API
  5. Namespace configuration and quotas
  6. Variables from Sandbox API

What is User Provisioning?

User provisioning is everything that happens when a lab order is placed — Sandbox API selects a cluster, injects variables, and your AgnosticD workload roles create a tenant environment. Per-order resources include:

- an RHBK user in the shared realm
- tenant namespaces with quotas and RBAC
- a per-tenant Gitea instance with mirrored lab repos
- a LiteMaaS virtual key
- GitOps-managed workloads (LibreChat, MCP servers, agent)
- a Showroom lab guide with per-tenant URLs and credentials injected

Why Ansible and GitOps — both used together
You could do all of this via GitOps or all via Ansible — both are technically possible. The reason for the split is reuse: well-tested AgnosticD roles already exist for foundational work (RHBK user, namespaces, Gitea instance, LiteMaaS key), so Ansible is used there. For the workload layer (LibreChat, MCP servers, agent), GitOps via ArgoCD gives declarative management, self-healing, and a clear audit trail in git — so that is used instead of writing more Ansible. Use what already exists; use GitOps where it adds value.
Order lifecycle: provision → running → destroy
Every RHDP order goes through three phases: provision (create resources), running (attendee works), destroy (remove resources). The destroy playbook must mirror provision — every resource created must be cleaned up, or you leak resources on the shared cluster.
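One way to keep destroy in lockstep with provision is to run the same tenant roles in reverse order with a removal flag. The sketch below assumes the roles honor an `ACTION: remove` variable (a common AgnosticD workload convention — verify against the actual role implementations); the role names mirror the provision list later on this page.

```yaml
# Illustrative destroy ordering — reverse of provision, so nothing is
# removed while something else still depends on it.
agnosticd_roles_destroy:
- role: ocp4_workload_gitops_bootstrap      # delete the ArgoCD Application first
  vars:
    ACTION: remove                          # assumption: roles branch on ACTION
- role: ocp4_workload_litellm_virtual_keys  # revoke the tenant's LiteMaaS key
  vars:
    ACTION: remove
- role: ocp4_workload_tenant_gitea          # remove the Gitea org and mirrors
  vars:
    ACTION: remove
- role: ocp4_workload_tenant_namespace      # delete namespaces — this also
  vars:                                     # removes any workloads ArgoCD left
    ACTION: remove
- role: ocp4_workload_tenant_keycloak_user  # finally remove the RHBK user
  vars:
    ACTION: remove
```

Reversing the order matters: deleting the Keycloak user or Gitea org while the ArgoCD Application still exists would leave ArgoCD retrying syncs against resources that no longer authenticate.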

Understanding the Three Layers — Lab Developer Perspective

Before writing a single line of AgV, understand who owns each layer and what goes where. The separation is about ownership, not resource type.

| Layer | Owner | What goes here | Example — MCP lab | Example — Service Mesh lab |
|---|---|---|---|---|
| Infra | Developer | Generic cluster infrastructure — operators, ArgoCD, base platform capabilities. Not specific to any one lab. | RHBK, Gitea operator, ArgoCD, Tekton, ToolHive | Same — RHBK, ArgoCD, Service Mesh operator |
| Platform | Lab owner (you) | Cluster-wide resources specific to your lab. Not per-user — shared across all tenants on the cluster. What your lab needs at the cluster level that infra doesn't provide. | User workload monitoring ConfigMap | Shared Service Mesh Gateway, mesh control plane config |
| Tenant | Lab owner (you) | Per-user resources — created fresh for every order, destroyed when the order ends. | RHBK user, namespaces, Gitea instance, LiteMaaS key, LibreChat, MCP servers | User namespace, app deployment, mesh membership |
Platform + Tenant are both owned by you — the lab developer
The same persona writes both layers. Platform is cluster-wide (runs once when the cluster is provisioned for your lab). Tenant is per-user (runs every order). Infra is written by a developer — typically someone setting up the cluster for the first time. All three layers are developer work.

Approach A: Scheduler-Only (Summit 2026)

Sandbox API selects a cluster and injects credentials — your Ansible roles do everything else. Full lifecycle control with more role complexity; requires a cluster provisioner.

Key pattern: users and namespaces are pre-created by Ansible before ArgoCD runs
In the MCP lab example below, Ansible roles run in this order before ArgoCD touches anything:
  1. ocp4_workload_tenant_keycloak_user — creates the RHBK user in the shared realm
  2. ocp4_workload_tenant_namespace — creates all namespaces with quotas and RBAC
  3. ocp4_workload_tenant_gitea — deploys the per-tenant Gitea instance
Only then does ocp4_workload_gitops_bootstrap create the ArgoCD Application, which deploys workloads into those existing namespaces using CreateNamespace=false. Ansible owns the foundation, ArgoCD owns the workloads.
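The resulting Application can be pictured roughly as follows. This is a sketch of the assumed shape, not the role's literal output — the real role may add finalizers, labels, and a separate AppProject; field values here mirror the common.yaml below, with a hypothetical guid of `abcde`:

```yaml
# Sketch of the Application that ocp4_workload_gitops_bootstrap creates.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mcp-lab-abcde                  # argocd_app_name with guid=abcde
  namespace: openshift-gitops          # argocd_namespace
spec:
  project: mcp-lab-abcde               # assumed per-tenant AppProject
  source:
    repoURL: https://gitea.apps.example.com/abcde/mcp-lab-gitops.git
    targetRevision: main               # assumed branch
    path: bootstrap/                   # argocd_gitops_path
  destination:
    server: https://kubernetes.default.svc
    namespace: sandbox-abcde-user      # tenant_namespace — pre-created by Ansible
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=false            # Ansible owns namespaces, not ArgoCD
```

`CreateNamespace=false` is the enforcement point of the pattern: if the namespace role ever fails to run first, the sync fails loudly instead of ArgoCD silently creating an unquota'd namespace.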

The AgV common.yaml (Scheduler-Only)

Complete annotated common.yaml for the MCP with OpenShift GitOps lab. Every field is explained.

# ============================================================
# AgnosticV common.yaml — MCP with OpenShift GitOps Lab
# Pattern: Scheduler-Only (Summit 2026)
# ============================================================

# ── Deployer config ───────────────────────────────────────────
__meta__:
  deployer:
    type: agnosticd             # use AgnosticD as the provisioning engine

  # Sandbox: one OcpSandbox entry — scheduler-only mode.
  # Sandbox API picks a cluster matching these tags and injects
  # credentials. It does NOT create namespaces or users.
  sandboxes:
  - kind: OcpSandbox
    alias: cluster            # logical name; variables are prefixed sandbox_{alias}_
    cloud_selector:
      cloud: cnv-dedicated-shared
      demo: mcp-with-openshift
      purpose: prod
      keycloak: "yes"          # only clusters with RHBK pre-installed
    quota:
      cpu: "16"               # total CPU request across all tenant namespaces
      memory: "32Gi"          # total memory request across all tenant namespaces
      pods: "50"              # total pod count across all tenant namespaces

# ── Variables ─────────────────────────────────────────────────

# Namespace naming: always use guid as a prefix so concurrent
# orders on the same cluster never collide.
tenant_namespace: "sandbox-{{ guid }}-user"
tenant_tools_namespace: "sandbox-{{ guid }}-tools"

# Keycloak realm where users will be created.
# This realm must already exist (created by cluster provisioner).
rhbk_realm_name: mcp-lab
rhbk_namespace: keycloak

# Gitea: the shared instance URL and admin user.
# Admin credentials come from a cluster Secret (set during provisioning).
gitea_hostname: "gitea.{{ sandbox_openshift_ingress_domain }}"
gitea_admin_user: gitea-admin

# Lab repos to mirror into the per-tenant Gitea org.
# Each repo gets mirrored under: gitea/{tenant_username}/{repo_name}
gitea_repos_to_mirror:
- name: mcp-lab-gitops
  upstream: "https://github.com/rhpds/mcp-lab-gitops.git"
- name: mcp-lab-apps
  upstream: "https://github.com/rhpds/mcp-lab-apps.git"

# LiteMaaS: model API endpoint for the lab.
# Virtual keys are scoped to this model and rate-limited per tenant.
litemaas_model: granite-3.3-8b-instruct
litemaas_api_url: "https://litemaas.example.com"
litemaas_key_prefix: "mcp-lab-{{ guid }}"
litemaas_tokens_per_minute: 100000

# ArgoCD: the shared ArgoCD instance that manages tenant GitOps apps.
# The bootstrap role creates an AppProject + Application for this tenant.
argocd_namespace: openshift-gitops
argocd_app_name: "mcp-lab-{{ guid }}"
argocd_gitops_repo: "https://{{ gitea_hostname }}/{{ guid }}/mcp-lab-gitops.git"
argocd_gitops_path: bootstrap/

# Showroom: lab guide and user-facing URLs.
# All URLs use sandbox_openshift_ingress_domain (injected by Sandbox API).
showroom_namespace: "sandbox-{{ guid }}-showroom"
showroom_git_repo: "https://github.com/rhpds/showroom-mcp-with-openshift-gitops"
showroom_extra_vars:
  console_url: "{{ sandbox_openshift_console_url }}"
  keycloak_url: "https://keycloak.{{ sandbox_openshift_ingress_domain }}/realms/mcp-lab"
  gitea_url: "https://{{ gitea_hostname }}"
  argocd_url: "https://openshift-gitops-server-openshift-gitops.{{ sandbox_openshift_ingress_domain }}"
  username: "{{ guid }}"
  password: "{{ tenant_password }}"

# ── Workload roles ─────────────────────────────────────────────
# Roles run in order. Each role does one job.
# All roles receive sandbox_ variables automatically from Sandbox API.
agnosticd_roles:

# 1. Create the RHBK user in the mcp-lab realm.
#    Sets: tenant_username, tenant_password (generated), tenant_email
- role: ocp4_workload_tenant_keycloak_user
  vars:
    tenant_keycloak_user_realm: "{{ rhbk_realm_name }}"
    tenant_keycloak_user_namespace: "{{ rhbk_namespace }}"
    tenant_keycloak_user_username: "{{ guid }}"

# 2. Create per-tenant namespaces with ResourceQuota and LimitRange.
#    Creates: sandbox-{guid}-user, sandbox-{guid}-tools
#    Also creates the ServiceAccount the ArgoCD app deploys as.
- role: ocp4_workload_tenant_namespace
  vars:
    tenant_namespace_namespaces:
    - name: "{{ tenant_namespace }}"
      display_name: "MCP Lab — {{ guid }}"
      quota_cpu_requests: "8"
      quota_memory_requests: "16Gi"
      quota_pods: "30"
    - name: "{{ tenant_tools_namespace }}"
      display_name: "MCP Lab Tools — {{ guid }}"
      quota_cpu_requests: "4"
      quota_memory_requests: "8Gi"
      quota_pods: "20"

# 3. Create Gitea org and mirror repos.
#    Creates org named after guid, mirrors upstream repos into it.
#    Sets: gitea_user_token (for ArgoCD to pull from)
- role: ocp4_workload_tenant_gitea
  vars:
    tenant_gitea_hostname: "{{ gitea_hostname }}"
    tenant_gitea_org: "{{ guid }}"
    tenant_gitea_repos: "{{ gitea_repos_to_mirror }}"

# 4. Create LiteMaaS virtual key for this tenant.
#    Sets: litemaas_virtual_key (API key for the attendee to use)
- role: ocp4_workload_litellm_virtual_keys
  vars:
    litellm_virtual_keys_api_url: "{{ litemaas_api_url }}"
    litellm_virtual_keys_key_name: "{{ litemaas_key_prefix }}"
    litellm_virtual_keys_model: "{{ litemaas_model }}"
    litellm_virtual_keys_tpm: "{{ litemaas_tokens_per_minute }}"

# 5. Bootstrap ArgoCD: create AppProject + Application.
#    The Application points at the tenant's Gitea repo (bootstrap/ path).
#    ArgoCD syncs lab content into the tenant namespaces automatically.
- role: ocp4_workload_gitops_bootstrap
  vars:
    gitops_bootstrap_argocd_namespace: "{{ argocd_namespace }}"
    gitops_bootstrap_app_name: "{{ argocd_app_name }}"
    gitops_bootstrap_repo_url: "{{ argocd_gitops_repo }}"
    gitops_bootstrap_path: "{{ argocd_gitops_path }}"
    gitops_bootstrap_target_namespace: "{{ tenant_namespace }}"

# 6. Deploy Showroom with all per-tenant URLs injected.
#    Showroom renders the lab guide and passes user credentials.
- role: ocp4_workload_showroom
  vars:
    showroom_namespace: "{{ showroom_namespace }}"
    showroom_git_repo: "{{ showroom_git_repo }}"
    showroom_extra_vars: "{{ showroom_extra_vars | combine({'litemaas_api_key': litemaas_virtual_key}) }}"

Role ordering is deliberate
The Keycloak user must be created first (it generates tenant_password). Gitea must be created before the ArgoCD bootstrap (ArgoCD pulls from the tenant's Gitea repo). The LiteMaaS key must be created before Showroom (Showroom injects the key into the lab guide). If you change the order, later roles will fail with undefined variables.
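For reference, the quotas declared for the user namespace in step 2 translate to a ResourceQuota roughly like the one below (a sketch of what the role would apply — the real role may also create a LimitRange and RBAC bindings, and the object name is chosen here for illustration):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota                 # illustrative name
  namespace: sandbox-abcde-user      # tenant_namespace with guid=abcde
spec:
  hard:
    requests.cpu: "8"                # quota_cpu_requests
    requests.memory: 16Gi            # quota_memory_requests
    pods: "30"                       # quota_pods
```

Note that the per-namespace quotas must fit inside the sandbox-level quota declared in `__meta__` (16 CPU / 32Gi / 50 pods here) — the user and tools namespaces together already request 12 CPU, 24Gi, and 50 pods, so budget carefully if you add namespaces such as Showroom's.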

Next: OCP Sandbox API →