Scheduler-Only → Roles + Variables

Variables You Receive from Sandbox API

These variables are injected automatically; reference them directly in your AgV common.yaml:

| Variable | Example value | How it is used in `common.yaml` |
|---|---|---|
| `sandbox_openshift_api_url` | `https://api.cluster-drw4x.example.com:6443` | Passed to any role that needs to authenticate with the cluster API. AgnosticD uses this automatically for `k8s` module connections. |
| `sandbox_openshift_ingress_domain` | `apps.cluster-drw4x.example.com` | Used to build all service URLs: Keycloak host, Gitea host, Showroom URLs, LibreChat URL. Example: `"keycloak-keycloak.{{ sandbox_openshift_ingress_domain }}"` |
| `sandbox_openshift_console_url` | `https://console-openshift-console.apps.cluster-drw4x.example.com` | Passed to Showroom `user_data` so lab attendees see a direct link to the OCP console tab. |
| `cluster_admin_agnosticd_sa_token` | `eyJhbGciOiJSUzI1NiIs...` | Used internally by AgnosticD for all Kubernetes API calls. Never reference this in your role vars, log it, or commit it. |
**`sandbox_openshift_namespace` is NOT provided.** In the Scheduler-Only pattern, `sandbox_openshift_namespace` is undefined because the Sandbox API did not create a namespace. Your `ocp4_workload_tenant_namespace` role creates namespaces using `ocp4_workload_tenant_namespace_suffixes`. Do not reference `sandbox_openshift_namespace` anywhere in your config.

**The cluster provisioner runs a separate playbook.** The `cluster-provision.yml` playbook is run by a developer when a new cluster is added to the pool. It is not part of AgnosticV and is not triggered per order.
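As a sketch, here is how these injected variables might be referenced in an AgV common.yaml. The Keycloak and Gitea hostname patterns follow this page; the exact left-hand variable names depend on your roles and catalog item:

```yaml
# Hostnames derived from the injected ingress domain:
ocp4_workload_tenant_keycloak_user_rhbk_host: "keycloak-keycloak.{{ sandbox_openshift_ingress_domain }}"
ocp4_workload_tenant_gitea_host: "gitea.{{ sandbox_openshift_ingress_domain }}"

# sandbox_openshift_console_url is passed through to Showroom user_data.
# Never reference cluster_admin_agnosticd_sa_token in your own vars.
```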

Role-by-Role Walkthrough

Role 1 — ocp4_workload_tenant_keycloak_user

Collection: agnosticd.namespaced_workloads

Creates one RHBK user in the existing realm on the cluster (installed by the cluster provisioner).

Key vars: ocp4_workload_tenant_keycloak_user_rhbk_host, ocp4_workload_tenant_keycloak_user_realm, ocp4_workload_tenant_keycloak_username, common_password
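A hedged sketch of these vars in common.yaml; the realm name, username pattern, and password generation are assumptions, not values from this page:

```yaml
ocp4_workload_tenant_keycloak_user_rhbk_host: "keycloak-keycloak.{{ sandbox_openshift_ingress_domain }}"
ocp4_workload_tenant_keycloak_user_realm: tenant-realm      # realm name: assumption
ocp4_workload_tenant_keycloak_username: "user-{{ guid }}"   # username pattern: assumption
common_password: "{{ lookup('password', '/dev/null length=16 chars=ascii_letters,digits') }}"
```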

Role 2 — ocp4_workload_tenant_namespace

Collection: agnosticd.namespaced_workloads

Creates one OCP namespace per entry in ocp4_workload_tenant_namespace_suffixes, named <username>-<suffix>.

Key vars: ocp4_workload_tenant_namespace_suffixes, ocp4_workload_tenant_keycloak_username
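For example, with two illustrative suffixes the role would create `<username>-dev` and `<username>-prod`:

```yaml
ocp4_workload_tenant_namespace_suffixes:
  - dev     # creates <username>-dev
  - prod    # creates <username>-prod
ocp4_workload_tenant_keycloak_username: "user-{{ guid }}"   # username pattern: assumption
```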

Role 3 — ocp4_workload_tenant_gitea

Collection: agnosticd.namespaced_workloads

Configures a Gitea org and repos in the shared Gitea instance.

The mirrored repo is what ArgoCD watches — changes pushed to the tenant's fork are deployed automatically.

Key vars: ocp4_workload_tenant_gitea_host, ocp4_workload_tenant_gitea_admin_user, ocp4_workload_tenant_gitea_repos
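A sketch of the Gitea vars; the admin username and the shape of each `ocp4_workload_tenant_gitea_repos` entry are assumptions:

```yaml
ocp4_workload_tenant_gitea_host: "gitea.{{ sandbox_openshift_ingress_domain }}"
ocp4_workload_tenant_gitea_admin_user: gitea-admin            # assumption
ocp4_workload_tenant_gitea_repos:                             # entry shape: assumption
  - name: tenant-app
    mirror_from: https://github.com/example/tenant-app.git    # illustrative URL
```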

Role 4 — ocp4_workload_litellm_virtual_keys

Collection: agnosticd.ai_workloads

Creates a rate-limited AI API virtual key via the LiteMaaS admin API (unified OpenAI-compatible endpoint).

**`catch_all: false` — always.** Setting `ocp4_workload_litellm_virtual_keys_catch_all: true` causes the destroy playbook to delete ALL virtual keys on the LiteMaaS instance that are not tagged to this tenant. This will break every other concurrent lab's AI access. Keep this false.
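A sketch of these vars; only the `catch_all` flag and its required value come from this page, while the endpoint and rate-limit vars are illustrative:

```yaml
ocp4_workload_litellm_virtual_keys_catch_all: false   # must stay false (see warning above)
ocp4_workload_litellm_virtual_keys_api_url: "https://litemaas.{{ sandbox_openshift_ingress_domain }}"  # var name/host: assumption
ocp4_workload_litellm_virtual_keys_rpm_limit: 60      # var name/value: assumption
```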

Role 5 — ocp4_workload_gitops_bootstrap

Collection: agnosticd.core_workloads

Creates an ArgoCD "bootstrap" Application (app-of-apps pattern) pointing to a Helm chart in the tenant's Gitea repo, which defines all child Applications for this tenant.

Key vars: ocp4_workload_gitops_bootstrap_revision, ocp4_workload_gitops_bootstrap_path, ocp4_workload_gitops_bootstrap_helm_values, litellm_virtual_key (fact from role 4)
**How namespace targeting works.** See *GitOps Pattern — How ArgoCD picks up pre-created namespaces* for a full explanation.
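A sketch of the bootstrap vars; the branch, chart path, and Helm value key are illustrative:

```yaml
ocp4_workload_gitops_bootstrap_revision: main
ocp4_workload_gitops_bootstrap_path: charts/bootstrap    # path: assumption
ocp4_workload_gitops_bootstrap_helm_values:
  litellmVirtualKey: "{{ litellm_virtual_key }}"         # fact set by Role 4
```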

Role 6 — ocp4_workload_showroom

Collection: agnosticd.showroom

Deploys the Showroom tab UI (lab guide + OCP console + app URLs) driven by an Antora/AsciiDoc Git repo.
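This page does not list the role's vars, so every name below is illustrative only; it sketches the lab-guide repo and the console link surfaced via user_data:

```yaml
# All variable names below are assumptions, not confirmed by this page.
ocp4_workload_showroom_content_git_repo: https://github.com/example/lab-guide.git
ocp4_workload_showroom_user_data:
  ocp_console_url: "{{ sandbox_openshift_console_url }}"
```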


Destroy Behavior

AgnosticD calls each role's remove_workload.yml in remove_workloads order — the reverse of provisioning:

| Step | Role | What is deleted | Notes |
|---|---|---|---|
| 1 | `ocp4_workload_showroom` | Showroom Deployment, Service, Route | UI goes down first. Users see the session as ended. |
| 2 | `ocp4_workload_litellm_virtual_keys` | LiteMaaS virtual key for this tenant | Tenant's AI API access revoked. Budget tracking ends. |
| 3 | `ocp4_workload_gitops_bootstrap` | ArgoCD bootstrap Application + all child apps (cascade delete) | ArgoCD's cascade delete removes all Kubernetes resources deployed by ArgoCD in the tenant namespaces. This is the most powerful step — it cleans up all workloads without needing individual role teardowns. |
| 4 | `ocp4_workload_tenant_gitea` | Gitea org, repos, and tenant user account | Tenant's Git repositories deleted from the shared Gitea. |
| 5 | `ocp4_workload_tenant_namespace` | All OCP namespaces created for this tenant | Any remaining in-namespace resources (not cleaned by the ArgoCD cascade) are deleted with the namespace. |
| 6 | `ocp4_workload_tenant_keycloak_user` | RHBK user account for this tenant | SSO identity removed. Tenant cannot log in after this step. |
| 7 | (Sandbox API) | Cluster-admin SA token revoked; cluster returned to pool | Automatic — the Sandbox API handles this after all remove_workloads complete. The cluster is re-provisioned or returned to available inventory. |
**ArgoCD cascade delete is your cleanup friend.** By deleting the ArgoCD bootstrap Application with cascade enabled (the default), ArgoCD removes every Kubernetes resource it manages across all tenant namespaces. This means your namespace deletion in step 5 is straightforward — most resources are already gone. Any resources created directly by Ansible (not via ArgoCD) must be handled by the relevant role's `remove_workload.yml`.

**Shared cluster services are not deleted.** The destroy playbook does not touch RHBK, Gitea (the server), ArgoCD (the operator), or LiteMaaS themselves — those are shared cluster services managed by the cluster provisioner. Only per-tenant resources (user accounts, orgs, namespaces, keys) are deleted.
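The ordering above can be sketched as a remove_workloads list in common.yaml; treat the sketch as illustrative of the ordering, not as a verbatim config:

```yaml
# Reverse of the provisioning order. The Sandbox API token revocation
# (step 7) happens automatically after these complete.
remove_workloads:
  - ocp4_workload_showroom
  - ocp4_workload_litellm_virtual_keys
  - ocp4_workload_gitops_bootstrap
  - ocp4_workload_tenant_gitea
  - ocp4_workload_tenant_namespace
  - ocp4_workload_tenant_keycloak_user
```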
