## Namespace Configuration and Quotas
Quotas prevent any single tenant from exhausting cluster resources. In Scheduler-Only mode, `ocp4_workload_tenant_namespace` applies them via `ResourceQuota` and `LimitRange` objects. In OCP Sandbox API mode, set them in the `__meta__.sandboxes` quota block.
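For the OCP Sandbox API path, the quota entries mirror the `ResourceQuota` hard-limit keys. A hypothetical sketch of the catalog item metadata follows; the exact field names and nesting depend on your Sandbox API catalog definition, so treat this as illustrative only:

```yaml
# Illustrative sketch only -- verify field names against your
# Sandbox API catalog item schema before use.
__meta__:
  sandboxes:
    - kind: OcpSandbox
      quota:
        requests.cpu: "8"
        requests.memory: 16Gi
        pods: "30"
```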
### ResourceQuota created by ocp4_workload_tenant_namespace
```yaml
# ResourceQuota applied to each tenant namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: "sandbox-{{ guid }}-user"
spec:
  hard:
    requests.cpu: "{{ quota_cpu_requests }}"
    requests.memory: "{{ quota_memory_requests }}"
    limits.cpu: "{{ quota_cpu_limits | default(quota_cpu_requests) }}"
    limits.memory: "{{ quota_memory_limits | default(quota_memory_requests) }}"
    pods: "{{ quota_pods }}"
    persistentvolumeclaims: "{{ quota_pvcs | default('10') }}"
    services.loadbalancers: "0"  # no LoadBalancer services on shared clusters
    services.nodeports: "0"      # no NodePort services on shared clusters
```
### LimitRange created per namespace
```yaml
# LimitRange sets default resource requests/limits for containers
# that do not specify their own. Prevents unbounded containers.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-limits
  namespace: "sandbox-{{ guid }}-user"
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      max:
        cpu: "4"  # no single container can request more than 4 CPU
        memory: 8Gi
```
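With this LimitRange in place, Kubernetes admission injects the defaults into any container that omits a `resources` stanza. The pod below is a hypothetical example (name and image are illustrative) showing what the API server fills in:

```yaml
# A pod created with no resources stanza in this namespace
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: "sandbox-{{ guid }}-user"
spec:
  containers:
    - name: app
      image: registry.example.com/demo:latest  # illustrative image
      # Admission injects, per the LimitRange above:
      #   resources:
      #     requests: { cpu: 100m, memory: 128Mi }
      #     limits:   { cpu: 500m, memory: 512Mi }
```

A container requesting more than the `max` values (4 CPU / 8Gi) is rejected at creation time.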
### Recommended quota values for MCP lab
| Namespace | CPU Requests | Memory Requests | Pods | Rationale |
|---|---|---|---|---|
| `sandbox-{guid}-user` | 8 cores | 16 Gi | 30 | Main workload namespace; MCP server + application pods |
| `sandbox-{guid}-tools` | 4 cores | 8 Gi | 20 | ToolHive MCP servers (each is a separate pod) |
| `sandbox-{guid}-showroom` | 500m | 1 Gi | 5 | Showroom lab guide; lightweight Nginx + content |
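These recommendations map onto the `quota_*` variables consumed by the ResourceQuota template above. A sketch of the corresponding vars for the main workload namespace (variable names taken from that template):

```yaml
# Vars for sandbox-{guid}-user, matching the recommended table values
quota_cpu_requests: "8"
quota_memory_requests: 16Gi
quota_pods: "30"
quota_pvcs: "10"  # template default is also '10'
```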
### NodePort and LoadBalancer services are blocked
On shared clusters, `services.loadbalancers: "0"` and `services.nodeports: "0"` are always set. All external access must go through OpenShift Routes (`kind: Route`). If your lab content tries to create a LoadBalancer or NodePort service, the quota rejects it. Use Routes instead.
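Because only ClusterIP services pass the quota, expose workloads with a plain Service fronted by a Route. A minimal sketch (names and ports are illustrative):

```yaml
# ClusterIP Service -- the default type, allowed by the quota.
# A Route (see below) targets it for external access.
apiVersion: v1
kind: Service
metadata:
  name: mcp-server
  namespace: "sandbox-{{ guid }}-user"
spec:
  # type: ClusterIP is the default; NodePort/LoadBalancer would be rejected
  selector:
    app: mcp-server
  ports:
    - port: 8080
      targetPort: 8080
```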
## Variables from Sandbox API
These variables are injected into every AgnosticD role automatically; declare nothing and use them directly:
| Variable | Pattern | Description | Example |
|---|---|---|---|
| `sandbox_openshift_api_url` | Both | Cluster API server URL. Pass to `k8s_auth` or `oc login` | `https://api.cluster-xyz.example.com:6443` |
| `sandbox_openshift_ingress_domain` | Both | Most important. Wildcard apps domain; build all Route hostnames from this | `apps.cluster-xyz.example.com` |
| `sandbox_openshift_console_url` | Both | OpenShift web console URL. Pass to Showroom for the console tab | `https://console-openshift-console.apps.cluster-xyz...` |
| `cluster_admin_agnosticd_sa_token` | Both | Cluster-admin ServiceAccount token used by all roles to authenticate. Never log or commit. | `eyJ...` (JWT, long string) |
| `sandbox_openshift_namespace` | OCP only | Primary namespace created by the Sandbox API. Format: `sandbox-{guid}-{suffix}` | `sandbox-abc12-user` |
| `sandbox_username` | OCP + keycloak:yes | RHSSO username created by the Sandbox API | `sandbox-abc12` |
| `sandbox_password` | OCP + keycloak:yes | RHSSO user password (generated). Never log or commit. | generated string |
| `guid` | Both | Globally unique order identifier. Use as prefix for all per-tenant resource names | `abc12` |
| `sandbox_name` | Both | Full sandbox name in the pool. Useful for debugging | `sandbox-abc12` |
### How to use ingress_domain in a role
```yaml
# Building a Route hostname from sandbox_openshift_ingress_domain
---
- name: Create Route for MCP server
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: mcp-server
        namespace: "{{ tenant_namespace }}"
      spec:
        host: "mcp-server-{{ tenant_namespace }}.{{ sandbox_openshift_ingress_domain }}"
        to:
          kind: Service
          name: mcp-server
        tls:
          termination: edge
          insecureEdgeTerminationPolicy: Redirect
```
### Pattern: always include namespace in Route hostname
On a shared cluster, multiple tenants have Routes in the same wildcard domain. Always include the namespace (which is guid-scoped) in the Route hostname to prevent collisions:

`mcp-server-sandbox-{guid}-user.apps.cluster.example.com`
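When several tasks need the same hostname, a convenient pattern is to compute it once and reuse the fact. A sketch, assuming the hypothetical variable name `mcp_route_host`:

```yaml
# Compute the guid-scoped hostname once; later tasks reference
# mcp_route_host instead of rebuilding the Jinja expression.
---
- name: Compute guid-scoped Route hostname
  ansible.builtin.set_fact:
    mcp_route_host: "mcp-server-{{ tenant_namespace }}.{{ sandbox_openshift_ingress_domain }}"
```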
### How roles authenticate to the cluster
```yaml
# All AgnosticD roles authenticate using the injected SA token.
# This is handled automatically if you use the standard k8s_auth pattern.
---
- name: Set cluster credentials fact
  ansible.builtin.set_fact:
    k8s_auth_api_key: "{{ cluster_admin_agnosticd_sa_token }}"
    k8s_auth_host: "{{ sandbox_openshift_api_url }}"
    k8s_auth_verify_ssl: false

- name: Create namespace
  kubernetes.core.k8s:
    api_key: "{{ k8s_auth_api_key }}"
    host: "{{ k8s_auth_host }}"
    validate_certs: false
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ tenant_namespace }}"
```
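To avoid repeating `api_key`/`host`/`validate_certs` on every task, the credentials can be set once at play level with `module_defaults`. This sketch assumes the `kubernetes.core` collection's `k8s` action group is available in your Ansible version; verify the group name against your installed collection:

```yaml
# Play-level defaults for all kubernetes.core modules in the k8s
# action group; individual tasks then omit the auth parameters.
---
- name: Provision tenant resources
  hosts: localhost
  module_defaults:
    group/kubernetes.core.k8s:
      api_key: "{{ cluster_admin_agnosticd_sa_token }}"
      host: "{{ sandbox_openshift_api_url }}"
      validate_certs: false
  tasks:
    - name: Create namespace (inherits auth from module_defaults)
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: "{{ tenant_namespace }}"
```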
Red Hat Demo Platform (RHDP) — Internal developer reference — GitHub