MCP registry
In the previous module, you deployed and configured individual MCP servers — adding tool filters, enabling telemetry, and extending AI capabilities with new servers. As organizations scale their AI initiatives, a new challenge emerges: how do you manage dozens or hundreds of MCP servers across teams, environments, and use cases?
This module introduces the MCP Registry — a centralized catalog for discovering, governing, and managing MCP servers at enterprise scale. You will learn how to deploy your own registry, populate it with approved servers, and explore the governance patterns that make AI tool management tractable.
What is an MCP registry?
An MCP Registry is a centralized catalog that indexes all available MCP servers in your organization. Think of it as an "app store" for AI capabilities — a searchable, documented, governed collection of tools that AI agents can discover and use.
The registry provides:
- Centralized Catalog: A single source of truth listing every MCP server available in your organization, with descriptions, capabilities, and connection information.
- Standardized API: A REST API that follows the MCP Registry specification, enabling programmatic discovery by tools, automation, and AI clients. While this lab uses ToolHive, the skills you learn are transferable to any other MCP registry that follows this specification.
- Rich Metadata: Each server entry includes detailed information — what tools it provides, what environment variables it needs, what transport it uses, and its governance status.
- Dynamic Discovery: The registry can aggregate both manually-curated server entries and automatically-discovered running instances, keeping the catalog current.
Business value of MCP registries
The value of a registry becomes clear as MCP adoption grows. Without centralized management, organizations face fragmented tool access, duplicated efforts, and governance blind spots.
For decision makers
- Centralized Governance: A single point of control for approving or deprecating AI tool access. Security and compliance teams can review and approve servers before they appear in the catalog.
- Complete Audit Trail: Know exactly what AI capabilities are deployed across your organization. When auditors ask "what can your AI systems access?", you have a definitive answer.
- Risk Classification: Classify servers by tier (Official, Community, Experimental) and status (Active, Deprecated). Control which categories are permitted in production environments.
- Compliance Enablement: Demonstrate to auditors that AI tool access is controlled, documented, and follows your organization's change management processes.
For platform engineers
- Reduced Configuration Drift: The registry is the single source of truth for server endpoints and configuration. No more hunting through wikis or Slack channels for connection details.
- Simplified Lifecycle Management: Update server metadata in one place. When a server version changes or an endpoint moves, update the registry — clients discover the change automatically.
- Multi-Cluster Visibility: While each cluster can have its own registry, the standardized API enables aggregation and federated views across environments.
- Observability Foundation: Registry metadata can enrich monitoring dashboards — track which servers are available, their versions, and usage patterns.
For developers
- Self-Service Discovery: Browse available AI tools without asking colleagues. The registry documents what's available, what each server does, and how to connect.
- Consistent Configuration: Get the right connection details, required environment variables, and configuration parameters from a single authoritative source.
- Faster Integration: The standard API means tools can auto-configure. Future AI clients will query the registry and present available servers automatically.
- Reduced Duplication: Before building a new MCP server, check the registry. Someone may have already created what you need.
Setting up your MCP registry
Now let’s deploy a working MCP Registry in your OpenShift environment.
Database backend
The MCP Registry needs persistent storage to maintain the server catalog, support efficient queries, and enable future features like usage analytics. We use PostgreSQL, deployed via the CloudNativePG operator — a production-grade solution that handles clustering, automated backups, and failover.
The database initialization includes an important security design: separate users for different purposes. The db_migrator user has schema modification privileges (for database upgrades), while db_app is the runtime user with minimal privileges. This separation follows the principle of least privilege — even if the runtime application is compromised, attackers cannot modify the database schema.
- Switch back to the Gitea tab. Again we are taking the same shortcut and creating all new resources in the Gitea MCP Server Helm chart.

- Create a PostgreSQL database by adding the file psql.yaml under the templates directory of the Gitea MCP Server Helm chart:

```yaml
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: mcp-registry-db
  namespace: {{ .Values.namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  instances: 1
  storage:
    size: 1Gi
    storageClass: ocs-external-storagecluster-ceph-rbd
  bootstrap:
    initdb:
      database: registry
      postInitApplicationSQL:
        - |
          BEGIN;
          DO $body$
          DECLARE
            migrator_user TEXT := 'db_migrator';
            migrator_password TEXT := 'migrator_password';
            app_user TEXT := 'db_app';
            app_password TEXT := 'app_password';
            db_name TEXT := 'registry';
          BEGIN
            EXECUTE format('CREATE USER %I WITH PASSWORD %L', migrator_user, migrator_password);
            EXECUTE format('GRANT CONNECT ON DATABASE %I TO %I', db_name, migrator_user);
            EXECUTE format('GRANT CREATE ON SCHEMA public TO %I', migrator_user);
            EXECUTE format('GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO %I', migrator_user);
            CREATE ROLE toolhive_registry_server;
            EXECUTE format('CREATE USER %I WITH PASSWORD %L', app_user, app_password);
            EXECUTE format('GRANT toolhive_registry_server TO %I', app_user);
            EXECUTE format('GRANT CONNECT ON DATABASE %I TO %I', db_name, app_user);
          END;
          $body$;
          COMMIT;
```
Preparing the server catalog
The registry needs a ConfigMap to store its server catalog. We’ll start with an empty catalog and let ToolHive’s automatic discovery populate it with running servers.
- Create an empty ConfigMap to hold the MCP Registry configuration. Add the file configmap-mcp-registry-config.yaml to the templates directory:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mcp-registry-config
  namespace: {{ .Values.namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "1"
data:
  registry.json: |
    {
      "$schema": "https://raw.githubusercontent.com/stacklok/toolhive/main/pkg/registry/data/schema.json",
      "version": "1.0.0",
      "last_updated": "2025-01-14T00:00:00Z",
      "servers": {}
    }
```

Notice the servers object is empty. We'll populate it later using automatic server discovery.
Managing credentials securely
Database credentials are stored as a Kubernetes secret. In Kubernetes, secrets are base64-encoded (not encrypted by default, though OpenShift encrypts them at rest in etcd). The key principle: never store credentials in ConfigMaps or commit them to Git in plain text.
In production environments, integrate with a dedicated secret management solution. For this lab, we use a simple Kubernetes secret for convenience.
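To see why base64 encoding is not encryption, you can round-trip a secret value yourself. A quick sketch (the password value is just the lab placeholder used in the secret below):

```python
import base64

# Kubernetes stores secret values base64-encoded; anyone who can read
# the Secret object can decode them instantly.
encoded = base64.b64encode(b"app_password").decode()
print(encoded)                             # YXBwX3Bhc3N3b3Jk
print(base64.b64decode(encoded).decode())  # app_password
```

This is why read access to secrets must be tightly scoped, even though OpenShift encrypts them at rest in etcd.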
- Create a secret with the User ID and Password for the PostgreSQL database. Add the file secret-mcp-registry-db-password.yaml to the templates directory:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: mcp-registry-db-password
  namespace: {{ .Values.namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "1"
type: Opaque
stringData:
  user-password: app_password
  migrator-password: migrator_password
```
Creating the registry service
The MCPRegistry custom resource is the heart of the deployment. When you create this resource, the ToolHive operator:
- Deploys the registry API service: a REST API that implements the MCP Registry specification
- Configures database connectivity: connects to PostgreSQL using the credentials you provided
- Loads the server catalog: reads your ConfigMap and populates the registry database
- Sets up synchronization: periodically re-reads the source to pick up changes
Key configuration options:
- registries[].configMapRef: points to your server catalog ConfigMap
- syncPolicy.interval: how often to sync (enables updates without pod restarts)
- databaseConfig: connection details, including separate credentials for runtime and migrations
- Create the MCP Registry by adding the file mcpregistry.yaml to the templates directory:

```yaml
---
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPRegistry
metadata:
  name: mcp-registry
  namespace: {{ .Values.namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  displayName: "{{ .Values.user }} MCP Registry"
  registries:
    - name: mcp-registry
      format: toolhive
      configMapRef:
        name: mcp-registry-config
        key: registry.json
  syncPolicy:
    interval: "2m"
  authConfig:
    mode: anonymous # Use "oauth" for production
  databaseConfig:
    host: mcp-registry-db-rw.{{ .Values.namespace }}.svc.cluster.local
    port: 5432
    database: registry
    sslMode: require
    user: db_app
    migrationUser: db_migrator
    dbAppUserPasswordSecretRef:
      name: mcp-registry-db-password
      key: user-password
    dbMigrationUserPasswordSecretRef:
      name: mcp-registry-db-password
      key: migrator-password
```
Exposing the registry API
- To be able to test the registry, add a Route so that we can access it from outside our cluster. Add the file route-mcpregistry.yaml to the templates directory:

```yaml
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mcp-registry
  namespace: {{ .Values.namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  port:
    targetPort: http
  tls:
    termination: edge
  to:
    kind: Service
    name: mcp-registry-api
    weight: 100
  wildcardPolicy: None
```
The route makes the registry API accessible from outside the OpenShift cluster. This enables several use cases:
- Developer laptops can query available servers without cluster access
- CI/CD pipelines can discover server endpoints programmatically
- External AI clients can auto-configure based on registry contents (future capability)
For production deployments, consider additional security measures on this route.
Verifying the empty registry
Once all resources are deployed, you can query the registry API to verify it’s working correctly.
- Once everything has synced to your namespace, query the registry to confirm it's empty:

```shell
curl -s https://mcp-registry-mcp-gitea-{user}.{openshift_cluster_ingress_domain}/registry/mcp-registry/v0.1/servers | jq
```

You should see an empty servers array:

```json
{
  "servers": [],
  "metadata": {
    "count": 0
  }
}
```
The registry is running but has no servers registered yet. Next, we’ll use ToolHive’s automatic server registration feature to populate it.
Registering MCP servers with the registry
ToolHive supports automatic server registration through annotations on MCPServer resources. When an MCPServer carries the toolhive.stacklok.dev/registry-export: "true" annotation, the registry controller automatically discovers and registers it. Note that automatic registration always targets the built-in default registry, not the registry we configured. This is by design: it keeps the curated catalog in the configured registry from clashing with auto-discovered MCP servers.
The registration annotations are:
- toolhive.stacklok.dev/registry-export: "true" — Enable auto-registration for this server
- toolhive.stacklok.dev/registry-description — Human-readable description for the registry
- toolhive.stacklok.dev/registry-url — The registry URL to register with
Adding registry annotations to MCP servers
Let’s register two of our MCP servers with the registry using ToolHive’s automatic registration annotations.
The OpenShift MCP server is deployed in a separate namespace. Because the registry controller only discovers MCPServers in its own namespace, the OpenShift MCP server will not be auto-registered.
- Update the Gitea MCP server definition (/mcp/helm/mcp-gitea/templates/mcpserver.yaml) with registry annotations:

```yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: gitea
  namespace: {{ .Values.namespace }}
  annotations:
    toolhive.stacklok.dev/registry-export: "true"
    toolhive.stacklok.dev/registry-description: "Gitea MCP Server for Gitea repository operations"
    toolhive.stacklok.dev/registry-url: http://mcp-registry-api.mcp-gitea-{{ .Values.user }}.svc.cluster.local:8080
spec:
  # ... existing spec unchanged
```

- Update the Fetch MCP server definition (/mcp/helm/mcp-gitea/templates/mcpserver-fetch.yaml) with registry annotations:

```yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: fetch
  namespace: {{ .Values.namespace }}
  annotations:
    toolhive.stacklok.dev/registry-export: "true"
    toolhive.stacklok.dev/registry-description: "Fetch MCP Server for web content retrieval"
    toolhive.stacklok.dev/registry-url: http://mcp-registry-api.mcp-gitea-{{ .Values.user }}.svc.cluster.local:8080
spec:
  # ... existing spec unchanged
```
- Commit and push your changes. Wait for ArgoCD to sync the updated MCPServer resources.
Verifying registered servers
- Query the registry to see the auto-registered servers (note that you are querying the default registry, not our configured mcp-registry):

```shell
curl -s https://mcp-registry-mcp-gitea-{user}.{openshift_cluster_ingress_domain}/registry/default/v0.1/servers | jq
```

You should now see two servers registered:

```json
{
  "servers": [
    {
      "server": {
        "name": "com.toolhive.k8s.mcp-gitea-{user}/fetch",
        "description": "Fetch MCP Server for web content retrieval",
        ...
      }
    },
    {
      "server": {
        "name": "com.toolhive.k8s.mcp-gitea-{user}/gitea",
        "description": "Gitea MCP Server for Git repository operations",
        ...
      }
    }
  ],
  "metadata": {
    "count": 2
  }
}
```
The registry has automatically discovered and registered the two MCP servers that have the export annotation. The OpenShift MCP server is not registered because it runs in a different namespace.
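Based on the names in the output above, auto-registered servers follow the pattern com.toolhive.k8s.<namespace>/<server-name>. A small helper, inferred from that output rather than from an official API, can build and split such names:

```python
def registry_name(namespace: str, server: str) -> str:
    # Naming pattern observed in the registry output above (an assumption)
    return f"com.toolhive.k8s.{namespace}/{server}"

def split_registry_name(name: str) -> tuple[str, str]:
    # Recover (namespace, server) from a registry name
    prefix, _, server = name.rpartition("/")
    namespace = prefix.removeprefix("com.toolhive.k8s.")
    return namespace, server

print(registry_name("mcp-gitea-user1", "fetch"))
# com.toolhive.k8s.mcp-gitea-user1/fetch
```

Because the namespace is part of the name, two teams can both export a server called "fetch" without colliding.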
Understanding the registry output
The JSON response from the registry API provides comprehensive information about each registered server:
- Server Identity: The name field uniquely identifies the server. For dynamically-discovered servers, this includes the namespace to ensure uniqueness across the cluster.
- Description and Metadata: Human-readable descriptions help users understand what each server does. The _meta section contains base64-encoded extended metadata including tags, tier, and tool lists.
- Package Information: The packages array describes how to obtain and run the server — the container image, transport type, and version.
- Remote Endpoints: For running servers, the remotes array includes the actual endpoint URLs for connecting to the server.
- Server Count: The metadata.count field shows the total number of servers in the registry — useful for monitoring catalog growth and ensuring expected servers are present.
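Pulling these fields together, a short script can turn a registry response into a readable inventory. This sketch uses only the documented fields (servers[].server.name, servers[].server.description, metadata.count) on a trimmed, made-up sample response:

```python
import json

# Trimmed sample in the response shape documented above
raw = """
{
  "servers": [
    {"server": {"name": "com.toolhive.k8s.demo/fetch",
                "description": "Fetch MCP Server for web content retrieval"}},
    {"server": {"name": "com.toolhive.k8s.demo/gitea",
                "description": "Gitea MCP Server for Git repository operations"}}
  ],
  "metadata": {"count": 2}
}
"""
response = json.loads(raw)

# Sanity check: the advertised count matches the list length
assert response["metadata"]["count"] == len(response["servers"])

for entry in response["servers"]:
    server = entry["server"]
    print(f'{server["name"]}: {server["description"]}')
```

The same loop works unchanged on a live response fetched from the registry API.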
Adding MCP Servers to the configured registry
To add validated MCP Servers to the configured registry, update the JSON file in the ConfigMap.
- In Gitea, navigate back to the templates directory and edit the ConfigMap configmap-mcp-registry-config.yaml (you can simply overwrite the existing definition with this one) to add the Fetch MCP Server to your registry:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mcp-registry-config
  namespace: {{ .Values.namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "1"
data:
  registry.json: |
    {
      "$schema": "https://raw.githubusercontent.com/stacklok/toolhive/main/pkg/registry/data/schema.json",
      "version": "1.0.0",
      "last_updated": "2026-02-06T13:00:00Z",
      "servers": {
        "fetch": {
          "name": "fetch",
          "description": "A Model Context Protocol server that provides web fetching and content retrieval capabilities with streamable HTTP transport.",
          "transport": "streamable-http",
          "image": "ghcr.io/stackloklabs/gofetch/server:1.0.2",
          "target_port": 8080,
          "status": "Active",
          "tier": "Community",
          "permissions": {},
          "tools": [
            "fetch",
            "http_get"
          ],
          "tags": [
            "web",
            "fetch",
            "http",
            "content",
            "streamable-http"
          ]
        }
      }
    }
```

- When the update has synced to the OpenShift cluster and the Registry API has been updated (this can take up to 2 minutes), run the curl command again to list the available MCP Servers:

```shell
curl -s https://mcp-registry-mcp-gitea-{user}.{openshift_cluster_ingress_domain}/registry/mcp-registry/v0.1/servers | jq
```

You will see that we now have 1 server available. If you query the default registry again, you'll see that we still have 2 servers registered there.
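When editing registry.json by hand, it is easy to drop a required field. A small pre-commit check can catch that before ArgoCD syncs a broken catalog. The required-field set below is an assumption for illustration; the authoritative definition is the $schema URL in the file:

```python
import json

# Assumed minimal field set for a catalog entry (illustrative only)
REQUIRED_FIELDS = {"name", "description", "transport", "image", "status", "tier"}

def check_catalog(registry_json: str) -> list[str]:
    """Return a list of problems found in a ToolHive-format catalog."""
    catalog = json.loads(registry_json)
    problems = []
    for key, server in catalog.get("servers", {}).items():
        missing = REQUIRED_FIELDS - server.keys()
        if missing:
            problems.append(f"{key}: missing {sorted(missing)}")
    return problems

# An entry missing most required fields is flagged
sample = '{"servers": {"fetch": {"name": "fetch", "image": "ghcr.io/stackloklabs/gofetch/server:1.0.2"}}}'
print(check_catalog(sample))
```

Running such a check in CI turns catalog mistakes into failed pipelines instead of broken registries.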
Exploring MCP registry capabilities
Now that your registry is operational, let’s explore what you can do with it. These exercises demonstrate the practical value of having a centralized server catalog.
Querying the registry API
The registry API follows REST conventions. Here are some useful queries:
List all server names
Get a quick view of what’s available:
```shell
curl -s https://mcp-registry-mcp-gitea-{user}.{openshift_cluster_ingress_domain}/registry/default/v0.1/servers | jq -r '.servers[].server.name'
```

```
com.toolhive.k8s.mcp-gitea-{user}/fetch
com.toolhive.k8s.mcp-gitea-{user}/gitea
```
This is useful for:
- Self-service discovery: Developers can see available AI tools without documentation hunting
- Scripting: Automation can iterate over available servers
- Inventory checks: Verify expected servers are registered
Get full server details
Retrieve complete information about the Fetch MCP server:
```shell
curl -s https://mcp-registry-mcp-gitea-{user}.{openshift_cluster_ingress_domain}/registry/mcp-registry/v0.1/servers | jq '.servers[] | select(.server.name | contains("fetch"))'
```
This provides all configuration details in one place — no hunting through multiple sources.
Governance workflows
The registry enables structured governance of AI tool access. Here are patterns you can implement:
Adding a new approved server
When a team wants to introduce a new MCP server:
- Review request: Security/platform team evaluates the server
- Add to catalog: Edit the ConfigMap in Gitea to add the server entry
- Wait for sync: The registry picks up changes based on syncPolicy.interval
- Verify registration: Query the registry to confirm the new server appears
- Announce availability: Teams can now discover and use the server
This workflow ensures controlled introduction of new AI capabilities with an audit trail in Git.
Server lifecycle management
Use the status field to communicate server lifecycle:
- Active: Server is production-ready and recommended for use
- Deprecated: Server works but is being phased out — migrate to alternatives
- Experimental: Server is available for testing but not production-approved
Clients can filter on status to avoid deprecated servers. This enables safe transitions when retiring old capabilities.
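As a sketch of such client-side filtering (the status values come from the list above; the server entries here are made up):

```python
servers = [
    {"name": "fetch", "status": "Active"},
    {"name": "legacy-search", "status": "Deprecated"},
    {"name": "scratch-tool", "status": "Experimental"},
]

# A client that only wants production-ready servers filters on status
usable = [s["name"] for s in servers if s["status"] == "Active"]
print(usable)  # ['fetch']
```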
Tier-based access control
The tier field enables risk-based policies:
- Official: Vetted and supported by your organization
- Community: Third-party servers — use with awareness of support limitations
- Experimental: Cutting-edge but potentially unstable
Production environments might only permit "Official" tier servers, while development environments allow broader access.
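One way to express such an environment policy in code. This is a hypothetical policy function for illustration, not a ToolHive feature:

```python
def permitted(server: dict, environment: str) -> bool:
    """Example risk-based policy: production only runs Active servers from
    the Official tier; development allows anything not Deprecated."""
    if environment == "production":
        return server["tier"] == "Official" and server["status"] == "Active"
    return server["status"] != "Deprecated"

print(permitted({"tier": "Community", "status": "Active"}, "production"))   # False
print(permitted({"tier": "Community", "status": "Active"}, "development"))  # True
```

Because tier and status live in the registry, the policy can be evaluated anywhere the catalog is readable — in CI, in an admission hook, or in a client.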
Enforcing approved servers only
The governance patterns above — tier classification, lifecycle status, and approval workflows — are informational by default. The registry documents which servers are approved, but nothing prevents someone from deploying an unapproved server.
ToolHive provides registry enforcement to close this gap. When enabled, the operator validates that any MCPServer deployed in the namespace uses an image that exists in the registry catalog. Unapproved servers are rejected before they can run.
This addresses several enterprise requirements:
- Compliance: Auditors can verify that only approved AI tools are deployed. The registry serves as the authoritative list of permitted capabilities.
- Shadow AI Prevention: Teams cannot deploy random MCP servers from the internet. All AI tool access must go through the approval process.
- Change Management: Adding new AI capabilities requires updating the registry — creating an auditable change record in Git.
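Conceptually, the admission check that registry enforcement performs can be sketched like this (a simplified illustration of the idea, not the operator's actual code):

```python
# Images present in the registry catalog (the Fetch entry above)
approved_images = {"ghcr.io/stackloklabs/gofetch/server:1.0.2"}

def admit(image: str) -> bool:
    """Reject any MCPServer whose image is not in the registry catalog."""
    return image in approved_images

print(admit("ghcr.io/stackloklabs/gofetch/server:1.0.2"))  # True
print(admit("docker.io/example/unvetted-server:latest"))   # False
```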
Integration patterns
The registry API enables powerful automation scenarios:
CLI tooling
Create scripts that query the registry before operations:
```shell
#!/bin/bash
# Example: List available MCP servers with their descriptions

REGISTRY_URL="https://mcp-registry-mcp-gitea-{user}.{openshift_cluster_ingress_domain}"
REGISTRY_NAME="default"

curl -s "${REGISTRY_URL}/registry/${REGISTRY_NAME}/v0.1/servers" | \
  jq -r '.servers[] | "\(.server.name): \(.server.description)"'
```
This reduces manual configuration and ensures consistency.
Monitoring integration
Track registry health with simple checks:
```shell
# Check that the expected number of servers are registered
SERVER_COUNT=$(curl -s https://mcp-registry-mcp-gitea-{user}.{openshift_cluster_ingress_domain}/registry/default/v0.1/servers | jq '.metadata.count')
echo "Registered servers: ${SERVER_COUNT}"

# Alert if below the expected threshold
if [ "$SERVER_COUNT" -lt 2 ]; then
  echo "WARNING: Expected at least 2 servers, found ${SERVER_COUNT}"
fi
```
This can integrate with Prometheus AlertManager or other monitoring systems.
Ideas for further exploration
Try these additional exercises to deepen your understanding:
- Add a custom server entry: Edit the ConfigMap to add a fictional MCP server with your own metadata. Watch it appear in the registry after sync.
- Experiment with tags: Add tags like "team-platform" or "env-production" to organize servers. Query to filter by tag.
- Test sync behavior: Modify an existing entry in the ConfigMap (change the description). Observe how quickly the registry reflects the change.
- Explore the API: The registry follows the MCP Registry specification. Review the spec to discover additional endpoints and capabilities.
- Consider federation: In a multi-cluster environment, how would you aggregate registries? Each cluster could have its own registry, with a federated view for discovery across environments.
- Plan for scale: As your organization adds more MCP servers, what governance processes would you implement? Consider approval workflows, periodic reviews, and automated compliance checks.
- Configure allowed MCP Servers: Configure the MCP Registry to only allow pre-registered servers, thus preventing the deployment of non-sanctioned MCP Servers to the namespace.
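As a starting point for the federation idea, merging responses from several per-cluster registries into one view could look like this (a sketch; because the API is standardized, the response shape is identical everywhere):

```python
def federate(responses: dict) -> dict:
    """Map each server name to the clusters whose registry lists it."""
    merged = {}
    for cluster, response in responses.items():
        for entry in response["servers"]:
            merged.setdefault(entry["server"]["name"], []).append(cluster)
    return merged

# Made-up per-cluster responses in the documented shape
view = federate({
    "cluster-a": {"servers": [{"server": {"name": "fetch"}}]},
    "cluster-b": {"servers": [{"server": {"name": "fetch"}},
                              {"server": {"name": "gitea"}}]},
})
print(view)  # {'fetch': ['cluster-a', 'cluster-b'], 'gitea': ['cluster-b']}
```

A real federated view would also need to reconcile conflicting metadata and record which endpoint belongs to which cluster.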
Summary
In this module, you deployed an MCP Registry and experienced the governance workflow:
- Empty start: Deployed a registry with no pre-defined servers
- Automatic registration: Used annotations to register running servers with the registry
- Dynamic discovery: Watched the registry populate automatically with two servers (Gitea and Fetch)
- API exploration: Queried the registry API to discover available servers
You also learned that the ToolHive registry controller only discovers MCPServers within its own namespace, which is an important architectural consideration when planning your MCP deployment topology.
This foundation enables your organization to scale AI tool adoption while maintaining control, visibility, and compliance. The registry provides a central catalog for discovering and documenting AI capabilities across your organization.