Creating a Hosted Cluster on OpenShift Virtualization

You can use the hosted control plane command line interface, hcp, to create an OpenShift Container Platform hosted cluster. The hosted cluster is automatically imported as a managed cluster.

Size guidance

See the following highly available hosted control plane requirements, which were tested with OpenShift Container Platform version 4.12.9 and later:

  • 78 pods

  • Three 8 GiB PVs for etcd

  • Minimum vCPU: approximately 5.5 cores

  • Minimum memory: approximately 19 GiB
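
If you want to verify headroom before creating the hosted cluster, one quick check (not part of the official sizing guidance) is to list the allocatable resources of the hosting cluster nodes:

    oc get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory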

Prerequisites

  1. Connect to the bastion host of your environment

    sudo ssh root@192.168.123.100
  2. Log in to the OpenShift cluster (your hosting cluster)

    oc login {ocp_api} --username={ocp_username} --password={ocp_password} --insecure-skip-tls-verify=true
  3. Allow wildcard routes for your Ingress Controllers

    oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'
    This command was already executed for you to avoid disconnecting your terminal.
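
    If you want to confirm the change, you can read the policy back; it should print WildcardsAllowed:

    oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.spec.routeAdmission.wildcardPolicy}'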

Install hcp CLI

  1. When Red Hat Advanced Cluster Management for Kubernetes is installed on a hosting cluster, it provides a convenient download location for the hcp command line tool. Download the CLI from your cluster:

    curl -O https://hcp-cli-download-multicluster-engine.apps.my-guid.dynamic.redhatworkshops.io/linux/amd64/hcp.tar.gz
  2. Extract the client

    tar xvfz hcp.tar.gz -C /usr/local/bin/
  3. Confirm that the hcp command is available

    hcp --version
    Sample Output
    hcp version openshift/hypershift: e87182ca75da37c74b371aa0f17aeaa41437561a. Latest supported OCP: 4.14.0

Create Hosted Cluster

Some considerations:

  • Run the hub cluster and workers on the same platform for hosted control planes.

  • Each hosted cluster must have a unique name in order for multicluster engine operator to manage it.

  • A hosted cluster cannot be created in the namespace of a multicluster engine operator managed cluster.

    1. Create a new hosted cluster named cluster1 using the hcp command:

      hcp create cluster kubevirt \
      --name cluster1 \
      --release-image quay.io/openshift-release-dev/ocp-release:4.14.9-x86_64 \
      --node-pool-replicas 2 \
      --pull-secret ~/pull-secret.json \
      --memory 6Gi \
      --cores 2
      Sample Output
      2023-12-02T21:49:55Z    INFO    Applied Kube resource   {"kind": "Namespace", "namespace": "", "name": "clusters"}
      2023-12-02T21:49:55Z    INFO    Applied Kube resource   {"kind": "Secret", "namespace": "clusters", "name": "cluster1-pull-secret"}
      2023-12-02T21:49:55Z    INFO    Applied Kube resource   {"kind": "", "namespace": "clusters", "name": "cluster1"}
      2023-12-02T21:49:55Z    INFO    Applied Kube resource   {"kind": "Secret", "namespace": "clusters", "name": "cluster1-etcd-encryption-key"}
      2023-12-02T21:49:55Z    INFO    Applied Kube resource   {"kind": "NodePool", "namespace": "clusters", "name": "cluster1"}
      The --release-image flag sets up the hosted cluster with a specific OpenShift Container Platform release.

      A default node pool is created for the cluster with two virtual machine worker replicas according to the --node-pool-replicas flag.
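
      As a side note, the node pool is a regular Custom Resource in the clusters namespace. You can inspect it and, because NodePool supports the scale subresource, change the worker count later (do not scale it during this lab):

      oc get nodepool cluster1 -n clusters
      oc scale nodepool/cluster1 -n clusters --replicas=3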

    2. Verify that the hosted control plane pods are being created inside the newly created namespace (clusters-cluster1):

      oc get pod -n clusters-cluster1
      Sample Output
      NAME                                      READY   STATUS    RESTARTS   AGE
      capi-provider-554c58b965-6cx78            1/1     Running   0          4m23s
      cluster-api-5f5b78d889-2tnqm              1/1     Running   0          4m24s
      control-plane-operator-67b7d4556b-4b4mq   1/1     Running   0          4m23s
    3. Check the status of the hosted cluster by querying the Custom Resource named HostedCluster:

      oc get --namespace clusters hostedclusters
      Sample Output
      NAME       VERSION   KUBECONFIG                  PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
      cluster1             cluster1-admin-kubeconfig   Partial    False       False         Waiting for Kube APIServer deployment to become available
      It takes around 15 minutes until the cluster switches from Partial to Completed. You do not have to wait for the cluster to be fully deployed at this point and you may continue with the lab.
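
      If you prefer to follow along from the terminal, you can watch the resource, or block until it reports the Available condition:

      oc get hostedclusters -n clusters -w
      oc wait --for=condition=Available hostedcluster/cluster1 -n clusters --timeout=20m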
    4. Go back to the ACM Console (All Clusters in the top menu)

    5. Notice that cluster1 will appear automatically. If you don’t see it yet, wait until it appears.

      01 ACM Review
    6. Click on cluster1 and check the progress

      02 ACM Progress
    7. Review the information, scroll down, and wait until the status is Ready

    8. In the middle of the screen, click Control plane pods

      02 ACM CP pods
    9. Expect to see a long list of pods, as shown in the following image:

      02 ACM CP pods list

The Hosted Control Plane and Data Plane use the Konnectivity service to establish a tunnel for communications from the control plane to the data plane. This connection works as follows:

The Konnectivity agent on the compute nodes connects to the Konnectivity server running as part of the Hosted Control Plane.

The Kubernetes API Server uses this tunnel to communicate with the kubelet running on each compute node.

The compute nodes reach the Hosted Cluster API via an exposed service. Depending on the infrastructure where the Hosted Control Plane runs, this service can be exposed via a load balancer, a node port, and so on.

The Konnectivity server is used by the Hosted Control Plane to consume services deployed in the hosted cluster, such as OLM, the OpenShift Aggregated API, and OAuth.

hcp dp connection
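
Both ends of the tunnel are visible on the hosting cluster. For example, filtering the hosted control plane namespace should list the Konnectivity components (the exact pod names may differ from this expected pattern):

    oc get pods -n clusters-cluster1 | grep -i konnectivity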

Review creation

  1. While the installation continues, check OpenShift Virtualization

    1. Switch back to your local-cluster

    2. In the left menu navigate to Virtualization → Virtual Machines

    3. Select project clusters-cluster1

      03 OCPV VMs

      CoreOS disks are imported automatically and the VMs start. They will act as workers for the hosted cluster.
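
      Because each worker is a KubeVirt VirtualMachine resource in the hosted control plane namespace, you can list the same machines from the CLI on the hosting cluster:

      oc get vm -n clusters-cluster1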

In some situations in this lab we have seen the virtual machines fail to reach the ignition server. If the installation is taking long and the VM console shows failures accessing the ignition server, delete the router pods.
  1. (Run only if the VMs are failing to reach the ignition server)

    oc delete pods -n openshift-ingress --all
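    After the deletion, the ingress operator recreates the router pods automatically; you can confirm that they come back to Running:

    oc get pods -n openshift-ingress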
  2. In the left menu navigate to Networking → Services:

    04 OCPV Services

    Notice that the services required by OpenShift are created inside the namespace of each hosted cluster.
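
    The same view is available from the CLI on the hosting cluster:

    oc get svc -n clusters-cluster1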

  3. Navigate to Networking → Routes:

    05 OCPV Routes

    The routes to access the hosted cluster from the internet are listed.
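
    You can list them from the CLI as well:

    oc get routes -n clusters-cluster1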

  4. Navigate to Storage → PersistentVolumeClaims

    06 OCPV PVCs

    Notice that the etcd disks for the control plane have been created. These disks are used by the control plane pods. Low-latency, fast I/O disks are recommended for etcd to avoid issues.
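
    The etcd volumes are ordinary PersistentVolumeClaims in the hosted control plane namespace, so they can also be inspected from the CLI:

    oc get pvc -n clusters-cluster1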

Access the Hosted Cluster

  1. Go back to the ACM console, select cluster1, and wait until the cluster creation is complete.

    07 OCPV Ready

    The cluster can show as Ready while the worker nodes are still provisioning, which also means that Cluster Operators are still rolling out.

    Wait until the Cluster node pools section switches from Pending to Ready

  2. Review the Details information of the cluster

    08 OCPV Guest Details
  3. Click Reveal credentials and copy the password for the kubeadmin user

  4. Click on the Console URL and accept the self-signed certificate

  5. Log in to the new cluster with the credentials

    09 OCPV Guest Home

    Notice the Infrastructure provider is KubeVirt.

  6. Navigate in the left menu to Compute → Nodes and review the workers

    10 OCPV Guest Nodes
  7. Navigate in the left menu to Storage → StorageClasses

    11 OCPV Guest StorageClass

    The storage class is an interface to the hosting OpenShift cluster's storage class. Storage will be covered in more detail later on.
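
    Once you generate a kubeconfig for the hosted cluster (next section), you can list the same storage classes from the CLI; expect a KubeVirt CSI-backed class that maps to the hosting cluster storage:

    oc get storageclass --kubeconfig=cluster1-kubeconfig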

Review the cluster using the CLI

You can download the kubeconfig from the web console, or generate it with the hcp command.

  1. Generate the kubeconfig for the cluster1 cluster

    hcp create kubeconfig --name cluster1 > cluster1-kubeconfig
  2. Check the cluster operators

    oc get co --kubeconfig=cluster1-kubeconfig
    Sample Output
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    console                                    4.14.5    True        False         False      66m
    csi-snapshot-controller                    4.14.5    True        False         False      72m
    dns                                        4.14.5    True        False         False      67m
    image-registry                             4.14.5    True        False         False      67m
    ingress                                    4.14.5    True        False         False      66m
    insights                                   4.14.5    True        False         False      67m
    kube-apiserver                             4.14.5    True        False         False      72m
    kube-controller-manager                    4.14.5    True        False         False      72m
    kube-scheduler                             4.14.5    True        False         False      72m
    kube-storage-version-migrator              4.14.5    True        False         False      67m
    monitoring                                 4.14.5    True        False         False      65m
    network                                    4.14.5    True        False         False      66m
    node-tuning                                4.14.5    True        False         False      69m
    openshift-apiserver                        4.14.5    True        False         False      72m
    openshift-controller-manager               4.14.5    True        False         False      72m
    openshift-samples                          4.14.5    True        False         False      66m
    operator-lifecycle-manager                 4.14.5    True        False         False      72m
    operator-lifecycle-manager-catalog         4.14.5    True        False         False      72m
    operator-lifecycle-manager-packageserver   4.14.5    True        False         False      72m
    service-ca                                 4.14.5    True        False         False      67m
    storage                                    4.14.5    True        False         False      72m
  3. Check the cluster nodes

    oc get nodes --kubeconfig=cluster1-kubeconfig
    Sample Output
    NAME                      STATUS   ROLES    AGE   VERSION
    cluster1-ee50e7fb-ctrdd   Ready    worker   62m   v1.27.6+f67aeb3
    cluster1-ee50e7fb-dc59k   Ready    worker   62m   v1.27.6+f67aeb3
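
  4. Optionally, confirm that the hosted cluster reports its version and update status:

    oc get clusterversion --kubeconfig=cluster1-kubeconfig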