Decentralized Live Migration

A decentralized live migration is a variation of storage live migration that allows you to migrate a running VirtualMachine between namespaces or between clusters.

This can be useful to:

Balance Workloads: Distributing virtual machines between clusters helps optimize resource utilization. If one cluster is heavily loaded while another is idle, rebalancing can significantly improve operational efficiency.

Facilitate Maintenance: For environments with multiple clusters, virtual machine migration allows for seamless maintenance. You can move virtual machines off a cluster slated for upgrades or shutdown, ensuring zero downtime for your services.

Expedite Restores: Instant restore capabilities from backup vendors, particularly when coupled with namespace migration, can drastically speed up recovery times. Virtual machines can be quickly restored to a temporary location and then migrated to their original namespace and storage.

This lab demonstrates the forthcoming Tech Preview capabilities of Decentralized Live Migration. In this lab, we will walk through the live migration of a VirtualMachine from one Namespace to another. Live migrating between namespaces follows the same process as migrating a VirtualMachine between two clusters.

The UI-based workflow is not fully functional yet. During this lab we will introduce you to the UI workflow, but perform the migration through the CLI.

How does it work?

The migration involves two VirtualMachineInstances and two VirtualMachineInstanceMigration objects. Like a storage live migration, disk contents are copied over the network to the receiving VirtualMachine. The key difference is that the receiving virtual machine has a completely separate VirtualMachineInstance. To coordinate the migration, the statuses of the source and target VirtualMachineInstances have to be kept synchronized. A dedicated synchronization controller, running in the openshift-cnv namespace, facilitates communication between the source and target VirtualMachineInstances.
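
You can see the synchronization controller Pods running alongside the other OpenShift Virtualization components:

oc get pods -n openshift-cnv | grep virt-synchronization-controller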

Requirements

These requirements have already been enabled on your cluster and are noted for your reference.

  • The Migration Toolkit for Virtualization (MTV) Operator must be installed with feature_ocp_live_migration set to true when creating the ForkliftController CR

  • You must enable the DecentralizedLiveMigration featureGate on the KubeVirt CR
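
Since both requirements are already in place on this cluster, no changes are needed, but you can verify the feature gate if you are curious. A minimal check, assuming a single KubeVirt CR in the openshift-cnv namespace (the OpenShift Virtualization default):

oc get kubevirt -n openshift-cnv -o jsonpath='{.items[0].spec.configuration.developerConfiguration.featureGates}'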

Accessing the OpenShift Cluster

Web Console

{openshift_cluster_console_url}[{openshift_cluster_console_url},window=_blank]

CLI Login
oc login -u {openshift_cluster_admin_username} -p {openshift_cluster_admin_password} --server={openshift_api_server_url}
Cluster API

{openshift_api_server_url}[{openshift_api_server_url},window=_blank]

OpenShift Username
{openshift_cluster_admin_username}
OpenShift Password
{openshift_cluster_admin_password}

CLI-Based Instructions

  1. Ensure you are logged in to both the OpenShift Console (in your web browser) and the CLI (in the terminal window on the right side of your screen) as the admin user, then continue to the next step.

  2. Verify the source VirtualMachineInstance is running.

    oc get vmi -n vm-live-migration-source

    The output will look similar to the following, with a different IP and NODENAME:

    NAME                     AGE  PHASE    IP          NODENAME               READY
    vm-migration-ns-ns-live  28m  Running  10.233.2.16 worker-cluster-hflz6-3 True

    Now let’s migrate our virtual machine vm-migration-ns-ns-live from the namespace vm-live-migration-source to a new namespace vm-live-migration-destination.
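
    The destination namespace must exist before we create resources in it. In this lab environment it is already present; if you were reproducing this flow elsewhere, you would create it first:

    oc create namespace vm-live-migration-destination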

  3. As we noted above, the migration requires two VirtualMachineInstances.

    The first step is to create an empty DataVolume for the receiver virtual machine in the destination namespace.

    Execute the following command to create the destination DataVolume:

    cat <<EOF | oc apply -f -
    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      annotations:
        cdi.kubevirt.io/storage.usePopulator: "true"
      name: vm-migration-ns-ns-live
      namespace: vm-live-migration-destination
    spec:
      source:
        blank: {}
      storage:
        storageClassName: ocs-external-storagecluster-ceph-rbd
        resources:
          requests:
            storage: 30Gi
    EOF

    Confirm the DataVolume and associated PersistentVolumeClaim have been created by executing the following two commands:

    oc get DataVolume -n vm-live-migration-destination
    NAME                     PHASE              PROGRESS  RESTARTS  AGE
    vm-migration-ns-ns-live  PendingPopulation  N/A                 107s

    oc get PersistentVolumeClaim -n vm-live-migration-destination
    NAME                     STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS                          VOLUMEATTRIBUTESCLASS  AGE
    vm-migration-ns-ns-live  Pending                                  ocs-external-storagecluster-ceph-rbd  <unset>                2m5s
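
    The DataVolume reports PendingPopulation and the PersistentVolumeClaim remains Pending. This is expected: with the cdi.kubevirt.io/storage.usePopulator annotation, the blank volume is not populated, and the claim is not bound, until a consumer (the receiver virtual machine) is scheduled against it. You can inspect the waiting claim with:

    oc describe pvc vm-migration-ns-ns-live -n vm-live-migration-destination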
  4. Next, create the receiver virtual machine in the destination namespace.

    The destination VirtualMachine is the same as the source VirtualMachine except for two key differences:

    1. The destination VirtualMachine has an annotation to set the post-live-migration runStrategy:

          kubevirt.io/restore-run-strategy: Always
    2. The destination virtual machine has a different spec.runStrategy:

        runStrategy: WaitAsReceiver
  5. Execute the following command to create the destination virtual machine:

    cat <<EOF | oc apply -f -
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
        kubevirt.io/restore-run-strategy: Always
      name: vm-migration-ns-ns-live
      namespace: vm-live-migration-destination
    spec:
      runStrategy: WaitAsReceiver
      template:
        metadata:
          annotations:
            vm.kubevirt.io/flavor: small
            vm.kubevirt.io/os: rhel9
            vm.kubevirt.io/workload: server
          creationTimestamp: null
          labels:
            kubevirt.io/domain: vm-migration-ns-ns-live
            kubevirt.io/size: small
            network.kubevirt.io/headlessService: headless
        spec:
          architecture: amd64
          networks:
          - name: default
            pod: {}
          domain:
            cpu:
              cores: 1
              sockets: 1
              threads: 1
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - macAddress: 02:a1:3b:00:00:85
                masquerade: {}
                model: virtio
                name: default
              logSerialConsole: false
              rng: {}
            features:
              acpi: {}
              smm:
                enabled: true
            firmware:
              bootloader:
                efi: {}
              serial: 94f58dd1-b1e1-4aab-add2-ab3d3483d297
            machine:
              type: pc-q35-rhel9.6.0
            memory:
              guest: 2Gi
            resources: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: vm-migration-ns-ns-live
            name: rootdisk
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                user: cloud-user
                password: redhat
                chpasswd: { expire: False }
            name: cloudinitdisk
    EOF

    Confirm the VirtualMachine and associated VirtualMachineInstance have been created by executing the following two commands:

    oc get VirtualMachine -n vm-live-migration-destination
    NAME                      AGE     STATUS               READY
    vm-migration-ns-ns-live   5m51s   WaitingForReceiver   False

    oc get VirtualMachineInstance -n vm-live-migration-destination
    NAME                      AGE     PHASE            IP    NODENAME   READY
    vm-migration-ns-ns-live   5m55s   WaitingForSync                    False

    You will notice that the VirtualMachine and VirtualMachineInstance each report a unique state: WaitingForReceiver and WaitingForSync, respectively. This indicates the VirtualMachine is waiting for the migration to start as a receiver, and the VirtualMachineInstance is waiting for data to be synchronized from the source VirtualMachine.
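
    You can also read these states directly from the status fields; for example, to query the VirtualMachineInstance phase:

    oc get vmi vm-migration-ns-ns-live -n vm-live-migration-destination -o jsonpath='{.status.phase}'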

  6. With the destination VirtualMachine and VirtualMachineInstance waiting, create the destination VirtualMachineInstanceMigration by executing the following command:

    cat <<EOF | oc apply -f -
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: ns-to-ns-vm-live-migration-instance-destination
      namespace: vm-live-migration-destination
    spec:
      receive:
        migrationID: 52e4398d-bdbf-42b5-b0f4-1e7c6c0a08f5-38cec1f6-43bb-412d-8477-b3d635fd7123
      vmiName: vm-migration-ns-ns-live
    EOF

    The migrationID field in the YAML above does not refer to an existing object. You may use any unique string in that field.
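
    If you would rather generate a value than invent one, a minimal approach (uuidgen is a standard Linux utility) is:

    MIGRATION_ID=$(uuidgen)
    echo "${MIGRATION_ID}"

    Whichever value you use, note that the source VirtualMachineInstanceMigration created later must specify the same migrationID; the matching IDs are what associate the two migration objects.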

    Confirm the destination VirtualMachineInstanceMigration has been created by executing the following command:

    oc get VirtualMachineInstanceMigration -n vm-live-migration-destination
    NAME                                              PHASE            VMI
    ns-to-ns-vm-live-migration-instance-destination   WaitingForSync   vm-migration-ns-ns-live
  7. With the destination VirtualMachineInstanceMigration waiting, the final step is to create the source VirtualMachineInstanceMigration.

    To create the source VirtualMachineInstanceMigration, we need the IP address of the leader virt-synchronization-controller.

    1. Find the leader virt-synchronization-controller by looking at the lease holder:

      oc get leases -n openshift-cnv | grep virt-synchronization-controller
      virt-synchronization-controller     virt-synchronization-controller-65c7b9d5bd-f4rbg        147m
    2. Find the IP address of the leader virt-synchronization-controller using the Pod name from the previous command:

      oc get pods -o wide -n openshift-cnv | grep 'virt-synchronization-controller-65c7b9d5bd-f4rbg'
      virt-synchronization-controller-65c7b9d5bd-f4rbg     1/1   Running  0    150m   10.233.0.222   control-plane-cluster-hflz6-1  <none>   <none>
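
    If you prefer to script these two lookups, here is a minimal sketch, assuming (as in the output above) that the lease holder identity matches the Pod name:

    LEADER_POD=$(oc get lease virt-synchronization-controller -n openshift-cnv -o jsonpath='{.spec.holderIdentity}')
    SYNC_IP=$(oc get pod "${LEADER_POD}" -n openshift-cnv -o jsonpath='{.status.podIP}')
    echo "${SYNC_IP}"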
  8. Take the Pod IP address from the output above (10.233.0.222 in the example) and replace <leader_sync_controller_ip> in the VirtualMachineInstanceMigration YAML below. Take care that you do not remove the port :9185.

    cat <<EOF | oc apply -f -
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: ns-to-ns-vm-live-migration-instance-source
      namespace: vm-live-migration-source
    spec:
      sendTo:
        connectURL: <leader_sync_controller_ip>:9185
        migrationID: 52e4398d-bdbf-42b5-b0f4-1e7c6c0a08f5-38cec1f6-43bb-412d-8477-b3d635fd7123
      vmiName: vm-migration-ns-ns-live
    EOF

Monitor the Migration

When the source VirtualMachineInstanceMigration above is created, the migration will start. You can monitor the migration through the CLI or the Console.

Ensure you are logged in to both the OpenShift Console (in your web browser) and the CLI (in the terminal window) as the admin user, then continue to the next step.

  1. From the CLI

    oc get vmim -A -w

    Using -w to apply a watch, you will see the migration progress through a number of PHASEs, from Scheduling through to Succeeded.

    NAMESPACE                       NAME                                              PHASE        VMI
    vm-live-migration-destination   ns-to-ns-vm-live-migration-instance-destination   Scheduling   vm-migration-ns-ns-live
    vm-live-migration-source        ns-to-ns-vm-live-migration-instance-source        Scheduling   vm-migration-ns-ns-live
    vm-live-migration-destination   ns-to-ns-vm-live-migration-instance-destination   Scheduled    vm-migration-ns-ns-live
    vm-live-migration-destination   ns-to-ns-vm-live-migration-instance-destination   PreparingTarget   vm-migration-ns-ns-live
    vm-live-migration-source        ns-to-ns-vm-live-migration-instance-source        Scheduled         vm-migration-ns-ns-live
    vm-live-migration-source        ns-to-ns-vm-live-migration-instance-source        PreparingTarget   vm-migration-ns-ns-live
    vm-live-migration-destination   ns-to-ns-vm-live-migration-instance-destination   TargetReady       vm-migration-ns-ns-live
    vm-live-migration-source        ns-to-ns-vm-live-migration-instance-source        TargetReady       vm-migration-ns-ns-live
    vm-live-migration-source        ns-to-ns-vm-live-migration-instance-source        Running           vm-migration-ns-ns-live
    vm-live-migration-destination   ns-to-ns-vm-live-migration-instance-destination   Running           vm-migration-ns-ns-live
    vm-live-migration-destination   ns-to-ns-vm-live-migration-instance-destination   Succeeded         vm-migration-ns-ns-live
    oc get vm -n vm-live-migration-destination

    The VirtualMachine will show a STATUS of Migrating.

    NAME                      AGE    STATUS      READY
    vm-migration-ns-ns-live   7m1s   Migrating   False
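
    Once the destination migration reaches Succeeded, you can confirm the end state from the CLI: the VirtualMachineInstance should be Running in the destination namespace and the source VirtualMachine should be Stopped.

    oc get vmi -n vm-live-migration-destination
    oc get vm -n vm-live-migration-source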
  2. From the Console

    From the left-hand menu, navigate to Virtualization and click Virtual Machines, then click the namespace vm-live-migration-source to expand it and click on the virtual machine named vm-migration-ns-ns-live.

    On the Virtual machine details page, you will see that the Source VM is Running.

    image1 ui src vm running
    Figure 1. Source Virtual Machine Details - Running

    To view the Destination VM, click the namespace vm-live-migration-destination to expand it and click on the virtual machine named vm-migration-ns-ns-live.

    On the Virtual machine details page, you will see the Destination VM. If the migration has not started, it will have a status of WaitingForReceiver.

    image2 ui dest waiting
    Figure 2. Destination Virtual Machine Details - WaitingForReceiver

    If the Migration has started and is in the early stages, the VM will have a status of Starting.

    image3 ui dest starting
    Figure 3. Destination Virtual Machine Details - Starting

    When the Source and Destination are ready, the status of the Destination VM will change to Migrating.

    image4 ui dest migrating
    Figure 4. Destination Virtual Machine Details - Migrating

    Once the Destination VM starts Migrating, you can click on the Migrating status, which will display a pop-up panel with information about the state of the migration.

    image5 ui migrating overview panel
    Figure 5. Destination Virtual Machine - Migration Pop-up Panel

    From the Migrating pop-up panel, there is a link Migration metrics. Clicking that link will take you to the Metrics tab with detailed information about the VM and the migration.

    image6 ui dest migrating metrics
    Figure 6. Destination Virtual Machine - Migration Metrics

    Scrolling down on the Metrics page, you will see more migration metrics and a LiveMigration progress bar.

    image7 ui dest migrating metrics2
    Figure 7. Destination Virtual Machine - Migration Metrics

    As the migration progresses, you will see the LiveMigration progress bar move until it reaches 100% and displays a Complete time.

    image8 ui dest migrating progress bar
    Figure 8. Destination Virtual Machine - Migration Progress Bar

    Navigating back to the Overview tab, you will see that the Destination VM is now in a Running state. You can also see, in the Projects sidebar, that the Source VM is now Stopped.

    This marks the successful completion of the Namespace to Namespace live migration.

    image9 ui dest running src stopped
    Figure 9. Destination Virtual Machine - Running & Source Virtual Machine - Stopped

UI-Based Instructions

The UI-based workflow is not fully functional yet: you can complete every step, but the migration will fail when the plan is created. The purpose of this section is to introduce you to the UI-based workflow.

  1. Ensure you are logged in to the OpenShift Console as the admin user from your web browser and continue to the next step.

  2. From the Console left-hand menu, navigate to Migration for Virtualization and click Migration plans.

    image1 click migration plans
    Figure 10. Migration Plans
  3. From the Migration plans page, click Create plan.

    image2 click create plans
    Figure 11. Create Migration Plan
  4. On the Create migration plan page, give your plan a name like namespace-to-namespace. Leave the Plan project as the default value of openshift-mtv.

    image3 planinfo name project
    Figure 12. Create migration plan
  5. Further down the Create migration plan page, select host as both the Target and Source provider and select vm-live-migration-destination as the Target project.

    Click Next

    Because we are migrating our VirtualMachine between namespaces within the same cluster, our Source and Target providers are the same. If we were migrating between clusters, we would have a different provider for the target cluster and select that instead.

    image4 planinfo source target providers
    Figure 13. Create migration plan
  6. From the Virtual machines list, select vm-migration-ns-ns-live as the VirtualMachine you want to migrate.

    Click Next

    image5 select virtual machine
    Figure 14. Select Virtual Machine
  7. On the Network Map page, select Use new network map and select /Default network as the Source network. The Target network should already be set to Default network.

    Click Next

    image6 new network map
    Figure 15. Create Network Map
  8. On the Storage Map page, select Use new storage map and select ocs-external-storagecluster-ceph-rbd as the Source storage. The Target storage should already be set to ocs-external-storagecluster-ceph-rbd.

    Click Next

    image7 new storage map
    Figure 16. Create Storage Map
  9. On the Migration type page, select Live migration.

    Click Next

    image8 migration type live
    Figure 17. Migration Type
  10. On the Other settings page, leave the defaults.

    Click Next

    image9 power state
    Figure 18. Other Settings
  11. On the Hooks page, leave the defaults.

    Click Next

    image10 migration hooks
    Figure 19. Hooks
  12. On the Review and create page, make sure everything looks correct.

    Click Create plan

    image11 review create
    Figure 20. Review and Create
  13. When the Plan is created, you will be taken to the Plan details page where you can monitor the status and progress of your migration.

    In our case, we see the plan is not ready due to a MAC address conflict. Once this issue is resolved, the migration will complete successfully with no other changes to the workflow.

    You can follow CNV-72966 - loosen (or drop) mac collision detection for more information.

    image12 plan created error
    Figure 21. Plan Details