Storage Management

Red Hat OpenShift supports multiple types of storage for both on-premises and cloud providers. OpenShift Virtualization can use any supported container storage interface (CSI) provisioner in the environment you’re running on. For example, OpenShift Data Foundation, NetApp, Dell/EMC, Fujitsu, Hitachi, Pure Storage, Portworx, and many others support on-premises, CSI provisioned, ReadWriteMany (RWX) volumes with OpenShift Virtualization.

This workshop segment will explore Persistent Volume Claims (PVCs), which are used to request storage from the provider and store a VM disk. Many storage providers also support snapshots and clones of their devices; be sure to check with your vendor to verify the features supported by the CSI driver and storage device.

Notably, there are no restrictions on storage protocol (e.g. NFS, iSCSI, FC, etc.) specific to OpenShift Virtualization. The only requirement is that the RWX access mode is available to support live migration of VMs within the cluster. Otherwise, the storage that best meets the needs of your VMs and applications is always the right choice.
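As a sketch of what such a storage request looks like, the PVC below asks for an RWX, block-mode volume of the kind OpenShift Virtualization uses for a VM disk. The name and storage class here are placeholders; substitute a storage class provided by your CSI driver:

```yaml
# Hypothetical PVC for a VM disk (illustrative names).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-example
spec:
  accessModes:
    - ReadWriteMany          # RWX is required for VM live migration
  volumeMode: Block          # raw block device, common for VM disks
  resources:
    requests:
      storage: 30Gi
  storageClassName: my-rwx-storage-class   # placeholder storage class
```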

00 disk concepts

Examine the PVC for a VM

In this lab, we are going to take a closer look at the storage behind the virtual machine, fedora01, that we just created.

  1. Start by clicking on the left menu for Storage → Persistent Volume Claims. Make sure you are in the vmexamples-{user} namespace; you should see the fedora01 PVC that was created when you created the fedora01 VM in the previous section.

  2. Click on the fedora01 PVC and you will be presented with a screen that shows additional details about the storage volume backing the VM.

  3. Notice the following information about the persistent volume claim:

    1. The PVC is currently bound successfully

    2. The PVC has a requested capacity and size of 30 GiB

    3. The Access mode of the PVC is ReadWriteMany (RWX)

    4. The Volume mode of the PVC is Block

    5. The volume is using the ocs-external-storagecluster-ceph-rbd storage class.

      02 Fedora01 PVC Details
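The same details are visible in the PVC’s YAML (for example, via the YAML tab). A trimmed sketch of roughly what you should see follows; the namespace suffix is illustrative, and status fields are approximated:

```yaml
# Approximate shape of the fedora01 PVC; exact fields may vary.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora01
  namespace: vmexamples-user1   # your vmexamples-{user} namespace
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 30Gi
  storageClassName: ocs-external-storagecluster-ceph-rbd
status:
  phase: Bound
  capacity:
    storage: 30Gi
```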

Managing Snapshots

OpenShift Virtualization relies on the CSI storage provider’s snapshot capability to create disk snapshots for the virtual machine, which can be taken "online" while the VM is running or "offline" while the VM is powered off. If the KVM integrations are installed on the VM, you will also have the option of quiescing the guest operating system (quiescing ensures that the snapshot of the disk represents a consistent state of the guest file systems, e.g., buffers are flushed and the journal is consistent).

Since disk snapshots are dependent on the storage implementation, abstracted by the CSI, performance impact and capacity used will depend on the storage provider. Work with your storage vendor to determine how the system will manage PVC snapshots and the impact they may or may not have.

Snapshots, by themselves, are not a backup or disaster recovery capability. The data needs to be protected in other ways, such as one or more copies stored in a different location, to recover from the storage system failing.

In addition to the OpenShift API for Data Protection (OADP), partners such as Kasten by Veeam, Trilio, and Storware support the ability to backup and restore virtual machines to the same cluster or other clusters as needed.

With the VM snapshots feature, cluster administrators and application developers can:

  • Create a new snapshot

  • List all snapshots attached to a specific VM

  • Revert a VM to a snapshot

  • Delete an existing VM snapshot
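These operations are also available declaratively through the snapshot API. As a hedged sketch, a snapshot of fedora01 could be requested with a manifest like the one below; the API version shown may differ between OpenShift Virtualization releases, and the snapshot name is arbitrary:

```yaml
# Declarative equivalent of "Take snapshot" (illustrative names;
# verify the apiVersion available in your cluster).
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: fedora01-snapshot
  namespace: vmexamples-user1   # your vmexamples-{user} namespace
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: fedora01
```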

Creating and Using Snapshots

  1. Navigate back to Virtualization → VirtualMachines and select the virtual machine fedora01 in the project vmexamples-{user}.

    03 VM Overview
  2. Notice there are currently no snapshots of this VM listed on the overview page.

    04 Snapshots Overview
  3. Navigate to the Snapshots tab.

    05 Snapshot Menu
  4. Press Take snapshot and a dialog will open.

    There is a warning about the cloudinitdisk not being included in the snapshot. This is expected and happens because it is an ephemeral disk.
    06 VM Snapshot Dialog
  5. Press Save and wait until the snapshot has been created and the status shows as Operation complete.

    07 VM Snapshot Taken
  6. Press the three-dot menu, and see that the Restore option is greyed out because the VM is currently running.

    08 VM Restore Disabled
  7. Next, switch to the Console tab. We are going to log in and perform a modification that prevents the VM from being able to boot.

    09 Console Login
  8. Click on the Guest login credentials dropdown to gather the username and password to log into your console.

    There is a Copy to clipboard button and a Paste button available here, which makes the login process much easier.
  9. Once you are logged in, execute the following command:

    sudo rm -rf /boot/grub2; sudo shutdown -r now
  10. The virtual machine will no longer be able to boot.

    10 Bootloader Broken

    In the previous step, the operating system was shut down from within the guest. However, OpenShift Virtualization will restart it automatically by default. This behavior can be changed globally or on a per-VM basis.

  11. Using the Actions dropdown menu or the shortcut button in the top right corner, Stop the VM. This process can take a long time since it attempts a graceful shutdown and the machine is in an unstable state. If you click on the Actions dropdown menu again, you will have the option to Force stop. Please use this option in order to continue with the lab.

  12. You can click on the Overview tab to confirm that the VM has stopped. You can also see the snapshot we recently took listed in the Snapshots tile. (You may need to Force Stop the VM via the dropdown. This is fine as we are about to restore the snapshot.)

    11 VM Stopped Snapshot
  13. Navigate back to the Snapshots tab, click the three-dot menu, and with the VM stopped, you will find Restore is no longer greyed out. Click it.

    12 VM Restore
  14. In the dialog shown, press Restore.

    13 VM Restore Dialog
  15. Wait until the VM is restored; the process should be fairly quick.

    14 VM Restored
  16. Return to Overview tab, and start the VM.

    15 VM Start
  17. Click on the console tab to confirm that the VM has now restarted successfully.

    16 VM Running
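The restore you just performed through the console can also be expressed declaratively. As a hedged sketch, a manifest like the one below restores a stopped VM from a named snapshot; as with the snapshot example, the API version may differ by release, and the names are illustrative:

```yaml
# Declarative equivalent of "Restore" (illustrative names;
# verify the apiVersion available in your cluster).
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineRestore
metadata:
  name: fedora01-restore
  namespace: vmexamples-user1   # your vmexamples-{user} namespace
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: fedora01
  virtualMachineSnapshotName: fedora01-snapshot   # snapshot to restore
```

As in the console workflow, the target VM must be stopped before the restore is processed.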

Clone a Virtual Machine

Cloning creates a new VM that uses its own disk image for storage, but most of the clone’s configuration and stored data is identical to the source VM.

  1. Return to the Overview screen, and click the Actions dropdown menu to see the option to clone the VM.

    17 Overview Actions Clone
  2. Press Clone from the Actions menu, and a dialog will open. Name the cloned VM fedora02, and select the check box to Start VirtualMachine on clone, then click Clone.

    18 VM Clone Dialog
  3. A new VM is created, the disks are cloned, and the portal automatically redirects you to the new VM. Notice that the Created time is very recent.

    19 VM Cloned
    The cloned VM will have the same identity as the source VM, which may cause conflicts with applications and other clients interacting with the VM. Use caution when cloning a VM connected to an external network or in the same project.
  4. Click on the YAML menu at the top of the screen. You will see that the name of the VM is fedora02; however, there are labels that remain from the fedora01 source VM that will need to be manually updated.

    20 Cloned VM YAML
  5. Modify the app and kubevirt.io/domain values in the YAML so that they are set to fedora02, then click the Save button at the bottom. This will allow us to work with this VM much more easily in future modules.
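After the edit, the relevant parts of the VM’s YAML should look roughly like the fragment below (most fields omitted; the exact location of the labels may vary slightly between versions):

```yaml
# Trimmed sketch of the cloned VM after updating the labels.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora02
  labels:
    app: fedora02                      # was fedora01
spec:
  template:
    metadata:
      labels:
        kubevirt.io/domain: fedora02   # was fedora01
```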

Summary

In this section of our lab we explored the storage options that are available to us when managing virtual machines. We also performed several VM management functions that are dependent on the storage provisioned for the virtual machine, including taking snapshots of VMs and cloning VMs to be used in another project or to help streamline development.