Deploy a virtual machine from a bootc image

In this exercise, you will build a qcow2 disk image from the bootc image and then launch a virtual machine from it. qcow2 is a standard disk image format used by QEMU/KVM, the virtualization stack on Linux.

Create the config for building the virtual machine image

To deploy our bootc image as a host, we need to get the contents of the image onto a disk. While bootc handles those mechanics, the actual creation of a physical disk, virtual disk, or cloud image is handled by other standard tools. For example, Anaconda can be used with bootc images for physical or virtual hosts, just as you would use it today.
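For illustration only, an Anaconda kickstart for such an install could look roughly like the sketch below. We do not use kickstart in this lab, and the ostreecontainer options shown are an assumption about your Anaconda version; check its documentation before relying on them.

# Hypothetical kickstart snippet installing from a bootc image
text
ostreecontainer --url {registry_hostname}/httpd --no-signature-verification
zerombr
clearpart --all --initlabel
autopart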

In this lab, we’ll use the bootc-image-builder tool, which can create a variety of disk image types and calls bootc to deploy the image contents. This is a variant of the Image Builder tooling you are already familiar with, just containerized and with bootc built in.

You may have noticed that the bootc image we’ve created does not include any login credentials. That’s not a typical concern for an application container, but as a host we will very likely need to log in and interact with it at some point. Since we are defining an image to be shared by multiple hosts across different environments, users and authentication are likely to be handled at the deployment stage, not the build stage.

There are cases where it may be useful to add a user and credentials to an image, for example as a 'break glass' emergency login.
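As a rough sketch only (we do not do this in this lab), such a break glass account could be baked in at build time by adding something like the following to the Containerfile; the username and key below are placeholders:

# Hypothetical Containerfile addition for an emergency login account
RUN useradd -m -G wheel breakglass && \
    mkdir -m 0700 /home/breakglass/.ssh && \
    echo "ssh-ed25519 AAAA...placeholder..." > /home/breakglass/.ssh/authorized_keys && \
    chmod 0600 /home/breakglass/.ssh/authorized_keys && \
    chown -R breakglass:breakglass /home/breakglass/.ssh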

To add a user during the deployment to a disk image, the credentials are put into the config file used by bootc-image-builder to create the final disk image.

An SSH key pair was generated for use during this lab; you can view the public part like this for later use. There’s no newline at the end of this file, so the echo just makes the output easier to copy.

cat ~/.ssh/my-guidkey.pub; echo
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAuXnpoluye+KM+9tvIAdHf+F0IHh+K73tlcjEG8LJRB
This is only a sample; this public key will not work in your environment.
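The key pair was created for you in this environment. For reference, a similar pair could be generated with ssh-keygen; the filename here simply matches the one used in this lab:

ssh-keygen -t ed25519 -f ~/.ssh/my-guidkey -N '' -C 'bootc lab key'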

You can now create a file called config-qcow2.toml with the following contents, replacing "SSHKEY" in the key field with the contents of the public key from your system.

nano config-qcow2.toml
[[customizations.user]]
name = "core"
password = "LB1054!"
groups = ["wheel"]
key = "SSHKEY"

This configuration blueprint creates a user named core in the wheel group, with the password LB1054! and the SSH key of the user on the lab host.

Create the disk image

To create a bootable disk image from an OCI image, we use a special version of Image Builder that supports bootc. This bootc-image-builder itself runs as a container and, as a result, needs additional capabilities and must be run as root.

As bootc-image-builder accesses local storage to find the source image, we need to copy our image from user storage to system storage. The podman image scp command can do this for us, and it can also copy images to various other targets. When executed with a source but no destination, as below, the default destination is local system storage.

podman image scp lab-user@localhost::{registry_hostname}/httpd
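As an optional sanity check, you can confirm the image now exists in system (root) storage before building:

sudo podman images {registry_hostname}/httpd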

Try to generate the image now:

This could take 2+ minutes to complete.
sudo podman run --rm --privileged \
  --security-opt label=type:unconfined_t \
  --volume .:/output \
  --volume ./config-qcow2.toml:/config.toml \
  --volume /var/lib/containers/storage:/var/lib/containers/storage \
  registry.redhat.io/rhel9/bootc-image-builder:9.5 \
  --type qcow2 \
  --local \
  {registry_hostname}/httpd

A brief explanation of the arguments used for podman run and bootc-image-builder:

  • --rm → do not keep the build container after execution finishes

  • --privileged → add capabilities required to build the image

  • --volume → podman will map these local directories or files to the container

  • registry.redhat.io/rhel9/bootc-image-builder:9.5 → the image builder container image

After the image builder image reference, these arguments are passed to bootc-image-builder:

  • --type qcow2 → the type of image to build

  • --local → use system container storage to find the image; this becomes the default as of 9.6

  • {registry_hostname}/httpd → the bootc image we are unpacking into the qcow2 disk

Once this completes, you will find the image in the qcow2/ directory:

file qcow2/disk.qcow2
qcow2/disk.qcow2: QEMU QCOW2 Image (v3), 10737418240 bytes
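If the qemu-img utility is available on the lab host (an assumption), it can report more detail about the disk image, such as the virtual size versus the actual space used:

qemu-img info qcow2/disk.qcow2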

Create the virtual machine

Since we provided the QCOW2 type to bootc-image-builder, the resulting disk image is complete and ready to run without additional installation or other steps. We can copy the image to the standard storage pool location for KVM.

sudo cp qcow2/disk.qcow2 /var/lib/libvirt/images/qcow-vm.qcow2

The creation of KVM virtual machines is out of scope for this lab, but the core of the virt-install command used here is --import, which skips any install process and creates the VM around the provided disk image.

virt-install --connect qemu:///system \
  --name qcow-vm \
  --disk /var/lib/libvirt/images/qcow-vm.qcow2 \
  --import \
  --memory 4096 \
  --graphics none \
  --osinfo rhel9-unknown \
  --noautoconsole \
  --noreboot

We can now start our new bootc virtual machine.

virsh --connect qemu:///system start qcow-vm

Check to make sure the virtual machine is running:

virsh --connect qemu:///system list
 Id   Name      State
-----------------------
 1    qcow-vm   running
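Optionally, virsh can report more detail about the guest, such as its memory and vCPU allocation:

virsh --connect qemu:///system dominfo qcow-vm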

Test and login to the virtual machine

Congratulations, you are running a bootc virtual machine! Now that the virtual machine is up and running, you can see if the webserver behaves as expected.

Until the VM finishes booting, you will get name resolution errors from curl. You can either wait and retry, monitor the progress of the boot by connecting to the serial console via virsh, or use watch and look for the result shown below.
curl http://qcow-vm

The result should be the "Hello Red Hat" string defined in index.html.
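As mentioned in the note above, you could instead watch for the webserver to come up, or attach to the serial console (exit the console with Ctrl+]); the watch interval here is just a suggestion:

watch -n 2 curl -s http://qcow-vm
virsh --connect qemu:///system console qcow-vm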

Before we log into our VM, let’s check the SHA256 digest of the image in the local registry:

skopeo inspect docker://{registry_hostname}/httpd | jq '.Digest'
"sha256:99694ce76cedd1fa58250c4e5ee6deeb4d91993b89054793394cda31b1d046ab"

This is a way to ensure the image on the system is the same as the image in the registry.
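For example, you could compare the digest reported by skopeo with the copy in local system storage on the lab host; the Go template field below is an assumption about podman's inspect output:

sudo podman image inspect {registry_hostname}/httpd --format '{{.Digest}}'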

You can now login to the virtual machine.

ssh core@qcow-vm
If the ssh key is not automatically picked up, use the password defined in the config file at the beginning of this lab (by default LB1054!). This is also the password to use when prompted by sudo.

Once you have logged in, you can inspect the bootc status.

sudo bootc status
No staged image present
Current booted image: node.g94th.g94th.gcp.redhatworkshops.io/httpd:latest (1)
    Image version: 9.20250326.0 (2025-04-03 14:36:38.438935004 UTC)
    Image digest: sha256:99694ce76cedd1fa58250c4e5ee6deeb4d91993b89054793394cda31b1d046ab
No rollback image present
1 This section details the name of the image, the version with creation timestamp, and the SHA256 digest from the registry. This digest should match the previous output from skopeo.

This status provides information about the images on the host. There are three different images that may be available on a bootc host: the booted image, the staged image, and the rollback image. We’ll discuss the latter two later in the lab. The booted image is what’s currently defined as the active environment. The image name here is what bootc tracks to detect any updates that become available.
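A quick way to pull just that tracking information out of the status output, if you want to compare it against the registry digest from earlier:

sudo bootc status | grep -E "booted image|Image digest"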

You can explore the virtual machine before moving on to the next section:

  • systemctl status httpd → check on the httpd service we have enabled in the Containerfile

  • cat /var/www/html/index.html → see the index.html file we tested via curl

Our services are running, but how can we tell that we are on a full system and not inside a running container?

First, bootc tells you directly whether it’s being run on an image mode host or not. If bootc were installed and run on a non-bootc host, bootc status would show all null values instead of the output seen here.

There are other clues in how the system was started, so let’s look at the kernel command line.

 cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt3)/boot/ostree/default-6fe9dddacaf5c3232ba2332010aa7442e0a6d0e3f455b7572b047cc2284c3f2f/vmlinuz-5.14.0-427.26.1.el9_4.x86_64 root=UUID=5425bac2-bfc2-457d-93f8-ae7d3bf14d6d rw boot=UUID=9b9c7b0a-61c6-4a66-ade5-8c6690f1efa7 rw console=tty0 console=ttyS0 ostree=/ostree/boot.1/default/6fe9dddacaf5c3232ba2332010aa7442e0a6d0e3f455b7572b047cc2284c3f2f/0

We can see in the kernel command line some clear ties to an ostree partition, which is how images are stored and managed on a bootc host. We’ll talk more about that later.

One other obvious difference for bootc hosts is the layout of the filesystem.

 df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  4.0M     0  4.0M   0% /dev
tmpfs          tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs          tmpfs     783M  592K  783M   1% /run
/dev/vda4      xfs       8.5G  1.9G  6.7G  22% /sysroot
composefs      overlay   6.3M  6.3M     0 100% /
tmpfs          tmpfs     2.0G     0  2.0G   0% /tmp
/dev/vda3      xfs       960M  145M  816M  16% /boot
/dev/vda2      vfat      501M  7.1M  494M   2% /boot/efi
tmpfs          tmpfs     392M     0  392M   0% /run/user/1000

Rather than the usual layout, you’ll notice that our root filesystem is an overlay and that /sysroot is where most of the storage is used. You can also check the output of ls -l / and notice that there are a lot of symlinks where you might expect directories or filesystems. This is a key difference tied to how updates, rollbacks, and software management work on image mode hosts. We’ll explore this in a later exercise.
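For example, on an image mode host directories like /home are typically symlinks into /var; you can spot-check a few (output will vary):

ls -ld /home /opt /usr/local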

Before proceeding, make sure you have logged out of the virtual machine:

logout

The prompt should look like `[lab-user@bastion ~]$ ` before continuing.