Creating an install ISO with Anaconda
In this section you will learn the basics of deploying a bootc image using Anaconda and an all-in-one ISO image.
Anaconda is the official Red Hat Enterprise Linux installer, which uses Kickstart as its scripting language. Recently, Kickstart received a new command called `ostreecontainer` to support bootc installs.
Building an installation ISO with the bootc image
Using `bootc-image-builder` is great for creating virtual and cloud images, but for bare metal installs we need a different approach. In those environments, the Anaconda installer is still widely used, automated by kickstart files.
Anaconda is typically used via the boot ISO shipped by Red Hat. There are several ways to get a host to boot from this ISO image, including PXE or HTTP Boot over the network, as an image via IPMI, or a physical disk inserted into a drive. The kickstart file for an install can also be provided in several ways, including over a network or as part of the ISO.
For this section of the lab, we’ll focus on creating a boot ISO that includes the kickstart. The configuration of Anaconda via kickstart will be the same, regardless of how it is presented to the host being installed. This means you can incorporate bootc hosts into an existing kickstart infrastructure fairly seamlessly.
To create the ISO with an embedded kickstart, we will use `bootc-image-builder`. Make sure you have the bootc image to be installed available in system storage.
sudo podman images {registry_hostname}/httpd
Creating the kickstart file
All of the kickstart is standard fare. Anaconda will create the partitions and filesystems on disk, enable services, and create users just like it would in any other install. You even have access to the `%pre` and `%post` blocks; the only section missing from any kickstart you have today is the `%packages` list. The full details of kickstart are beyond the scope of this lab, but the example will create a single partition, create a user, enable `sshd`, disable the firewall, and reboot once finished. Oh, and of course deploy our image.
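For instance, if you wanted to run extra commands at the end of an install, a minimal `%post` block might look like this (illustrative only; this lab's kickstart doesn't use one):

%post
# Illustrative only: leave a marker in the deployed system
echo "Provisioned by kickstart on $(date)" >> /etc/motd
%end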
The kickstart file is added to the ISO via a customization block in the `bootc-image-builder` config file. Be sure to add the SSH public key for your system, which you can display like this:
cat ~/.ssh/my-guidkey.pub; echo
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAuXnpoluye+KM+9tvIAdHf+F0IHh+K73tlcjEG8LJRB
This is only a sample; this public key will not work in your environment.
You can now create a file called `config-iso.toml` with the following contents, replacing "SSHKEY" in the `sshkey` line with the contents of the public key from your system.
nano config-iso.toml
[customizations.installer.kickstart]
contents = """
# Basic setup
text
network --bootproto=dhcp --device=link --activate
# Basic partitioning
clearpart --all --initlabel --disklabel=gpt
reqpart --add-boot
part / --grow --fstype xfs
# No ostreecontainer command needed; instead bootc-image-builder lays down the container contents in the ISO
firewall --disabled
services --enabled=sshd
user --name=core --groups=wheel --password=LB1054!
sshkey --username=core "SSHKEY"
# Only inject a SSH key for root
rootpw --iscrypted locked
"""
The new `ostreecontainer` command replaces the standard `%packages` block of the kickstart file, which defines what RPMs would be installed by Anaconda. That set of RPM transactions is replaced with the bootc image to be unpacked and installed to disk.
When creating an ISO with an embedded kickstart, however, `bootc-image-builder` will automatically inject the correct `ostreecontainer` directive for the image we pass. This image is embedded in the ISO with the kickstart, which makes the ISO a completely self-contained installer for a single image.
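As a rough sketch, the injected directive looks something like the following. The exact flags are an assumption on our part, though the `/run/install/repo/container` path matches what you'll see later in the `bootc status` output.

# Approximately what bootc-image-builder injects into the embedded kickstart (flags assumed)
ostreecontainer --url=/run/install/repo/container --transport=oci --no-signature-verification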
The command looks nearly identical to the one used to create the virtual machine QCOW2 disk in the earlier exercise, using a new output type, `anaconda-iso`.
This could take 6+ minutes to complete.
sudo podman run --rm --privileged \
--security-opt label=type:unconfined_t \
--volume .:/output \
--volume ./config-iso.toml:/config.toml \
--volume /var/lib/containers/storage:/var/lib/containers/storage \
registry.redhat.io/rhel9/bootc-image-builder:9.5 \
--type anaconda-iso \
--local \
{registry_hostname}/httpd
For this output type there are two stages: `bootc-image-builder` will first create the ISO image from the same source repos as the container it's passed. This ensures we get matching versions for the installer and the image to be installed, regardless of the host's OS version, e.g. a RHEL 10 ISO for a RHEL 10 image.
While the ISO builds, let's talk about the change needed to use bootc with a kickstart file in a network-based install. If you wanted to deploy from a container registry, just add the new `ostreecontainer` command. In that kickstart, `ostreecontainer` takes the `--url` argument with the full location and name of the image:
ostreecontainer --url quay.io/myorg/myimage:mytag
This will download the image during the install process and use bootc to deploy the contents to the disk set up by Anaconda according to the kickstart file. You can boot from the standard boot ISO downloaded from the Customer Portal and pass the kickstart via HTTP, or from a Satellite server. The only change is how software is processed during the install.
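For example, a network-based flow might serve the kickstart over HTTP and point the installer at it with a boot-time kernel argument. The host, port, and file name here are hypothetical:

# Serve the directory containing the kickstart over HTTP (hypothetical path)
python3 -m http.server 8000 --directory /srv/kickstarts
# Then, at the installer boot menu, append a kernel argument such as:
#   inst.ks=http://192.0.2.10:8000/bootc.ks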
Starting the installation with Anaconda in a virtual machine
Having created a custom installation ISO, we can boot a virtual machine and automatically install our image. We are once again using `virt-install` to create a local VM; the main difference is that here we use the options to pass the local path of the ISO file. We'll attach `virsh` to the console created via the `--extra-args` option so we can see what happens.
The kernel arguments for the kickstart are embedded automatically into the ISO by `bootc-image-builder`.
Locate the installer ISO in the `bootiso` directory.
file bootiso/install.iso
bootiso/install.iso: ISO 9660 CD-ROM filesystem data (DOS/MBR boot sector) 'RHEL-9-5-BaseOS-x86_64' (bootable)
Copy the ISO to a location where `virsh` can find it, like with the QCOW2 disk image. This is mainly about permissions and access.
sudo cp bootiso/install.iso /var/lib/libvirt/images/install.iso
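If you want to double-check the copy (an optional step beyond the lab instructions), `ls -Z` shows the permissions and SELinux label on the file:

ls -lZ /var/lib/libvirt/images/install.iso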
Now we’re ready to start the install from our ISO image.
This could take 2+ minutes to complete.
virt-install --connect qemu:///system \
--name iso-vm \
--disk size=10 \
--location /var/lib/libvirt/images/install.iso \
--extra-args "inst.ks=hd:LABEL=RHEL-9-5-BaseOS-x86_64:/osbuild.ks console=ttyS0" \
--memory 4096 \
--graphics none \
--osinfo rhel9-unknown \
--noreboot
Once the initial boot is complete, you'll get to see Anaconda at work. At various points you'll see output like below, showing Anaconda reading the kickstart and calling bootc to deploy the image to the disk layout it created.
Starting installer, one moment...
anaconda 34.25.5.9-1.el9 for Red Hat Enterprise Linux 9.5 started.
Starting automated install.
Saving storage configuration...
Deployment starting: /run/install/repo/container
When prompted, hit `Enter` to finish the installation and shut down the VM.
Installation complete
Use of this product is subject to the license agreement found at:
/usr/share/redhat-release/EULA

Installation complete. Press ENTER to quit:
We can now start our new bootc virtual machine.
virsh --connect qemu:///system start iso-vm
Check to make sure the virtual machine is running:
virsh --connect qemu:///system list
 Id   Name      State
------------------------------------
 1    qcow-vm   running
 2    iso-vm    running
Test and log in to the virtual machine
As with the previous virtual machine you created, you can directly check whether the http application is already running on the host:
curl http://iso-vm
The output should be "Hello Red Hat Summit 2025!!"
You can now log in to the virtual machine.
ssh core@iso-vm
If the ssh key is not automatically picked up, use the password defined in the config file at the beginning of this lab (by default `LB1054!`).
This is also the password to use when prompted by `sudo`.
Once you have logged in, you can inspect the bootc status.
sudo bootc status
No staged image present
Current booted image: /run/install/repo/container
    Image version: 9.20250326.0 (2025-04-03 14:36:38.438935004 UTC)
    Image digest: sha256:99694ce76cedd1fa58250c4e5ee6deeb4d91993b89054793394cda31b1d046ab
No rollback image present
Switching to a different transport method
One thing that is immediately different in the `bootc status` output is that the deployed image is a local path, not the registry naming convention we've been using. Let's dig a little deeper by pulling the `spec` block from the full YAML output.
sudo bootc status --format yaml | grep -A 4 spec
spec:
image:
image: /run/install/repo/container
transport: oci
bootOrder: default
The `transport` line refers to how containers are pulled and is defined as part of the OCI standards. The `oci` transport type means this is a single image located at a specific local path. This path existed in the install environment, but isn't a container storage location we'd use on a live system. In fact, this image may not exist on the system at all, since `/run` is a tmpfs location.
It's important to note that not having the container image on the system doesn't affect bootc operations at runtime. Once the image is installed, or an update is pulled and deployed, the container image is no longer needed. Rollbacks are to the deployment on disk, not to an image.
So far in this lab, we have been using the `registry` transport, which requires network access. To manage updates in an offline manner, say for disconnected environments or those with intermittent connectivity, we could replicate the OCI transport and present an image at the same location. But we can also use the standard system storage locations with the `containers-storage` transport. A full discussion of transports and their associated uses and configuration is outside the scope of this lab.
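As a quick illustration (an aside, not a lab step), here is how the same image could be referenced through a few common transports; the OCI layout path is hypothetical, and the full list is documented on the system:

# The same image referenced through different transports (illustrative):
#   registry:             docker://{registry_hostname}/httpd
#   local system storage: containers-storage:{registry_hostname}/httpd
#   OCI layout on disk:   oci:/path/to/oci-dir
# The supported transports are documented in containers-transports(5):
man 5 containers-transports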
For this lab, let's provide an update via standard system storage. We can use `skopeo` to copy images from one location to another. Here, we will use it to copy from the lab registry to the host, but it can also be used to copy to and from a USB drive or other media, as sketched below.
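Here's what that media-based flow might look like; the mount point and tag are hypothetical:

# 1. On a connected machine, export the image to removable media as an OCI layout
skopeo copy docker://{registry_hostname}/httpd oci:/mnt/usb/httpd:latest
# 2. On the disconnected host, import it into system container storage
sudo skopeo copy oci:/mnt/usb/httpd:latest containers-storage:{registry_hostname}/httpd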
We need to be sure to use `sudo` to copy into the system storage location and not the user's.
sudo skopeo copy docker://{registry_hostname}/httpd containers-storage:{registry_hostname}/httpd
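If you'd like to confirm the image landed in system storage (an optional check beyond the lab steps), list it with `podman`:

sudo podman images {registry_hostname}/httpd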
Switch our installation to use the new container image, using the `--transport` flag to let bootc know we want to use local container storage for update tracking.
Fetched layers: 0 B in 15 seconds (0 B/s)
Deploying: done (3 seconds)
Queued for next boot: ostree-unverified-image:containers-storage:node.z8d2b.gcp.redhatworkshops.io/httpd
  Version: 9.20250326.0
  Digest: sha256:315cec3b391047bcf931d3c55f381fc0d60f090e1cb5116f85af0401240c17d4
At this point, the "new" installation has been prepared and will be started at next boot of the virtual machine.
The last step for the change to take effect is to reboot the virtual machine. Before doing so, please make sure you are logged in to the virtual machine and not the hypervisor (the prompt should look like `[core@localhost ~]$ `):
sudo systemctl reboot
A short time after running that command, you should be able to ssh back to the virtual machine:
ssh core@iso-vm
And check the bootc status:
sudo bootc status
No staged image present
Current booted image: containers-storage:node.z8d2b.gcp.redhatworkshops.io/httpd
    Image version: 9.20250326.0 (2025-04-08 18:59:59.167494817 UTC)
    Image digest: sha256:315cec3b391047bcf931d3c55f381fc0d60f090e1cb5116f85af0401240c17d4
Current rollback image: oci:/run/install/repo/container
    Image version: 9.20250326.0 (2025-04-08 18:59:59.167494817 UTC)
    Image digest: sha256:315cec3b391047bcf931d3c55f381fc0d60f090e1cb5116f85af0401240c17d4
In the status you can see bootc is now tracking local container storage for updates, not the filesystem path. Further updates just need to be copied there for bootc to recognize and apply. You could also use `skopeo sync` to mirror a registry repository to media, like a USB drive, and then copy it from the media to the local storage on the host, as in the sketch below.
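A rough example of that sync step (the destination mount point is hypothetical):

# Mirror the httpd repository from the lab registry to a USB drive
skopeo sync --src docker --dest dir {registry_hostname}/httpd /mnt/usb/images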
This opens a range of possibilities to deliver installations and updates for edge devices, disconnected networks, and any other arenas where direct connectivity to a registry over a network isn’t possible or desired.
Before proceeding, make sure you have logged out of the virtual machine:
logout
The prompt should look like `[lab-user@bastion ~]$ ` before continuing.