Dynamic volume provisioning on Red Hat OpenShift

Scenario - Persistent Storage Volume Provisioning and Availability

In this scenario, you’ll learn about Portworx Enterprise StorageClass parameters, deploy a demo application that uses ReadWriteOnce (RWO) persistent volumes provisioned by Portworx Enterprise, and see how Portworx makes those volumes highly available.

Reminder: Accessing the Red Hat OpenShift Console

To connect to the console, click on the OpenShift Console tab above.

IMPORTANT: The OpenShift Console tab will open in a new browser window.

Log in with the following credentials:

Username: kubeadmin
Password: kubeadmin_password

Step 1 - Deploying Portworx BBQ With a ReadWriteOnce Volume

In this step, we will deploy our Portworx BBQ application that uses MongoDB and an RWO persistent volume.

Task 1: Create the pxbbq namespace

In the Terminal tab of the Instruqt webpage, run the following commands to provision Portworx BBQ.

oc create ns pxbbq

Task 2: Deploy the MongoDB backend in the pxbbq namespace

oc apply -f pxbbq-mongo.yaml -n pxbbq

Task 3: Deploy the PXBBQ frontend in the pxbbq namespace

oc apply -f pxbbq-web.yaml -n pxbbq

Task 4: Monitor the application deployment using the following command:

watch oc get all -n pxbbq

When all of the pods are running with a 1/1 Ready state, press CTRL+C to exit.

Task 5: Expose the pxbbq service with a route:

oc expose svc -n pxbbq pxbbq-svc
oc -n pxbbq patch route pxbbq-svc \
  -p '{"spec":{"tls":{"termination":"edge"}}}'
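If you’d like to confirm the route before switching tabs, you can print its hostname (a quick check; the route name matches the one created above):

```shell
# Print the external hostname assigned to the pxbbq-svc route
oc get route pxbbq-svc -n pxbbq -o jsonpath='{.spec.host}{"\n"}'
```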

Task 6: Order up some BBQ!

Click on the PX-BBQ tab next to the Terminal tab at the top of the screen. Since you just deployed the application, you may see an Nginx error - if you do, click on the refresh icon next to the words "PX-BBQ" on the tab.

Click the "Menu" icon, and select "Login". Enter the username guest@portworx.com and the password guest. Next, click on "Menu" again and select "Order". Select a main dish, two sides, and a drink - then click on the "Place Order" button. The order history page appears - click on the hyperlink to your order number, and make sure your order is correct!

Task 7: Inspect the MongoDB volume

Switch back to the Terminal tab and use the following command to inspect the MongoDB volume and look at the Portworx parameters configured for the volume:

VOL=$(oc get pvc -n pxbbq | grep mongo-data-dir-mongo-0 | awk '{print $3}')
pxctl volume inspect $VOL

You can see that the HA parameter is set to 3 - this means we have three replicas of the persistent volume, as we declared in the StorageClass that we created.
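The StorageClass itself isn’t shown in this lab, but a Portworx CSI StorageClass with three replicas generally looks like the following sketch. The name px-csi-repl3 is illustrative; your lab environment may use a different name and additional parameters:

```shell
# Illustrative only: a Portworx CSI StorageClass requesting 3 replicas.
# The name and exact parameters in your environment may differ.
cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-csi-repl3          # hypothetical name
provisioner: pxd.portworx.com
parameters:
  repl: "3"                   # three replicas, matching the HA value seen in pxctl
EOF
```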

You can also see that these three replicas are spread across our three worker nodes, as shown in the Replica sets on nodes section of the pxctl output, and that all three replicas are in sync: the Replication Status field shows Up.

Finally, you can see in the Volume consumers section that our mongo-0 pod is the consumer of the volume, which shows that MongoDB is using our volume to store its data!
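If the full inspect output is long, you can filter it down to just the fields discussed above (a convenience sketch; the labels match the pxctl output described in this task):

```shell
# Show only the HA, replication, and consumer lines from the inspect output
pxctl volume inspect $VOL | grep -E 'HA|Replication Status|Replica sets on nodes|Volume consumers'
```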

Task 8: View your order from the MongoDB CLI

To look at the MongoDB entry generated by your order, first get a bash shell on the MongoDB pod:

oc exec -it mongo-0 -n pxbbq -- bash

Then, let’s connect to MongoDB using mongosh:

mongosh -u porxie -p porxie

And finally, let’s query for the order you placed earlier:

use pxbbq
db.orders.find()
exit

You should see your order in the database! Finally, let’s exit from our exec into the MongoDB pod:

exit
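The interactive steps above can also be collapsed into a single non-interactive command (a sketch using the same credentials; --eval runs the query without opening a shell):

```shell
# Query the orders collection in the pxbbq database in one shot
oc exec mongo-0 -n pxbbq -- \
  mongosh -u porxie -p porxie --quiet --eval 'db.getSiblingDB("pxbbq").orders.find()'
```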

In this step, you saw how Portworx can dynamically provision a highly available ReadWriteOnce persistent volume for your application, and you got to order up some delicious BBQ from Portworx!

Step 2 - Proving Availability for Portworx Volumes

Great, we’ve got our Portworx BBQ application up and running on highly available PVs, but what happens when the worker node hosting the active replica dies unexpectedly? How will our application react?

First, let’s observe where the MongoDB pod is running (note the NODE column):

oc get pod mongo-0 -n pxbbq -o wide

Next, let’s get the worker node running the MongoDB pod into a variable:

NODENAME=$(oc get pod mongo-0 --no-headers=true -n pxbbq -o wide | awk '{print $7}')
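Parsing column 7 of the wide output works, but it depends on the column layout of oc get pod. A more robust alternative is to read the node name directly from the pod spec:

```shell
# Read the scheduled node name straight from the pod spec
NODENAME=$(oc get pod mongo-0 -n pxbbq -o jsonpath='{.spec.nodeName}')
echo "$NODENAME"
```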

We will now debug the node:

oc debug node/$NODENAME

IMPORTANT: Running oc debug node/$NODENAME can take a few seconds as a pod needs to be attached to the node.

Chroot into the host filesystem:

chroot /host

And finally, reboot the OpenShift worker node hosting the running MongoDB pod:

sudo reboot

We can watch our MongoDB pod get deleted and recreated. Note the NODE column: as the pod is evicted and rescheduled, the node name changes, and the pod lands on a node in a different AZ:

watch oc get pod mongo-0 -n pxbbq -o wide

IMPORTANT: It takes about 20-30 seconds for Kubernetes to detect the worker node has unexpectedly gone offline, and another 10 seconds for the MongoDB pod to get evicted and rescheduled on a surviving node. You will see the MongoDB pod disappear from the watch command, and shortly thereafter will see a new MongoDB pod appear.

The beauty of this is that Kubernetes is aware of where surviving replicas of the MongoDB volume are, thanks to STORK (STorage Orchestrator Runtime for Kubernetes), and the replacement MongoDB pod is rescheduled on a node that has a surviving replica.
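You can confirm that the pod is scheduled by STORK rather than the default scheduler by checking the schedulerName in its spec. If the lab uses STORK for storage-aware scheduling, this typically prints stork:

```shell
# Show which scheduler placed the MongoDB pod
oc get pod mongo-0 -n pxbbq -o jsonpath='{.spec.schedulerName}{"\n"}'
```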

Press CTRL-C to exit the watch command once you see the MongoDB pod has successfully restarted.

Now that we have a fresh copy of our MongoDB pod running, let’s check the information in our MongoDB collection again to make sure your order is still there!

Again, exec into the freshly created MongoDB pod:

oc exec -it mongo-0 -n pxbbq -- bash

Then connect to MongoDB using mongosh:

mongosh -u porxie -p porxie

And finally, query for the order you placed earlier:

use pxbbq
db.orders.find()
exit

Finally, let’s exit from our exec into the MongoDB pod:

exit

To finish making sure our application is healthy, click on the PX-BBQ tab, and click the refresh icon next to the "PX-BBQ" tab name. Use the menu to navigate to "Order History". You can see that your order for Portworx BBQ is still there!

IMPORTANT: Since this is a lab environment, Portworx BBQ does not use active health checks (those require NGINX Plus rather than open source NGINX), so your lab environment may not refresh the web frontend as quickly as we’d like.

If your app is not responsive, simply copy the following commands into the terminal to redeploy the web frontend, refresh the application in the PX-BBQ tab, and you should be all set to check your order via the UI!

oc delete -f pxbbq-web.yaml -n pxbbq
oc apply -f pxbbq-web.yaml -n pxbbq

Click the Next button to continue