Solution: Scenario 6 – ALIA Lightspeed issues
This page provides the detailed solution for the issue presented in Scenario 6, which involves an incorrect Large Language Model (LLM) configuration for Ansible Lightspeed Intelligent Assistant (ALIA).
Problem: Invalid Large Language Model (LLM) configuration
Diagnosis
The Ansible Lightspeed Intelligent Assistant (ALIA) service was unable to initialize because it was configured to use a model it was not authorized to access (codellama-7b-instruct). This prevented the ALIA UI component from being activated.
The pod logs for the lightspeed-api and chatbot pods confirm the model connection failure.
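If you prefer the CLI to the console, the same logs can be inspected with `oc`. This is an illustrative sketch: the namespace placeholder and the exact pod names are assumptions, so list the pods first and substitute what you actually see.

```shell
# Placeholder namespace; replace with the namespace running ALIA.
NS=<namespace>

# Find the relevant pods (names carry generated suffixes)
oc get pods -n "$NS" | grep -E 'lightspeed|chatbot'

# Inspect a pod's logs for the model connection failure
oc logs <lightspeed-api-pod-name> -n "$NS" | grep -i model
```

Grepping for "model" narrows the output to the connection error that names codellama-7b-instruct.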
Resolution: Updating the chatbot secret
The configuration for the LLM is stored within the chatbot-configuration-secret. The fix is to edit this secret and replace the unauthorized model name with the correct one: granite-3-2-8b-instruct.
1. Update the chatbot configuration secret
In the OpenShift Console, navigate to Workloads → Secrets in your namespace. Find chatbot-configuration-secret, open it, and click Edit.
Update the chatbot_model value to the correct model name:
```yaml
data:
  chatbot_model: granite-3-2-8b-instruct # <--- CORRECT VALUE
```
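One caveat worth knowing: the console's form view shows Secret values decoded, but in the underlying Secret manifest everything under `data:` is stored base64-encoded. If you edit the raw YAML instead of the form, encode the model name first. A minimal sketch of that encoding step:

```python
import base64

# Kubernetes/OpenShift Secrets store `data:` values base64-encoded.
# Encode the corrected model name before pasting it into raw Secret YAML.
model = "granite-3-2-8b-instruct"
encoded = base64.b64encode(model.encode()).decode()
print(encoded)  # the value to place under data.chatbot_model in raw YAML

# Round-trip check: decoding must return the original model name
assert base64.b64decode(encoded).decode() == model
```

Alternatively, a Secret's `stringData:` field accepts plain-text values and handles the encoding for you.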
2. Force reconciliation
The Lightspeed pods must be restarted to pick up the changes from the secret.
In the OpenShift Console, navigate to Workloads → Pods and select your namespace. Find the lightspeed-api pod, click the three-dot menu on the right, and select Delete Pod. Repeat for the lightspeed-chatbot-api pod. Both will restart automatically and pick up the corrected configuration.
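The same restart can be done from the CLI. This is a sketch under the assumption that the pod names contain "lightspeed" as described above; the namespace is a placeholder you must substitute.

```shell
# Placeholder namespace; replace with the namespace running ALIA.
NS=<namespace>

# Delete the pods; their controllers recreate them with the updated secret
oc delete -n "$NS" $(oc get pods -n "$NS" -o name | grep lightspeed)

# Watch the replacement pods come back up
oc get pods -n "$NS" -w
```

Deleting the pods is safe here because they are managed by higher-level controllers, which immediately schedule fresh replicas that read the corrected configuration.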