
ConvyAI
⚡Generative AI⚡
Combining Generative AI within a chatbot platform offers numerous advantages, including natural conversations, personalized responses, continuous learning, and real-time content creation. Harness this advanced technology to enhance user experience and optimize interactions with your chatbot.
The introduction of Generative AI in ConvyAI provides several advantages. Firstly, it enables more natural and seamless conversations, thanks to the generative AI's ability to generate coherent and contextually relevant responses. This enhances the user experience and creates a perception of interacting with a human-like virtual assistant.
Furthermore, the use of Generative AI allows chatbots to deliver personalized and tailored responses to meet specific user needs. The generative AI can learn from historical conversation data and user inputs, enabling it to provide more relevant information and solutions.
Another key feature is the capability for continuous learning. Generative AI can adapt and improve over time by analyzing new data and integrating real-time knowledge. This enables the chatbot to always provide updated and relevant responses.
Lastly, integrating Generative AI into ConvyAI powered chatbots enables real-time content generation. This means the chatbot can create original and personalized responses on the spot, enhancing interactivity and the effectiveness of conversations.
To fully leverage the potential of Generative AI, it is crucial to be able to define the context in which it should operate.
For this purpose, ConvyAI provides a set of features that allow indexing a knowledge base to contextualize the generative bots.
This way, it is possible to create generative experiences that draw from the available knowledge base, providing verified and brand-aligned content as per the brand guidelines.
The Knowledge base section contains three different pages that help you create and manage your knowledge base indexes. Please contact our Sales team to access these functionalities.
From a technical standpoint, the process of contextualizing the LLM model is based on embeddings and vector representation of the text extracted from the knowledge base.
Initially, relevant content is extracted from the documentation, and vector embeddings are generated to capture the meaning and context of the texts and then stored in a vector database.
During user interaction, the input is also converted into a vector embedding. A similarity search then compares this embedding against the vectors stored in the database, and the most relevant, contextually coherent documents are retrieved to generate the appropriate response.
In this way, the LLM model contextualizes responses based on the correlation between the user input and the documents in the vector database, providing a better understanding and contextualization of information.
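Conceptually, the retrieval step described above can be sketched in a few lines of Python. The toy 3-dimensional vectors below stand in for real embedding-model output, and the function names (`cosine_similarity`, `most_similar`) are illustrative, not part of ConvyAI; in production the search runs inside the vector database.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, indexed_docs):
    """Return the stored document whose embedding is closest to the query."""
    return max(indexed_docs, key=lambda d: cosine_similarity(query_vec, d["embedding"]))

# Toy embeddings standing in for real model output.
docs = [
    {"text": "How to reset your password", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Shipping and delivery times", "embedding": [0.0, 0.8, 0.6]},
]
query = [0.85, 0.15, 0.05]  # embedding of "I forgot my password"
best = most_similar(query, docs)
```

The document returned by `most_similar` is what gets injected into the LLM prompt as context.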
Below is a high-level architecture of what ConvyAI provides:
Given the open and flexible nature of ConvyAI, it is possible to use different Large Language Models (LLMs), including custom ones.
In the following pages, we will demonstrate how to configure the service, with an emphasis on the functionalities provided by the OpenAI platform as one of the available options.
Through this page, you can configure the technical aspects for creating a customized knowledge base.
It contains two sections:
- OpenAI Configs
- Redis Configs
OpenAI Configs
To start using the indexing tools, you need to input the API key obtained during the subscription phase to the OpenAI platform. For more information, please visit the documentation at the following link.
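For reference, this is how an OpenAI API key is typically handled on the client side (the environment-variable name is the OpenAI convention, and the placeholder value is illustrative; ConvyAI itself only needs the key pasted into this configuration page):

```python
import os

# OPENAI_API_KEY is the conventional environment variable; keep the key
# out of source control and paste its value into the OpenAI Configs page.
api_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

# Every OpenAI REST call authenticates with a Bearer token header.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```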
Redis Configs
ConvyAI provides a managed Redis service out of the box, eliminating the need for installation and configuration.
However, if you wish to use an existing Redis database, you can configure the following data to enable integration. This is also useful in cases where you want to maintain the indexed knowledge base in-house.
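A minimal sketch of the connection parameters you would supply for an external Redis instance (host, port, password, and TLS flag are illustrative values; match them to your deployment):

```python
# Typical connection parameters for an in-house Redis database.
redis_config = {
    "host": "redis.example.internal",
    "port": 6379,
    "password": "change-me",
    "ssl": True,
}

def redis_url(cfg):
    """Build a rediss:// URL (TLS) or redis:// URL from the parameters."""
    scheme = "rediss" if cfg["ssl"] else "redis"
    return f"{scheme}://:{cfg['password']}@{cfg['host']}:{cfg['port']}"
```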
On this page, you can launch indexing batches for the desired knowledge base.
ConvyAI allows indexing of documents - doc/pdf available within an AWS S3 bucket - or through crawling a website.
Once running, the batch extracts the contents of the documents or website pages and, using the embedding procedure described earlier, creates the corresponding knowledge base index on ConvyAI.
The platform allows you to run multiple indexing batches, which are useful for loading information from different sources, organized according to your needs.
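The extract-embed-store loop an indexing batch performs can be sketched as follows. The stub embedder and the in-memory `store` dict stand in for the real embedding model and vector database; `run_index_batch` is an illustrative name, not a platform API:

```python
def run_index_batch(documents, embed, index_name, store):
    """Minimal indexing loop: embed each extracted text and store it
    under the chosen index name (store stands in for the vector DB)."""
    for doc in documents:
        store.setdefault(index_name, []).append(
            {"text": doc, "embedding": embed(doc)}
        )
    return len(store[index_name])

def fake_embed(text):
    # Stub: real batches call the configured embedding model instead.
    return [float(len(text)), 0.0]

store = {}
count = run_index_batch(
    ["refund policy text", "shipping FAQ text"], fake_embed, "support-kb", store
)
```

Running several batches with different source documents but the same `index_name` grows one knowledge domain; changing the name starts a new one.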
Below is the AWS S3 bucket configuration, where you need to set the bucket URL, the region, and the AWS Access Key and Secret Key for accessing the bucket.
For the website crawler, you simply need to input the URL of the website to be crawled.
Important: through the index name field, you can manage the structure of your knowledge base. There are two different management scenarios available:
- Specify a single index for all loading batches: this allows you to create a single global knowledge domain that bots can draw from during conversations with end users.
- Specify different index names: this allows you to create specific knowledge domains, for example divided by thematic areas. During bot design, you will be able to specify precisely within which domain to contextualize the information when the bots respond to user questions.
Through the Advanced parameters section, you can control various settings such as the token size of each extracted text chunk, the LLM model, and the embedding model to use for the current indexing.
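To illustrate what the chunk-size parameter controls, here is a naive chunker that splits a text into fixed-size pieces. It counts whitespace-separated words rather than real tokens, so the counts are approximate; production systems use the model's own tokenizer:

```python
def chunk_by_tokens(text, chunk_size):
    """Split text into chunks of at most chunk_size whitespace 'tokens'.
    Approximate: real token counts come from the model's tokenizer."""
    tokens = text.split()
    return [
        " ".join(tokens[i:i + chunk_size])
        for i in range(0, len(tokens), chunk_size)
    ]

chunks = chunk_by_tokens("one two three four five six seven", 3)
# ['one two three', 'four five six', 'seven']
```

Smaller chunks give more precise retrieval matches; larger chunks give each match more surrounding context.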
Once you have finished filling in the new index batch, you can test the configuration.
If everything is correct, you can run your indexing batch!
The duration of the indexing batch may vary depending on the amount of information to process, and it is possible to monitor the progress. The platform will send a notification when the process is completed.
Here is an explanation of the possible progress states managed:
- Ready: the configuration is ready to be executed.
- Running: the process is currently running.
- Completed: the process has been successfully completed.
- Cancelled: the upload was manually cancelled or errors were encountered.
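The states above can be read as a simple state machine; this transition table is an illustrative interpretation, not a platform API:

```python
# Allowed transitions between batch states (illustrative reading of the
# documented lifecycle; Completed and Cancelled are terminal).
TRANSITIONS = {
    "Ready": {"Running"},
    "Running": {"Completed", "Cancelled"},
    "Completed": set(),
    "Cancelled": set(),
}

def can_transition(current, nxt):
    """True if a batch in state `current` may move to state `nxt`."""
    return nxt in TRANSITIONS.get(current, set())
```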
Through this page, you can check and manage your knowledge base. For each defined index, the previously uploaded documents or website pages are displayed. From here, you can search for documents or pages, remove them from the knowledge base, or remove entire knowledge domains.
Important: deleting a document uploaded via AWS S3 does not physically delete it from the bucket; it only removes its indexing within the knowledge base.
In practice, the bot will no longer be able to provide information related to the deleted document or domain.
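In other words, deletion only drops the document's entries from the index; the source file is untouched. A minimal sketch of that semantics (the in-memory `store` dict stands in for the vector database):

```python
def remove_document(store, index_name, doc_text):
    """Drop a document's entries from the index only; the source file
    in the S3 bucket is left untouched."""
    store[index_name] = [
        d for d in store[index_name] if d["text"] != doc_text
    ]

store = {"support-kb": [{"text": "old policy"}, {"text": "current FAQ"}]}
remove_document(store, "support-kb", "old policy")
# store["support-kb"] now holds only the "current FAQ" entry
```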
ConvyAI allows you to leverage the categorization and information extraction capabilities inherent in generative models. To achieve this, the ability to define a semantic engine of the OpenAI type has been added, in which you can specify which intents and entities can be extracted during the conversation. This enables the combination of real-time content generation capabilities with the management of business processes to be presented to users.
Please define the OpenAI access key, as well as additional parameters such as the minimum intent determination threshold, the LLM model to use, the maximum number of tokens handled, and the level of creativity used by the engine in generating responses.
Once created, you can then provide the list of intents and entities that should be extracted during the conversation.
You can easily test the configuration directly from the page in order to check if the model behavior is coherent with the desired requirements.
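The minimum intent determination threshold mentioned above works roughly like this: the engine's top-scoring intent is accepted only if its confidence clears the threshold. The scores and function below are illustrative, not the engine's actual output format:

```python
def resolve_intent(scores, threshold):
    """Pick the top-scoring intent only if it clears the minimum
    determination threshold; otherwise report no match (None)."""
    intent, score = max(scores.items(), key=lambda kv: kv[1])
    return intent if score >= threshold else None

# Confidence scores as a generative engine might assign them.
scores = {"track_order": 0.82, "cancel_order": 0.11, "other": 0.07}
resolve_intent(scores, threshold=0.6)  # accepted
resolve_intent(scores, threshold=0.9)  # below threshold: no intent
```

A higher threshold makes intent matching stricter, trading recall for precision.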
As indicated in the chapter Create your bot, you can leverage the capabilities of generative models to create advanced and personalized user experiences. The creation of the customer journey is available through the flow designer tool provided by the platform, with an intuitive and user-friendly no-code design tool.
Specifically, the platform offers a new node called Knowledge Base, which allows you to define the behavior of the generative model at a specific point in the conversation. This enables the design of precise customer experiences, drawing from the available knowledge base and customer data gathered during the conversation.
The individual fields are described below, along with how to configure them to get the most out of Generative AI:
- Question: the text shown to the user to start the conversation
- Semantic Engine: you can choose which generative semantic engine to use for analyzing the user's request, see the chapter on Semantic Engine
- Tone of voice: the platform provides three tone of voice options - Empathetic, Professional, Friendly - that can be used to define the response style adopted by the bot
- Add commands to prompt: specific instructions to give to the model to outline a specific behavior
- Add context to prompt: specific instructions to add to the bot's knowledge domain, useful for passing customer information gathered during the conversation to the prompt
- Knowledge base index: definition of the knowledge domain to contextualize the responses generated by the bot
- Show exit buttons: flag to enable if you want to provide an alternative guided navigation to the end user, with the presentation of a button menu
- Intents: selection of intents, defined in the previously selected semantic engine, that need to be managed. In practice, if the end user's request is categorized with one of the selected intents, the journey's navigation continues with the corresponding exit node, where you can manage its subsequent behavior
- Send generated answer to user: if enabled, the bot will automatically show the generative answer provided by the LLM model to the customer
- Set maximum number of interactions: the maximum number of interactions between the user and the generative bot before directing the conversation towards more controlled navigation. In the example given, after the fifth interaction between the user and the bot, the conversation exits the current node, and you can then manage its subsequent behavior.
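To show how the node's fields fit together, here is a sketch of how they might be assembled into a single prompt for the LLM. The template and function name are hypothetical; the actual template used by the platform is internal:

```python
def build_prompt(question, tone, commands, context, retrieved):
    """Illustrative assembly of the Knowledge Base node's fields into
    one prompt string (hypothetical template, not the platform's own)."""
    parts = [
        f"Tone of voice: {tone}.",
        f"Instructions: {commands}",
        f"Customer context: {context}",
        f"Knowledge: {retrieved}",
        f"User question: {question}",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    question="Where is my order?",
    tone="Friendly",
    commands="Answer in at most two sentences.",
    context="Customer name: Ada; order #1234 shipped yesterday.",
    retrieved="Orders ship within 24h and arrive in 2-3 days.",
)
```

Each node field maps to one part: tone of voice sets the style, the added commands constrain behavior, the added context carries data gathered during the conversation, and the knowledge base index supplies the retrieved content.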