Bootstrap A Chatbot With Search
There Is Renewed Focus On The Effort, Competence & Development Time Required For New Implementations
Considering the latest report from Gartner, it is evident that chatbot development needs to be fast-tracked in terms of team size, the expertise required and the time taken to reach a minimum viable product (MVP).
Chatbot development frameworks have been exploring ways to make the process from product inception to deployment of a chatbot quicker and more efficient.
Bootstrapping a chatbot using Large Language Models has also surfaced as an approach.
In this article I would like to discuss bootstrapping a chatbot using search… and there are a few avenues to achieve this with the search options available.
Search Options Available
Below are five options where search can be leveraged. These options range in complexity and ability to scale. There are obviously overlaps and shared technology components, as you will see. 🙂
When rolling out a chatbot in any environment, the chatbot usually evolves from an initial informational (ITR) bot towards a more transactional and conversational bot.
FAQs already exist and are well established within most organisations, hence they are usually the first port of call during the inception of a chatbot project.
It would be remiss not to mention an alternative example of bootstrapping a chatbot: Oracle Digital Assistant’s Quick Reply Intents.
FAQs can simply be implemented using the traditional framework of intents, entities, dialog flow and messages.
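The traditional intent-based approach can be sketched in a few lines. The intents, trigger keywords and responses below are purely illustrative stand-ins for a trained NLU model:

```python
import re

# Each intent maps trigger keywords (a stand-in for a trained NLU model)
# to a canned response. All intent names and messages are hypothetical.
FAQ_INTENTS = {
    "opening_hours": {
        "keywords": {"open", "hours", "closing"},
        "response": "We are open weekdays from 08:00 to 17:00.",
    },
    "returns_policy": {
        "keywords": {"return", "refund", "exchange"},
        "response": "Items can be returned within 30 days with a receipt.",
    },
}

def answer(utterance: str) -> str:
    """Match the utterance to the intent with the most keyword overlap."""
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    best_intent, best_score = None, 0
    for name, intent in FAQ_INTENTS.items():
        score = len(tokens & intent["keywords"])
        if score > best_score:
            best_intent, best_score = name, score
    if best_intent is None:
        return "Sorry, I don't have an answer for that."
    return FAQ_INTENTS[best_intent]["response"]

print(answer("What are your opening hours?"))
```

The keyword overlap stands in for the intent classifier; in a real framework the dialog flow would also manage follow-up turns rather than returning a single message.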
Alternatively, a quick proof-of-concept (POC) tool like Microsoft QnA Maker can be used. It allows complex multi-turn conversations to be designed easily through the QnA Maker portal or via REST APIs.
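As an illustration, QnA Maker's `generateAnswer` runtime endpoint can be called over REST roughly as follows. The host name, knowledge base id and endpoint key are placeholders, and only the request is built here rather than sent:

```python
import json
import urllib.request

# Placeholders: substitute your own QnA Maker runtime host, kb id and key.
ENDPOINT = "https://my-qna-resource.azurewebsites.net"  # hypothetical host
KB_ID = "00000000-0000-0000-0000-000000000000"          # hypothetical kb id
ENDPOINT_KEY = "my-endpoint-key"                        # hypothetical key

def build_request(question: str) -> urllib.request.Request:
    """Build (but do not send) the generateAnswer POST request."""
    url = f"{ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"EndpointKey {ENDPOINT_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("How do I reset my password?")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` returns JSON containing ranked answers with confidence scores, which the bot can relay to the user.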
Knowledge bases and semantic search are merging to a large degree, as semantic search is increasingly used to query knowledge bases with natural language input. For the purpose of this article, however, I will keep the two separate.
Knowledge bases are created via various means: Elasticsearch, Watson Discovery, Rasa knowledge base actions, the OpenAI Language API with fine-tuning, Pinecone, etc.
Suffice it to say that the concept of a knowledge base allows for three elements:
- A process of organising and uploading information of various formats into a common repository; annotation/labelling of data and curation of response messages are often possible.
- Integration with a conversational AI development framework; most development frameworks have a knowledge base extension.
- Lastly, a method of searching and extracting data. This is where semantic search is finding its way into the knowledge base category: the data is searched not only by matching keywords, but by determining the intent and contextual meaning of the search phrase.
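The last element can be illustrated with a minimal sketch: instead of matching keywords, documents and queries are compared as vectors in an embedding space. The tiny hand-crafted 3-d vectors below stand in for real learned sentence embeddings, so the example stays self-contained:

```python
import math

# Toy semantic search: each document is stored with an embedding vector.
# Real systems use learned sentence embeddings; these hand-made 3-d
# vectors are stand-ins so the example runs without a model.
DOCS = {
    "reset_password": ([0.9, 0.1, 0.0],
                       "To reset your password, use the 'Forgot password' link."),
    "delivery_times": ([0.0, 0.2, 0.9],
                       "Standard delivery takes 3-5 working days."),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vector):
    """Return the answer of the document closest to the query in vector space."""
    best = max(DOCS.values(), key=lambda doc: cosine(query_vector, doc[0]))
    return best[1]

# A phrase like "I can't log in anymore" would embed near the password
# document even though it shares no keywords with it.
print(search([0.8, 0.3, 0.1]))
```

This is the property that distinguishes semantic search from keyword search: relevance is measured by proximity in meaning, not by shared tokens.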
Semantic Search is emerging as a category on its own, with various technology providers delivering solutions. These solutions allow companies to implement Semantic Search for their own domain-specific organisational data. Solution providers are looking at how custom models can be fine-tuned to boost performance and accuracy.
Deepset has a demo where semantic search is used, and again we see an overlap here: the data is not private or company related, but is sourced from Wikipedia.
General Search APIs
Information can be made available via various APIs, like weather and traffic APIs. Or the Wikipedia API can be leveraged for a general search-and-answer chatbot.
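For instance, the MediaWiki search API behind Wikipedia can be queried roughly like this. Only the request URL is built here, and the helper name is my own:

```python
from urllib.parse import urlencode

# The MediaWiki Action API endpoint that serves English Wikipedia.
API = "https://en.wikipedia.org/w/api.php"

def build_search_url(phrase: str, limit: int = 3) -> str:
    """Build a full-text search URL for the given user phrase."""
    params = {
        "action": "query",   # run a query module
        "list": "search",    # full-text search over articles
        "srsearch": phrase,  # the user's utterance
        "srlimit": limit,    # number of hits to return
        "format": "json",
    }
    return f"{API}?{urlencode(params)}"

print(build_search_url("Who was Alan Turing?"))
```

Fetching that URL (e.g. with `urllib.request.urlopen`) returns JSON search hits whose titles and snippets can be turned into bot responses, but the bot itself still has to handle dialog turns around the results.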
Here the dialog turn management still needs to be performed and conversational context managed. This leads us to the last example: Large Language Models…
Large Language Models
Large Language Models are evolving into search engines of sorts, where general knowledge and QnA can be handled. Added to this, questions can be answered automatically in natural language while conversational state is maintained, as in the OpenAI example where a chatbot is defined, or primed, with just three sentences (few-shot training).
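The few-shot idea can be sketched as plain prompt assembly for a completion-style model. The persona sentences and helper below are illustrative, and the actual model call is left out to keep the example self-contained:

```python
# Three sentences that define, or prime, the chatbot (few-shot training).
# These persona lines are illustrative, not from any particular product.
PERSONA = [
    "The following is a conversation with an AI assistant.",
    "The assistant is helpful, creative and friendly.",
    "The assistant answers general-knowledge questions in natural language.",
]

def build_prompt(history, user_input):
    """Combine the persona, the running conversation and the new user turn."""
    lines = list(PERSONA)
    for user_turn, ai_turn in history:
        lines.append(f"Human: {user_turn}")
        lines.append(f"AI: {ai_turn}")
    lines.append(f"Human: {user_input}")
    lines.append("AI:")  # the model completes from here
    return "\n".join(lines)

history = [("Hello, who are you?", "I am an AI assistant. How can I help?")]
print(build_prompt(history, "What is the capital of France?"))
```

Replaying the conversation history into each prompt is what maintains conversational state; the model only ever sees the assembled text.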
This LLM approach can be implemented as an extension of a chatbot, breaking out of a predefined knowledge base corpus of data and in turn making use of the LLM’s general-knowledge capabilities instead of the open web. This helps keep responses curated and better suited in terms of corporate and brand responsibility.
I wrote about this in a previous post: a single cohesive Conversational AI platform will become harder to maintain, and implementations will start seeing different components orchestrated into one solution.
Will a Twilio of Language Technology emerge?
And is this the direction in which Cognigy is moving with the Cognigy AI Marketplace?
Is OneReach AI also focussing on becoming a single Conversational AI portal to act as an orchestration engine / aggregator for Conversational AI experiences?
- Using Pinecone For Question Answering with Similarity Search: And How To Create A Knowledge Base Chatbot
- Oracle Digital Assistant, Quick Reply Intents, Knowledge Documents & QnA: What Is the Oracle Approach & What Can We Learn From It?
- Bootstrapping A Chatbot With A Large Language Model: How To Harness The Power Of OpenAI In Creating A Chatbot From Scratch
- The Gartner® Critical Capabilities for Enterprise Conversational AI Platforms Assessment: And How Does It Compare To the Gartner® Magic Quadrant?