For starters, a chatbot requires a data source for training data. Training data is decomposed into intents, entities, dialog and bot responses.
This data can be authored by the chatbot creators or, ideally, drawn from live-agent chat conversations.
Other customer service channels can also serve as data sources.
These include email enquiries, phone-call transcriptions and so forth.
But, once the chatbot is launched, what would be the best avenue for continuous improvement?
The key points from the IBM Watson Assistant approach are:
- Observe customer conversations
- When ambiguity arises, present the user with multiple relevant options
- Subsequently, allow users to disambiguate
- Learn from user disambiguation and incorporate it into the chatbot.
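The four points above can be reduced to a single decision step. Below is a minimal, hypothetical sketch (the function name and threshold are illustrative, not part of any IBM API): a classifier's per-intent confidences either yield a direct answer or a disambiguation prompt that lets the user choose.

```python
# Hypothetical sketch of the observe/disambiguate flow described above.
# None of these names come from the Watson Assistant API.

def respond(intent_scores, confidence_gap=0.2):
    """Answer directly when one intent clearly wins; otherwise disambiguate."""
    ranked = sorted(intent_scores.items(), key=lambda kv: kv[1], reverse=True)
    top, runner_up = ranked[0], ranked[1]
    if top[1] - runner_up[1] >= confidence_gap:
        return {"type": "answer", "intent": top[0]}
    # Ambiguous: present the closest candidates and let the user choose.
    options = [name for name, score in ranked if top[1] - score < confidence_gap]
    return {"type": "disambiguation", "options": options}

# A clear winner is answered directly; near-ties trigger disambiguation.
print(respond({"reset_password": 0.9, "unlock_account": 0.4}))
print(respond({"reset_password": 0.55, "unlock_account": 0.5}))
```

The user's eventual selection is then fed back as training signal, which is the "learn from user disambiguation" step.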
IBM Watson Assistant allows the selection of an assistant that is actively exchanging messages with customers as your data source. More about this principle later, in the Observing assistant section.
Observations from the chatbot are used for:
- Intent and intent training-example recommendations
Autolearning can be described as the process of:
- Ordering the disambiguation options by accuracy, based on user selections.
- Reducing the number of options as responses are refined from user behavior.
- Ultimately reaching the point where Watson Assistant can accurately present a single response.
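The three stages above can be sketched in code. The class below is purely illustrative (the names and thresholds are invented, not Watson Assistant internals): user clicks are tallied per option, the option list is reordered and pruned, and once one option dominates it is returned alone.

```python
from collections import Counter

class Autolearner:
    """Illustrative autolearning loop: reorder disambiguation options by how
    often users pick them, shrink the list, and eventually auto-answer.
    Thresholds are made-up placeholders, not Watson Assistant internals."""

    def __init__(self, auto_answer_share=0.8, min_observations=20):
        self.clicks = Counter()                     # per-option selection counts
        self.auto_answer_share = auto_answer_share  # share needed to skip the list
        self.min_observations = min_observations    # don't adapt on thin evidence

    def record_choice(self, option):
        """Observe which option the user selected during disambiguation."""
        self.clicks[option] += 1

    def present(self, options):
        """Return the (possibly reduced) option list, best candidates first."""
        ranked = sorted(options, key=lambda o: self.clicks[o], reverse=True)
        total = sum(self.clicks[o] for o in options)
        if total >= self.min_observations:
            if self.clicks[ranked[0]] / total >= self.auto_answer_share:
                return [ranked[0]]                  # single, best answer
            # Prune options users almost never choose.
            ranked = [o for o in ranked if self.clicks[o] / total >= 0.05]
        return ranked
```

Before enough observations accumulate, `present()` simply reorders the list; with sufficient, consistent evidence it collapses to the single best answer.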
Intent and intent training example Recommendations:
- Are generated from user conversations.
- This is a very relevant data source, as this is the conversation your users want to have.
- This process can highlight intents which were not catered for in the chatbot, hence reducing non-intents, or false out-of-domain assessments.
- Additional user examples can be gleaned and added to intents.
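A minimal sketch of how such recommendations could be mined from conversation logs, assuming hypothetical log records with a `text` field and the classifier's `top_confidence` (these field names are illustrative, not the Watson Assistant log schema):

```python
from collections import Counter

def recommend_intents(logs, confidence_floor=0.5, min_frequency=3):
    """Collect user utterances the classifier was unsure about; utterances
    that recur often are candidate new intents (or candidate training
    examples for existing intents) to review.  Illustrative sketch only."""
    unsure = [turn["text"].lower().strip()
              for turn in logs
              if turn["top_confidence"] < confidence_floor]
    counts = Counter(unsure)
    return [(text, n) for text, n in counts.most_common() if n >= min_frequency]
```

Because the candidates come straight from real conversations, they describe the conversations users actually want to have, rather than what the designers guessed.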
When you connect a live assistant as the data source for recommendations, you enable observation. When you turn on autolearning, you put the observed insights to use to improve your skill, which results in a better customer experience.
A skill, or multiple skills constitute an assistant. A live assistant is in use fielding real conversations with customers.
A skill can be linked to multiple assistants, hence choosing the observing assistant is important as this determines where the skill will be surfaced. This can influence user behavior.
The user conversations are used for intents and intent training example recommendations.
As seen below, three elements are listed: Disambiguation, Observing assistant and Autolearning.
Disambiguation can be used in standalone mode. Autolearning requires an Observing assistant to be defined.
Disambiguation, in its simplest form, works as follows: when a user asks a question that the assistant isn't sure it understands, the chatbot can present a list of options to the user and request that the customer choose the right one.
This process is called disambiguation.
If, when a similar list of options is shown, customers most often click the same option (option #2, for example), then your skill can learn from that experience.
It can learn that option #2 is the best answer to that type of question. And next time, it can list option #2 as the first choice, so customers can get to it more quickly.
And, if the pattern persists over time, it can change its behavior even more. Instead of making the customer choose from a list of options at all, it can return option #2 as the answer immediately.
The premise of this feature is to improve the disambiguation process over time to such an extent that, eventually, the correct option is presented to the user automatically. Hence the chatbot learns how to disambiguate on behalf of the user.
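The option #2 walk-through can be simulated in a few lines. The 90% threshold and the click counts below are invented for illustration:

```python
from collections import Counter

# Illustrative only: tally which option customers click for one question type,
# promote the favourite, and eventually answer with it directly.
clicks = Counter()
options = ["option 1", "option 2", "option 3"]

for _ in range(30):           # customers keep choosing option #2
    clicks["option 2"] += 1
clicks["option 1"] += 2       # a couple of stray clicks elsewhere

ranked = sorted(options, key=lambda o: clicks[o], reverse=True)
total = sum(clicks.values())
if clicks[ranked[0]] / total > 0.9:   # the pattern persists over time...
    response = ranked[0]              # ...so skip the list entirely
else:
    response = ranked                 # otherwise show the reordered list
print(response)
```

With 30 of 32 clicks on option #2, the list is replaced by the single answer, mirroring the behavior described above.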
Each dialog node can be included in or excluded from disambiguation with a toggle switch. The node names are presented as the option names to the user; hence dialog node names must be descriptive and helpful to the user, not cryptic.
Of course, the training data and examples should not unnecessarily create ambiguity. Hence training data should be checked: intents should be grouped correctly, and the training examples must accurately describe each intent. Overlaps create confusion and inconsistency.
First, Watson Assistant moves the best answer to the top of the disambiguation list.
Next, Watson Assistant reduces the number of other options in the list.
Ultimately, Watson Assistant is able to replace the disambiguation list entirely with the single, best answer. The higher the percentage of single answers, the better.
Above is an explanation from IBM of how autolearning works.
The principles employed by Watson Assistant are, in essence, extremely well suited to being built out into more encompassing implementations.
The idea of leveraging customer conversations to automatically improve a chatbot speaks to the basic principles of introspection and empathy.
Empathy, because the process is focused on the conversations a user wants to have, and on aligning to user need.
Introspection, because it speaks to an awareness within the chatbot that it does not possess the most correct answer for a given response, hence responding to the user with a few relevant options and learning from customer behavior.