Photo by zhang kaiyv on Unsplash

Add Introspection & Empathy To Your Chatbot

Using IBM Watson Assistant To Observe, Disambiguate & Autolearn From Conversations

Cobus Greyling
6 min read · Oct 3, 2021


Introduction

For starters, a chatbot requires a data source for training data. Training data is decomposed into intents, entities, dialog flows and bot responses.
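These elements can be pictured as a single structure. The sketch below is modelled loosely on the Watson Assistant skill JSON export format (intents with examples, entities with values and synonyms, dialog nodes with responses); the banking content itself is invented for illustration.

```python
# Illustrative decomposition of chatbot training data, loosely
# following the Watson Assistant skill export layout. The field
# names mirror that format; the content is a made-up banking example.
skill = {
    "intents": [
        {
            "intent": "Check_Balance",
            "examples": [
                {"text": "What is my account balance?"},
                {"text": "How much money do I have?"},
            ],
        }
    ],
    "entities": [
        {
            "entity": "account_type",
            "values": [
                {"value": "savings", "synonyms": ["saving account"]},
                {"value": "cheque", "synonyms": ["checking", "current"]},
            ],
        }
    ],
    "dialog_nodes": [
        {
            "dialog_node": "Check Balance",
            "conditions": "#Check_Balance",
            "output": {
                "generic": [
                    {"response_type": "text",
                     "values": [{"text": "Your balance is ..."}]}
                ]
            },
        }
    ],
}
```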

An example of disambiguation, where the confidence scores for the user utterance are in close proximity and the chatbot attempts to disambiguate by offering the user relevant options to choose from.

This data can be thought up by the chatbot creators, or ideally live agent chat conversations can be used as a data source.

Other customer service channels can also serve as a data source.

These can be email enquiries, phone call transcriptions etc.

But, once the chatbot is launched, what would be the best avenue for continuous improvement?

The key points from the IBM Watson Assistant approach are:

  • Observe customer conversations
  • When ambiguity arises, present the user with multiple relevant options
  • Subsequently, allow users to disambiguate
  • Learn from user disambiguation and incorporate it into the chatbot.
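The steps above can be sketched in a few lines of Python. The confidence-gap trigger and the 0.15 threshold are illustrative assumptions; Watson Assistant's actual disambiguation logic and thresholds are internal to the product.

```python
def disambiguation_options(intent_scores, gap=0.15, max_options=3):
    """Decide whether to answer directly or disambiguate.

    intent_scores: non-empty list of (dialog_node_name, confidence) pairs.
    gap: if the top two confidences differ by less than this, the
         utterance is treated as ambiguous (threshold is an assumption).
    """
    ranked = sorted(intent_scores, key=lambda p: p[1], reverse=True)
    if len(ranked) < 2 or ranked[0][1] - ranked[1][1] >= gap:
        return [ranked[0][0]]  # confident: return the single best answer
    # Ambiguous: present the top options and let the user choose
    return [name for name, _ in ranked[:max_options]]
```

For example, close scores trigger a menu, while a clear winner does not:

```python
disambiguation_options([("Card", 0.62), ("Loan", 0.58), ("Branch", 0.20)])
# → ["Card", "Loan", "Branch"]
disambiguation_options([("Card", 0.90), ("Loan", 0.30)])
# → ["Card"]
```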

IBM Watson Assistant allows the selection of an assistant that is actively exchanging messages with customers as your data source. More about this principle later in the Observing Assistant section.

Observations from the chatbot are used for:

  • Autolearning
  • Intent and intent training example recommendations

Change In Disambiguation Menu Over A Few Iterations

Autolearning can be described as the process of:

  • Ordering the disambiguation options according to accuracy, based on user selections.
  • Reducing the number of options as responses are refined from user behavior.
  • Eventually reaching a point where Watson Assistant can accurately present a single response.
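The three stages can be modelled with a toy class: reorder options by observed clicks, prune rarely chosen ones, and finally collapse to a single answer once one option clearly dominates. The class, its thresholds (10% prune, 80% single-answer) and its logic are illustrative assumptions, not Watson Assistant's internals.

```python
from collections import Counter

class AutolearnMenu:
    """Toy model of the three autolearning stages described above.
    Thresholds are invented for illustration."""

    def __init__(self, options):
        self.options = list(options)
        self.clicks = Counter()

    def record_click(self, option):
        """Observe a user choosing one disambiguation option."""
        self.clicks[option] += 1

    def present(self, prune_below=0.1, single_above=0.8):
        total = sum(self.clicks.values())
        if total == 0:
            return self.options  # no observations yet: original order
        # Stage 1: order options by how often users selected them
        ranked = sorted(self.options, key=lambda o: -self.clicks[o])
        # Stage 3: one option dominates, so answer with it directly
        if self.clicks[ranked[0]] / total >= single_above:
            return [ranked[0]]
        # Stage 2: drop options that are almost never chosen
        kept = [o for o in ranked if self.clicks[o] / total >= prune_below]
        return kept or ranked
```

A menu that starts with three options converges to one after users repeatedly pick the same answer:

```python
menu = AutolearnMenu(["Pay bill", "Check balance", "Open account"])
for _ in range(9):
    menu.record_click("Check balance")
menu.record_click("Pay bill")
menu.present()  # → ["Check balance"]
```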

Intent and intent training example recommendations:

  • Are generated from user conversations.
  • This is a highly relevant data source, as these are the conversations your users want to have.
  • This process can highlight intents which were not catered for in the chatbot, thereby reducing none-intents, or false out-of-domain assessments.
  • Additional user examples can be gleaned and added to intents.

Observing Assistant

When you connect a live assistant as the data source for recommendations, you enable observation. When you turn on autolearning, you put the observed insights to use to improve your skill, which results in a better customer experience.

One or more skills constitute an assistant. A live assistant is in use, fielding real conversations with customers.

Here you see the banking skill linked to two assistants.

A skill can be linked to multiple assistants, hence choosing the observing assistant is important, as this determines where the skill will be surfaced. This can influence user behavior.

The user conversations are used for intents and intent training example recommendations.

As seen below, the three elements are listed: Disambiguation, Observing assistant & Autolearning.

Disambiguation can be used in standalone mode. Autolearning requires an Observing assistant to be defined.

The three elements of Autolearning: Disambiguation, Observing assistant & Autolearning.

Disambiguation

Disambiguation, in its simplest form, means that when a user asks a question the assistant isn't sure it understands, the chatbot can present a list of options to the user and request the customer to choose the right one.

Disambiguation can be set globally for the skill with a few fine-tuning settings. The initial disambiguation guiding message can be set, as can the number of suggestions; 3 to 5 options is ideal. A fallback alternative ("None of the above") also needs to be added.
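These skill-level settings can be pictured as a small configuration block. The field names below are modelled on the disambiguation section of a Watson Assistant skill export, but exact names may differ between releases, so treat this as a sketch rather than the definitive schema.

```python
# Illustrative skill-level disambiguation settings, modelled on the
# "disambiguation" block of a Watson Assistant skill export.
# Field names are assumptions and may vary by release.
disambiguation_settings = {
    "enabled": True,
    "prompt": "Did you mean:",                        # initial guiding message
    "none_of_the_above_prompt": "None of the above",  # the fallback alternative
    "max_suggestions": 5,                             # 3 to 5 options is ideal
    "sensitivity": "auto",
}
```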


If, when a similar list of options is shown, customers most often click the same option (option #2, for example), then your skill can learn from that experience.

It can learn that option #2 is the best answer to that type of question. And next time, it can list option #2 as the first choice, so customers can get to it more quickly.

And, if the pattern persists over time, it can change its behavior even more. Instead of making the customer choose from a list of options at all, it can return option #2 as the answer immediately.

The premise of this feature is to improve the disambiguation process over time to such an extent that eventually the correct option is presented to the user automatically. Hence the chatbot learns how to disambiguate on behalf of the user.

Each dialog node can be customized to be included or excluded from disambiguation with a toggle switch. The node names are presented as the option name to the user. Hence dialog node names must be descriptive and helpful to the user and not cryptic.


Watson Assistant highlights overlap in training data, causing ambiguity.

Of course the training data and examples should not unnecessarily create ambiguity. Hence training data should be checked: intents should be grouped correctly, and the training examples describing each intent must describe it accurately. Overlaps create confusion and inconsistency.
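A crude version of this overlap check can be written as a word-overlap (Jaccard) comparison between examples belonging to different intents. This is a simplified stand-in for whatever Watson Assistant does internally, useful only to make the idea concrete; the function and its 0.6 threshold are assumptions.

```python
def overlapping_examples(intents, threshold=0.6):
    """Flag pairs of training examples from *different* intents whose
    word overlap (Jaccard similarity) meets a threshold.
    `intents` maps intent name -> list of example utterances."""
    def jaccard(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    flagged = []
    names = list(intents)
    for i, n1 in enumerate(names):
        for n2 in names[i + 1:]:
            for e1 in intents[n1]:
                for e2 in intents[n2]:
                    if jaccard(e1, e2) >= threshold:
                        flagged.append((n1, e1, n2, e2))
    return flagged
```

For instance, "what is my balance" and "what is my statement" share three of five distinct words, so two intents trained on those examples would be flagged as overlapping.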

Automatic Learning

First, Watson Assistant moves the best answer to the top of the disambiguation list.

Next, Watson Assistant reduces the number of other options in the list.

Ultimately, Watson Assistant is able to replace the disambiguation list entirely with the single, best answer. The higher the percentage of single answers, the better.

An explanation from IBM on how autolearning works. (Source: IBM Documentation)


Conclusion

The principles employed by Watson Assistant are, in essence, extremely well suited to being built out into more encompassing implementations.

The idea of leveraging customer conversations to automatically improve a chatbot speaks to the basic principles of introspection and empathy.

Empathy, in that the structure is focused on the conversations a user wants to have, and on aligning to user needs.

Introspection speaks to an awareness by the chatbot that it does not possess the most correct answer for a given response; hence it responds to the user with a few relevant options, and learns from customer behavior.


Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com