Updated: Your Chatbot Should Be Able To Disambiguate

Looking At The Approach Of HumanFirst, Watson Assistant & Cognigy…

Cobus Greyling
9 min read · Apr 8, 2022

Disambiguation Is Part & Parcel Of Human Conversations And Should Be Part Of Your Chatbot Experience

Introduction

This article covers three approaches to addressing disambiguation of user utterances.

In essence, the three approaches are:

  • Pre- & Post-Conversation, intent and example utterance based
  • In-Conversation: Automated global setting per skill
  • In-Conversation: Set per intent with varying thresholds
  1. HumanFirst. Disambiguation is performed upfront, ideally prior to dialog flow development, at the level of intents and training utterances. This is to a large degree a proactive approach and avoids solving the problem in-conversation, although conversations can also be analyzed after the fact. HumanFirst addresses vertical vectors one and two of Conversational AI and speaks directly to Intent-Driven Development.
  2. IBM Watson Assistant. Here disambiguation is set at the chatbot or skill level. The process is automated: the platform detects the absence of an outright high-confidence intent and subsequently offers the user a few options to choose from. Watson Assistant addresses this challenge in-conversation, and with auto-learning the process of training is automated for future improvement. A global setting at skill level presents the top three to five intents to users.
  3. Cognigy. What makes Cognigy different is that it enables disambiguation settings on an intent level, per intent. Two thresholds can be set: one for reconfirmation and one for overall confidence. Highly transactional or consequential intents are good candidates for disambiguation.

It needs to be noted that in most chatbot frameworks the dialog state management system has the ability to compare multiple intent confidence scores. Logic can be introduced to parse these scores and determine the appropriate response to the user. This article’s aim is to take a look at technology exclusively aimed at addressing the problem of ambiguity.
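
As a rough, framework-agnostic illustration of such logic, the sketch below compares the top intent scores and flags the utterance as ambiguous when the runners-up crowd the winner. The margin value and data structure are assumptions, not any vendor’s implementation.

def route(intents, margin=0.15, top_n=5):
    # intents: (name, confidence) pairs, sorted by confidence, descending
    top = intents[:top_n]
    best_name, best_score = top[0]
    # Runners-up within the margin make the utterance ambiguous
    close = [name for name, score in top if best_score - score <= margin]
    if len(close) > 1:
        return "disambiguate", close  # present these options to the user
    return "proceed", [best_name]

# Two intents within 0.15 of each other -> ask the user to choose
print(route([("pay_bill", 0.62), ("view_bill", 0.58), ("cancel", 0.20)]))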

More On Disambiguation

By now we all know that the prime aim of a chatbot is to act as a conversational interface, simulating the conversations we have as humans. Unfortunately, you will find that many of these basic elements of human conversation are not introduced into most chatbots.

A good example of this is digression…and another is disambiguation. Throughout a conversation, we as humans will invariably and intuitively detect ambiguity.

https://www.lexico.com/en/definition/ambiguity

Ambiguity is when we hear a phrase that is open to more than one interpretation. Instead of going off on a tangent not intended by the utterance, we should perform the act of disambiguation by asking a follow-up question. This is, simply put, removing ambiguity from a statement or dialog.

Ambiguity makes sentences confusing. For example,

  • “I saw my friend John with binoculars”.

Does this mean John was carrying a pair of binoculars? Or that I could only see John by using a pair of binoculars?

https://www.dictionary.com/browse/disambiguate

Hence, I need to perform disambiguation and ask for clarification. A chatbot encounters the same issue: when the user’s utterance is ambiguous, instead of going off on one assumed intent, the chatbot can ask the user to clarify their input. It can present a few options based on the current context, from which the user selects and confirms the most appropriate one.

HumanFirst

HumanFirst follows an Intent-Driven Development approach which is proactive, in the sense that distinguishing between similar training examples and intent names is performed upfront.

On the HumanFirst interface, user utterances can be imported. In turn, these utterances can be clustered and intents derived from them, with varying granularity and cluster sizes.

After this step, once the clusters are named and the intents created, the Disambiguation button can be used to detect whether any utterance in a selected intent will potentially cause confusion; in essence, disambiguating the future conversation.

Under the Intents option, there is a button for Disambiguation on the HumanFirst interface.

A big advantage of the HumanFirst interface in general is the use of sliders. Data is automatically and instantly organized accordingly, and users can visually inspect and tweak it, ensuring the optimal balance is achieved in allocating training data to intents.

A slider sets the minimum confusion percentage.

As seen below, the selected intent is grayed out. The intents available or applicable for disambiguation are all listed with a 16% likelihood.
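
HumanFirst does not publish the mechanics behind this, but as a toy approximation, cross-intent confusion can be estimated by scoring each utterance in the selected intent against the other intents’ training examples and keeping anything above the slider’s minimum confusion percentage. Everything below (the TF-IDF similarity, the example data) is an assumption for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

intents = {
    "check_balance": ["what is my balance", "how much money do I have"],
    "pay_bill": ["pay my bill", "settle my account balance"],
}

def confusion_report(selected, intents, min_confusion=0.16):
    vec = TfidfVectorizer().fit(u for us in intents.values() for u in us)
    for utterance in intents[selected]:
        u = vec.transform([utterance])
        for other, examples in intents.items():
            if other == selected:
                continue  # the selected intent itself is excluded (grayed out)
            score = cosine_similarity(u, vec.transform(examples)).max()
            if score >= min_confusion:
                print(f"{utterance!r} may be confused with {other} ({score:.0%})")

confusion_report("check_balance", intents)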

The options available to the user are:

  • Delete unwanted training data.
  • Delete duplicate intents.
  • Merge similar intents.
  • Create sub-intents.

It needs to be stressed that HumanFirst has a two-pronged approach. Training data can be scrutinized and tweaked, but customer conversations can also be uploaded and used as a barometer of how well intents are defined and how accurate the segmentation of intents is.

If you have a deployed conversational AI, this workflow allows you to continuously improve its coverage and accuracy, by easily identifying new intents, and sourcing training examples for existing ones.

When the active NLU engine is HumanFirst NLU and you have existing trained intents, more functionality becomes available. For instance, selecting a specific intent and then sliding the confidence scale to find matches from new unlabeled training data.

Or, automatically sort unlabeled data on five metrics for visual inspection.
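
Conceptually, sliding the confidence scale amounts to filtering unlabeled utterances by a trained model’s confidence for the selected intent. A minimal sketch, assuming a scikit-learn-style classifier; the training data and threshold are made up:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["what is my balance", "show my account balance",
         "pay my bill", "how much do I owe"]
labels = ["check_balance", "check_balance", "pay_bill", "pay_bill"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

def candidates(intent, unlabeled, min_confidence=0.5):
    idx = list(model.classes_).index(intent)
    scored = zip(unlabeled, model.predict_proba(unlabeled)[:, idx])
    # Lowering min_confidence (the slider) surfaces more, looser matches
    return sorted((s for s in scored if s[1] >= min_confidence),
                  key=lambda s: s[1], reverse=True)

print(candidates("pay_bill", ["what do I owe this month", "cancel everything"]))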

IBM Watson Assistant

In conversations, instead of defaulting to the intent with the highest confidence, the chatbot should check the confidence scores of the top five matches. If these scores are close to each other, it shows the chatbot is of the opinion that no single intent will address the query, and a selection must be made from a few options.

IBM Watson Assistant Example of Disambiguation Between Dialog Nodes. Four options are presented based on the user’s input.

Here, disambiguation allows the chatbot to request clarification from the user. A list of related options is presented to the user, allowing the user to disambiguate the dialog by selecting an option from the list.

But the list presented should be relevant to the context of the utterance; hence, only contextual options should be presented.

Disambiguation enables chatbots to request help from the user when more than one dialog node might apply to the user’s query.

Instead of assigning the best-guess intent to the user’s input, the chatbot can create a collection of top nodes and present them. In this case, when there is ambiguity, the decision is deferred to the user.
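
As a sketch of retrieving those top intents programmatically, the Watson Assistant v2 API can return alternate intents alongside the winner. This assumes the ibm-watson Python SDK; credentials and IDs are placeholders:

from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(version="2021-06-14",
                        authenticator=IAMAuthenticator("YOUR_APIKEY"))
assistant.set_service_url("YOUR_SERVICE_URL")
session = assistant.create_session(assistant_id="YOUR_ASSISTANT_ID").get_result()

response = assistant.message(
    assistant_id="YOUR_ASSISTANT_ID",
    session_id=session["session_id"],
    input={"message_type": "text",
           "text": "I want to check my bill",
           "options": {"alternate_intents": True}},  # return several intents, not one
).get_result()

# Each entry carries an intent name and its confidence score
for intent in response["output"]["intents"]:
    print(intent["intent"], intent["confidence"])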

What is really a win-win situation is when the feedback from the user can be used to improve the NLU model, as this is invaluable training data vetted by the user.

There should of course be a “none of the above” option; if a user selects this, a real-time live agent handover can be performed, or a call-back can be scheduled. Or, a broader set of options can be presented.
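
Wiring up that escape hatch might look something like the sketch below; the handover and call-back helpers are hypothetical stand-ins, not part of any specific platform.

def transfer_to_agent():    # hypothetical real-time handover
    return "Connecting you to an agent..."

def schedule_callback():    # hypothetical call-back booking
    return "We will call you back shortly."

def handle_selection(selection, agents_online):
    if selection == "None of the above":
        return transfer_to_agent() if agents_online else schedule_callback()
    return f"Continuing with: {selection}"  # proceed with the chosen option

print(handle_selection("None of the above", agents_online=False))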

IBM Watson Assistant has a built-in feature which allows for the configuration of disambiguation. In this practical example, you can toggle the feature on or off.

IBM Watson Assistant configuration window for disambiguation

Also, you can set the message introducing the clarification request. The default is “Did you mean…”; this could be changed to “This might help” or “This is what I could find”.

An option is also available for “none of the above”, and the maximum number of suggestions can be limited. The scope and size of the dialog will determine what this number might be.

This also provides a central point where disambiguation can be switched off, acting as a global toggle to enable or disable the feature.

Apart from this, each node can be added or removed individually as the structure of the application changes.

Here the name of the node becomes important, as this is what will be displayed to the user. There is also an option to add internal and external facing node names.

On Dialog Node Level A Single Node Can Be Added Or Removed

It is crucial that the name of the node displayed to the user is clear, presentable and explains the function and intention of the node it refers to.

Cognigy

As seen below, within each intent a Disambiguation Sentence can be entered. This sentence is presented to the user to ask for confirmation, or to allow the user to rephrase a request.

The Cognigy framework has two settings to manage disambiguation:

  • Reconfirmation Threshold
  • Confidence Threshold

An intent whose score is above the confidence threshold is considered confirmed, provided a confirmation sentence is set on the intent.

Setting the reconfirmation threshold and confidence threshold for intents with confirmation sentences.

The confidence threshold has no effect unless the intent uses confirmation sentences. The reconfirmation threshold is your lower confidence bound and must be set in addition to the confidence threshold. Intent scores above the confidence threshold are confirmed outright, while scores between the two thresholds are marked for reconfirmation.
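
Expressed as plain logic, an illustrative sketch of how the two thresholds partition the score range might look as follows; the threshold values and dictionary shape are assumptions, not Cognigy’s internals.

def resolve(intent, score, confidence_threshold=0.7, reconfirmation_threshold=0.4):
    if score >= confidence_threshold:
        return "confirmed", intent["name"]
    if score >= reconfirmation_threshold and intent.get("confirmation_sentence"):
        # Ask the disambiguation/confirmation sentence before proceeding
        return "reconfirm", intent["confirmation_sentence"]
    return "fallback", None

cancel = {"name": "cancel_contract",
          "confirmation_sentence": "Do you want to cancel your contract?"}
print(resolve(cancel, 0.55))  # -> ('reconfirm', 'Do you want to cancel your contract?')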

Cognigy builds up an augmented JSON document during the conversation. The intent scores it contains can be used in custom scripts to steer the conversation within dialog state management.
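
For instance, a custom script might read the scores out of that document and branch on them. The field names below are assumptions for illustration, not Cognigy’s exact schema.

input_doc = {  # assumed shape of the conversation's augmented JSON document
    "text": "I want to cancel",
    "intent": "cancel_contract",
    "intentScore": 0.55,
    "alternateIntents": [{"intent": "pause_contract", "score": 0.49}],
}

# Scores this close together warrant a clarifying question
alternates = input_doc["alternateIntents"]
if alternates and input_doc["intentScore"] - alternates[0]["score"] < 0.1:
    next_node = "disambiguation_question"
else:
    next_node = input_doc["intent"]
print(next_node)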

Conclusion

In conclusion, suffice it to say that the holy grail of chatbots is to mimic and align with natural, human-to-human conversation as much as possible. And to add to this: when designing the conversational flow for a chatbot, we often forget which elements are part and parcel of truly human-like conversation.

Digression is a big part of human conversation, along with disambiguation, of course. Disambiguation negates to some extent the danger of fallback proliferation, where the dialog is not really taken forward.

With disambiguation, a bouquet of truly related and contextual options is presented to the user to choose from, which is sure to advance the conversation. Or, in the case of HumanFirst, ambiguity is remedied at the NLU level.

And finally, probably the worst thing you can do is present a set of options unrelated to the current context, or a predefined, finite set of options that recurs continually.
