Two Significant Enhancements Were Made To IBM Watson Assistant

One Is To Detect Irrelevance In A Conversation And Manage It

Cobus Greyling

--

Introduction

Commercial cloud-based NLU or chatbot environments usually focus on helping you craft an NLU API. You can then plug your chatbot into this API to get intelligent, predictive natural language understanding.

The idea is that a model trained on your data predicts user intent, and from the intent derives and identifies entities. When it comes to session management, state management and general dialog management, however, you are pretty much left to your own devices.

What makes the IBM Watson Assistant environment interesting is its largely successful attempt to present a complete ecosystem, where you can define not only the intents and entities, but also the script, dialog flow and contextual variables.

Even response types such as buttons and images can be defined within the dialog, in the format in which they should be presented to the user.

One: New System Entities

The system entities have been enhanced significantly.

[Image: Updated system entities list from IBM Watson Assistant, showing different date entries normalized]

You need to manually enable the new system entities to take advantage of improvements that were made to the number-based system entities provided by IBM.

The list of supported languages can be found here.
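
As a rough sketch of what enabling this looks like programmatically, the ibm-watson Python SDK can toggle the setting through the V1 workspace API. The credentials, service URL and workspace ID below are placeholders, and the system_settings payload reflects my reading of the V1 workspace schema rather than a confirmed recipe:

```python
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials and endpoint; substitute your own.
authenticator = IAMAuthenticator('YOUR_APIKEY')
assistant = AssistantV1(version='2020-04-01', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# Assumption: the new system entities are toggled per skill via the
# system_entities flag in the workspace system settings.
assistant.update_workspace(
    workspace_id='YOUR_WORKSPACE_ID',
    system_settings={'system_entities': {'enabled': True}}
)
```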

The new system entities can recognize more nuanced mentions in user input. For example, system date can calculate the date of a national holiday when it is mentioned by name. This is obviously very country specific.

System date can also recognize when a year is specified as part of a date mentioned in the user's input. The improvements also make it easier for your assistant to distinguish among the many number-based system entities.

For example, a date mention such as April 15 that is recognized as System Date is not also identified as a System Number mention.

[Image: A day of interest recognized and normalized to a date]

This example shows how Christmas Day is translated, or normalized, to a year, month and day.

This significantly simplifies the process of date normalization from the many natural language formats in which dates can be expressed.
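
A minimal sketch of what this looks like at runtime, reusing the assistant client from the earlier snippet; the utterance and workspace ID are illustrative:

```python
# Send an utterance and inspect the recognized system entities.
response = assistant.message(
    workspace_id='YOUR_WORKSPACE_ID',
    input={'text': 'Can I get a delivery on Christmas day?'}
).get_result()

# Each @sys-date mention arrives with a normalized value,
# e.g. '2019-12-25' (year, month and day).
for entity in response.get('entities', []):
    if entity['entity'] == 'sys-date':
        print(entity['value'])
```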

Two: Automatic Irrelevance Detection

Conversational interfaces, also known as chatbots, are usually used to address a very narrow domain. Yet a lot of time needs to be spent handling conversations that are irrelevant to the domain of implementation.

Irrelevance detection helps the chatbot recognize with confidence, and earlier in the development process, when a user touches on topics it is not designed to answer.

This feature helps your chatbot recognize subjects it was not designed and developed for, even if you haven’t explicitly taught it what to ignore by marking specific user utterances as irrelevant.

The algorithmic models that help your chatbot understand what the users say are built from two key pieces of information:

  • Subjects you want your chatbot to address. You train your chatbot on these by defining intents and providing lots of example user utterances that articulate each intent, so your chatbot can recognize them (a sketch of this follows the list).
  • Subjects you want your chatbot to ignore. For example, queries about deliveries sent to a chatbot designed to take payments. You present these as counterexamples; without them there are often false positives, where an utterance is erroneously assigned to an intent.
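
For illustration, here is a sketch of defining such an intent with the V1 API’s create_intent call, reusing the assistant client from the first snippet; the intent name and utterances are hypothetical:

```python
# Hypothetical intent for a payments chatbot, with example utterances.
assistant.create_intent(
    workspace_id='YOUR_WORKSPACE_ID',
    intent='make_payment',
    examples=[
        {'text': 'I want to pay my bill'},
        {'text': 'Can I settle my account?'},
        {'text': 'Pay the outstanding amount on my card'},
    ],
)
```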

Irrelevance detection is designed to compensate for gaps in your counterexample data as you start your chatbot development.

When you enable it, an alternative method for evaluating the relevance of a newly submitted utterance is triggered in addition to the standard method.

To enable irrelevance detection:
  1. From the Skills page, open your skill.
  2. From the skill menu, click Options.
  3. On the Irrelevance detection page, choose Enhanced.
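
The same setting can, as far as I can tell, also be switched on through the workspace system settings in the V1 API; a sketch, reusing the client from the first snippet:

```python
# Assumption: the "Enhanced" option maps to the off_topic flag
# in the V1 workspace system settings.
assistant.update_workspace(
    workspace_id='YOUR_WORKSPACE_ID',
    system_settings={'off_topic': {'enabled': True}}
)
```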

The supplemental method examines the structure of the new utterance and compares it to the structure of the user example utterances in your training data.

This alternate approach helps chatbots that have few or no counterexamples recognize irrelevant utterances.

[Image: Marking user input as irrelevant to build a counterexample model]

Note that the new method relies on structural information based on data from outside your skill. So, while the new method can be useful as you are starting out, a chatbot that provides a more customized experience should use information derived from within the application’s domain.

The way to ensure that your assistant does so is by adding your own counterexamples, even if only a few.
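
A brief sketch of adding counterexamples programmatically with the V1 create_counterexample call, reusing the client from the first snippet; the sample utterances are illustrative:

```python
# Illustrative out-of-domain utterances marked as counterexamples,
# the programmatic equivalent of "Mark as irrelevant" in the tooling.
for text in ['What is the weather today?', 'Tell me a joke']:
    assistant.create_counterexample(
        workspace_id='YOUR_WORKSPACE_ID',
        text=text
    )
```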

In the chatbot development process, you provide example user utterances or sentences which are grouped into distinct topics that someone might ask the assistant about; these are called “intents.”

[Image: Utterances detected as irrelevant, with no false intent assignments]

The bad news…users do not stick to the script. The chatbot usually gets a variety of unexpected questions that the person building the assistant didn’t plan for it to handle initially.

In these cases an escalation to a human is triggered, or a knowledge base search is launched. But neither is the most effective response. How about the chatbot simply saying, “I cannot help you with that”?
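
The application layer can do exactly this: when Watson Assistant classifies the input as irrelevant, the intents array in the V1 message response comes back empty, so the application can answer honestly instead of guessing. A sketch, reusing the client from the first snippet; the fallback wording is illustrative:

```python
response = assistant.message(
    workspace_id='YOUR_WORKSPACE_ID',
    input={'text': 'Recommend a good pizza place nearby'}
).get_result()

if not response.get('intents'):
    # No intent was returned: the input was judged irrelevant.
    print('I cannot help you with that.')  # illustrative fallback wording
else:
    print(' '.join(response['output'].get('text', [])))
```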

Negate False Intent Assignment

Often, instead of stating that the intent is out of scope, the chatbot assigns the best-fit option in a desperate attempt to field the utterance, and this is often wrong.

Or the chatbot keeps informing the user that it does not understand, leaving the user to rephrase the input again and again, instead of merely stating that the question is not part of its domain.

The traditional approaches are:

  • Many “out-of-scope” examples are dreamed up and entered manually
  • The NLU model automatically selects a set of irrelevant examples that differ from the training data

We could easily build a very accurate irrelevant-question-detector by tagging most of the questions from assistant users as irrelevant… but this would be a pretty bad experience because the assistant would have low coverage on in-domain questions.

Since none of these approaches are perfect, especially when an assistant is new and doesn’t have a lot of training data, we decided to come up with an approach that is more human-like.

To determine whether an utterance from a user is irrelevant, IBM Watson Assistant first checks whether it is similar to the set of relevant examples; if not, it is tagged as irrelevant.

IBM combines this with another algorithm that gauges the dissimilarity of the incoming question from concepts that are not close to the assistant’s domain.

In Conclusion

It is very encouraging to see an environment develop and grow in functionality. The IBM Watson Assistant team has been effective in building out the Watson Assistant environment with tools and functionality that make a huge difference.

--

Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com