
Learn How Your Chatbot Can Detect Irrelevance

Effectively Handle Conversations Which Are Irrelevant To Your Domain Of Implementation

Cobus Greyling
5 min read · Feb 18, 2020

--

Introduction

How do you develop for user input which is not relevant to your design…

Irrelevance detection, also referred to as Out-Of-Domain (OOD) detection, is much needed and effective from a user experience perspective. From an implementation perspective, however, it has not received much attention.

The objective is to detect irrelevant user input using very little or no training data. In most cases training data is insufficient anyway, for the reasons listed below…

User input can broadly be divided into two groups: In-Domain (ID) and Out-Of-Domain (OOD) inputs. ID inputs are those you can attach to a label (an intent) based on existing training data. OOD detection refers to the process of tagging input which does not match any intent label in the training set.
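To make the ID/OOD split concrete, below is a minimal sketch of the simplest form of OOD detection: comparing a new utterance to the ID training utterances and treating anything too dissimilar as irrelevant. The intents, example utterances and the 0.3 threshold are illustrative assumptions, not part of this article or of any specific product.

```python
# Minimal sketch of similarity-threshold OOD detection. The intents,
# utterances and the 0.3 threshold below are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# In-Domain (ID) training data: a few example utterances per intent label.
training = [
    ("track my order", "order_status"),
    ("where is my package", "order_status"),
    ("cancel my order", "cancel_order"),
    ("I want to stop my purchase", "cancel_order"),
]
texts, labels = zip(*training)

vectorizer = TfidfVectorizer()
id_matrix = vectorizer.fit_transform(texts)

def classify(utterance: str, threshold: float = 0.3):
    """Return the intent of the closest ID example, or None when OOD."""
    sims = cosine_similarity(vectorizer.transform([utterance]), id_matrix)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return None  # nothing in the training set is close: tag as irrelevant
    return labels[best]

print(classify("where is my package"))   # -> 'order_status' (ID)
print(classify("who won the election"))  # -> None (OOD)
```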

Traditionally, OOD training requires large amounts of training data, hence OOD detection does not perform well in current chatbot environments. An advantage of most chatbot development environments is that they require only a very limited amount of training data; perhaps 15 to 20 example utterances per intent.

We don’t want developers spending vast amounts of time on an element not part of the bot’s core.

Intent Labels With Examples Showing ID & OOD

The Problem

The challenge is that, as a developer, you need to provide training data and examples. OOD or irrelevant input covers a potentially infinite number of scenarios, as there is no boundary defining irrelevance.

The aim is to build a model that can detect OOD inputs with a very limited set of data defining each intent, or with no OOD training data at all; the latter being the ideal.

Below is an implementation of this ideal.

Automatic Irrelevance Detection

Conversational interfaces, like chatbots, are usually developed to address only a very narrow domain. Obviously this is due to time and cost constraints.

Vast amounts of time are spent on handling conversations which are irrelevant to the domain of implementation and the intended purpose of the chatbot.

And as stated earlier, the Out-Of-Domain (OOD) field is infinite, and effectively defining example utterances for it is nigh impossible.

Irrelevance Detection Enabled on IBM Watson Assistant

Irrelevance detection remains very helpful, as it can assist your chatbot in recognizing scenarios where a user touches on topics it was not designed to address.

The chatbot can then address the fact with confidence early in the conversational flow, before looping through a few failed iterations.

Such a feature can help your chatbot recognize subjects which you did not design and develop for, even if you haven’t explicitly taught it what to ignore by marking specific user utterances as irrelevant.

The algorithmic models that help your chatbot understand what users say are built from two key pieces of information:

  • Subjects you want the assistant to address. For example, questions about order shipments for an assistant that manages product orders.

You teach your assistant about these subjects by defining intents and providing lots of sample user utterances that articulate the intents so your assistant can recognize these and similar requests as examples of input for it to handle.

  • Subjects you want the assistant to ignore. For example, questions about politics for an assistant that makes pet grooming appointments exclusively.

You teach your assistant about subjects to ignore by marking utterances that discuss subjects which are out of scope for your application as being irrelevant. Such utterances become counterexamples for the model.
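For illustration, counterexamples can also be created programmatically. Below is a minimal sketch using the ibm-watson Python SDK’s V1 create_counterexample call; the API key, service URL, workspace ID and version date are placeholder assumptions.

```python
# Minimal sketch: marking an utterance as irrelevant (a counterexample)
# through the Watson Assistant V1 API. The credentials, URL, workspace ID
# and version date are placeholders, not real values.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')
assistant = AssistantV1(version='2020-02-05', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# The marked utterance becomes a counterexample for the model.
response = assistant.create_counterexample(
    workspace_id='YOUR_WORKSPACE_ID',
    text='Who will win the next election?'
).get_result()

print(response)
```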

You then need to present counterexamples, and often there are false positives where an utterance is erroneously assigned to an intent.

Irrelevance detection is designed to navigate any vulnerability there might be in your counterexample data as you start your chatbot development.

When you enable irrelevance detection, an alternative method for evaluating the relevance of a newly submitted utterance is triggered in addition to the standard method.

To enable this feature on IBM Watson Assistant (an API-based alternative is sketched after the steps):

  1. From the Skills page, open your skill.
  2. From the skill menu, click Options.
  3. On the Irrelevance detection page, choose Enhanced.
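
As the API-based alternative: assuming the V1 workspace setting system_settings.off_topic.enabled is what the Enhanced option toggles, a minimal sketch with the ibm-watson Python SDK could look as follows. Credentials and workspace ID are placeholders.

```python
# Minimal sketch: enabling enhanced irrelevance detection programmatically,
# assuming it maps to system_settings.off_topic.enabled in the V1 API.
# Credentials, URL and workspace ID are placeholders.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')
assistant = AssistantV1(version='2020-02-05', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

assistant.update_workspace(
    workspace_id='YOUR_WORKSPACE_ID',
    system_settings={'off_topic': {'enabled': True}}  # enhanced irrelevance detection
)
```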

The supplemental method examines the structure of the new utterance and compares it to the structure of the user example utterances in your training data.

This alternate approach helps chatbots that have few or no counterexamples recognize irrelevant utterances.

Marking User Input as Irrelevant To Build a Counterexample Model

Note that the new method relies on structural information that is based on data from outside your skill.

So, while the new method can be useful as you are starting out, to build a chatbot that provides a more customized experience you want it to use information derived from within the application’s domain.

The way to ensure that your assistant does so is by adding your own counterexamples, even if only a few.

Utterances Are Detected As Irrelevant with No False Intent Assignments

In the chatbot development process, you provide example user utterances or sentences which are grouped into distinct topics that someone might ask the assistant about — these are called “intents.”

The bad news…users do not stick to the script.

The chatbot usually gets a variety of unexpected questions that the person building the assistant didn’t plan for it to handle initially.

In these cases an escalation to a human is triggered, or a knowledge base search. But this is not always the most effective response.

How about the chatbot simply saying, “I cannot help you with that”?
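
As a sketch of that behavior: in the Watson Assistant V1 message response, input judged irrelevant comes back with an empty intents list, so a thin client wrapper can fall back to an explicit out-of-scope reply. The credentials and workspace ID below are placeholders, and the fallback wording is an assumption.

```python
# Minimal sketch: reply with an explicit out-of-scope message when the
# assistant detects no intent. Assumes irrelevant input yields an empty
# intents list in the V1 message response; credentials are placeholders.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')
assistant = AssistantV1(version='2020-02-05', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

def reply(utterance: str) -> str:
    result = assistant.message(
        workspace_id='YOUR_WORKSPACE_ID',
        input={'text': utterance}
    ).get_result()

    if not result['intents']:
        # Irrelevance detected: say so directly rather than guessing an intent.
        return 'I cannot help you with that.'

    return ' '.join(result['output']['text'])

print(reply('Who will win the next election?'))
```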

Negate False Intent Assignment

Often, instead of stating that the input is out of scope, the chatbot assigns the best-fit intent to the user’s utterance in a desperate attempt to field it; often the wrong one.

Alternatively, the chatbot continues to inform the user that it does not understand, leaving the user to continuously rephrase the input, instead of the chatbot merely stating that the question is not part of its domain.

The traditional approaches are:

  • Many “out-of-scope” examples are dreamed up and entered.
  • Attempts are made to disambiguate the user input.

Demo of Handling OOD User Input Using IBM Watson Assistant

In Conclusion

Development tools are evolving, empowering developers to create compelling conversational interfaces with limited training data in a short period of time.

It is very encouraging to see an environment develop and grow in functionality.

The IBM Watson Assistant team has been effective in building out the Watson Assistant environment with tools and functionality which make a huge difference.

--

Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com