How To Resolve Intent Conflicts With IBM Watson Assistant

You Can Perform Intent Conflict Resolution Automatically

Cobus Greyling

--

Introduction

…overlap in intent training examples might be confusing your chatbot.

These overlaps are really conflicts, which can exist between two, three or more separate intents.

As your chatbot grows in the number of intents, and in the number of examples for each intent, finding overlaps or conflicts becomes harder.

Detect & Resolve Intent Conflicts

Often the approach is simply to add more examples, which will most probably just add to the confusion.

The real problem is not a lack of training data or examples, but conflicts within those examples.

It does make sense to segment the examples clearly per intent, but overlaps might not be that easy to spot.

This might very well be the case if a larger team is working on the chatbot and adding example phrases from conversations in an attempt to improve accuracy.

Overlaps are easily created in such a scenario, and the very attempt to improve the bot can in fact degrade the quality and clarity of user intent recognition.

The basic chatbot structure is on the left, with a list of twelve intents.

Hence detecting these conflicts becomes paramount in larger teams and organizations. You might have hundreds or thousands of intents, each with a number of examples. This is why automating the process is key.
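To make this concrete, below is a minimal sketch of what such automation could look like, assuming a workspace-based skill and the ibm-watson Python SDK (AssistantV1). The API key, service URL, version date and workspace ID are placeholders. The sketch pulls every intent with its training examples and flags any utterance that appears verbatim in more than one intent.

```python
# Minimal sketch: list all intents with their examples and flag
# direct conflicts (the same utterance used in more than one intent).
# Credentials, URL, version date and workspace ID are placeholders.
from collections import defaultdict

from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV1(
    version="2021-06-14",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

# export=True includes the training examples for each intent
result = assistant.list_intents(
    workspace_id="YOUR_WORKSPACE_ID", export=True
).get_result()

# Map each normalised example text to the intents that use it
examples_to_intents = defaultdict(set)
for intent in result["intents"]:
    for example in intent["examples"]:
        examples_to_intents[example["text"].strip().lower()].add(intent["intent"])

for text, intents in examples_to_intents.items():
    if len(intents) > 1:
        print(f"Direct conflict: '{text}' appears in {sorted(intents)}")
```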

Tracking Conflicts

The ideal is not a manual process of picking up conflicts, but rather a process which surfaces conflicts in real time as they occur. Too many chatbot teams only learn about errors and vulnerabilities from customer conversation data and monthly reports.

Conflicts should be checked for continuously in real time, or whenever updates are committed.
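One way to do this, assuming the skill is exported to JSON (via the Watson Assistant UI or API) and kept under version control, is a small pytest-style check in the CI pipeline that fails the build whenever a verbatim duplicate slips in. The file name below is a placeholder; the duplicate check is the same idea as in the earlier sketch, just run against the committed skill file.

```python
# Sketch of a commit-time guard: fail the build if any training example
# appears verbatim in more than one intent of an exported skill JSON.
import json
from collections import defaultdict

SKILL_EXPORT = "customer-care-skill.json"  # placeholder path to the exported skill


def find_direct_conflicts(path):
    with open(path, encoding="utf-8") as f:
        skill = json.load(f)

    examples_to_intents = defaultdict(set)
    for intent in skill.get("intents", []):
        for example in intent.get("examples", []):
            examples_to_intents[example["text"].strip().lower()].add(intent["intent"])

    return {t: sorted(i) for t, i in examples_to_intents.items() if len(i) > 1}


def test_no_direct_intent_conflicts():
    conflicts = find_direct_conflicts(SKILL_EXPORT)
    assert not conflicts, f"Direct intent conflicts found: {conflicts}"
```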

How would these conflicts be managed in a real-world scenario? Here is a short tutorial…

IBM Watson Assistant: List of Intents and the Conflict Count is visible in the Conflicts Column

This is the IBM Watson Assistant intent console. Alongside the name, description and date modified, you will see how many example utterances each intent has.

Also, you will see a Conflicts column.

Within the Intent, the Phrases Creating a Conflict Are Marked

Once you click on an intent marked with a conflict, you will see the list of user examples, also referred to as utterance samples. The problematic utterance is flagged, with a button to action the resolution.

Different Types of Intent Conflicts

There are two types of conflicts:

  • Direct and
  • Indirect conflicts.

Direct conflicts are fairly easy to pick up. This is where two or more intents have the exact same example utterance, verbatim.

Indirect conflicts are harder to detect, and this is where machine learning plays a role. There might be user examples informing different intents which are very similar in meaning and sentence construction, but at a glance seem different. Removing these vulnerabilities goes a long way toward improving your conversational interface’s accuracy.
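Watson Assistant performs this detection for you, but to illustrate the idea, the sketch below approximates indirect conflict detection with sentence embeddings: examples from different intents whose embeddings are very close in cosine similarity are surfaced as candidates for review. The sentence-transformers model and the 0.9 threshold are illustrative choices, not what Watson Assistant uses internally.

```python
# Sketch: flag *indirect* conflicts as pairs of examples from different
# intents whose sentence embeddings are nearly identical.
# This approximates the idea; it is not Watson Assistant's internal method.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

# Illustrative training examples (placeholders)
examples = {
    "#Customer_Care_Appointment": ["I want to book a time slot",
                                   "Can I schedule a visit?"],
    "#Customer_Care_Store_Hours": ["What time do you open?",
                                   "Can I visit the store at 9?"],
}

model = SentenceTransformer("all-MiniLM-L6-v2")

flat = [(intent, text) for intent, texts in examples.items() for text in texts]
embeddings = model.encode([text for _, text in flat], convert_to_tensor=True)

THRESHOLD = 0.9  # illustrative; tune on your own data
for (i, (intent_a, text_a)), (j, (intent_b, text_b)) in combinations(enumerate(flat), 2):
    if intent_a == intent_b:
        continue  # only compare examples across different intents
    score = util.cos_sim(embeddings[i], embeddings[j]).item()
    if score >= THRESHOLD:
        print(f"Possible indirect conflict ({score:.2f}): "
              f"'{text_a}' ({intent_a}) vs '{text_b}' ({intent_b})")
```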

Built-In Conflict Resolution

Here again is an example from IBM Watson Assistant showing how this can work in practice.

In the example below you can see the two intents side by side. In this case the conflict is the same utterance word for word, verbatim.

What is quite useful is the Similar Examples section. Similar examples are typically not conflicts themselves; they are additional examples displayed to help you better understand the meaning of each intent, and hence guide your decision on which intent the conflicting utterance should be removed from.

Watson’s Conflict Resolution Interface

From the similar examples given here, it is clear that the example in conflict needs to be removed from the #Customer_Care_Appointment intent.

Confirmation of Saved Changes

Clarity needs to be established when managing example utterances, and having a set of examples can assist in this process.

The examples are enough for you to form a mental picture of what the intent embodies.

Once the change is committed, a confirmation message is displayed and you can test your chatbot again.
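A quick way to re-test, assuming the same workspace-based AssistantV1 setup as in the earlier sketch, is to send the previously conflicting utterance through the message API and check which intent it now resolves to, and with what confidence. Credentials, workspace ID and the test utterance are placeholders.

```python
# Sketch: after committing the change, re-test the previously conflicting
# utterance and inspect the detected intents and their confidences.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV1(
    version="2021-06-14",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

response = assistant.message(
    workspace_id="YOUR_WORKSPACE_ID",
    input={"text": "I would like to make an appointment"},  # placeholder utterance
).get_result()

for intent in response.get("intents", []):
    print(f"#{intent['intent']}: confidence {intent['confidence']:.2f}")
```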

Conclusion

There is a commonly held belief that the amount of data you throw at your chatbot yields a direct and proportional improvement in the conversational experience.

This could not be further from the truth. Training data needs to be well thought through, segmented and meaningful.

--

Written by Cobus Greyling

I’m passionate about exploring the intersection of AI & language. www.cobusgreyling.com
