
The Problem Of Multiple Intent Detection & How Kore AI Can Improve

Users do not always speak in single intent utterances…and voicebots are especially susceptible to more verbose input.

5 min read · Oct 24, 2022

When it comes to Digital Assistants, users do not necessarily speak in single intent utterances…for example…

💬 “What are my hotel reservation details?”

💬 “Please give me my flight fares.”

💬 “What are my transfer details again?”

A user will most probably combine multiple intents in one utterance, like the example below:

💬 “Regarding my trip, I need hotel reservations, flight fares and transfers, please?”

Keep in mind that voicebots or speech interfaces lead to more verbose user input. Users tend to speak longer and utter multiple intents…

There are basically three ways of managing multi-intent user input…

1️⃣ Use visual design affordances like buttons, drop-down menus, etc., to restrict and direct the user dialog. This option is highly dependent on the medium in use; for instance, WhatsApp offers far fewer design affordances than Messenger.

2️⃣ Make use of disambiguation menus and require the user to select a single intent to continue the dialog with.

3️⃣ Implement routines where the user input is parsed and the detected intents are ranked by confidence score. These confidence scores can then drive a conversational flow in which the intents are addressed sequentially (see the sketch below the list).
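Below is a minimal sketch of this third approach, assuming a hypothetical detect_intents() helper that returns (intent, confidence) pairs from an NLU layer; the intent names and the confidence threshold are illustrative and not tied to any specific platform.

```python
# A rough sketch of approach 3: rank detected intents by NLU confidence and
# address them one after the other. detect_intents() and handle_intent() are
# hypothetical placeholders, not any specific platform's API.

from typing import List, Tuple


def detect_intents(utterance: str) -> List[Tuple[str, float]]:
    # Placeholder for an NLU call returning (intent, confidence) pairs.
    return [
        ("hotel_details", 0.91),
        ("flight_fares", 0.84),
        ("transfer_details", 0.62),
    ]


def handle_intent(intent: str) -> str:
    # Placeholder fulfilment step for a single intent.
    return f"Here is the information for '{intent}'."


def handle_multi_intent(utterance: str, threshold: float = 0.5) -> List[str]:
    # Keep only intents above the confidence threshold, highest score first,
    # then address them sequentially in one conversational flow.
    ranked = sorted(detect_intents(utterance), key=lambda x: x[1], reverse=True)
    return [handle_intent(name) for name, score in ranked if score >= threshold]


print(handle_multi_intent(
    "Regarding my trip, I need hotel reservations, flight fares and transfers, please?"
))
```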

These approaches won’t scale well for growing and complex implementations, as the settings have to be implemented, maintained and managed across the conversational landscape.

Please follow me on LinkedIn for the latest updates on Conversational AI. 🙏🏽

❓ How Does Kore AI Automate Multi-Intent User Input?

Kore AI has a feature called “Multi Intent Detection”. The feature can be toggled on and off; when enabled, it allows the detection and execution of multiple intents identified in a single user input.

Five key settings are given within the “Multi Intent Detection” view. These five settings are currently static or fixed and cannot be tweaked or fine-tuned.

It would be a leap forward for the Kore AI platform if fine-tuning settings were available for each of these five conditions.

⬇️ Below are a few ideas on how these five key fixed settings could be made intelligent, allowing for more granular management of multiple intent detection.

Kore AI Static Rule 1️⃣

Identification of multiple intents from a single utterance is driven by the platform's in-built training phrases

It would make sense here to manage the number of intents identified and surfaced to the user.

The conversational designer might only want to surface two or three intents to the user and not all intents detected from the user utterance.

This will help with predictability of the conversational interface.
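A minimal sketch of such a cap is shown below, assuming the NLU layer returns (intent, confidence) pairs; the max_intents parameter is hypothetical and not part of the current platform settings.

```python
# A sketch of capping how many detected intents are surfaced to the user.
# The max_intents setting is hypothetical; detected stands in for whatever
# the NLU layer returns for the utterance.

from typing import List, Tuple


def surface_intents(detected: List[Tuple[str, float]],
                    max_intents: int = 2) -> List[Tuple[str, float]]:
    # Sort by confidence and keep only the top N for the user to see.
    ranked = sorted(detected, key=lambda x: x[1], reverse=True)
    return ranked[:max_intents]


detected = [("hotel_details", 0.91), ("flight_fares", 0.84), ("transfer_details", 0.62)]
print(surface_intents(detected))  # only the two highest-confidence intents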

Kore AI Static Rule 2️⃣

The order of intent execution is determined based on the structure of the sentence and phrases used to express multiple intents

An option to set the order would be helpful… In some cases it makes sense to order the intents by NLU confidence score; in other instances the Conversational Designer might want to sequence the responses based on the structure of the user input.
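One way such a setting could look is sketched below, assuming each detected intent carries both an NLU confidence score and the position where it was expressed in the utterance; the order_by parameter is a hypothetical design setting, not an existing platform option.

```python
# A sketch of a configurable execution order for detected intents.

from dataclasses import dataclass
from typing import List


@dataclass
class DetectedIntent:
    name: str
    confidence: float
    position: int  # character offset where the intent phrase starts in the input


def order_intents(intents: List[DetectedIntent],
                  order_by: str = "confidence") -> List[DetectedIntent]:
    if order_by == "confidence":
        return sorted(intents, key=lambda i: i.confidence, reverse=True)
    # Otherwise follow the structure of the user input: first mentioned, first served.
    return sorted(intents, key=lambda i: i.position)


detected = [
    DetectedIntent("hotel_details", 0.62, 25),
    DetectedIntent("flight_fares", 0.91, 47),
    DetectedIntent("transfer_details", 0.84, 63),
]
print([i.name for i in order_intents(detected, order_by="utterance")])
print([i.name for i in order_intents(detected, order_by="confidence")])
```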

Kore AI Static Rule 3️⃣

After the execution of each intent, the platform will automatically trigger the next intent in the identified order

An option to add specific wording to display to the user would make sense. For instance, telling the user, “Let me help you with your flight details before getting to the two other items you mentioned”…almost like a comfort message, analogous to how digression is currently handled by Kore AI.
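A small sketch of what such a configurable comfort message could look like, assuming the ordered intent labels are known up front; the wording template and helper name are illustrative only.

```python
# A sketch of a "comfort" (transition) message between sequentially handled intents.

from typing import List


def transition_message(current: str, remaining: List[str]) -> str:
    if not remaining:
        return f"Let me help you with your {current}."
    count = len(remaining)
    items = "item" if count == 1 else "items"
    return (f"Let me help you with your {current} before getting to the "
            f"{count} other {items} you mentioned.")


ordered = ["flight details", "hotel details", "transfer details"]
for i, intent in enumerate(ordered):
    print(transition_message(intent, ordered[i + 1:]))
```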

Kore AI Static Rule 4️⃣

If a task fails to get executed, then the subsequent tasks (identified from the utterance) will not be initiated by the platform

The lack of transparency here can lead to user confusion and ambiguity…imagine if the chatbot could rather say, “I cannot retrieve your hotel details at the moment, but here are your flight fares.”

From a Conversation Design and NLU Design perspective, it would make sense to manage failures at a more granular level.
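Below is a sketch of what such granular failure handling could look like: instead of stopping the whole sequence when one task fails, the flow reports the failure and continues with the remaining intents. The fulfil() helper and the simulated failure are hypothetical placeholders, not Kore AI behaviour.

```python
# A sketch of continuing the intent sequence after a single task failure.

def fulfil(intent: str) -> str:
    # Placeholder: simulate a backend failure for the hotel task.
    if intent == "hotel_details":
        raise RuntimeError("hotel service unavailable")
    return f"Here are your {intent.replace('_', ' ')}."


def handle_sequence(intents):
    responses = []
    for intent in intents:
        try:
            responses.append(fulfil(intent))
        except RuntimeError:
            responses.append(
                f"I cannot retrieve your {intent.replace('_', ' ')} at the moment, "
                "but let me continue with the rest of your request."
            )
    return responses


for line in handle_sequence(["hotel_details", "flight_fares"]):
    print(line)
```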

Kore AI Static Rule 5️⃣

Multi intent identification is currently supported only for Dialog Tasks in English, Spanish, German and French languages

Toggling Multi Intent Detection on or off for a specific language would be helpful…
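A per-language toggle could be expressed as a simple configuration map, as in the sketch below; the setting names are hypothetical and not part of Kore AI’s configuration.

```python
# A sketch of a per-language toggle for multi-intent detection.

MULTI_INTENT_BY_LANGUAGE = {
    "en": True,   # English
    "es": True,   # Spanish
    "de": False,  # German: fall back to single-intent handling
    "fr": True,   # French
}


def multi_intent_enabled(language_code: str) -> bool:
    # Default to single-intent handling for unsupported or disabled languages.
    return MULTI_INTENT_BY_LANGUAGE.get(language_code, False)


print(multi_intent_enabled("de"))  # False
```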


🚧 Multiple Intent Management Example

Below is a two-intent example from Kore AI. The user asks about

1️⃣ hotel details and

2️⃣ flight fares.

Both items are presented to the user in sequence; again, it needs to be mentioned that some contextual message would help the conversational flow.

Lastly, here is an example with three user intents, and the response from Kore AI.

⬇️ User Input:

hotel details and flight fares and transfer details

⬇️ The Conversational Interface’s Response:


I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

https://www.linkedin.com/in/cobusgreyling
