Photo by Andy Kelly on Unsplash

Contextual Chatbots

Cobus Greyling
2 min read · Jul 2, 2019

Context does matter when discovering intent.

Chatbots must be contextually aware, with the ability to detect multiple entities of interest in a single intent or utterance, and not necessarily in any particular order.

Defining Contextual Entities within Intents ~ IBM Watson Assistant

IBM Watson Assistant has introduced a new feature called Contextual Entities.

To add a contextual entity, all you need to do is select the relevant text in each of your intent examples. When you select a section of text, a pop-up will appear allowing you to add it to an entity.

Once defined, the entities will be highlighted in the examples, as in the screenshot above.

Testing Contextual Unstructured Interface

This allows you to provide Watson Assistant with training data indicating possible utterances (intents) and where the entities (the data you want to capture) might be located within the dialog. You need as few as 10 examples, though more examples make for a more robust model. Watson Assistant then constructs a model from this data, and the results are astonishing. This supersedes earlier approaches where keywords were merely spotted, or where entities had to be defined as a finite list of possible values. Form filling and slots are also handled in a more conversational and natural manner, without expecting the user to phrase their request in any particular way.

This opens the door to a truly unstructured user interface, where the data is structured after user input by a model that is fully aware of the user’s context.
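To make this concrete, the sketch below shows roughly what annotation-based training data boils down to: each intent example carries the character span of every entity mention. The field names (mentions, location) loosely mirror Watson Assistant’s workspace API, but treat this as illustrative Python rather than the exact payload.

```python
# Illustrative sketch: intent examples annotated with the character spans
# of their entity mentions. The "mentions"/"location" naming loosely mirrors
# Watson Assistant's workspace API, but this is not the exact payload.

examples = [
    ("I need a lift to the Station", [("destination", "Station")]),
    ("We are arriving in Amsterdam and then taking the train further",
     [("city", "Amsterdam"), ("transport_mode", "train")]),
]

def annotate(text, mentions):
    """Return an intent example with (start, end) character spans per entity."""
    spans = []
    for entity, surface in mentions:
        start = text.index(surface)
        spans.append({"entity": entity, "location": [start, start + len(surface)]})
    return {"text": text, "mentions": spans}

training_data = [annotate(text, mentions) for text, mentions in examples]
for example in training_data:
    print(example)
```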

Phone Simulation of Chatbot

Most chatbot frameworks are based on the concept of intent and entity detection, which involves identifying both the intent of a user utterance and the entities embedded in that utterance. For example, in the sentence “I need a lift to the Station,” most chatbot frameworks would detect “Transport Request” as the intent and “Station” as the “destination” entity. For simple utterances such as this one, most chatbot frameworks work just fine. But when users produce more complex dialogs, many existing solutions prove inadequate. Consider the utterance “We are arriving in Amsterdam and then taking the train further,” which has two entities (City:Amsterdam and TransportMode:Train) that could be the object of the intent. Many frameworks only consider a single entity per intent, so they fail to handle natural requests that contain two entities, such as this one. The video illustrates how two entities per intent or dialog can be extracted.
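A parse result for that utterance might look roughly like the snippet below. The structure loosely follows what NLU engines such as Rasa return from a parse call; the intent name, confidence score and entity labels here are purely illustrative, not output from a real model.

```python
# Hypothetical parse result for the Amsterdam utterance when the NLU model
# supports multiple contextual entities per intent. Names, labels and the
# confidence score are illustrative placeholders.

parse_result = {
    "text": "We are arriving in Amsterdam and then taking the train further",
    "intent": {"name": "transport_request", "confidence": 0.94},
    "entities": [
        {"entity": "city", "value": "Amsterdam", "start": 19, "end": 28},
        {"entity": "transport_mode", "value": "train", "start": 49, "end": 54},
    ],
}

# With both entities available in one turn, the dialog layer can act on them
# together instead of asking two separate slot-filling questions.
city = next(e["value"] for e in parse_result["entities"] if e["entity"] == "city")
mode = next(e["value"] for e in parse_result["entities"] if e["entity"] == "transport_mode")
print(f"Arriving in {city}, continuing by {mode}.")
```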

Another chatbot framework that performs contextual lookup very well, with multiple entities per intent, is Rasa, from the Berlin-based company of the same name. The advantages of Rasa, among others, are that it is open source, can be installed on premise with no cloud dependencies, and has a very strong community of contributors and users.
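A minimal sketch of querying such a model with Rasa’s NLU component is shown below. It assumes a Rasa 1.x installation and an NLU model already trained to ./models/nlu; the import path and API differ between Rasa versions.

```python
# Minimal sketch, assuming Rasa 1.x and an NLU model trained to ./models/nlu.
# The import path and API differ between Rasa versions.
from rasa.nlu.model import Interpreter

interpreter = Interpreter.load("./models/nlu")
result = interpreter.parse(
    "We are arriving in Amsterdam and then taking the train further"
)

print(result["intent"])    # e.g. {"name": "transport_request", "confidence": ...}
print(result["entities"])  # both the city and the transport mode, with character spans
```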


Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com