IBM Watson Natural Language Understanding API (Part 1 of 2)
Enable Chatbots to handle longer, multiple intent user dialogs
The Medium Impacts the Message
When users interact with a chatbot via a mobile phone, the dialogs are often shorter and close to single intents. Once the intent is established it is usually easy to capture the different entities.
When interaction with the chatbot is via the web, or any modality other than a phone, the dialogs (messages) from the user tend to be longer. These longer messages often contain multiple intents, entities and sentences, with users sometimes sending entire paragraphs.
For most chatbot frameworks this is a nightmare. However, the Watson NLU API is ideal to handle these complex dialog instances. The chatbot can gauge the size of the dialog. Once a predefined threshold is exceeded, the dialog can be sent to the NLU API for parsing.
From here the dialog can be parsed into sentences and each sentence analyzed in turn. Think of the NLU API as a higher-order first pass prior to the hand-off to Watson Assistant.
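The routing step described above can be sketched as follows. The 40-word threshold, function names and the naive sentence splitter are illustrative assumptions, not part of any Watson API:

```python
import re

# Illustrative threshold: dialogs longer than this are routed to NLU first.
WORD_THRESHOLD = 40

def should_route_to_nlu(dialog: str, threshold: int = WORD_THRESHOLD) -> bool:
    """Gauge the size of the user dialog; long dialogs go to the NLU API first."""
    return len(dialog.split()) > threshold

def split_into_sentences(dialog: str) -> list:
    """Naive sentence splitter for a first pass; NLU's own parsing is more robust."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", dialog) if s.strip()]
```

A short utterance like "Book a room for tonight" would stay with the chatbot, while a multi-sentence paragraph would be split and sent to NLU sentence by sentence.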
Extract Categories
If the explanation option is set to true, explanations are returned for each categorization, as you can see in the image (Example 1). In an article to follow, custom categories will be addressed.
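A minimal sketch of the request body for the NLU analyze endpoint with the categories feature enabled; the sample text and the limit of three are illustrative assumptions:

```python
import json

# Request body for the NLU analyze endpoint, asking for categories with
# explanations. The input text is a made-up hotel review.
body = {
    "text": "The hotel room was spacious and the breakfast was excellent.",
    "features": {
        "categories": {
            "limit": 3,          # return at most three category matches
            "explanation": True  # include the words that drove each categorization
        }
    }
}

payload = json.dumps(body)
```

The same body shape works from Postman or any HTTP client; only the features object changes between the analyses covered below.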
Extract Concepts
High-level concepts can be extracted, and these concepts can be used to retrieve relevant, or likely relevant, information to return to the user.
Emotion
Detects anger, disgust, fear, joy, or sadness that is conveyed in the content or by the context around target phrases specified in the targets parameter.
Emotion can be detected pertaining to key words, and these key words can be tailored to the specific industry the chatbot serves. In the case of a hotel, key words might be room, bed, breakfast etc.
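One way to act on a targeted emotion result is to reduce it to the dominant emotion per key word. The response excerpt below is hand-written to mirror the documented response shape, not captured from the live API:

```python
# Illustrative excerpt of an NLU emotion response with targets; the
# scores are invented for the sake of the example.
sample_response = {
    "emotion": {
        "targets": [
            {"text": "bed", "emotion": {"anger": 0.41, "disgust": 0.22,
                                        "fear": 0.05, "joy": 0.03, "sadness": 0.18}},
            {"text": "breakfast", "emotion": {"anger": 0.02, "disgust": 0.01,
                                              "fear": 0.01, "joy": 0.87, "sadness": 0.04}},
        ]
    }
}

def dominant_emotions(response: dict) -> dict:
    """Map each target word to its strongest detected emotion."""
    return {t["text"]: max(t["emotion"], key=t["emotion"].get)
            for t in response["emotion"]["targets"]}
```

For a hotel chatbot, a result such as anger around "bed" but joy around "breakfast" could steer the follow-up dialog toward the complaint.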
Entities
Even on a good day, extracting entities is hard. Establishing the location of entities using a contextual search is ideal, but in instances where the context is not clear, or there is no prior setup, the Entities section of the NLU API can be used. The results are astoundingly accurate.
Example 4 shows a relatively complex sentence. As it stands today, few chatbots would be able to parse that utterance successfully, especially without any prior contextual training for spotting entities.
Often a finite list of entities is required to match the user's input against.
Examples 5 and 6 show the output of the query, and the success of the analysis is evident. The response JSON is also feature-rich, should there be a requirement to delve deeper into meaning or detail such as city backgrounds, personalities and the like.
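Matching the detected entities against that finite list can be a simple filter over the response. The response excerpt below is hand-written to follow the documented entities shape; the relevance scores are invented:

```python
# Illustrative excerpt of an NLU entities response; not live API output.
sample_response = {
    "entities": [
        {"type": "Person", "text": "Lionel Messi", "relevance": 0.96},
        {"type": "Location", "text": "Barcelona", "relevance": 0.74},
    ]
}

def entities_of_type(response: dict, wanted_types: set) -> list:
    """Filter extracted entities down to the types the chatbot cares about."""
    return [e["text"] for e in response["entities"] if e["type"] in wanted_types]
```

The filtered list can then be handed to Watson Assistant as pre-resolved entities, sparing the dialog skill from re-detecting them.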
Metadata
Metadata is only available when the input is HTML or a URL. For example, if the URL www.rasa.ai is entered, the return payload is as per Example 7.
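A sketch of the corresponding request body; the metadata feature takes no options, so an empty object enables it:

```python
import json

# Request body asking NLU to fetch a URL and return its page metadata.
body = {
    "url": "https://www.rasa.ai",
    "features": {"metadata": {}}
}

payload = json.dumps(body)
```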
Relations
Another powerful feature is the ability not only to detect entities, but also to detect whether a specific relation exists between them.
Should the user input the text “Lionel Messi won the award for the Golden Boot but no other awards were given.”, the NLU API will return the following.
A 59% certainty that an award was given to an entity identified by the type “person” and with the text “Lionel Messi”.
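In practice the chatbot would keep only relations above a confidence threshold. The response excerpt below is hand-written to mirror the Lionel Messi result described above (including the 0.59 score); it is not captured from the live API, and the relation and argument labels are illustrative:

```python
# Illustrative excerpt of an NLU relations response; not live API output.
sample_response = {
    "relations": [
        {
            "type": "awardedTo",
            "score": 0.59,
            "arguments": [
                {"entities": [{"type": "EntityType", "text": "Golden Boot"}]},
                {"entities": [{"type": "Person", "text": "Lionel Messi"}]},
            ],
        }
    ]
}

def confident_relations(response: dict, min_score: float = 0.5) -> list:
    """Keep only relations whose confidence meets the threshold."""
    return [r for r in response["relations"] if r["score"] >= min_score]
```

With a threshold of 0.5, the awardedTo relation above survives; raising the threshold to 0.6 would discard it.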
In a subsequent article the following analytics features will be covered:
- Semantic Roles
- Sentiment
- Syntax (Experimental)
Read more here…
Categories hierarchy: https://cloud.ibm.com/docs/services/natural-language-understanding?topic=natural-language-understanding-categories-hierarchy
Further NLU API Reading: https://cloud.ibm.com/apidocs/natural-language-understanding#metadata
Postman is a good tool to test your utterances: https://www.getpostman.com/