Three IBM Watson Assistant Features You Need To Know About

Ecosystem Enhancements To Help Your Chatbot Scale

Introduction

Here I look at a few features, some of them new, in the IBM Watson Assistant ecosystem. These features are not discussed or presented much, yet it is these elements that distinguish one environment from another and serve as building blocks for a greater future product.

Webhooks

A webhook is a mechanism for calling out to an external API from within your dialog. Normally, such an API call will sit somewhere external, divorced from your NLU environment.

Webhook Settings From The Options Menu in Watson Assistant

IBM Watson Assistant affords you the luxury of defining webhooks, or API calls, from within the Assistant itself. This allows for a neat, self-contained implementation of your chatbot.

Setting Webhooks in a Dialog Node

A few examples of what a webhook within your dialog can be used for (see the sketch after this list):

  • Validate information entered by the user.
  • Interface with external information sources to get data such as balances, payments or the weather.
  • Fulfil transactional requests, where a request from the user needs to be sent to a third-party environment.
  • Send SMS / text notifications, for example in the case of one-time PINs.
  • Interact with other IBM Cloud elements or functions.
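As a rough sketch of what sits on the other end of a webhook, below is a minimal Flask endpoint that a dialog node could call. The route, the account_id field and the balance lookup are my own illustrative assumptions; Watson Assistant simply POSTs the JSON you define in the node and makes whatever JSON this endpoint returns available back in the dialog.

```python
# Minimal, hypothetical webhook endpoint for a Watson Assistant dialog node.
# The URL, the "account_id" field and the balance lookup are illustrative
# assumptions; the assistant POSTs the JSON configured in the node and the
# JSON returned here flows back into the dialog.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a real data source (core banking system, CRM, weather API, ...).
FAKE_BALANCES = {"12345": 1042.50, "67890": 87.20}

@app.route("/assistant/balance", methods=["POST"])
def balance_webhook():
    payload = request.get_json(silent=True) or {}
    account_id = str(payload.get("account_id", ""))

    if account_id not in FAKE_BALANCES:
        # The dialog can check this flag and respond accordingly.
        return jsonify({"found": False, "message": "Unknown account"})

    return jsonify({"found": True, "balance": FAKE_BALANCES[account_id]})

if __name__ == "__main__":
    app.run(port=8080)
```

The returned fields can then be referenced in the node's response, typically via the result variable you name in the webhook settings of that node.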

Disambiguation

Options in Configuration of Disambiguation and Clarification

Disambiguation lets Watson Assistant ask the user to clarify what they meant when it cannot confidently choose between closely scoring intents. The feedback from the user can be used to improve the model in future and provides invaluable training data.

According to the IBM documentation, this feature is triggered under the following circumstances:

Disambiguation is triggered when the following conditions are met:

  • The confidence scores of the runner-up intents that are detected in the user input are close in value to the confidence score of the top intent.
  • The confidence score of the top intent is above 0.2.
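As a back-of-the-envelope illustration of these two conditions (not IBM's actual implementation; the closeness margin below is an assumed value, only the 0.2 floor comes from the documentation), the trigger logic amounts to something like this:

```python
# Hedged sketch of the disambiguation trigger described above.
# The 0.15 "closeness" margin is an assumption for illustration only;
# the 0.2 floor on the top intent comes from the IBM documentation quoted above.
def should_disambiguate(intents, closeness_margin=0.15, min_top_confidence=0.2):
    """intents: list of (intent_name, confidence) pairs, sorted by confidence, descending."""
    if not intents or intents[0][1] <= min_top_confidence:
        return False, []

    top_confidence = intents[0][1]
    close_runners_up = [
        name for name, confidence in intents[1:]
        if top_confidence - confidence <= closeness_margin
    ]
    return bool(close_runners_up), close_runners_up

# Two intents score almost the same, so the assistant should ask for clarification.
triggered, candidates = should_disambiguate(
    [("check_balance", 0.62), ("make_payment", 0.58), ("get_weather", 0.11)]
)
print(triggered, candidates)  # True ['make_payment']
```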

An Example Of Disambiguation And Asking For Clarification

Should a user opt for "None of the above", the intents identified from the user's input are canceled and the utterance is resubmitted.

Typically this will trigger the "anything else" node in your dialog tree, and here you can decide how to handle the query: a call-back (chat-back) might be applicable, or a real-time handover to a live agent, for example.

Auto Correction

Toggle Auto Correction On

IBM has developed an autocorrection model that takes into account the context of the full sentence, as well as the existing training within your chatbot, to take real-time (not predefined) action on each correction.

Simple Example Where the Word Computer Was Corrected

The same spelling error can be corrected in a different way depending on the contextual setting.

Importantly, words you used as training data in your dialog will not be corrected.

IBM claims that the accuracy of autocorrection in Watson Assistant consistently outperforms most open-source solutions.

Even with limited training data, the contextual awareness of Watson Assistant's spelling correction functionality impresses.

Context Is Used For Accurate Correction
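As a rough, hedged illustration of the rule that training words are left alone (this is not IBM's model, which corrects contextually rather than word by word), a naive correction pass that protects vocabulary seen in your training examples might look like this:

```python
# Naive, illustrative sketch only: any token that appears in the chatbot's
# training examples is protected from "correction". IBM's actual model is
# contextual and far more sophisticated; this merely shows why domain words
# such as product names survive autocorrection.
import difflib

TRAINING_EXAMPLES = [
    "open my savings account",
    "what is the balance on my cheque account",
]
DICTIONARY = {"computer", "balance", "account", "savings", "cheque", "open"}

training_vocab = {word for example in TRAINING_EXAMPLES for word in example.split()}

def autocorrect(text):
    corrected = []
    for word in text.split():
        if word in training_vocab or word in DICTIONARY:
            corrected.append(word)  # known word: leave untouched
        else:
            # Otherwise pick the closest dictionary word, if one is close enough.
            match = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=0.8)
            corrected.append(match[0] if match else word)
    return " ".join(corrected)

print(autocorrect("whats the balanse on my chequ account"))
# -> "whats the balance on my cheque account"
```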

Conclusion

Once your prototype is up and running, you can iterate on it with a measured approach to understand the impact of each change.

Photo by Nick Carter on Unsplash
