Three IBM Watson Assistant Features You Need To Know About
When building a prototype or an MVP, most chatbot or NLU environments will suffice. It is only once you are further down the road with your product, and run into more complex customer utterances and use cases you had not previously imagined, that your environment of choice is truly tested.
Here I look at a few features, some new, in the IBM Watson Assistant ecosystem. These features are not discussed or presented much. However, it is these elements which distinguish environments and serve as building blocks for a greater future product.
We all know what a webhook is, right? It is a mechanism which allows you to call an external program via an API based on an event in your chatbot.
Normally, this API call will sit somewhere externally, divorced from your NLU environment.
IBM Watson Assistant affords you the luxury of defining webhooks or API calls from within the Assistant itself. This allows for a neat, self-contained implementation of your chatbot.
A few examples of what a webhook within your dialog can be used for:
- Validate information entered by the user.
- Interface with external information sources to get data such as balances, payments, or the weather.
- Transactional requests, where a request from the user needs to be sent to a third-party environment for fulfillment.
- SMS / text notifications, for example one-time PINs.
- Interact with other IBM Cloud elements or functions.
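To make the first two list items concrete, here is a minimal sketch of the server-side handler a dialog webhook might POST to. The field names (`account_id`, `balance`) and the in-memory account store are illustrative assumptions, not Watson's actual payload schema:

```python
import json

# Stand-in for a real banking back end; illustrative data only.
ACCOUNTS = {"12345": 150.75}

def handle_webhook(request_body: str) -> str:
    """Parse the webhook POST body and return a JSON response that
    the dialog node could read back into context variables."""
    payload = json.loads(request_body)
    account_id = payload.get("account_id")
    if account_id not in ACCOUNTS:
        # The dialog can branch on this error field.
        return json.dumps({"error": "unknown account"})
    return json.dumps({"account_id": account_id,
                       "balance": ACCOUNTS[account_id]})
```

In a real deployment this function would sit behind an HTTPS endpoint that the webhook is configured to call; the dialog node then maps the returned JSON fields into its response.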
Disambiguation allows the chatbot to ask the user for help when more than one dialog node can apply to the user's query. Instead of assigning its best guess to the user's input, the chatbot can present the top candidate nodes. When there is ambiguity, the decision is deferred to the user.
The feedback from the user can be used to improve the model in the future and provides invaluable training data.
According to the IBM documentation, disambiguation is triggered when the following conditions are met:
- The confidence scores of the runner-up intents that are detected in the user input are close in value to the confidence score of the top intent.
- The confidence score of the top intent is above 0.2.
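The two conditions above can be sketched as a small predicate. The `margin` value for "close in value" is an assumption for illustration; IBM does not publish the exact threshold it uses:

```python
def should_disambiguate(confidences, margin=0.05, floor=0.2):
    """Return True when disambiguation would fire, per the two documented
    conditions: the top intent is above the floor, and at least one
    runner-up is close to it. `margin` is an assumed value."""
    scores = sorted(confidences, reverse=True)
    if not scores or scores[0] <= floor:
        return False  # top intent too weak to disambiguate
    # Fire only if a runner-up is within `margin` of the top score.
    return len(scores) > 1 and scores[0] - scores[1] <= margin
```

For example, scores of `[0.62, 0.60]` would trigger disambiguation, while `[0.62, 0.30]` (a clear winner) and `[0.15, 0.14]` (top intent below the floor) would not.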
Should a user opt for "None of the above", the intents identified from the user's input are canceled and the utterance is resubmitted.
Typically this will trigger the "anything else" node in your dialog tree, and here you can decide how to handle the query: a call-back (chat-back) might be applicable, or a real-time handover to a live agent.
Autocorrection fixes misspelled words in user utterances. Once autocorrection is enabled, Watson automatically corrects the misspelled words, and the corrected text is then used to determine the intents and entities of the user input. Improving the accuracy of the user's input enables the chatbot to respond more accurately.
IBM has developed an autocorrection model that takes into account the full sentence context and the existing training data in your chatbot, taking real-time (not predefined) action on each correction.
The same spelling error can be corrected in a different way depending on the contextual setting.
Importantly, words you used as training data in your dialog will not be corrected.
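As a toy illustration of that one rule (trained words pass through untouched), here is a naive dictionary-based corrector. This is emphatically not Watson's model, which weighs the whole sentence contextually; it only shows how a training-data allowlist shields domain terms from correction:

```python
import difflib

def autocorrect(utterance, dictionary, trained_words):
    """Naive spell-correction sketch: words in the training data or the
    dictionary pass through; anything else is mapped to its closest
    dictionary entry. Purely illustrative, not Watson's algorithm."""
    corrected = []
    for word in utterance.split():
        lower = word.lower()
        if lower in trained_words or lower in dictionary:
            corrected.append(word)  # trained or valid words are never touched
        else:
            match = difflib.get_close_matches(lower, dictionary, n=1)
            corrected.append(match[0] if match else word)
    return " ".join(corrected)
```

With `"acct"` present in the training data, an input like `"chekc my balanse acct"` would have the first and third words corrected while `"acct"` survives unchanged.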
IBM claims that the accuracy of autocorrection in Watson Assistant consistently outperforms most open-source solutions.
Even with limited training data, the contextual awareness of Watson Assistant's spelling-correction functionality does impress.
The best way to get to grips with many of these terms and environments is to register an account for yourself and start by building a small prototype. Testing your prototype within Watson is very convenient.
Once your prototype is up and running, you can iterate on it with a measured approach to understand the impact of each change.
Hi, I'm Cobus... Currently I conceptualize, mock-up, wire-frame, prototype and develop final products. Primarily…