How To Create Actions With IBM Watson Assistant

Get To Grips With The New Actions Skill In Watson Assistant

Introduction

A new feature called Actions has been added to IBM Watson Assistant. It allows users to develop dialogs rapidly.

Testing Actions via the Preview Option

The approach taken with Actions is decidedly non-technical. The interface is intuitive and requires virtually no prior development knowledge or training. User input variables (entities) are picked up automatically with a descriptive reference.

Conversational steps can be re-arranged and moved freely to update the flow of the dialog.

Updates are saved automatically, and machine learning takes place in the background.

And the application (action) can be tested in a preview pane.

Something about Actions reminds me of Microsoft's Power Virtual Agents interface. The same general idea is there, but with Watson the interface is simpler and more minimalistic, and perhaps a more natural extension of the current functionality.

  • You can think of an action as an encapsulation of an intent. Or the fulfillment of an intent.
  • An action is a single conversation to fulfill an intent and capture the entities.
  • A single action is not intended to stretch across multiple intents or be a horizontally focused conversation.
  • Think of an action as a narrow vertical and very specific conversation.
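The bullet points above can be sketched in code. This is purely illustrative — Actions are configured in the Watson Assistant UI, not written in Python — and the `Action` class and its methods here are hypothetical names of my own, but the sketch captures the idea of one narrow conversation fulfilling one intent and capturing its entities:

```python
from dataclasses import dataclass, field

# Hypothetical model of an Action: one intent, one narrow conversation.
@dataclass
class Action:
    intent: str                                   # the single intent this action fulfills
    steps: list = field(default_factory=list)     # ordered conversational steps
    entities: dict = field(default_factory=dict)  # values captured from the user

    def capture(self, entity, value):
        """Store a value the user supplied for one of the action's entities."""
        self.entities[entity] = value

    def is_fulfilled(self, required):
        """The action completes once every required entity is captured."""
        return all(e in self.entities for e in required)

# A vertical, very specific conversation: opening a bank account.
action = Action(intent="open_account",
                steps=["ask account type", "ask opening deposit"])
action.capture("account_type", "savings")
action.capture("deposit", "1000")
print(action.is_fulfilled(["account_type", "deposit"]))  # True
```

Note that the action deliberately does not stretch across a second intent; a balance enquiry, for example, would be a separate `Action` of its own.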

How To Create An Action

The best way to get to grips with Actions is to create your very first skill and have a conversation.

Click on Skills and select the very top option, Actions skill.

The three skill types available in IBM Watson Assistant: Actions, Dialog & Search.

We do not have a skill to import, so we choose to create a skill. For this example we name it BankingApplication and add a short, optional description.

You will also see the list of available languages. This is obviously an impediment if you want to create a skill in a minority or vernacular language.

Creating the Actions skill by defining the name etc.

Next you add example phrases which Watson Assistant uses to train a model that determines, based on user input, when to invoke your Action.

Adding example phrases to invoke an Action.
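To make the idea concrete, the following sketch approximates what the trained model does. Watson Assistant builds a proper machine-learning classifier from your example phrases; here I simply use string similarity from Python's standard library as a stand-in, with invented example data — a rough analogy, not the actual mechanism:

```python
import difflib

# Hypothetical example phrases, keyed by the action they should invoke.
EXAMPLES = {
    "open_account": ["I want to open an account", "new savings account please"],
    "check_balance": ["what is my balance", "how much money do I have"],
}

def invoke(user_input):
    """Return the action whose example phrases best match the input, if any."""
    best, best_score = None, 0.0
    for action, phrases in EXAMPLES.items():
        for phrase in phrases:
            score = difflib.SequenceMatcher(
                None, user_input.lower(), phrase.lower()).ratio()
            if score > best_score:
                best, best_score = action, score
    # Below a similarity threshold, no action is invoked.
    return best if best_score > 0.5 else None

print(invoke("I'd like to open an account"))  # open_account
```

The real model generalizes far beyond surface similarity, which is exactly why you supply varied example phrases rather than exhaustive ones.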

Subsequently you start building the conversational steps in detail.

Building the Conversation

The next step is defining the user input options. User input can be constrained to a large extent, giving you a higher degree of control over the conversation.

Building the way users can respond.
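Constraining input amounts to accepting only replies that match a defined set of options. A minimal sketch of that idea, with invented option values (the Actions UI does this visually, not in code):

```python
# Hypothetical fixed set of allowed responses for one conversational step.
OPTIONS = ["checking", "savings", "credit"]

def constrain(reply, options=OPTIONS):
    """Return the matching option, or None if the reply is not allowed."""
    reply = reply.strip().lower()
    return reply if reply in options else None

print(constrain("Savings"))   # savings
print(constrain("bitcoin"))   # None
```

When the reply falls outside the options, the assistant can re-prompt rather than guess, which is the control the constraint buys you.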

The chatbot’s response can be edited by means of drag-and-drop to customize the input presentation to the user.

Edit the order in which input options are presented to the user.

You can add conditions to a conversational step which must be fulfilled before it runs. As you can see, even this is in a very human-readable format.

Adding Conditions to a Conversation Step or Event
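A step condition is essentially a readable comparison against a captured variable. The sketch below mimics that with a hypothetical `step_runs` helper and made-up operator phrasing; Watson Assistant expresses the same thing visually:

```python
# Hypothetical evaluation of a human-readable step condition,
# e.g. "deposit is greater than 500".
def step_runs(variables, name, op, value):
    """Return True if the captured variable satisfies the condition."""
    ops = {
        "is": lambda a, b: a == b,
        "is greater than": lambda a, b: a > b,
        "is less than": lambda a, b: a < b,
    }
    return name in variables and ops[op](variables[name], value)

captured = {"deposit": 1000}
print(step_runs(captured, "deposit", "is greater than", 500))  # True
```

If the variable has not been captured yet, the condition fails and the step is skipped, which mirrors how a conditional step waits for its prerequisite input.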

Lastly, you can test your Action on the fly as you develop it, and adjustments can be made via the drag-and-drop interface.

Conclusion

The concept of Actions is astute, and the way it is introduced to Watson Assistant fully complements the current development environment.

There is no disruption or rework required of any sort.

Actions democratizes the development environment, allowing designers to also create conversational experiences, again without disrupting the status quo.

Actions used as intended will advance any Watson Assistant implementation.

But I hasten to add this caveat: Actions implemented in ways they were not intended will impede scaling and leveraging user input in terms of intents and entities.

NLP/NLU, Chatbots, Voice, Conversational UI/UX, CX Designer, Developer, Ubiquitous User Interfaces. www.cobusgreyling.me