Photo by Ståle Grut on Unsplash

Building A Chatbot Using Actions In IBM Watson Assistant

Actions Are New In Watson Assistant… This Is What I Have Learnt

Cobus Greyling
7 min readNov 3, 2020


Introduction

A new feature has been added to IBM Watson Assistant called Actions. This new feature allows users to develop dialogs rapidly.

Moving Steps In An Action To Alter The Sequence Of Conversational Events

The approach taken with Actions is decidedly non-technical. The interface is intuitive and requires virtually no prior development knowledge or training. User input (entity) variables are captured automatically and given a descriptive reference.

Conversational steps can be re-arranged and moved freely to update the flow of the dialog.

Updates are saved automatically, and machine learning takes place in the background.

And the application (action) can be tested in a preview pane.

Something about Actions reminds me of Microsoft’s Power Virtual Agents interface. The same general idea is there, but with Watson the interface is simpler and more minimalistic, and perhaps a more natural extension of the current functionality.

  • You can think of an action as an encapsulation of an intent. Or the fulfillment of an intent.
  • An action is a single conversation to fulfill an intent and capture the entities.
  • A single action is not intended to stretch across multiple intents or be a horizontally focused conversation.
  • Think of an action as a narrow vertical and very specific conversation.
Here you see a single Actions skill called BankBalance with two actions listed under it.

How To Use Actions

Firstly, Actions should be seen as another type of skill to complement the two existing skills:

  • dialog skills and
  • search skills.
Options when creating a skill for an assistant: search skill, dialog skill or actions skill.

Actions must not be seen as a replacement for dialogs.

Secondly, actions can be used as a standalone implementation for very simple applications. Such implementations may include customer satisfaction surveys, customer or user registration and the like: short, specific conversations.

Thirdly, and most importantly, actions can be used as a plugin or supporting element to dialog skills.

Of course, your assistant can run 100% on Actions, but this is highly unlikely, or at least not advisable.

The best implementation scenario is one where the backbone of your assistant is constituted by one or more dialog skills, and Actions are used to enhance certain functionality within the dialog, much like a search skill.

Due to the friendly interface, this approach allows business units to develop their own actions, which can subsequently be plugged into a dialog.

Setting up a dialog node to call an Action skill.

This approach is convenient if you have a module which changes on a regular basis, but you want to minimize impact on a complex dialog environment.

Within a dialog node, a specific action that is linked to the same Assistant as this dialog skill can be invoked. The dialog skill is paused until the action is completed.

An action can also be seen as a module which can be used and reused from multiple dialog threads.

When adding actions to a dialog skill, consideration needs to be given to the invocation priority.

Within the dialog, if the dialog skill’s intent is #Balance, invoke an action skill with a return variable.

If you add only an actions skill to the assistant, the action skill starts the conversation. If you add both a dialog skill and actions skill to an assistant, the dialog skill starts the conversation. And actions are recognized only if you configure the dialog skill to call them.

A conversation scenario where both Dialog and Actions Skills are employed.

Fourthly, if you are looking for a tool to develop prototypes, demos or proof of concepts, Actions can stand you in good stead.

Mention needs to be made of the built-in constrained user input, where options are presented to the user. This more structured input plays to the strengths of Actions.
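To make this concrete, here is a minimal, hypothetical sketch of what such a constrained “option” response looks like in the Watson Assistant v2 message API output, and how a client might render it. The title and labels below are made up for illustration.

# Hypothetical example of the "option" response type that constrained
# input produces in output['generic'] of a Watson Assistant v2 message
# response; the title and labels are invented for illustration.
option_response = {
    "response_type": "option",
    "title": "Which account?",
    "options": [
        {"label": "Cheque", "value": {"input": {"text": "Cheque"}}},
        {"label": "Savings", "value": {"input": {"text": "Savings"}}},
    ],
}

def render(generic_item: dict) -> None:
    """Print a text or option item from output['generic'] as a simple menu."""
    if generic_item["response_type"] == "text":
        print(generic_item["text"])
    elif generic_item["response_type"] == "option":
        print(generic_item.get("title", ""))
        for opt in generic_item["options"]:
            print(" -", opt["label"])

render(option_response)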

Disambiguation between actions within an Actions skill is possible and can be toggled on or off. This is very handy functionality, and it should address intent conflicts to a large extent.

System actions are available and these are bound to grow.

How NOT To Use Actions

It does not seem sensible to build a complete digital assistant / chatbot with actions. Or at least not as a standalone conversational interface. There is this allure of rapid initial progress and having something to show. However, there are a few problems you are bound to encounter.

A conversation built making use of Actions with conditional checks and re-prompts where the condition fails.

Conversations within an action are segmented or grouped according to intents. Should there be intent conflicts or overlaps, inconsistencies can be introduced to the chatbot.

Entity management is not as strong within Actions as it is with Dialog skills. Collection of entities with a slot filling approach is fine.

But for more advanced conversations, where entities need to be defined and detected contextually, Actions will not suffice. Compound entities per user utterance will also pose a challenge.

Compound intents, or multiple intents per user utterance, are problematic.

If you are used to implementing conversational digression, Actions will not suffice.

Positives

  • Conversational topics can be addressed in a modular fashion.
  • Conversational steps can be dynamically ordered by drag and drop.
  • Collaboration
  • Variable management is easy and conversational from a design perspective.
  • Conditions can be set.
  • Complexity is masked and simplicity is surfaced.
  • Design and Development are combined.
  • Integration with current solutions and developed products
  • Formatting of conversational presentation.

Negatives

  • If used in isolation, scaling impediments will be encountered.
  • Still a state machine approach.
  • Linear design interface.

How To Create An Action

The best way to get to grips with Actions is to create your very first skill and have a conversation.

You can click on Skills and select the very top option, Actions skill.

The three skill types available in IBM Watson Assistant; Actions, Dialog & Search.

We do not have a skill to import, so we choose to create a new skill. For this example we give it the name BankingApplication. A short, optional description is added.

You will also see the list of available languages. This is obviously an impediment if you want to create a skill for a minority vernacular language.

Creating the Actions skill by defining the name etc.

Next, you add example phrases which Watson Assistant uses to train a model, so that it knows, based on the user input, when to invoke your action.

Adding example phrases to invoke an Action.
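Actions are authored entirely in the UI, but the underlying idea is the same as intent training examples. For readers who script their skills, a rough sketch of the analogous step for a dialog skill, using the ibm-watson Python SDK, could look like the following; the API key, service URL, workspace ID, intent name and example phrases are all placeholders.

# Sketch only: Actions are built in the UI, but the same idea of training
# phrases driving recognition can be scripted for a dialog skill.
# The API key, service URL and workspace_id below are placeholders.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')
assistant = AssistantV1(version='2020-04-01', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# Create an intent with example phrases, analogous to the phrases
# added to invoke an action.
assistant.create_intent(
    workspace_id='YOUR_WORKSPACE_ID',
    intent='Check_Balance',
    description='User wants to check an account balance',
    examples=[
        {'text': 'What is my balance?'},
        {'text': 'How much money do I have in my account?'},
        {'text': 'Check my savings balance'},
    ],
)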

Subsequently, the process of building the conversational steps, with their detail, starts.

Building the Conversation

The next step is defining the user input options. User input can be constrained to a large extent to gain a higher degree of control over the conversation.

Building the way users can respond.

The chatbot’s response can be edited by means of drag-and-drop to customize how the input options are presented to the user.

Edit the order in which input options are presented to the user.

You can add conditions to a conversational step which must be fulfilled for the step to run. As you can see, even this is in a very human-readable format.

Adding Conditions to a Conversation Step or Event

Lastly, you can test your action on the fly as you develop it, and adjustments can be made via the drag-and-drop interface.

Testing Actions via the Preview Option
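Beyond the preview pane, the assistant can also be exercised programmatically. Below is a minimal sketch using the Watson Assistant v2 runtime API; the API key, service URL and assistant ID are placeholders, and the utterance is just an example.

# Minimal sketch of testing the assistant programmatically instead of
# via the preview pane; API key, service URL and assistant ID are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')
assistant = AssistantV2(version='2020-09-24', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# Each test conversation runs in its own session.
session = assistant.create_session(assistant_id='YOUR_ASSISTANT_ID').get_result()
session_id = session['session_id']

# Send a test utterance; the assistant routes it to the matching action or dialog.
response = assistant.message(
    assistant_id='YOUR_ASSISTANT_ID',
    session_id=session_id,
    input={'message_type': 'text', 'text': 'I would like to check my balance'}
).get_result()

# Print whatever the action (or dialog) returns as text.
for item in response['output']['generic']:
    if item['response_type'] == 'text':
        print(item['text'])

assistant.delete_session(assistant_id='YOUR_ASSISTANT_ID', session_id=session_id)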

Conclusion

The concept of Actions is astute and the way it is introduced to Watson Assistant 100% complements the current development environment. There is no disruption or rework required of any sort.

Actions democratizes the development environment, allowing designers to also create a conversational experience, again without disrupting the status quo.

Actions used as intended will advance any Watson Assistant implementation.

But I hasten to add this caveat: Actions implemented in ways they were not intended will lead to impediments in scaling and in leveraging user input in terms of intents and entities.

Read More Here…


Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com