How To Set Up IBM Watson Assistant For Customer Effort Extraction
And Add Data Visualization With Jupyter Notebooks
Introduction
What is Customer Effort, and how can it be measured from chatbot conversations?
And, how can Disambiguation improve Customer Effort?
Also, can Automatic Learning be employed to improve Customer Effort over time?
Below you will find an explanation of what Customer Effort is, along with a complete how-to guide on extracting Customer Effort from your IBM Watson Assistant chatbot.
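To make the idea concrete before the setup starts: think of each disambiguation event as carrying a small effort penalty that grows the further down the suggestion list the user has to click, and that peaks when the user picks None of the above. The snippet below is only an illustrative sketch of that intuition; the weights are invented for this article and are not the formula the Watson Assistant measurement notebook uses.

# Illustrative only: a toy effort score for a single disambiguation event.
# The weights are invented for illustration and are NOT IBM's formula.

def toy_effort_score(clicked_position: int, none_of_the_above: bool) -> float:
    """Return a 0-100 effort score for one disambiguation click.

    clicked_position  -- 1-based position of the option the user selected
    none_of_the_above -- True if the user rejected every suggestion
    """
    if none_of_the_above:
        return 100.0  # maximum effort: none of the suggestions matched
    # Picking the first option costs nothing; deeper clicks cost more.
    return min(100.0, (clicked_position - 1) * 25.0)

print(toy_effort_score(1, False))  # 0.0   - first suggestion, no extra effort
print(toy_effort_score(3, False))  # 50.0  - user had to scan further down
print(toy_effort_score(1, True))   # 100.0 - nothing matched at all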
Setting Up Your Skill
After you have created your skill, you will have to link it to an assistant to bring all the components together.
Once you have created the skill, go to Options within the IBM Watson Assistant console. Here you need to activate Disambiguation. You can set the disambiguation message, the closing or last option, and how many suggested options you want to present.
Subsequently, Autolearning needs to be activated. This is where you reference the Assistant and not the skill, but linking the skill to an assistant is merely an administrative task.
Autolearning can be activated with one click.
A Simple Prototype
Here is a look at the simple prototype skill I built to test customer effort. Firstly, there are four intents, all related to banking. I have made them very similar on purpose to force ambiguity and hence activate the disambiguation function.
Next, a basic structure is defined for the dialog. Each of the intents has a dialog node assigned to it.
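If you would rather script the prototype than click through the console, the same intents and dialog nodes can be created with the ibm-watson Python SDK. This is only a sketch: the apikey, service URL, workspace ID, intent name, examples, and node text below are placeholders made up for illustration.

# Sketch: define a banking intent and its dialog node via the AssistantV1 API
# instead of the console. The apikey, URL, and workspace ID are placeholders.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_APIKEY')
assistant = AssistantV1(version='2021-06-14', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

WORKSPACE_ID = 'YOUR_SKILL_WORKSPACE_ID'

# One intent per banking task; the examples are deliberately similar to
# those of the other intents so that disambiguation is triggered.
assistant.create_intent(
    workspace_id=WORKSPACE_ID,
    intent='pay_loan',
    examples=[{'text': 'I want to pay my loan'}, {'text': 'settle my loan'}]
)

# A dialog node that fires on that intent.
assistant.create_dialog_node(
    workspace_id=WORKSPACE_ID,
    dialog_node='node_pay_loan',
    conditions='#pay_loan',
    title='Pay',
    output={'generic': [{'response_type': 'text',
                         'values': [{'text': 'Let us pay that loan.'}]}]}
)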
The next step is to engage with your bot and create some conversational data. I deliberately kept entering ambiguous utterances to force the disambiguation menu, but I was very consistent in which option I selected for each utterance.
For example, I entered the utterance paying loan and kept selecting the Pay item in the menu. Eventually this item moved up the list to become the first option. So this is one case where I improved (artificially, at that) the customer effort required to get to the Pay option.
But the idea was to create data we can work with.
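Instead of typing into the try-out panel, you can also generate this traffic programmatically. The sketch below uses the AssistantV2 message API from the ibm-watson Python SDK; the apikey, service URL, and assistant ID are placeholders, and whether a given utterance actually triggers disambiguation depends on how ambiguous it is for your skill.

# Sketch: send deliberately ambiguous utterances to the assistant to
# generate disambiguation events in the logs. Apikey, URL, and assistant
# ID are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_APIKEY')
assistant = AssistantV2(version='2021-06-14', authenticator=authenticator)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

ASSISTANT_ID = 'YOUR_ASSISTANT_ID'
session_id = assistant.create_session(
    assistant_id=ASSISTANT_ID).get_result()['session_id']

for utterance in ['paying loan', 'pay', 'loan payment']:
    response = assistant.message(
        assistant_id=ASSISTANT_ID,
        session_id=session_id,
        input={'message_type': 'text', 'text': utterance}
    ).get_result()
    # When disambiguation triggers, the response contains a 'suggestion'
    # generic item listing the options shown to the user.
    for item in response['output']['generic']:
        print(utterance, '->', item.get('response_type'))

assistant.delete_session(assistant_id=ASSISTANT_ID, session_id=session_id)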
Next you can open a Jupyter notebook in a browser…
Select the Classic Notebook option
Then you can open this GitHub page in a separate tab to copy and paste the commands.
Once you have pasted a section of code, click on Run. Everything should run successfully. Do not run chunks of code that are too large…
The only difficulty you might have is connecting to your Assistant’s API settings. Follow the format shown in the image below. Finding the apikey and other IDs within the Watson Assistant console is straightforward.
If you see the text below, you have successfully connected to your assistant.
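If you want to double-check your credentials outside the notebook, a minimal sketch like the one below works with the standard ibm-watson AssistantV1 client; the apikey, service URL, and skill (workspace) ID are placeholders copied from the Watson Assistant console, and list_logs is used here only as a quick sanity check that the credentials are valid.

# Sketch: verify the credentials by listing a page of skill logs.
# Apikey, URL, and workspace (skill) ID are placeholders.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_APIKEY')
sdk_object = AssistantV1(version='2021-06-14', authenticator=authenticator)
sdk_object.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

WORKSPACE_ID = 'YOUR_SKILL_ID'

logs = sdk_object.list_logs(workspace_id=WORKSPACE_ID, page_limit=5).get_result()
print('Fetched', len(logs['logs']), 'log events')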
Code for extracting the disambiguation logs from the 8 conversations…
disambiguation_utterances = extract_disambiguation_utterances(df_formatted)
And the result:
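Before moving on to the charts, you can optionally take a quick look at what was extracted. In my runs the object returned above behaves like a pandas DataFrame, so the usual inspection calls apply; treat this as an assumption about the helper, not part of the official notebook.

# Assumption: extract_disambiguation_utterances() returns a pandas DataFrame.
# Quick sanity check of its size and columns before plotting.
print(disambiguation_utterances.shape)
print(disambiguation_utterances.columns.tolist())
disambiguation_utterances.head()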
Clicks versus effort and the effect of auto learning….
show_click_vs_effort(disambiguation_utterances, interval='5-minute')
And the result:
Nodes yielding the highest customer effort…
show_top_node_effort(disambiguation_utterances, top=10, assistant_nodes=assistant_nodes)
And the result:
Customer effort per dialog node over time…
show_node_effort(disambiguation_utterances, assistant_nodes, interval='5-minute')
And the result:
Select an utterance for Customer Effort…
show_input_effort(disambiguation_utterances, top=20, interval='5-minute')
And the result:
Conversational dialog nodes heat map…
# Select nodes appearing in the top co-occurred node pair
TOP_N_NODE_PAIR = 30
selected_nodes = pd.unique(top_confused_pairs.head(TOP_N_NODE_PAIR)[['Node A', 'Node B']].values.ravel())
selected_matrix = cooccurrence_matrix[selected_nodes].loc[selected_nodes]

# Select all nodes
# selected_matrix = cooccurrence_matrix

show_cooccured_heatmap(selected_matrix)
And the result:
Conclusion
There are more commands you can experiment with, and you can also create your own or edit the existing ones.
Combining these three elements:
- Disambiguation
- Automatic Learning
- Customer Effort
makes for measured, user-focused, continuous improvement of the user experience.