Rasa Debunks Five Myths Of Machine Learning & Chatbots

There Are Commonly Held Beliefs When It Comes To Implementing Machine Learning…

7 min read · Aug 26, 2020

Introduction

When it comes to Artificial Intelligence and Machine learning one key area of implementation is Conversational AI.

Enabling and automating conversational channels in an organization by means of AI is much easier than many people believe.

So what is the reason for these perceived impediments to implementing Conversational AI in an organization?

Emergence and Development of Conversational UIs

I believe there are a few impeding myths which still loom large in many minds.

In some cases these myths have been established by popular video, image and text analysis projects.

These projects demand GPU access and huge datasets, and often must be run in cloud environments.

Below I list five commonly held beliefs when it comes to Conversational AI. And how, using the Rasa platform, these myths can be debunked.

Myth One: You Need Huge Datasets

There is a belief that you require large and very specific datasets to get started. Or that a feasibility study is required which runs over months. And does this data actually exist?

And how do you even format and process these vast amounts of data?

Starting With The Basics

The three words used most often in relation to chatbot data are:

  • Utterances,
  • Intents and
  • Entities.

An utterance is really anything the user will say. The utterance can be a sentence, in some rare instances a few sentences, or just a word or a phrase. During the design phase, you try to anticipate what your users might say to your bot.

Example Intent & Entities

An Intent is the user’s intention with their utterance, or engagement with your bot. Think of intents as verbs, or working words. An utterance or single dialog from a user needs to be distilled into an identifiable intent.

Entities can be seen as nouns; often they are referred to as slots. These are usually things like dates, times, cities, names, brands etc. Capturing these entities is crucial for taking action based on the user’s intent.

Think of a travel bot: capturing the cities of departure and destination, travel mode, price, dates and times is at the foundation of the interface. Yet this is the hardest part of the chatbot. Keep in mind the user enters data randomly and in no particular order.

The ideal is to capture all the entities presented by the user in one go. It is not desirable to ask for information the user has already given.
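As a toy illustration of capturing several entities from one utterance regardless of order, consider the sketch below. This is not Rasa’s extractor; the entity names and regex patterns are hypothetical examples for a travel bot, and real extractors are far more robust.

```python
import re

# Hypothetical entity patterns for a travel bot (illustration only,
# not Rasa's actual entity extraction).
PATTERNS = {
    "departure": re.compile(r"from ([A-Z]\w+)"),
    "destination": re.compile(r"\bto ([A-Z]\w+)"),
    "date": re.compile(r"on (\d{1,2} \w+)"),
}

def extract_entities(utterance: str) -> dict:
    """Capture every entity present in the utterance, in any order."""
    entities = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            entities[name] = match.group(1)
    return entities
```

For example, `extract_entities("Book me from London to Paris on 3 September")` captures the departure, destination and date in one pass, so none of them has to be asked for again.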

Training Data

The best source of training data is existing customer conversations. These can be gleaned from current live agent chat conversations, or from call centre conversations, which are often automatically transcribed. Any other electronic communication can also yield valuable data.

Alternatively, focus groups with agents can be conducted to collect background on customer conversations.

Example of Intents, Entities & Compound Entities

These conversations can be segmented into different intents, and within these intents the different entities can be identified. The compilation and preparation of this data is simple and transparent.

## intent:bank_transfer
- I want to transfer [R100](amount) from my [savings](from_account) to my [credit card](to_account)
- Can I transfer [R 2000](amount) from the [bond account](from_account) to the [credit card](to_account)
- Can we move [R 50,000](amount) to my [credit card](to_account) from my [savings account](from_account)
- Let's move [R100](amount) from [savings](from_account) to [investment account](to_account)
- Let's transfer [R1000](amount) from my [savings](from_account) to my [investment](to_account)
- Take money from [savings](from_account) and place it in my [credit card](to_account), for the amount of [R 100](amount)
- Transfer [R100](amount) from my [savings](from_account) to my [credit card](to_account)
- Move [R100](amount) from my [savings](from_account) to my [credit card](to_account)
- move money from [savings](from_account) and place it in my [credit card](to_account), for the amount of [R 100](amount)
- Allocate [R100](amount) from my [savings](from_account) to my [credit card](to_account)
- I want to transfer [R100](amount) from my [savings](from_account) to my [credit card](to_account)

You can typically start with a minimum of 15 to 20 example utterances per intent, and small training iterations can be maintained. So you can add training data, test, tweak and retest.
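A quick sanity check on training-data volume can be automated. The sketch below is a hypothetical helper, assuming the Rasa 1.x markdown training-data format shown above; it counts example utterances per intent and flags any intent below the 15-example guideline.

```python
# Sketch: count example utterances per intent in Rasa markdown-format
# training data, and flag under-resourced intents. Assumes the
# "## intent:" / "- example" layout shown earlier in the article.
def count_examples(nlu_markdown: str, minimum: int = 15) -> dict:
    counts = {}
    current = None
    for line in nlu_markdown.splitlines():
        line = line.strip()
        if line.startswith("## intent:"):
            current = line[len("## intent:"):]
            counts[current] = 0
        elif line.startswith("- ") and current is not None:
            counts[current] += 1
    for intent, n in counts.items():
        if n < minimum:
            print(f"intent '{intent}' has only {n} examples; aim for {minimum}+")
    return counts
```

Running this after each data-collection pass makes the add, test, tweak and retest loop more deliberate.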

Chatbots typically address a very narrow domain, and you can also segment your chatbot development. I would start the process with just the NLU component and its training data. From here you can train, test, tweak and retest.

Intent & Entities Breakdown From User Utterance

After training your NLU model, you can test this portion in isolation. From the entered text you can see the intent and the entities extracted from your utterance.

You will see the identified intent name and the confidence score. The detected entities are also listed with their start and end positions. The entity name, value and extractor are listed.
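The parse result has roughly the shape sketched below, following the fields just described (intent name with confidence, entities with start/end positions, value and extractor). The concrete values here are made-up examples, not real model output.

```python
# Illustrative shape of an NLU parse result; the values are invented
# for the bank_transfer example, not actual Rasa output.
parse_result = {
    "text": "I want to transfer R100 from my savings to my credit card",
    "intent": {"name": "bank_transfer", "confidence": 0.97},
    "entities": [
        {
            "entity": "amount",
            "value": "R100",
            "start": 19,   # character offset where the entity begins
            "end": 23,     # offset just past the entity
            "extractor": "DIETClassifier",
        }
    ],
}
```

The start/end offsets let you slice the entity text straight out of the utterance, which is handy when honing training data.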

This view helps you to really hone your training data and pipeline through a process of iterative testing.

From this it is evident that compiling training data is more of an administrative task.

Myth Two: Training Takes A Long Time

Training is really fast and efficient, and with all the configurations and components in place it is a one-command process.
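That one command is `rasa train nlu`. Wrapped in Python purely for illustration, it looks like the sketch below; the config and data paths are assumptions based on Rasa’s default project layout.

```python
import subprocess

# Build the single training command. Paths are assumed defaults from
# a standard Rasa project; running `rasa train nlu` in the project
# root does the same thing.
def build_train_command(config="config.yml", nlu_data="data/nlu.md"):
    return ["rasa", "train", "nlu", "--config", config, "--nlu", nlu_data]

def train():
    # Runs the training process and raises if it fails.
    subprocess.run(build_train_command(), check=True)
```

On the command line this is simply one invocation per iteration, which is what makes the train, test, tweak and retest loop so quick.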

Among other things, the training time of an NLU model depends on:

  • Components defined in pipeline
  • Number of intents and examples
  • Number of entities
  • Number of epochs defined

In general, training a model takes minutes. Here is a practical example; below are my machine’s specifications:

My Laptop Specifications

And the pipeline used:

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en
pipeline:
- name: WhitespaceTokenizer
- name: RegexFeaturizer
- name: LexicalSyntacticFeaturizer
- name: CountVectorsFeaturizer
- name: CountVectorsFeaturizer
  analyzer: "char_wb"
  min_ngram: 1
  max_ngram: 4
- name: DIETClassifier
  epochs: 300 # 100
- name: EntitySynonymMapper
- name: ResponseSelector
  epochs: 300 # 100

Here is the start of the training process, to give you an indication of the training data size:

Training NLU model...
rasa.nlu.training_data.training_data - Training data stats:
rasa.nlu.training_data.training_data -
Number of intent examples: 405 (20 distinct intents)

This took less than 10 minutes to train; I did not optimize any environment settings.

Myth Three: You Need Highly Specialized People

Rasa has whittled down the installation, development and training process to a very well defined sequence of events. The production installation, containerization and integration (actions) portions can be performed by existing in-house resources.

Rasa X can be described as Rasa’s user interface. It is a web-based GUI which allows for the continuous improvement of the conversational interface.

List of Functionality Within Rasa X

You have the ability to talk to your bot and make use of interactive learning while in a conversation. This bridges the gap between practical experience and training data.

When users talk to your assistant — via a messaging channel, the Share your bot feature, or through the Talk to your bot screen — their messages are funneled into the NLU inbox. When you have unprocessed messages in the Inbox, you’ll now see an indicator in the sidebar, alerting you that messages are ready to be reviewed.

The NLU model can be trained from the console, with a list of all models available and an indicator of which one is currently in production. Switching between models is easy. This is very convenient should you want to roll back to the last model, or even a few models back.

Your Pipeline configuration file is available via the console, with stories and responses.

Managing Your Trained Models

After each training iteration a time-stamped model is generated. These models are listed and can be deleted or activated. If after training you do not have a desirable result, roll back to a previous model. Redundant models can be deleted, and you might have specific models you run at certain times or instances.
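A small housekeeping sketch illustrates the rollback idea. It assumes trained models land as time-stamped `.tar.gz` archives in a `models/` directory (Rasa’s default output location); the actual activation would still happen through Rasa X or by pointing the server at the chosen file.

```python
from pathlib import Path

# Sketch: list time-stamped model archives and pick the one before the
# newest, for a quick rollback. `models_dir` is an assumed default;
# time-stamped filenames sort chronologically as strings.
def list_models(models_dir="models"):
    """Return model archives, newest first."""
    return sorted(Path(models_dir).glob("*.tar.gz"), reverse=True)

def previous_model(models_dir="models"):
    """The model just before the current one, or None."""
    models = list_models(models_dir)
    return models[1] if len(models) > 1 else None
```

The same listing makes it easy to prune redundant archives or keep specific models aside for particular runs.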

Myth Four: Cloud Based Is A Must

Rasa can be installed anywhere, in the cloud, on premise or any data center. Migration to the cloud can be performed at any stage.

Migrate Your Cloud Chatbot To A Rasa Installation Anywhere

Tools also exist to migrate from current cloud environments to Rasa. You have the freedom to run your software wherever you can meet your objectives of:

  • Cost
  • Security
  • Legislation
  • DRP (disaster recovery planning)
  • BCP (business continuity planning)
  • etc.

Myth Five: ROI Is Hard To Calculate

With the advent of call centers, cost, convenience and self-service became real concerns. Hence IVR was introduced to address all of these issues.

Very much the same scenario is playing out with live agent chat. It is expensive to have a large number of agents attending to live chat conversations, and waiting in a queue frustrates users.

Enter chatbots to act as the IVR of live agent chat.

But the difference is that chatbots, unlike IVR, have the ability to act as a virtual agent or assistant. And in this alone lies a great source of ROI.

Having a well-defined and clear development path and tools helps quantify costs.

Rasa, with its Conversation-Driven Development (CDD) approach, strong community, and excellent resources and help, allows you to approach a chatbot project just as you would any other software project.

Conclusion

The key to most things in life is to just make a start and move past perceived and imagined impediments. Hopefully this article can assist you in moving forward with Conversational AI…


Written by Cobus Greyling

I’m passionate about exploring the intersection of AI & language. www.cobusgreyling.com
