Five Steps For Continuous Chatbot Improvement
A Process Cycle For Prolonged & Scheduled Bot Maintenance
Introduction
The moment your chatbot is released into the wild, the real work starts. Most testing prior to launch was supervised, and performed by people with a mental image of what the customer journey should look like; hence it followed the conversational happy path.
Once in production, users throw all kinds of utterances at your conversational interface, and the experience can be humbling. The remedy is an action plan for analyzing what is happening within the conversations and for accurately improving the conversational experience.
One thing that is evident in the continuous improvement of chatbots is that intents and entities are merging: entities are becoming more contextual and intrinsically linked to example utterances, so the context of the entity within the intent is becoming more important.
Added to this, using patterns to define how users express entities decreases the amount of test data required.
Deprecation of the User Interface
The whole idea of continuous improvement is to work toward the deprecation of the user interface, allowing the user to interact in free, unstructured, natural text.
The user is not governed in the way entities and intents should be expressed; rather, the interface picks out the pieces of meaning and furthers the conversation with them.
The cautious approach is to structure the conversational interface with buttons, menus, or keyword-driven numbered options. In essence this is not wrong, but it deprives the conversational interface of its magic: it creates structure where the user does not expect structure.
Think of Google, the most popular intent discovery interface, and the simplicity of that interface. Google's primary purpose is to discover your intent; in a single dialog turn it furnishes you with contextual information, ranked, based on the intent you expressed.
Fix Unsure Predictions
One of the manual tasks of improving a chatbot is to fix unsure predictions by reviewing endpoint utterances.
You improve your chatbot's predictions by verifying or correcting utterances, received via the NLU engine, that your model is unsure of. Some utterances may have to be verified for intent and others for entities. Review endpoint utterances as a regular part of your scheduled chatbot maintenance.
This review process is another way for your chatbot to learn your domain of implementation; within a larger organization, there might be various domains. The ideal is an NLU environment that can select the utterances that need attention and present them in a review list.
This list really needs to be:
· Specific to the app
· Meant to improve the app's prediction accuracy
· Reviewed on a periodic basis
By reviewing the endpoint utterances, you verify or correct the utterance’s predicted intent. You also label custom entities that were not predicted or predicted incorrectly.
It is important that the quantity and quality of the utterances is balanced across intents.
Ideally you can select a particular utterance and assign it to a different intent, and also select portions of the utterance to label as entities.
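As a sketch of how such a review list could be assembled, assuming your platform logs each endpoint utterance with its top predicted intent and a confidence score (all field names and the threshold here are illustrative, not tied to any specific NLU API):

```python
# Hypothetical sketch: build a review list from prediction logs.
# Entries below the confidence threshold are the "unsure" predictions
# that need human verification or correction.

REVIEW_THRESHOLD = 0.6  # assumed cut-off; tune for your own model

prediction_log = [
    {"utterance": "move my desk to the 4th floor", "intent": "RelocateEmployee", "score": 0.42},
    {"utterance": "book a meeting room", "intent": "BookRoom", "score": 0.93},
    {"utterance": "whats my leave balance", "intent": "None", "score": 0.35},
]

def build_review_list(log, threshold=REVIEW_THRESHOLD):
    """Return low-confidence predictions, least confident first."""
    unsure = [entry for entry in log if entry["score"] < threshold]
    return sorted(unsure, key=lambda e: e["score"])

for entry in build_review_list(prediction_log):
    print(f'{entry["score"]:.2f}  {entry["intent"]:<18} {entry["utterance"]}')
```

Reviewing from least confident upward tends to surface the utterances that teach the model the most per correction.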
Batch Test Datasets
This tutorial demonstrates how to use batch testing to find utterance prediction issues in your app and fix them.
Batch testing allows you to validate the active, trained model’s state with a known set of labeled utterances and entities. In the JSON-formatted batch file, add the utterances and set the entity labels you need predicted inside the utterance.
Requirements for this example batch testing are:
· Maximum of 1,000 utterances per test.
· No duplicates.
· Entity types allowed: only machine-learned entities (simple and composite). Batch testing is only useful for machine-learned intents and entities.
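The requirements above can be enforced while building the batch file. The sketch below follows the common LUIS batch file layout (records with `text`, `intent`, and `entities` carrying character `startPos`/`endPos`); the intent and entity names are hypothetical, and the exact field names should be checked against your platform's format:

```python
import json

def label(text, entity, value):
    """Locate value inside text and return a labeled-entity record
    with inclusive character positions."""
    start = text.index(value)
    return {"entity": entity, "startPos": start, "endPos": start + len(value) - 1}

utterance = "move john smith from seattle to cairo"
batch = [
    {
        "text": utterance,
        "intent": "RelocateEmployee",  # hypothetical intent name
        "entities": [
            label(utterance, "Origin", "seattle"),
            label(utterance, "Destination", "cairo"),
        ],
    },
]

# enforce the requirements listed above before writing the file
assert len(batch) <= 1000, "maximum of 1,000 utterances per test"
texts = [u["text"] for u in batch]
assert len(texts) == len(set(texts)), "no duplicate utterances allowed"

print(json.dumps(batch, indent=2))
```

Computing the entity positions from the text, rather than hand-counting them, avoids the off-by-one labeling errors that silently skew batch test results.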
When using an app other than this tutorial's, do not use the example utterances already added to an intent.
A quick fix might be to correct the individual entries, but a better approach is to amend the example utterances of your intents, or add more, to improve your model's prediction accuracy. You want your model to predict these utterances correctly without adding them as explicit examples.
A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class.
A false positive is an outcome where the model incorrectly predicts the positive class. And a false negative is an outcome where the model incorrectly predicts the negative class.
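These four outcomes translate directly into the standard metrics used to judge batch test results. A small worked example with illustrative counts:

```python
def metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics from the four outcome counts."""
    precision = tp / (tp + fp)          # of predicted positives, how many were right
    recall = tp / (tp + fn)             # of actual positives, how many were found
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# e.g. 40 true positives, 10 false positives, 45 true negatives, 5 false negatives
p, r, a, f = metrics(40, 10, 45, 5)
print(f"precision={p:.2f} recall={r:.2f} accuracy={a:.2f} f1={f:.2f}")
```

Tracking these per intent across maintenance cycles shows whether each round of corrections actually moved the model forward.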
Add Common Pattern Template Utterance Formats
You can see a pattern as the merging of intents and entities: you select an intent and associate a list of utterances with it. Within such an utterance, now associated with an intent, you define where the entity will be and what type of entity it is, creating a pattern associated with an intent and multiple entities.
Ideally you want an indicator for optional text, and even nested optional text; refer to the video as a reference. Again, this speaks to the growing trend of intents and entities merging and the NLU being able to detect patterns, as well as extensions or contractions of those patterns.
Use patterns to increase intent and entity prediction while providing fewer example utterances. The pattern can be provided by way of a template utterance example, which includes syntax to identify entities and ignorable text.
A pattern is a combination of expression matching and machine learning. The template utterance example, along with the intent utterances, give your NLU a better understanding of what utterances fit the intent.
Keep in mind that in most NLU environments, a real-world app should have at least 15 utterances of varying length, word order, tense, grammatical correctness, punctuation, and word count.
Patterns are template utterances with placeholders for entities used to further improve your model.
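A minimal sketch of the idea behind template utterances, using plain regular expressions rather than any NLU engine: `{Entity}` placeholders become named capture groups, and square brackets mark optional, ignorable text (the template and entity names are illustrative):

```python
import re

def compile_template(template):
    """Turn a template utterance into a regex: {Name} placeholders
    become named groups, [text] becomes optional ignorable text."""
    pattern = re.escape(template)
    pattern = re.sub(r"\\{(\w+)\\}", r"(?P<\1>.+?)", pattern)
    pattern = pattern.replace(r"\[", "(?:").replace(r"\]", ")?")
    return re.compile(pattern + "$", re.IGNORECASE)

matcher = compile_template("move {Employee} to {Destination}[ please]")

m = matcher.match("Move anna to Berlin please")
print(m.group("Employee"), m.group("Destination"))
```

One template covers many surface variations (with or without the polite tail, any casing), which is exactly why patterns reduce the number of example utterances needed.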
Extract Contextual Patterns
LUIS gives you the functionality to combine Entities, Patterns and Phrase Lists. This is a powerful combination to detect complex entities with advanced contextual awareness.
Use a pattern to extract data from a well-formatted template utterance. This can be very useful in RPA scenarios. Template utterances use a simple entity and roles to extract related data such as origin location and destination location. When using patterns, fewer example utterances are needed for the intent.
Roles can be seen as a subsection of an entity. For example, an entity Relocation can have a To role and a From role: an origin role and a destination role.
While patterns allow you to provide fewer example utterances, if the entities are not detected, the pattern does not match.
In this example, there is an entity called NewEmployeeRelocation, which has two roles: one for the relocation destination and another for the relocation origin. Added to this, there is a phrase list of city names, and a pattern denoting how the entities can be used to constitute a user utterance.
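A rough illustration of this setup, with a plain regular expression standing in for the LUIS machinery; the role names, city list, and pattern wording are all hypothetical:

```python
import re

# phrase list of city names: boosts confidence that a captured value is a city
city_phrase_list = {"seattle", "cairo", "berlin", "tokyo"}

# pattern tying one entity type to two roles:
# "move new employee from {City:origin} to {City:destination}"
pattern = re.compile(
    r"move new employee from (?P<origin>\w+) to (?P<destination>\w+)",
    re.IGNORECASE,
)

def extract_relocation(utterance):
    """Return role -> {value, in_phrase_list} for a matching utterance."""
    m = pattern.match(utterance)
    if not m:
        return None
    return {
        role: {"value": value, "in_phrase_list": value.lower() in city_phrase_list}
        for role, value in m.groupdict().items()
    }

print(extract_relocation("Move new employee from Seattle to Cairo"))
```

The same entity type fills both roles; only its position in the pattern decides whether a city is the origin or the destination, which is the contextual awareness described above.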
Extract Free-Form Data
In this demo, the Pattern.any entity is used to extract data from utterances that are well formatted but where the end of the data is easily confused with the remaining words of the utterance. The Pattern.any entity allows you to find free-form data where the wording of the entity makes it difficult to determine where the entity ends and the rest of the utterance begins.
The varying length of terms to be detected may confuse LUIS about where the entity ends. Using a Pattern.any entity in a pattern allows you to specify the beginning and end of the form name so LUIS correctly extracts the form name.
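A minimal sketch of that boundary idea, with a hypothetical template "Is {FormName} ready?": the fixed words before and after the placeholder mark where the free-form entity starts and ends, so its length no longer matters.

```python
import re

# The literal "is" and trailing "ready" act as the boundary markers;
# everything between them is captured as the free-form form name.
form_pattern = re.compile(r"is (?P<FormName>.+) ready\??$", re.IGNORECASE)

for utterance in (
    "Is the HRF-123456 form ready?",
    "Is the employee relocation and expense reimbursement request ready?",
):
    m = form_pattern.match(utterance)
    print(m.group("FormName"))
```

Without the closing boundary word, a model has no reliable signal for where a long, arbitrary form name stops and the rest of the utterance resumes.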