What Are Good User Utterances For Your Chatbot?
Utterances Are User Input Your Chatbot Will Need to Understand
Understanding What Good Utterances Are
When designing and developing your chatbot application, you need to have a good understanding of utterances.
Utterances are the input from the user which the chatbot needs to derive intents and entities from. To train any chatbot to accurately extract intents and entities from the user’s dialog input, it is imperative to capture a variety of different example utterances for each and every intent. Most chatbot development environments use these example utterances to create a model used to detect intent and entities.
What Might The User Say
From here you need to anticipate the different utterances you think users will enter. If the customer you are developing the chatbot for has a current live-agent chat implementation, the conversation logs will be an invaluable source of utterance data.
These historic conversations can be grouped into different categories. You can read more about conversation categorization here. Each of these categories can be used to form an intent, and the customer utterances can then be grouped according to each category.
Having access to these previous conversations will remove the guessing game from determining what users might say.
Variation of Utterances
For each intent, when you create utterances, try to create different variations of the possible user utterances: sentences which mean the same thing but are constructed in a variety of different ways. One way of doing this is Natural Language Generation (NLG).
You can read more about this further down in this story.
However, when doing it manually, consider the following guidelines…
- Create utterances of different lengths; short sentences, medium and longer sentences.
- Change the words and also the length of phrases.
- Vary the placement of the entity. You might want to place the entity at the start, middle and end of the utterance. This allows the bot to better understand the context in which to expect the entity.
- Mix the grammar up.
- Vary pluralization; include both singular and plural forms.
- Vary word stems; for example book, booking and booked.
- Punctuation: use punctuation in some instances and not in others, and use bad grammar in some instances. Anticipate the way your audience might speak.
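To make these guidelines concrete, here is a minimal sketch of how example utterances might be generated programmatically; the intent, templates and entity values are all hypothetical:

```python
from itertools import product

# Hypothetical templates for a "book_flight" intent; {city} marks the entity.
# The entity is placed at the start, middle and end of the utterance,
# and the templates vary in length and phrasing.
TEMPLATES = [
    "{city}, please book me a flight there",          # entity at the start
    "book a flight to {city} for tomorrow morning",   # entity in the middle
    "I would like to fly to {city}",                  # entity at the end
    "flight to {city}",                               # short fragment
]

CITIES = ["Paris", "Amsterdam", "paris"]  # include casing variations

def generate_variations(templates, cities):
    """Expand every template with every entity value."""
    return [t.format(city=c) for t, c in product(templates, cities)]

utterances = generate_variations(TEMPLATES, CITIES)
print(len(utterances))  # 4 templates x 3 cities = 12 example utterances
```

Even a small set of templates multiplied by a handful of entity values quickly produces a varied utterance list you can then hand-edit.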
Utterances Are Not Always Well Formed
Your user might use a well-formed sentence like “Please book a flight to Paris from Amsterdam”, or a more fragmented utterance like “pls book plane amsterdam paris”. Some development environments have spelling correction available.
Add badly formed examples and common spelling errors to your utterances; including typos and misspellings in the training data allows them to be reflected in your model.
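If your platform does not offer spelling correction, you can simulate noisy input yourself. Below is a rough sketch that injects character-level typos into clean utterances; the drop and swap rates are arbitrary illustrative values:

```python
import random

def add_typos(utterance, rate=0.1, seed=42):
    """Return a noisy copy of an utterance by randomly dropping
    or swapping characters, mimicking hurried user input."""
    rng = random.Random(seed)
    chars = list(utterance)
    out = []
    i = 0
    while i < len(chars):
        r = rng.random()
        if r < rate:                                # drop this character
            i += 1
        elif r < 2 * rate and i + 1 < len(chars):   # swap with the next one
            out.extend([chars[i + 1], chars[i]])
            i += 2
        else:                                       # keep the character
            out.append(chars[i])
            i += 1
    return "".join(out)

clean = "please book a flight to paris from amsterdam"
print(add_typos(clean))
```

Generating a few noisy variants per clean utterance gives the model exposure to the fragmented input real users produce.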
Use Representative Terms and References
When formulating your utterances, what you deem as common terminology and references might not match what your typical user or client uses. Bear in mind that you have specific domain knowledge and experience which your customer might not have, especially if they are a new customer or prospect. Do not cater only for experts.
Vary Phrases and Terminology
You might focus so much on varying your sentence and word sequences that the same terminology is still used throughout.
You might create user utterances like “Where can I get a mobile phone?”, “How do I get a mobile phone?”, “How is a mobile phone used?” etc.
The core term “mobile phone” is still used everywhere and is not varied. Try to use alternatives like cell phone, mobile, iPhone, Samsung mobile, etc.
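A simple way to enforce this is to expand your utterances with a synonym map for core terms. The sketch below assumes a hand-maintained synonym list, which in practice would come from domain experts or conversation logs:

```python
# Hand-maintained synonym map for core terms (illustrative only).
SYNONYMS = {
    "mobile phone": ["cell phone", "mobile", "handset", "smartphone"],
}

def expand_with_synonyms(utterance, synonyms):
    """Return the utterance plus one variant per synonym of any core term it contains."""
    variants = [utterance]
    for term, alts in synonyms.items():
        if term in utterance:
            variants.extend(utterance.replace(term, alt) for alt in alts)
    return variants

examples = expand_with_synonyms("Where can I get a mobile phone?", SYNONYMS)
print(len(examples))  # original + 4 synonym variants = 5
```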
Example Utterances For Each Intent
Each intent needs example utterances assigned to it. Most platforms demand a minimum of 10 to 15 utterances per intent. If an intent has no example utterances, training will not be possible and accuracy will obviously suffer.
Most development environments allow you to add utterances, build the model, test, and then revisit your utterances.
It is better to start with a few utterances, then review endpoint utterances for correct intent prediction and entity extraction.
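A quick sanity check over your training data can flag intents that fall short of the platform minimum before you build the model. The intents and the 10-utterance threshold below are illustrative:

```python
# Hypothetical training data: intent name -> example utterances.
TRAINING_DATA = {
    "book_flight": [
        "book a flight to paris",
        "I need a plane ticket to amsterdam",
    ],
    "cancel_booking": [
        "cancel my reservation",
    ],
}

MIN_UTTERANCES = 10  # many platforms recommend 10 to 15 examples per intent

def intents_needing_examples(data, minimum=MIN_UTTERANCES):
    """Return the intents that have fewer example utterances than the minimum."""
    return [intent for intent, examples in data.items() if len(examples) < minimum]

print(intents_needing_examples(TRAINING_DATA))
# Both intents are under-resourced here, so training quality would suffer.
```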
Testing utterances
Sometimes it helps to get a focus group together to test the chatbot. The development and planning group share a huge amount of common understanding, which often leads the testing to be directed along a happy path or golden path. Getting users of the product, or even staff, together as a focus group to interact with the chatbot can surface vulnerabilities not previously detected, precisely because they lack prior planning and design knowledge.
Review utterances
Continuous review of user utterances is of utmost importance. The dialog logs are an invaluable source of information and training data pertaining to the chatbot. These logs can be reviewed on a daily or weekly basis and utterance lists can be edited accordingly, hence improving the NLU model continuously.
When reviewing the utterances, focus on the 10% of errors which will have a 90% impact on the overall experience. A common mistake is to get lost in the minutiae of the data and make adjustments which have only a marginal impact on the overall conversation.
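One way to find that high-impact 10% is to tally intent confusions from the dialog logs and fix the most frequent ones first. The log format below is hypothetical; adapt it to whatever your platform exports:

```python
from collections import Counter

# Hypothetical log entries: (user utterance, predicted intent, correct intent).
LOG = [
    ("pls book plane amsterdam paris", "cancel_booking", "book_flight"),
    ("book flight to paris", "book_flight", "book_flight"),
    ("cancel flight pls", "book_flight", "cancel_booking"),
    ("book plane ticket", "cancel_booking", "book_flight"),
]

def top_confusions(log, n=3):
    """Count (predicted, correct) pairs for misclassified utterances and
    return the most frequent confusions -- the errors worth fixing first."""
    errors = Counter(
        (pred, correct) for _, pred, correct in log if pred != correct
    )
    return errors.most_common(n)

print(top_confusions(LOG))
# [(('cancel_booking', 'book_flight'), 2), (('book_flight', 'cancel_booking'), 1)]
```

Reviewing the utterances behind the top confusion pair usually reveals a missing phrasing pattern or an under-trained intent.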
Natural Language Generated Utterances
A Practical Implementation of NLG
NLG can be used to generate possible user utterances based on key words or intents.
Below are two examples.
The first video shows how I trained a TensorFlow model with 180,000 real-world news headlines. The objective was to auto-generate news headlines from key words or phrases using the model. The accuracy was astounding, considering the small data sample.
You can see how the key words can be considered as intents and the model is used to create user utterances.
In this second video, I used 550,000 product reviews to create a TensorFlow model from which fake product reviews can be generated from key words or key intents.
Again, a practical implementation is to use this approach to generate possible user utterances based on the training data and model.
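As a rough illustration of the idea (not the TensorFlow models used in the videos), a toy bigram Markov chain can generate candidate utterances from a key word. The corpus below is a stand-in for real conversation logs or headlines:

```python
import random
from collections import defaultdict

# Toy corpus standing in for real conversation logs or news headlines.
CORPUS = [
    "book a flight to paris",
    "book a plane ticket to amsterdam",
    "book a flight to amsterdam tomorrow",
]

def build_bigram_model(corpus):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, seed_word, max_len=8, seed=0):
    """Walk the bigram chain from a key word to produce a candidate utterance."""
    rng = random.Random(seed)
    words = [seed_word]
    while len(words) < max_len and model[words[-1]]:
        words.append(rng.choice(model[words[-1]]))
    return " ".join(words)

model = build_bigram_model(CORPUS)
print(generate(model, "book"))
```

The generated candidates are noisy, so treat them as suggestions to curate, not as training data to add blindly.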