Your Chatbot Script Is So Important You Should Deprecate It

Use Natural Language Generation to Create Scripts based on User Intents

Cobus Greyling
6 min read · Oct 1, 2019

--

In general, most chatbot architectures consist of four pillars: intents, entities, dialog flow and, of course, the script. You can read more about this here. So, let us focus on the fourth element, the script, and how we can deprecate it using Natural Language Generation (NLG).

The Script Defined

The script is the wording used in the speech bubbles you use to speak back to your user; the person your interface is having a conversation with.

It is often said that the dialogue elements used, together with the dialogue flow, can make or break your chatbot. Decisions include whether to use active or passive voice, and to what extent to personalize the content. Also, should the bot be anthropomorphized, and if so, what should its persona, gender and so on be?

Conversational Components

To what degree will conversational components be available? These are often also referred to as affordances. If elements exist within the medium, such as buttons, menus, carousels or galleries, these can be used to facilitate the conversation.

Each node of the dialogue needs to move the conversation forward and ultimately fulfill the intent of the user. Tone, personality and bot vocabulary should be aligned with the target audience.

Jargon

The field in which your chatbot will be deployed contains a host of jargon, industry-specific terms and perhaps technical wording. Ensure your target audience understands these words and terms and, more importantly, that they are applicable to them. The use of an external focus group can be advantageous.

General Guidelines for Scripts

Don’t make your dialogues too long. You want to limit the number of dialog turns it takes for the user to feel she/he is gaining traction in terms of execution. Text-based conversation, as opposed to voice-based conversation, has the distinct advantage that it is not ephemeral. It does not evaporate like voice. But this can tempt you into making your dialogues too long.

You should not ask too many questions. Even though your chatbot will most probably serve a very narrow domain, that does not mean the questioning should be deep. Attempt to identify your user, and use existing customer data, CRM information and previous orders to create a possible initial context for the conversation.

Creating multiple script options per node, and setting them to random or sequential, is an available option.

Don’t ask too many questions. If you are not smart about extracting your entities from the conversation, you will be prone to asking a vast number of questions. The key is to make use of contextual entity extraction, which works on the premise that no information shared by the user should be wasted; every piece of information should be collected from the conversation. This is key to truly making your conversational interface effective. The chatbot should really be very efficient in making use of the data supplied by the user. If the chatbot does not interpret the data entered, it will move past that dialog and re-prompt the user for data already entered.
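Contextual entity extraction can be sketched as a slot-filling loop: scan every utterance for all the entities it mentions, and only prompt for slots still empty. This is a minimal sketch; the regex patterns stand in for a real NLU service, and the slot names are illustrative.

```python
import re

# Hypothetical slot patterns -- a production bot would use a trained
# NLU service rather than regular expressions, but the principle holds.
SLOT_PATTERNS = {
    "origin": re.compile(r"from (\w+)"),
    "destination": re.compile(r"to (\w+)"),
    "date": re.compile(r"\b(today|tomorrow|monday|friday)\b"),
}

def extract_slots(utterance, slots):
    """Fill every slot the utterance mentions; waste no user data."""
    for name, pattern in SLOT_PATTERNS.items():
        if slots.get(name) is None:
            match = pattern.search(utterance.lower())
            if match:
                slots[name] = match.group(1)
    return slots

def next_prompt(slots):
    """Only ask for slots still missing -- never re-prompt filled ones."""
    for name, value in slots.items():
        if value is None:
            return f"What is your {name}?"
    return None  # all slots filled; fulfill the intent

slots = {"origin": None, "destination": None, "date": None}
extract_slots("Book a flight from London to Paris tomorrow", slots)
print(next_prompt(slots))  # every slot was captured in a single turn
```

Because all three entities are harvested from one utterance, the bot has nothing left to ask; a naive design would have asked three separate questions.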

Deprecation of the Script

Why?

So with machine learning readily available, why should we still manually define the script of our chatbot for each node or state in our state machine?

http://tensorflow.org

There are options to make it more human-like, for instance defining multiple dialogue options per node and then presenting them in a random or sequential fashion, thus creating the illusion of a non-fixed, dynamic dialogue. However, as dynamic as it might seem, it is still static at heart, even if to a lesser degree.
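The random-or-sequential variant mechanism can be sketched in a few lines. The node name and greeting texts below are illustrative, not any platform's API.

```python
import random

# Hypothetical script variants for one dialogue node.
GREETING_VARIANTS = [
    "Hi there! How can I help?",
    "Hello! What can I do for you today?",
    "Good day! How may I assist you?",
]

def pick_variant(variants, mode="random", turn=0):
    """Return one script variant for this node.

    mode="random"     -> the illusion of a dynamic dialogue
    mode="sequential" -> cycle through the variants in order
    """
    if mode == "sequential":
        return variants[turn % len(variants)]
    return random.choice(variants)

print(pick_variant(GREETING_VARIANTS, mode="sequential", turn=4))
```

However many variants you list, the set remains fixed at design time, which is exactly the static-at-heart limitation described above.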

How?

Using Natural Language Generation (NLG) of course.

https://colab.research.google.com

Why can we not take a sample size of a few hundred thousand scripts, and create a TensorFlow model by making use of Google’s Colab environment?

Then, based on key intents, generate a response or dialog from the model; hence, generating natural language, or unstructured output based on structured input. Let’s have a quick look at the basic premise of Natural Language Generation (NLG) and then two practical examples.

The Inverse of Natural Language Understanding

NLG is a software process where structured data is transformed into natural conversational language for output to the user. In other words, structured data is presented in an unstructured manner to the user. Think of NLG as the inverse of NLU.

With NLU we are taking the unstructured conversational input from the user (natural language) and structuring it for our software process. With NLG, we are taking structured data from the backend and state machines and turning it into unstructured data: conversational output in human language.
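The round trip can be sketched as two functions. This is a deliberately tiny stand-in: a string template plays the role of the NLG model, and a keyword matcher plays the role of a trained intent classifier; the field names and intent labels are made up for illustration.

```python
def nlg(structured):
    """NLG: structured backend data -> natural-language output."""
    return (f"Your order {structured['order_id']} of {structured['qty']} "
            f"x {structured['item']} will arrive on {structured['eta']}.")

def nlu(utterance):
    """NLU (the inverse): natural language -> structured intent data.
    A toy keyword match stands in for a trained intent classifier."""
    intent = "order_status" if "order" in utterance.lower() else "unknown"
    return {"intent": intent, "text": utterance}

# Structured state-machine data going out as unstructured language:
state = {"order_id": "A-1042", "item": "keyboard", "qty": 2, "eta": "Friday"}
print(nlg(state))

# Unstructured user language coming back in as structure:
print(nlu("Where is my order?"))
```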

Commercial NLG is emerging, and forward-looking solution providers are looking at incorporating it into their solutions. At this stage you might be struggling to get your mind around the practicalities of this. Below are two practical examples which might help.

Fake News Headline Generator

In the video below, I got a data set from kaggle.com with about 185,000 records.

Fake-News Headline Generator

Each of these records was a newspaper headline, which I used to create a TensorFlow model. Based on this model, I could then enter one or two intents, and random “fake” (hence non-existing) headlines were generated. There is a host of parameters which can be used to tweak the output.
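The video uses a TensorFlow model trained on roughly 185,000 headlines. As a much lighter stand-in that shows the same basic idea, the sketch below learns word transitions from a corpus with a Markov chain and samples new headlines from a seed word. The three headlines in the corpus are invented for illustration.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the ~185,000-headline Kaggle data set.
corpus = [
    "markets rally as rates fall",
    "markets tumble as rates rise",
    "rates rise as banks warn",
]

# Learn which word tends to follow which.
transitions = defaultdict(list)
for headline in corpus:
    words = headline.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def generate(seed, max_words=8):
    """Sample a 'fake' headline by walking the transition table."""
    words = [seed]
    while len(words) < max_words and transitions[words[-1]]:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

print(generate("markets"))
```

A neural model generalizes far better than a transition table, but the workflow is the same: fit on a large corpus of headlines, then sample novel, non-existing ones from a prompt.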

Fake Product Review Generator

For this example I took close to 580,000 product reviews and created a TensorFlow model from them. Again, based on this model, intents can be entered and a fictitious product review can be generated from those intents, or key words.

This example shows that longer, more complex scripts can be generated.

Fake Product Review using Natural Language Generation

Conclusion

To most, this might seem too futuristic and risky: placing the response to a customer in the hands of a pre-trained model. However, a practical example where a solution like this can be implemented quite safely is intent training.

During chatbot design, the team will come up with a list of intents the bot should be able to handle. Then, for each of those intents, examples of user utterances must be supplied to train the model. It can be daunting coming up with a decent number of examples.

Should user conversations be available, even from live-agent conversations, a TensorFlow model can be created from those existing conversations, and intents can be passed to the model. New and random utterances can be generated per intent or grouping of intents. The output can then be curated by the designers and added to the training data.
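The curation workflow can be sketched without a trained model at all: slot-filled templates stand in here for the generative model, producing candidate utterances per intent that a designer then reviews. The intent name, templates and filler words are all illustrative.

```python
import itertools

# Hypothetical templates per intent; a trained generative model would
# produce these instead of hand-written patterns.
TEMPLATES = {
    "check_balance": [
        "{verb} my {account} balance",
        "how much is in my {account} account",
    ],
}
FILLERS = {
    "verb": ["check", "show", "what is"],
    "account": ["savings", "cheque"],
}

def candidate_utterances(intent):
    """Expand every template with every combination of filler words."""
    out = []
    for template in TEMPLATES[intent]:
        names = [n for n in FILLERS if "{" + n + "}" in template]
        for combo in itertools.product(*(FILLERS[n] for n in names)):
            out.append(template.format(**dict(zip(names, combo))))
    return out

for utterance in candidate_utterances("check_balance"):
    print(utterance)  # a designer curates these before training
```

Whether the candidates come from templates or a neural model, the safety valve is the same: a human curates the generated utterances before they enter the training set.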


Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com