There is a certain allure to using AI to create AI, and more specifically a Conversational Agent.
The premise is that you describe your chatbot in natural language, specifying its personality, domain, area of interest and tone. And in an instant, you have a chatbot which is resilient and knowledgeable.
All the elements of intents, entities, state management and dialog are baked into one monolith of a chatbot.
Obviously fine-tuning comes to mind, in the form of a serious concern, which I have tried to address here.
The advent of GPT-3 did spark a discussion of note on low-code implementations for chatbots.
Low-code interfaces are made available via a single tool or a collection of tools which are graphical in nature and initially intuitive to use, thus delivering the guise of rapid onboarding and speeding up the delivery of solutions to production.
As with many approaches of this nature, initially it seems like a very good idea. However, as functionality, complexity and scaling start playing a role, huge impediments are encountered.
Forgetting for a moment about all the impediments when it comes to a low-code implementation, and what is required for large-scale corporate implementations in terms of fine-tuning and the rest…
Let’s just appreciate how quickly a general conversational AI assistant can be created. Some of the elements which can be overlooked, but which are really astounding, are:
- The absolute absence of fallback proliferation. The bot stays on its feet, so to say, and does not default to a specific fallback when an intent cannot be discovered.
- Conversational context is maintained, intents and entities are detected intuitively.
- Digression and to a degree disambiguation are present.
- The natural language generation (NLG) is remarkable.
Below is the simplest possible GPT-3 code example, in Python.
The prompt portion is where the bot is described and a few dialog turns are made available.
There are some settings available:
- max_tokens
- frequency_penalty and presence_penalty
- stop, which defines the dialog turn indicators
pip install openai

import openai

openai.api_key = "###################"

# The engine name and the penalty values below follow the standard
# OpenAI conversation example; adjust them for your own use case.
response = openai.Completion.create(
    engine="davinci",
    prompt="The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: What is RAM?\nAI:",
    temperature=0.9,
    max_tokens=150,
    frequency_penalty=0.0,
    presence_penalty=0.6,
    stop=["\n", " Human:", " AI:"]
)
The JSON response is clear, and a few other metrics are included.
"text": " RAM is a computer storage space used to temporarily store data while the computer is processing it."
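As a sketch, assuming a payload shaped like the standard Completion response, the generated text and token metrics can be pulled out like this. The sample values below are illustrative, not an actual API response:

```python
# Illustrative sample of a Completion-style JSON payload; the field values
# are made up for demonstration, not captured from a live call.
sample_response = {
    "choices": [
        {
            "text": " RAM is a computer storage space used to temporarily store data while the computer is processing it.",
            "index": 0,
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 58, "completion_tokens": 21, "total_tokens": 79},
}

# The generated reply lives under choices[0]["text"]; strip the leading space.
reply = sample_response["choices"][0]["text"].strip()
print(reply)
```

With a live `response` object from `openai.Completion.create`, the same access pattern applies.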
And that is about as simple as it can be from a code perspective.
The temperature and top_p settings control how deterministic the API is in generating a response.
If you’re asking the API to provide you with a response where there’s only one right answer, then you’d want to set these lower.
If you’re looking for a response that’s not obvious, then you might want to set them higher.
The number one mistake people make with these settings is assuming that they’re “cleverness” or “creativity” controls.
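The guidance above can be sketched with a hypothetical helper that picks sampling settings for the two situations. The exact values are illustrative assumptions, not prescribed by OpenAI:

```python
def sampling_settings(single_right_answer: bool) -> dict:
    """Return illustrative temperature/top_p values.

    Low values push the API toward the one obvious completion;
    higher values allow less predictable, more varied responses.
    """
    if single_right_answer:
        # Near-deterministic: the same prompt yields the same answer.
        return {"temperature": 0.0, "top_p": 1.0}
    # More varied output for open-ended prompts.
    return {"temperature": 0.9, "top_p": 1.0}

factual = sampling_settings(single_right_answer=True)
creative = sampling_settings(single_right_answer=False)
print(factual, creative)
```

Either dict can then be unpacked into the completion call, e.g. `openai.Completion.create(engine="davinci", prompt=..., **factual)`.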
As mentioned before, there are definitely good implementation opportunities for the Conversational AI aspect of GPT-3.
As a support API where text can be processed to assist existing NLU functionality, there is a very real use case.
As mentioned, GPT-3 can be of great help in pre-processing user input before it reaches the NLU engine. The challenge is that GPT-3 seems very well positioned to write reviews, compile questions and hold a general conversation. This could lead to a proliferation of bots writing reviews, online ads and general copywriting content.
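A minimal sketch of the pre-processing idea: build a few-shot completion prompt that asks GPT-3 to map a raw utterance onto one of the chatbot’s known intent labels, and feed the single-word completion to the NLU engine as a hint. The prompt wording, the example intents and the helper name are all assumptions for illustration:

```python
def build_intent_prompt(utterance: str, intents: list) -> str:
    """Assemble a few-shot completion prompt that asks GPT-3 to label a raw
    user utterance with one of the known intent names.

    The wording and the worked example are illustrative assumptions.
    """
    label_list = ", ".join(intents)
    return (
        f"Classify the user message into one of these intents: {label_list}.\n"
        "Message: I forgot my password\nIntent: reset_password\n"
        f"Message: {utterance}\nIntent:"
    )

prompt = build_intent_prompt(
    "where is my parcel",
    ["reset_password", "track_order", "greeting"],
)
# This string would be sent via openai.Completion.create with a low
# temperature; the completion then acts as a classification hint.
print(prompt)
```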
This automation does not need to be malicious, in principle.
OpenAI is seemingly making every effort to ensure responsible use of its APIs.
The fact that extensive training is not required, and that a few keywords or phrases can point the API in the right direction, is astounding.
There are, however, open-source alternatives for most of the functionality available.
- GPT-3 has quite a bit of functionality which can serve to augment a current chatbot.
- Dialog can be diversified with the NLG capability.
- General chit-chat can easily be created.
- Copywriting is made easy for slogans, headlines, reviews etc.
- Text transformation
- Text generation
- Creating a general purpose bot to chat to.
- With their underlying processing power and data, creating flexible Machine Learning stories should be a good fit.
- The API is cloud hosted
- Social media bot content generation
- Not a framework for sustainable chatbot scaling; yet.
- Possible over- and under-steering with training data.