GPT-3: Conversational AI & Chatbots
What Will The Impact Be On Chatbot Design & Development
Introduction
There has been much talk and hype about GPT-3 over the last couple of days.
Even though I have not built any prototypes with GPT-3 yet, one thing is evident…
The fact that OpenAI is in the process of releasing an API will have a significant impact on the Conversational AI marketplace.
OpenAI states that their technology is only an HTTPS call away. This will truly democratize access to conversational AI, and do for AI what the cloud did for computing in general.
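To make the "only an HTTPS call away" idea concrete, here is a minimal sketch of a text-in, text-out exchange. The endpoint URL, payload fields and response shape are illustrative assumptions, not the actual API specification; the response here is canned rather than fetched over the network.

```python
import json

# Assumed endpoint -- the real API path and schema may differ.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, max_tokens: int = 64) -> dict:
    """Assemble a minimal 'text in' request body (hypothetical fields)."""
    return {"prompt": prompt, "max_tokens": max_tokens}

def extract_text(response_body: str) -> str:
    """Pull the 'text out' field from a JSON response (assumed key name)."""
    return json.loads(response_body)["text"]

# Canned response standing in for a live HTTPS call:
canned = json.dumps({"text": "Bonjour"})
print(extract_text(canned))  # the model's generated text
```

The point is the simplicity of the contract: plain text (plus a few parameters) goes in, plain text comes out, with no model hosting on the caller's side.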
The challenging part is their approach of broad-based general AI.
API Focus Area
The challenging part of the OpenAI API is that it does not want to address specific use-cases, but rather be a general-purpose “text in, text out” interface.
Virtually any English language task will be possible…
Seemingly the main focus areas of the Beta API will be:
- Text Generation (NLG, Natural Language Generation)
- Question & Answer
- Parse Unstructured Data
- Improve English
- Human Language Translation (For example English to French)
NLG
Natural Language Generation is important in order to automate the text output of the chatbot. Currently we are automating the input to a large extent with NLU, but not the output.
The output or dialog is still scripted and set for each dialog state.
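The scripted approach described above can be sketched in a few lines: every dialog state maps to one fixed response, which is exactly what NLG would replace with generated wording. The state names and responses are invented for illustration.

```python
# Illustrative only: a scripted bot keeps one fixed response per dialog state.
SCRIPTED_RESPONSES = {
    "greeting": "Hello! How can I help you today?",
    "order_status": "Your order is on its way.",
    "fallback": "Sorry, I didn't catch that.",
}

def scripted_reply(state: str) -> str:
    # Every user who lands in the same state gets the same canned wording.
    return SCRIPTED_RESPONSES.get(state, SCRIPTED_RESPONSES["fallback"])

print(scripted_reply("greeting"))  # Hello! How can I help you today?
```

With NLG, the lookup table disappears and the response text is generated per conversation, conditioned on context rather than fixed per state.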
Indeed, there are powerful commercial products and models available doing just that.
But with the OpenAI API there is no training data required, no training of the model, and no domain-specific knowledge needed.
Parsing Unstructured Data
This function is useful if you have longer text and an initial high-pass is required on the input text.
But again, there are very effective open-source products available, like spaCy, in terms of Industrial-Strength Natural Language Processing.
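As a minimal sketch of such a high-pass with spaCy: a blank English pipeline with a rule-based sentencizer splits raw input into sentences and tokens before any deeper processing, without downloading a trained model. The sample text is invented.

```python
import spacy

# Lightweight first pass: spacy.blank("en") gives a tokenizer only,
# so no pretrained model download is needed.
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")  # rule-based sentence boundary detection

doc = nlp("Please cancel order 4521. It was placed yesterday.")
sentences = [sent.text for sent in doc.sents]
tokens = [tok.text for tok in doc]
print(sentences)
```

A trained pipeline such as `en_core_web_sm` would add part-of-speech tags and named entities on top of this, at the cost of a model download.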
Improving English
This function can be useful in live agent engagement, acting as a filter between the agent and the customer, curating and improving the agent's language.
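The filter idea can be sketched as a pipeline where every outgoing agent message passes through a correction step before delivery. The `correct_english` stub below is a stand-in with a hard-coded fix table; in practice it would call a language model.

```python
# Hypothetical stub: a real implementation would call the API here.
def correct_english(text: str) -> str:
    fixes = {"recieve": "receive", "definately": "definitely"}
    return " ".join(fixes.get(word, word) for word in text.split())

def send_to_customer(agent_draft: str) -> str:
    # Curate the agent's draft before it reaches the customer.
    return correct_english(agent_draft)

print(send_to_customer("You will recieve it definately by Friday"))
# You will receive it definitely by Friday
```

The agent keeps typing naturally; the customer only ever sees the curated version.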
Why An API & Not Open Source
OpenAI cites three reasons for this…
- Funding for research, safety and policy efforts.
- Many of the underlying API models are very large, demanding specific expertise to develop & deploy. The aim is for the API to make powerful AI systems accessible to small businesses and organizations.
- The API model allows for a rapid response to abuse.
As OpenAI puts it: “Since it is hard to predict the downstream use cases of our models, it feels inherently safer to release them via an API and broaden access over time, rather than release an open source model where access cannot be adjusted if it turns out to have harmful applications.”
Training On Your Data
Programming the API entails exposing it to a “few” examples of your objective. Success varies depending on how complex the task is.
The API does make provision for honing performance on specific tasks by training on a dataset (small or large) of examples from the user. Labeling or annotation of data is also possible.
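The few-examples approach can be sketched as prompt construction: the labelled pairs are folded into the prompt itself, so there is no separate training step. The `Input:`/`Output:` format and the intent labels are illustrative assumptions, not the API's specification.

```python
# Build a few-shot prompt from labelled examples plus a new query.
def build_few_shot_prompt(examples, query):
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")  # model completes this line
    return "\n\n".join(lines)

examples = [
    ("i want to cancel my order", "cancel_order"),
    ("where is my package", "track_order"),
]
prompt = build_few_shot_prompt(examples, "please stop my delivery")
print(prompt)
```

The whole string is sent as the "text in"; the model's completion after the final `Output:` is the "text out" that plays the role of a prediction.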
The idea is for anyone to be able to use it, but flexible enough to make machine learning teams more productive. The API runs models from the GPT-3 family.
Conclusion
There are two considerations which come to mind. The first is obviously cost: will the benefit and performance delivered justify the expenditure?
Secondly, when organizations start to use it, scaling must not become an issue. The API should be feature-rich enough that organizations do not run into a situation where scaling is impeded.
My guess is that for chatbots this should act as a convenient API to enhance the conversational experience. The NLG functionality could usher in a new era in general chatbot development…