Four Conversational AI Trends To Watch In 2023

The year 2022 saw the release of LLMs with immense generative capabilities, together with what can be called synthetic media. We also saw the rise of a new field called Prompt Engineering and the discipline of NLU Design. In this article I consider four practical implementations of CAI technology which will see advances in 2023.

Cobus Greyling
5 min read · Jan 3, 2023


The four trends to watch in 2023 are:

1️⃣ Generative Capabilities of Large Language Models (LLMs)

2️⃣ NLU Design Closing the Data Gap In LLMs

3️⃣ Adoption Of Voicebots & Related Technologies

4️⃣ Further Fragmentation Via Niche Implementation

🌟 Follow me on LinkedIn for Conversational AI updates! 🙂

1️⃣ Generative LLMs

LLMs are increasing in both size and generative capability. Although the image-generation capabilities of LLMs are advancing, it is the practical implementation of text-generation capabilities which will make LLM use mainstream, especially implementations in chatbots and voicebots.

During 2022, HumanFirst announced integration with the Cohere LLM and other LLMs, and Yellow AI announced DynamicNLP, which leverages LLMs; Oracle Digital Assistant and Amelia can be added to this list.

Related to LLMs and generation is KI-NLP, a term coined by Meta AI for Knowledge-Intensive NLP. KI-NLP is the notion of leveraging an LLM to answer general open-domain questions in a conversational manner. The user can define how responses should be formatted: summarised, long-form, simplified, etc.

ChatGPT is the most recent implementation of a KI-NLP system, where users can ask general-domain questions. The OpenAI Language API also excels at general question answering.
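To make the KI-NLP idea above concrete, here is a minimal sketch of folding a user-selected response format into a prompt for a completion-style LLM API. The prompt wording, the style labels, and the `build_prompt` helper are my own assumptions for illustration, not an official recipe.

```python
# Sketch: open-domain question answering with a user-selected response format.
# The resulting prompt string could be sent to any completion-style LLM API.

STYLES = {
    "summarised": "Answer in two or three sentences.",
    "long-form": "Answer in detail, with background and examples.",
    "simplified": "Answer in plain language a child could follow.",
}

def build_prompt(question: str, style: str = "summarised") -> str:
    """Combine an open-domain question with a formatting instruction."""
    instruction = STYLES.get(style, STYLES["summarised"])
    return f"{instruction}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("Why is the sky blue?", style="simplified")
# The prompt would then be passed to the model, e.g. (assuming the openai
# package and an API key are configured):
#   openai.Completion.create(model="text-davinci-003", prompt=prompt)
```

The same pattern extends to any other formatting instruction the user supplies at run time.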

Considering Google’s efforts in the LLM space: Google recently announced its PaLM model, whilst LaMDA is still only available on a very limited preview basis.

This is where OpenAI really differentiates itself, by making models like ChatGPT widely available for testing and feedback. Especially with the release of the new Davinci model there was exceptional word-of-mouth coverage.


2️⃣ NLU Design

The concept of closing the Data Gap between generative and predictive LLMs was articulated by Alexander Ratner recently in a LinkedIn post.

LLM functionality can be divided into two main categories, generative and predictive.

Determining the intent of a user utterance (classifying user utterances) can be seen as an implementation of the predictive approach.

Considering LLM prediction (classification), one aspect comes to mind: the data gap. For accurate predictive models, a level of LLM fine-tuning is required; this process addresses the problem some refer to as the data gap.

Generative models work with a few-shot learning approach, so preparation of training data is not as crucial as it is with a predictive model.

For predictive applications, data preparation is key, where unstructured data is converted into training data, or NLU design data.

You can read more about a practical example of fine-tuning an LLM here. This is an example where NLU Design is utilised to convert unstructured data into structured NLU training data for an LLM.
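As a minimal sketch of the predictive side, labelled utterances (the output of NLU Design) can be serialised into fine-tuning records. The prompt/completion field layout below follows OpenAI's JSONL fine-tuning format at the time of writing; the utterances and intent names are invented for illustration.

```python
import json

# Sketch: converting labelled utterances into JSONL fine-tuning records,
# one {"prompt": ..., "completion": ...} object per line.

labelled = [
    ("My card was declined at the till", "card_declined"),
    ("How do I reset my online banking password?", "password_reset"),
]

def to_jsonl(records):
    """Serialise (utterance, intent) pairs as fine-tuning JSONL."""
    lines = []
    for utterance, intent in records:
        lines.append(json.dumps({
            "prompt": f"{utterance}\n\n###\n\n",  # separator convention
            "completion": f" {intent}",           # leading-space convention
        }))
    return "\n".join(lines)

print(to_jsonl(labelled))
```

Each line of the output is one training example; closing the data gap is largely about producing enough of these records from raw, unstructured conversations.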

3️⃣ Voicebots

Two things stood out in 2022: the need to realise value from AI implementations, together with automation in the contact centre.

Customers still prefer making a phone call to contact centres, and the holy grail of Conversational AI is automating these inbound calls.

During the Modev Voice’22 summit in Arlington it was evident that the focus is moving away from dedicated speech interfaces like Alexa and Google Home towards voice automation in the contact centre.

A first step towards fully automated voice calls is discovering and analysing recorded voice calls. Hence customer recordings are converted into text for intent detection and more.

Other elements feeding into the future mass adoption of voicebots are technologies like rapid no-code custom speech-synthesis voices, the creation of prosthetic voices, speech-to-speech language translation, and more.


4️⃣ Further Fragmentation

As I mentioned earlier, from Gartner research it is evident that value is not realised in the majority of Conversational AI implementations. One approach to remedy this problem is limiting the footprint of the CAI implementation: start with a small implementation and iterate from there.

A second approach is to consider the 18 contact-centre use-cases and implement AI technology in only one or a few of them. The new year will see more companies adopting one or more of these use-cases as a first foray into Conversational AI.

Implementing specific use-cases is more cost effective and easier to manage. These specific use-cases can then be extended into other areas of the business.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.


