The Seven Vertical Vectors Of Conversational AI
The Gartner Magic Quadrant for Enterprise Conversational AI report was released recently. In order for a company or product to be considered, certain criteria had to be met. One of the points of qualification for an organization was having a singular stand-alone platform.
Hence Microsoft, which is well-known and has a large product portfolio in the Conversational AI space, was not included. Its conversational AI offerings of Power Virtual Agents, Azure Bot Service and Bot Framework do not offer a singular stand-alone platform, so Microsoft did not qualify for inclusion.
The same could be said about NVIDIA Riva, which follows a similar approach to Microsoft: it has AI offerings in TTS, STT, NLP and NLU, but no singular stand-alone platform.
This is not a disadvantage, but a deliberate strategy to allow enterprises to build custom-made, highly scalable Conversational AI ecosystems using independent components.
By the same token, there are niche software companies focusing on narrow vectors.
According to a recent Deloitte report, setup challenges, including training data and maintenance, were among the top reasons for not implementing chatbots in enterprises.
With chatbot development, the initial approaches were centered on a single vendor or technology framework/stack to address the complete chatbot development and maintenance process end-to-end.
An approach is now emerging of building a chatbot ecosystem from various technologies and components, following a more open path and opting for the best of breed for each individual component of the conversational experience.
This principle is evident in the open architecture of Cognigy where each component of their framework can be used and implemented independently.
An Example Implementation Of Vertical Vector One
Datasets should reflect commonly occurring linguistic phenomena of real-life chatbots. These phenomena can include grammar and spelling mistakes, run-on words and sentences and missing punctuation.
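To make a clean dataset reflect these phenomena, noise can be injected synthetically. Below is a minimal sketch (the `add_noise` helper is hypothetical, standard library only) simulating missing punctuation, casual casing and run-on words:

```python
import random

def add_noise(utterance: str, rng: random.Random) -> str:
    """Simulate common real-life phenomena: missing punctuation,
    casual casing, and run-on words (a dropped space)."""
    text = utterance.rstrip(".!?")  # drop terminal punctuation
    text = text.lower()             # casual casing
    words = text.split()
    if len(words) > 1:
        # Run-on words: join two adjacent words at a random position.
        i = rng.randrange(len(words) - 1)
        words[i] += words[i + 1]
        del words[i + 1]
    return " ".join(words)

rng = random.Random(42)
print(add_noise("Where is my order?", rng))
```

Augmenting training utterances this way helps evaluation reflect real user input rather than idealized text.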
In the example below, 100 000 utterances were taken from a publicly available dataset, and submitted to HumanFirst with no linguistic flags and no categories or intents.
Only the utterance column was uploaded using HumanFirst…
As seen above, intent groupings are created by clustering customer utterances together, thereby creating provisional intents from the utterances themselves: an example of not starting with the “solution” and working your way back.
Granularity determines how specific clusters will be, and how many intent groupings are created. High granularity is very specific; dial granularity down to see a basic topical overview of conversations.
Cluster size can help expunge clusters which might not have enough data to constitute an intent. Cluster size can also be used to determine how many examples are in each intent grouping.
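The effect of granularity and minimum cluster size can be illustrated with a toy, greedy clustering sketch (illustrative only, not HumanFirst's actual algorithm; standard library only, with string similarity standing in for semantic similarity):

```python
from difflib import SequenceMatcher

def cluster(utterances, granularity):
    """Greedy single-pass clustering: higher granularity (a stricter
    similarity threshold) yields more, tighter clusters; lower it
    for a broad topical overview."""
    clusters = []  # each cluster is a list of utterances
    for u in utterances:
        for c in clusters:
            if SequenceMatcher(None, u, c[0]).ratio() >= granularity:
                c.append(u)
                break
        else:
            clusters.append([u])
    return clusters

utterances = [
    "reset my password", "please reset my password",
    "cancel my order", "cancel the order",
    "what is the weather",
]
fine = cluster(utterances, granularity=0.8)

# Expunge clusters with too little data to constitute an intent.
MIN_SIZE = 2
intents = [c for c in fine if len(c) >= MIN_SIZE]
```

Here the lone weather utterance survives clustering but is expunged by the minimum-size filter, while the password and order groupings remain as provisional intents.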
Seven Vertical Vectors In Conversational AI
Within the Conversational AI landscape there are seven vertical vectors and also a horizontal view. The verticals can be divided into three main groupings: Pre-Conversation, In-Conversation and Post-Conversation.
The seven vertical vectors are listed above, where respond & negotiate is constituted by NLU and personalization. Vector 5 is very much a niche market, as is vector 1.
As stated in the introduction, it is not uncommon for larger organizations to construct their Conversational AI landscape from various technologies, choosing the best of breed for each area of concern.
Horizontal View Of Conversational AI Ecosystems
In broad strokes, the Chatbot development framework landscape can be divided into four categories.
Making astute technology decisions is important: later in the process those tools will shape and influence the way you plan, develop and scale your chatbot. Impediments are usually system or framework related.
These four categories of chatbot development tools and frameworks differ in their approach to code and in their focus areas.
Category 1: the open-source, more technical NLP tools and chatbot development frameworks. These can be installed anywhere and have an open architecture; they offer no or only a limited GUI, are configuration-file and pro-code focused, and take a machine learning approach. In many cases they have a higher barrier to entry, but they scale well.
These environments demand astute technical planning for installation and operational management. They are often used as the underlying enabling technology by Category 3 software, and new features can be developed and the platform enhanced.
Category 2: the large-scale commercial offerings from cloud-based, big-tech companies. In some instances specific geographic regions can be selected, and they are seen as safe bets for large organizations. Solutions range from pro-code and low-code to no-code, with a lower barrier to entry; they are GUI focused, with little to no insight into or control over what happens under the hood, and little to no user influence on the product roadmap.
They tend towards rigid rule-based dialog state management, cost is most often not negotiable, and they are collaboration and group-design-development focused.
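Rigid rule-based dialog state management can be pictured as a fixed transition table. The sketch below is illustrative only (the states and intents are invented, not any vendor's API):

```python
# A fixed transition table: (current state, detected intent) -> next state.
TRANSITIONS = {
    ("start", "greet"): "ask_account",
    ("ask_account", "provide_account"): "ask_issue",
    ("ask_issue", "describe_issue"): "resolve",
}

def next_state(state: str, intent: str) -> str:
    # Any (state, intent) pair missing from the table falls through to a
    # fallback state -- the rigidity referred to above.
    return TRANSITIONS.get((state, intent), "fallback")

state = "start"
for intent in ["greet", "provide_account", "describe_issue"]:
    state = next_state(state, intent)
print(state)  # prints "resolve"
```

Anything the table does not anticipate lands in the fallback state, which is why purely rule-based flows feel brittle compared with learned dialog policies.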
Category 3: independent, alternative solution providers for Conversational AI, delivering an encapsulated product. The enabling technology under the hood is often not made known, though these products are frequently built on open-source NLP tools. Innovative approaches are often followed to the challenges of dialog state design, development and management.
They take a low-code to no-code approach, the possibility exists of these companies being acquired, and price is often more negotiable. Feature requests are more likely to be accommodated, and there is a lower barrier to entry and to getting going.
Category 4: natural language processing and understanding tools. Text or conversations can be analyzed for intent, named entities and custom-defined entities, and tasks like summarization, keyword extraction and language detection can often be performed. Data annotation and training data improvement GUI tools are available in some cases.
These also include tools for managing training data. They are easily accessible, but carry a higher technical barrier to entry, and are ideal for a high-level NLP pass on user input prior to NLU. They are not chatbot development frameworks and do not include features like dialog state management, chatbot response management and so on.
They are focused on wider language processing implementations and not just conversational agents, are often used for non-real-time, off-line conversational text processing, and often serve as underlying technology for Category 3 software.
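A simple example of such a pre-NLU pass is keyword extraction. The sketch below uses plain term frequency as a toy stand-in for the richer tools described above (the stop-word list is illustrative, standard library only):

```python
import re
from collections import Counter

# Illustrative stop-word list; real tools ship curated lists per language.
STOPWORDS = {"the", "a", "is", "my", "to", "and", "i", "of", "it"}

def keywords(text: str, top_n: int = 3):
    """Toy keyword extraction via term frequency."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

# "order" appears twice, so it ranks first.
print(keywords("My order is late and the order tracking page is down"))
```

Running such a pass before NLU surfaces what a conversation is about without requiring a full chatbot development framework.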
An Example Implementation Of Vertical Vector Seven
Below is a last example, in this instance of vector seven, where customer conversations are uploaded.
Conversational data needs a conversation ID per conversation, plus a timestamp, speaker and dialog-turn text for each turn.
From the dialog data, conversations are segmented, dialog bubbles are visible, and data exploration and annotation can commence.
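The required shape of the data can be sketched as follows (the rows and field values are hypothetical, standard library only): each row carries a conversation ID, timestamp, speaker and dialog-turn text, and the flat rows are then segmented into per-conversation turns:

```python
from collections import defaultdict

# Hypothetical rows: conversation ID, timestamp, speaker, dialog-turn text.
rows = [
    ("conv-1", "2023-01-05T10:00:00", "customer", "My card was declined"),
    ("conv-1", "2023-01-05T10:00:12", "agent", "Let me check that for you"),
    ("conv-2", "2023-01-05T11:30:00", "customer", "How do I reset my password?"),
]

# Segment the flat rows into ordered, per-conversation dialog turns,
# ready for exploration and annotation.
conversations = defaultdict(list)
for conv_id, ts, speaker, text in sorted(rows, key=lambda r: (r[0], r[1])):
    conversations[conv_id].append((speaker, text))

for conv_id, turns in conversations.items():
    print(conv_id, turns)
```

Once segmented this way, each conversation renders naturally as a sequence of dialog bubbles for annotation.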
Selecting the right software and solutions for any Conversational AI endeavor is becoming harder, and selecting the most appropriate software for the right vertical is crucial.
There are instances where some solutions cover multiple vectors. There are also instances where specialized software is focused on one or only a few vectors.
According to Deloitte research, 20% of patents in conversational AI relate to improving the training process. These innovations focus on automating and accelerating the training process to better understand users’ inputs and improve the quality of responses.
This research points to an increasing focus on vectors 1 and 7; responses to customers can only improve if the understanding of customers is enhanced.