Voice Data & Contact Centre AI Use-Cases

Audio is abundant within enterprises, and with existing ASR and PII-redaction technology, voice data is ready for use in multiple Contact Centre AI use-cases.

Cobus Greyling
4 min read · Nov 24, 2022


Firstly, customer voice data is abundant within enterprises…

Most organisations transcribe recorded customer calls to the contact centre in batch mode and save the output. Calls are recorded by default and often serve as the actual agreement or contract for a transaction.
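Before these transcripts can be reused as training data, PII must be redacted. A minimal sketch of the idea, using simple regular expressions for illustration (production PII redaction relies on trained NER models, not bare regexes):

```python
import re

# Order matters: redact card numbers before the looser phone pattern.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d{4}[\s-]?){3}\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\s-]?){7,15}\b"),
}

def redact(transcript: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript
```

For example, `redact("card 4111 1111 1111 1111")` yields `"card [CARD]"`, leaving the rest of the utterance usable as training data.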

With the advent of voicebots fronting the contact centre, or at least intelligent IVRs, even more voice data is collected and stored.

Secondly, Deloitte cited access to good training data, and the harmonisation of AI projects with organisations’ existing processes and systems, among the main reasons AI projects do not realise their intended benefits.

Thirdly, contact centres are an ideal space to implement AI for at least three reasons…

1️⃣ Training data is in abundance from customer conversations.

2️⃣ A fragmented approach can be followed to address the roughly 18 use-cases of Contact Centre AI (CCAI). This piecemeal approach enables the implementation of smaller, directed AI projects, increasing the likelihood of success through easier orchestration with existing processes and procedures.

3️⃣ Value realisation can be measured very directly and accurately by making use of CSAT, tNPS (touch-point NPS), containment, and cross-channel containment. AI can enable intelligent triaging of customer queries.

The image above shows four basic CCAI use-cases where voice can be implemented. Speech analytics can help with topics or categorising data. This data in turn can feed into intent discovery and support intent maintenance, which can be divided into four actions:

  • New Intents with accurate training data
  • Update training data of intents
  • Merge or split intents
  • Create and update nested/hierarchical intents
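The four maintenance actions above can be sketched against a simple intent store. This is an illustrative data structure only; real NLU platforms expose their own APIs for these operations. The names `Intent` and `merge_intents` are assumptions for the sketch:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Intent:
    name: str
    utterances: List[str] = field(default_factory=list)
    parent: Optional[str] = None  # supports nested/hierarchical intents

def merge_intents(a: Intent, b: Intent, new_name: str) -> Intent:
    """Combine two overlapping intents, de-duplicating training utterances."""
    merged = list(dict.fromkeys(a.utterances + b.utterances))  # preserves order
    return Intent(name=new_name, utterances=merged)
```

Splitting an intent is the inverse operation: partition the utterances into two new `Intent` objects, optionally sharing a `parent` to keep the hierarchy intact.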

A big advantage of voicebots is passive identification of enrolled users via voice biometrics, along with passive updates to the voice training data of authenticated users.

Real-time agent coaching is an undervalued element of voicebots… the basic premise is that the voicebot is conferenced into the call as a third party. The user’s speech is transcribed in real time and entered, during the call, into a chatbot on the agent’s desktop.

Hence the agent is on the call, simultaneously hearing and reading what the caller is saying, and getting real-time responses from the agent-desktop chatbot (agent assist) on what course of action to take.
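The agent-assist loop described above can be sketched as a small pipeline. The transcript stream and suggestion function are hypothetical stand-ins, not a specific vendor API:

```python
from typing import Callable, Iterable, Iterator, Tuple

def agent_assist(
    transcript_stream: Iterable[str],   # ASR output, one caller utterance at a time
    suggest: Callable[[str], str],      # agent-desktop chatbot / NLU backend
) -> Iterator[Tuple[str, str]]:
    """Pair each transcribed caller utterance with a suggested agent action."""
    for utterance in transcript_stream:
        yield utterance, suggest(utterance)
```

In a real deployment the stream would arrive over a streaming ASR connection and the suggestions would render in the agent-assist panel; the loop itself stays this simple.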

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

https://www.linkedin.com/in/cobusgreyling

