The End of Omni-Channel

This is how omni-channel ends…

Introduction

The well-known term omni-channel is still widely used in the customer experience (CX) field, yet most companies have not mastered it: their web, mobile application and IVR channels are not aligned in terms of content, context and continuity.

Historical CX Architecture

A company is deemed “omni-channel enabled” if its available channels (IVR/call center, mobile application and web) are aware of each other in terms of changes in content and functionality, and if user activity is tracked across channels.

Incrementalism

Many hold the view that new channels should merely be added to the existing omni-channel environment: Interactive Text Response (ITR), social media, conversational interfaces and the like.

Impediments

There are a few impediments to growing the omni-channel environment. One is human bandwidth: humans have real limitations in data input and output. A second is that users have to learn a different user interface for each channel.

The old paradigm is humans learning the OS/UI. The new paradigm is the OS/UI adapting to the language, behavior and gestures of the user.

Another impediment for the omni-channel approach to CX is app fatigue. Users are done with downloading apps, managing profiles, updates etc.

Decision fatigue is also at play. Mobile users are very discerning in terms of which push notifications they allow to surface on their devices.

Traditional omni-channels demand undivided attention.

Enter social and messaging (ITR, chatbots) for multi-tasking in CX.

Text/SMS based Chatbot Integration with Instant Mobile App

The New Paradigm

The new paradigm in CX demands the abandonment of rigid channels extending from an organisation to its customers.

A company has services it wants to deliver, and that service delivery (SD) has to find its way into the environment of the user, especially the digital environment.

Today we have unprecedented access to computing power and especially cognitive computing power, and at a very low cost.

Access to Services via Voice in a multi-modal environment.

New paradigm — Services Living in the User’s Environment

The mobile app economy is in serious decline. Historically, services have been embedded in mobile devices and anchored to a touch screen.

Users are disinclined to engage on a phone call; but can easily spend two hours a day on their socials.

Hence the challenge is how to deliver services effectively: the right service, at the right time, with ambient services anticipating the needs and requirements of the user.

Services have to live and exist in the user’s environment. The user must be able to have a natural conversation with your service, and your service should be able to talk back and converse with the user. Because of the limitations of interface speed and human bandwidth, your service must also be able to recognize gestures, see and interpret the user’s facial emotion, and recognize the person interacting with it.

The user should be able to interact with your service via social media, type in natural language and receive automated help. As a user moves physically from home, to car, to work, context and continuity need to follow them (ambient orchestration), while relevant information and services surface via digital listening. Cognitive computing makes communication possible via vibrations, speech, gestures, expressions and more. Everything will have an OS: an OS for your house, your car, your lights, your entertainment. All of this needs to be orchestrated.

Google Home Assistant Embedded in a Raspberry Pi

A large number of options leads to decision fatigue, and decision fatigue leads to bad choices. Users want their services to return to simplicity. They are willing to delegate their decisions and choices, and are open to anticipatory services and a simplified presentation.

Banking Detail Change via WeChat chatbot Example

Instead of the user learning user interfaces, the ambient cognitive environment will adapt and learn the user’s behavior, language, speech, emotion, gestures etc.

Companies designing, developing and delivering services are really choice architects.

The frightening part is that most of these technologies are available to anyone with a credit card, in the cloud, from the likes of Amazon, Microsoft and IBM. Which brings the following quote to mind: “Technology is a commodity whereas execution is an art.”

Multimodality: Text based chatbot with conversational Interface

Key elements of the Environment Service are:

· Active engagement of individuals, groups or even crowds.

· Users want experience and not ownership

· Services and not necessarily ownership

· Access not ownership

· Sensing versus procedure

· Intuition versus instruction

· Disappearing Apps

· App Utilities

· Fatigue to find functionality

The user environment is enabled by:

· Social channels

· Conversational Interfaces

· Wearables

· Hearables

· Nearables

· Ambient Orchestration

· Tangible User Interfaces


Tools and Technologies of the new paradigm are:

· Conversational Customer Care

· Cognitive Linguistic Analytics

· Voice Assistants

· Multimodality via Cognitive Capabilities; Vision, Conversational interfaces, Gestures, Displays

· Digital Listening and Surfacing

· Tangible User Interfaces (TUI)

· Services living in the environment — not mobile anchored

· Ambient Orchestration with contextual awareness and continuity

“We shape our tools and then our tools shape us.” ~ Marshall McLuhan

Chatbot integration with Facebook Messenger. Conversational interface for Self-Service

New Paradigm Tools — Conversational Customer Care

As users interact with friends and family on their socials (Facebook Messenger, Twitter, Text/SMS, Slack, WeChat), they should be able to interact with a service or company in the same way: Interactive Text Response (ITR). This interaction should not be menu driven, but should allow the user to use a conversational tone and enter free text. From that input, the ITR system can extract language, meaning, intent, tone, sentiment and so on.

Linguistic models can be built to extract key phrases, key words and the like. All these elements are part of a greater technology grouping: cognitive linguistic analysis.

Chatbot Conversational Interface for Twitter Direct Messages (DM)

Language Detection is important in serving the customers, and knowing which language model to present and apply.
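Production systems delegate language detection to a cloud service; the idea can be illustrated with a toy stopword-overlap detector (the word lists and thresholds below are hypothetical, purely for illustration):

```python
# Minimal language detection sketch: score each candidate language by
# how many of its common stopwords appear in the input text.

STOPWORDS = {
    "en": {"the", "and", "is", "to", "of", "you", "please"},
    "af": {"die", "en", "is", "nie", "jy", "asseblief"},
    "fr": {"le", "la", "et", "est", "vous", "merci"},
}

def detect_language(text: str) -> str:
    """Return the language code whose stopwords best match the text."""
    tokens = set(text.lower().split())
    scores = {lang: len(tokens & words) for lang, words in STOPWORDS.items()}
    return max(scores, key=scores.get)
```

A real deployment would replace the lookup with a trained model or a cloud identify endpoint, but the contract is the same: text in, language code out, which then selects the language model to apply.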

Automated Language Detection Chatbot

Language Translation is useful for leveraging investments in language models. Live agent chat interactions can also be translated on the fly, allowing agents to service customers in other languages.

Real-time Language Translation Chatbot — Microsoft Azure
IBM Watson Based Real-time Language Translation Chatbot

Natural Language Understanding is a technology whereby text input from a user can be passed to an interface which can extract meaning and intent from the text. Hence text input from a user can be understood cognitively by your ITR robot, and appropriate services and options can be extended to the user.
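Commercial NLU services learn intents from training examples; a toy rule-based sketch shows the shape of the input and output (the intent names and keyword lists here are hypothetical stand-ins):

```python
import re

# Toy NLU sketch: map free text to an intent plus simple entities.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "funds", "available"},
    "update_details": {"change", "update", "edit"},
    "report_fraud": {"fraud", "stolen", "unauthorized"},
}

def understand(text: str) -> dict:
    """Return the best-matching intent and any currency amounts found."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    intent = max(INTENT_KEYWORDS,
                 key=lambda i: len(tokens & INTENT_KEYWORDS[i]))
    if not tokens & INTENT_KEYWORDS[intent]:
        intent = "fallback"  # nothing matched; hand off or re-prompt
    amounts = re.findall(r"\$\d+(?:\.\d{2})?", text)
    return {"intent": intent, "amounts": amounts}
```

The fallback branch matters in practice: when no intent matches, the ITR robot should re-prompt or escalate rather than guess.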

Speech Natural Language Understanding Interface

Cognitive Linguistic Analysis — Tone and Sentiment Extraction allows for the following elements to be detected, identified and quantified:

  • Emotion (anger, disgust, fear, joy and sadness),
  • Social Propensities (openness, conscientiousness, extroversion, agreeableness, and emotional range),
  • Language styles (analytical, confident and tentative)
Chatbot Example with Sentiment Analysis

This augmented digital listening allows for precise responses to customers, understanding not only what the customer is saying, but how they feel in terms of emotion, social propensities and even language styles.
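Services such as tone analyzers return calibrated scores per emotion; a minimal lexicon-count sketch (with hypothetical word lists) conveys the mechanics:

```python
# Lexicon-based tone scoring sketch: count hits from small emotion
# word lists and normalize by the number of tokens in the message.

EMOTION_LEXICON = {
    "anger": {"furious", "outraged", "unacceptable"},
    "joy": {"great", "thanks", "wonderful"},
    "sadness": {"disappointed", "unhappy", "sorry"},
}

def tone_scores(text: str) -> dict:
    """Return a score in [0, 1] per emotion for the given message."""
    tokens = text.lower().split()
    total = max(len(tokens), 1)
    return {emotion: sum(t in words for t in tokens) / total
            for emotion, words in EMOTION_LEXICON.items()}
```

Real systems use trained classifiers rather than word lists, but the output shape is similar: a score per emotion that downstream routing logic can act on, for example escalating high-anger messages to a live agent.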

Multimodality can be added where the user is able to send voice.

Image Recognition Example

Upload pictures which can be interpreted.

Reading Emotion from an Image

Upload handwritten notes which can be read digitally and passed to language models.

Handwritten notes read and interpreted

Pictures can be read and interpreted. Faces can be recognized.

Cognitive Computing: Interpreting Images
Sentiment and Tone Analysis using Aspect CXP and IBM Watson

New Paradigm Tools — Voice Assistants

Voice assistants are here to stay. But this is about more than a voice assistant or a smart speaker: it is a speech interface. An interface a user can talk to, be understood by, and use to action something or access a service.

In terms of devices, Amazon Echo (Alexa) is the most prevalent, with Google Home a distant second, followed by a slew of other devices.

Amazon Echo: Alexa SSML Demo
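SSML is the standard markup a skill uses to shape spoken output (pauses, emphasis, prosody). A small sketch of building such a response; the helper function itself is hypothetical, while `<speak>`, `<break>` and `<emphasis>` are standard SSML tags:

```python
# Build an SSML response string for a voice assistant skill.

def ssml_response(text: str, pause_ms: int = 300, emphasize: str = "") -> str:
    """Wrap text in <speak>, optionally emphasizing one word or phrase
    and appending a short pause at the end."""
    if emphasize:
        text = text.replace(
            emphasize, f'<emphasis level="strong">{emphasize}</emphasis>')
    return f'<speak>{text}<break time="{pause_ms}ms"/></speak>'
```

For example, `ssml_response("Your balance is available", emphasize="balance")` yields a `<speak>` document that stresses the word "balance" and pauses briefly before the session closes.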

By the numbers:

Predictions

· 50% of all searches will be voice searches by 2020

· About 30% of searches will be done without a screen by 2020

· There will be 21.4 million smart speakers in the US by 2020

· By 2019, the voice recognition market will be a $601 million industry

Current Usage

· This year (2017), 25 million devices will be shipped, bringing the total number of voice-first devices to 33 million in circulation

· Google voice search queries in 2016 are up 35x over 2008

· 40% of adults now use voice search once per day

· Cortana now has 133 million monthly users

Amazon Echo: Alexa Conversational Interface

Automatic Speech Recognition (ASR) allows the device/interface to receive phrases as spoken.

Once the phrase is received and converted to text, the same elements used in ITR come into play: NLU and the full suite of cognitive linguistic analysis tools can be employed to make sense of the user’s speech and respond in speech, not text.

As in any conversation, context is important. A conversation between two humans also has a dialog that is directed by one or both parties; the device should try to direct the dialog toward the best possible outcome.
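Directing a dialog is often implemented as slot filling: the system asks for whichever required piece of information is still missing. A minimal sketch, with hypothetical slot names:

```python
# Slot-filling dialog sketch: the system steers the conversation by
# prompting for the first required slot that has not been filled yet.

REQUIRED_SLOTS = ["account_number", "new_address"]

def next_prompt(filled: dict) -> str:
    """Given the slots collected so far, return the next system turn."""
    for slot in REQUIRED_SLOTS:
        if slot not in filled:
            return f"Please provide your {slot.replace('_', ' ')}."
    return "Thank you, your details have been updated."
```

Each user turn is run through NLU to fill slots, and `next_prompt` keeps the dialog on track until the task can complete.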

Already there is bidirectional speech between users and devices like lights, smoke detectors, thermostats, cars, navigation systems etc.

New Paradigm Tools — Cognitive

Cognitive computing allows for computers to receive input from users in a natural way.

The interface can read and interpret gestures and, through passive optical sensing, detect different people, emotions and movements.

Emotion is read from Picture via Chatbot

The service can listen, receive voice input and detect who is speaking. The service has a display to show information and content but also a voice to speak to the user.

Chatbot with Language Detection

New Paradigm Tools — Tangible User Interfaces

In the past, digitization required users to move into the digital world; Augmented Reality and Virtual Reality (AR/VR) are a case in point. Now there is a movement of digital interfaces merging with physical entities: interfaces or surfaces display information, and physical objects or elements can be touched, manipulated or moved to ‘transmit’ information. Tangible user interfaces allow data to be transmitted through everyday objects and actions.

New Paradigm Tools — Living Services

The living service is not anchored in a device or a specific interface like a touch screen; it lives in the user’s environment. The user does not have to learn the service’s UI or OS; instead the service learns and adapts to the user. The living service is tuned to data and digital listening, and surfaces relevant data at the right time, in the right place and through the right modality.

The limitation of human bandwidth is circumvented by multimodality where data is exchanged between the user and the service by almost all the senses.

Thus the user can interact via conversation, gesture, movement, emotion and tone, visually, and by sharing images and handwritten notes. The user’s environment is the interface.
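A multimodal living service can be sketched as a dispatcher that routes each input to the right processing pipeline; the handler names below are illustrative, not a real API:

```python
# Multimodal input routing sketch: dispatch each incoming payload to a
# handler based on its modality, with a graceful fallback.

def handle_text(payload):   return f"NLU on: {payload}"
def handle_image(payload):  return f"vision analysis on: {payload}"
def handle_speech(payload): return f"ASR then NLU on: {payload}"

HANDLERS = {"text": handle_text, "image": handle_image, "speech": handle_speech}

def route(modality: str, payload: str) -> str:
    """Send the payload to the pipeline for its modality."""
    handler = HANDLERS.get(modality)
    if handler is None:
        return "Sorry, that input type is not supported yet."
    return handler(payload)
```

Adding a new modality (say, gestures) then means registering one more handler rather than rebuilding the service.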

SMS/Text Interface with Instant Mobile Application

New Paradigm Tools — Ambient Orchestration

Ambient orchestration is the process of understanding the movement, habits, speech, emotion, appearance and preferences of a user, and orchestrating a service accordingly.
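At its core, ambient orchestration means one session context that follows the user across devices so continuity is never lost. A minimal sketch, with hypothetical device and field names:

```python
# Ambient orchestration sketch: a single context object travels with
# the user from device to device, keeping task state intact.

class AmbientContext:
    def __init__(self, user: str):
        self.user = user
        self.device = None
        self.state = {}  # e.g. current task, last intent, preferences

    def handover(self, device: str) -> str:
        """Move the session to a new device without losing state."""
        self.device = device
        return f"{self.user} continued on {device}"
```

When the user leaves home and gets into the car, the service hands the same context to the car's interface, so a half-finished task simply resumes.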

Scene detection and description via a Chatbot

This requires trust from the user and some kind of consent, and is a privilege reserved for certain brands and companies. Subconsciously we grant some organisations this right, but not others: users experience some companies as helpful, thoughtful and tuned to their needs, and others as unwelcome, intrusive and unethical.

I have been heavily influenced by the thought leadership of the following organisations:

Opus Research, Fjord Design & Innovation, Aspect Software, IBM Cloud & Ocular.

Written by

NLP/NLU, Chatbots, Voice, Conversational UI/UX, CX Designer, Developer, Ubiquitous User Interfaces. www.cobusgreyling.me
