Photo by Taylor Kopel on Unsplash

Conversational AI, Verticals & Pinecone

And How Detecting Similarity Is Becoming More Important

Cobus Greyling
7 min read · May 23, 2022


Introduction

The use-case and implementation of Large Language Models are clear within organisations like Apple, Google, AWS, etc. However, from an enterprise and customer-service perspective, the implementation use-case is perhaps not that clear, from a:

  • Cost,
  • ROI,
  • Requirement,
  • Fine-tuning and
  • Available NLU APIs perspective.

Before getting into Pinecone, vectors and searching similarity, let’s look at horizontals and verticals…

Horizontals

A while back I wrote about four categories of chatbots; a fifth has since emerged.

Category 1 covers the more open and flexible, often open-source frameworks like MindMeld, Rasa, DeepPavlov, etc.

Category 2 covers the traditional platforms like IBM Watson Assistant, Google, AWS Lex, etc.

Category 3 comprises the Gartner leaders: independent alternatives for Conversational AI, providing an encapsulated product.

Category 4 is a wider Natural Language Processing and Understanding tools category, ideal for performing an NLP pass on user input prior to NLU, and focused on wider language-processing implementations rather than just conversational agents.

The OpenAI Playground, showing how a chatbot can be created with only a short sentence describing it.

Category 5 covers Large Language Models, where the power of the platforms manifests in Natural Language Generation prowess, named entities, general Q&A, maintaining a conversation, text classification and more. More on this later…

The Code from the OpenAI Language API Playground can be copied and used in a notebook or application.
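As a minimal sketch of what that copied code can look like in a notebook (the model name, prompt and parameters below are illustrative assumptions rather than the exact Playground export), using the 2022-era openai Python package:

import os
import openai

# Authenticate with an API key, assumed to be set as an environment variable.
openai.api_key = os.getenv("OPENAI_API_KEY")

# A short description of the chatbot is enough to prime the model.
prompt = ("The following is a conversation with a friendly travel assistant.\n\n"
          "Human: Hello, who are you?\nAI:")

response = openai.Completion.create(
    engine="text-davinci-002",   # model name is an assumption
    prompt=prompt,
    temperature=0.7,
    max_tokens=150,
    stop=["Human:", "AI:"])

print(response["choices"][0]["text"])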

Verticals

As the table stakes increase with the platforms in Category 3, reaching parity will become increasingly hard for new platforms. Add to this an arms race in functionality: once parity is reached, market share still needs to be established, and the product differentiated…a steep mountain to climb indeed.

So, as the Conversational AI landscape matures, vertical vectors will become more important. These verticals cover gaps or needs that arise as the Conversational AI landscape grows. In broad terms, these vectors are:

  • Training Data (Data Preparation & Structuring)
  • Converse, Engage, Resolve (Dialog State Management & NLU)
  • Respond & Negotiate (Personalisation)
  • Listen, Learn & Improve (Automated Bot Testing & QA)
  • TTS & STT (Voice Enablement)
  • Transcript Review & Annotation (Conversation Data Analysis)

There are a number of companies servicing these vertical vectors already, but the list will grow as more niches are discovered.

The Seven Vertical Vectors

When it comes to voice, the work done by Respeecher, DeepGram, Resemble and others is truly remarkable. The advances in this area are creating such a well-defined vertical that traditional speech-synthesis solutions will struggle to keep up with the innovation.

Large Language Models

Large Language Models are forming a category of their own; of these, OpenAI, co:here and AI21 Labs are the first to come to mind. (I have not prototyped on AI21 Labs yet.)

These models are impressive in their ability to process unstructured natural language without any prior fine-tuning or training data.

The seven high-level categories listed in the OpenAI Language API Playground.

General language tasks can be performed, like summarisation, extracting keywords and phrases, language translation, copywriting and more.

Text Generated in OpenAI playground from a simple description.

Large Language Models are indeed impressive and lend immense language capability to systems. There are future implementations we are most probably not even thinking of now. For now, there are a few considerations:

  • It seems like organisations are struggling to understand how to harness these solutions, establish ROI and find practical implementations for them.
  • Fine-tuning is a big concern, especially for Natural Language Generation. In general, fine-tuning seems to take a long time, and the turn-around time from training to testing is not efficient.
  • Organisations are finding semantic search important: finding similarities in data, capturing semantics from user input and creating semantically similar clusters (a minimal similarity sketch follows this list). Here HumanFirst comes to mind; Nuance Mix also has a function for clustering user utterances, even though it needs refining. And then, of course, co:here.
  • Traditional NLU APIs offer the ability to create highly customised and very granular fine-tuned NLU models. The intents and entities have structure, which is incorporated into the fine-tuning data.
  • Chatbot framework NLU engines require very small sets of training data, train in a short period of time (a few minutes) and allow for immediate testing of the NLU.
  • In many instances incremental or partial training is available (Rasa & Cognigy).
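To make the semantic-similarity point concrete, here is a minimal sketch using only numpy. The utterances and embedding values are invented for illustration; a real system would obtain the embeddings from one of the models discussed above.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: the angle between two embedding vectors,
    # independent of their magnitudes.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" for three stored utterances (illustrative values only).
utterances = {
    "I want to reset my password":       [0.9, 0.1, 0.2],
    "How do I change my login details?": [0.8, 0.2, 0.3],
    "What is the weather tomorrow?":     [0.1, 0.9, 0.7],
}

# Embedding of a new user input, also invented for the example.
query = [0.85, 0.15, 0.25]

# Rank the stored utterances by similarity to the new input.
ranked = sorted(utterances.items(),
                key=lambda item: cosine_similarity(query, item[1]),
                reverse=True)

for text, vector in ranked:
    print(round(cosine_similarity(query, vector), 3), text)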

What Is Pinecone?

What if you had access to a vector database, to which you can upload data, and perform a true semantic search on the data, returning highly accurate results?

Pinecone integrates with OpenAI, Haystack and co:here. Custom integration is also possible.

Pinecone allows data to be uploaded into a vector database, on which true semantic search can be performed.

Not only is conversational data highly unstructured, it can also be complex. Vector search and vector databases allow for similarity searches.

In the words of Pinecone…

Semantic search: Vector databases store and index vector embeddings from Natural Language Processing models to understand the meaning and context of strings of text, sentences, and whole documents for more accurate and relevant search results.

Searching with natural language to find relevant results works much better than users needing to know specifics of the data.

With Pinecone, vector databases can be built easily for vector search applications.
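As a minimal sketch of building such an index with the Python client (the index name, dimension and metric are illustrative assumptions; the API key and environment come from the console, as described below):

import pinecone

# Initialise the client with an API key and environment from the Pinecone console.
pinecone.init(api_key="xxxxxxxx-xxxx-xxxx-xxx-xxxx", environment="us-west1-gcp")

# Create a small index for 3-dimensional vectors, scored with cosine similarity.
pinecone.create_index("pod1", dimension=3, metric="cosine")

# Confirm the index exists in this project.
print(pinecone.list_indexes())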

The Pinecone management console is minimalistic and effective, allowing users to manage their environment.

API keys are managed here and are used in notebooks or applications. There is a free tier for experimentation.

As seen below, Pods can be managed to some extent within the console.


The Simplest Pinecone Demo

This is the simplest Pinecone demo application; it is an excellent way to understand and grasp the basic concepts.

The Pinecone client is installed, and vectors are created with a set of values.

pip install pinecone-client

import pinecone
import pandas as pd

# Initialise the client with an API key and environment from the Pinecone console.
pinecone.init(api_key="xxxxxxxx-xxxx-xxxx-xxx-xxxx", environment="us-west1-gcp")

# Connect to an existing index named "pod1".
index = pinecone.Index(index_name="pod1")

# Five toy 3-dimensional vectors, each with an id.
df = pd.DataFrame(
    data={
        "id": ["A", "B", "C", "D", "E"],
        "vector": [
            [1., 1., 1.],
            [1., 2., 3.],
            [2., 1., 1.],
            [3., 2., 1.],
            [2., 2., 2.]]
    })
df

# Upsert the (id, vector) pairs and check the index statistics.
index.upsert(vectors=zip(df.id, df.vector))
index.describe_index_stats()

# Query the index for the three nearest vectors to [1, 2, 3].
index.query(
    queries=[[1., 2., 3.]],
    top_k=3,
    include_values=True)

The index is queried with the vector [1, 2, 3], and the top 3 matches are returned, each with a score and its values.

{'matches': [],
 'namespace': '',
 'results': [{'matches': [{'id': 'B',
                           'score': 0.99999994,
                           'values': [1.0, 2.0, 3.0]},
                          {'id': 'E',
                           'score': 0.925820112,
                           'values': [2.0, 2.0, 2.0]},
                          {'id': 'A',
                           'score': 0.925820112,
                           'values': [1.0, 1.0, 1.0]}],
              'namespace': ''}]}
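These scores are consistent with cosine similarity: B is an exact match for the query, while A ([1, 1, 1]) and E ([2, 2, 2]) are parallel to each other and therefore sit at the same angle to the query, which is why their scores are identical. A quick check with numpy:

import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

query = [1., 2., 3.]
print(cosine(query, [1., 2., 3.]))  # B -> 1.0 (exact match)
print(cosine(query, [2., 2., 2.]))  # E -> 0.9258...
print(cosine(query, [1., 1., 1.]))  # A -> 0.9258...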

Semantic search denotes search with meaning, as distinguished from lexical search where the search engine looks for literal matches of the query words or variants of them, without understanding the overall meaning of the query.

Conclusion

Pinecone enables large bodies of data to be searched. A number of search options are available, but text search and question answering are the most relevant to NLP.

The example applications include semantic text search, question answering, video recommendations, audio similarity search, a personalised article recommender and more.

Pinecone has integrations with OpenAI, co:here and Haystack.

The co:here example, for instance, uses co:here to generate language embeddings, which can then be stored in Pinecone and used for semantic search.
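As a minimal sketch of that flow (the API keys are placeholders, and the embedding model name, index name and texts are assumptions for illustration rather than the official example):

import cohere
import pinecone

co = cohere.Client("COHERE_API_KEY")            # placeholder key
pinecone.init(api_key="PINECONE_API_KEY",       # placeholder key
              environment="us-west1-gcp")

texts = ["How do I reset my password?",
         "Where can I change my login details?",
         "What is the weather tomorrow?"]

# Generate language embeddings with co:here (model name is an assumption).
embeddings = co.embed(texts=texts, model="small").embeddings

# Create a Pinecone index whose dimension matches the embedding size.
pinecone.create_index("cohere-demo", dimension=len(embeddings[0]), metric="cosine")
index = pinecone.Index(index_name="cohere-demo")

# Upsert (id, vector) pairs, then search with the embedding of a query.
index.upsert(vectors=[(str(i), emb) for i, emb in enumerate(embeddings)])
query_embedding = co.embed(texts=["password reset"], model="small").embeddings[0]
print(index.query(queries=[query_embedding], top_k=2))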


Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com