Zero-Shot Intent Classification via HuggingFace🤗

Currently, everyone is trying to work out how to use LLM implementations in Conversational AI Frameworks… but there are other options with very similar functionality for this specific use-case.

Cobus Greyling
5 min read · Jan 24, 2023


Virtually all Conversational AI Frameworks (CAIFs) have announced LLM integration in the past few weeks. The focus of these LLM implementations has primarily been on intents.

This functionality includes auto-generated synthetic intent training examples: in essence, synonym sentences are generated from one or a few example sentences.

In some instances intent classification is performed on a single intent name.

The focus on intents is interesting and raises two questions…

⏺ Is intent detection and development seen as an impediment to successful chatbot implementations?

⏺ Or is it the most obvious and easiest first step to implementing LLMs?

If zero-shot intent classification is the goal in and of itself, there are other options to achieve it without making use of recently launched Large Language Models like GPT-3.

Below is a short demonstration of how zero-shot intent classification can be performed via HuggingFace🤗.

🤗 Zero-Shot Intent Classification

A single user sentence is submitted:

"I want to close my savings account"

This sentence is then compared to a pre-defined list of intent labels:

'Accounts', 'Savings', 'Cheque', 'Credit Card', 'Mortgage', 'Close', 'Open'

No supplementary data is required; the intent label names themselves are leveraged to match the sentence to one or more intent classes. The image below shows the input parameters: the intent labels and the user input.

The data is submitted to the language model for zero-shot intent classification. The output is shown on the right, ranked by relevance: Savings, Close and Accounts.

Below is the model card from HuggingFace🤗, where you can define your input via a no-code interface and click the Compute button to see the results within seconds.

The example above can also be run within a Colab Notebook; the complete code you can copy and paste is listed below:

pip install transformers

from transformers import pipeline

# Load the zero-shot classification pipeline
# (facebook/bart-large-mnli is the default model for this task)
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

sequence_to_classify = "I want to close my savings account"
candidate_labels = ['Accounts', 'Savings', 'Cheque', 'Credit Card', 'Mortgage', 'Close', 'Open']
classifier(sequence_to_classify, candidate_labels)

And the output:

{'sequence': 'I want to close my savings account',
 'labels': ['Savings',
            'Credit Card', …],
 'scores': [0.5641598701477051, …]}
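The result is a plain dictionary with parallel `labels` and `scores` lists, already ranked by score, so picking the winning intent (or every intent above a confidence threshold) takes only a few lines of post-processing. A minimal sketch — the label ranking follows the output above, but all scores other than the top one are hypothetical placeholders:

```python
# Post-process a zero-shot classification result.
# Only the top score comes from the output above; the rest are
# hypothetical placeholder values for illustration.
result = {
    "sequence": "I want to close my savings account",
    "labels": ["Savings", "Close", "Accounts", "Cheque",
               "Credit Card", "Mortgage", "Open"],
    "scores": [0.564, 0.21, 0.12, 0.05, 0.03, 0.02, 0.01],
}

# Top-ranked intent: the lists are already sorted by score.
top_intent = result["labels"][0]

# Multi-intent reading: every label above a confidence threshold.
threshold = 0.10
matched = [label for label, score in zip(result["labels"], result["scores"])
           if score >= threshold]

print(top_intent)  # Savings
print(matched)     # ['Savings', 'Close', 'Accounts']
```

Note that by default the pipeline normalises scores across all candidate labels; passing `multi_label=True` to the classifier scores each label independently instead.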

The image below is a complete view of the Colab notebook from start to finish.

⭐️ Please follow me on LinkedIn for updates on Conversational AI ⭐️

In Closing, My Objections To This Approach Are:

◘ How is the initial list of intent labels defined in the first place?

◘ Is the defined list of intent labels aligned with the conversations users want to have? Or is there misalignment between the (user) desire path and the (CxD) design path?

◘ The approach does not solve for the long tail of the intent distribution.

◘ Intents are often thought up and not ground-truthed.

◘ This approach is not aligned with the Gartner Deployment Guide, which recommends using existing conversational data for intent detection.

However, this approach can be used as a short-term bootstrap approach to collect real-world conversational data.
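As a bootstrap, the classifier's confidence can double as a routing signal: high-confidence predictions are handled directly, while low-confidence utterances fall through to a fallback and are logged as real-world conversational data for later intent design. A minimal sketch of that idea — the threshold value and function names are assumptions, not part of the HuggingFace API:

```python
from typing import List, Tuple

FALLBACK = "fallback"

def route_intent(labels: List[str], scores: List[float],
                 threshold: float = 0.5) -> Tuple[str, bool]:
    """Return (intent, needs_review) for a ranked zero-shot result.

    Low-confidence utterances are routed to a fallback and flagged
    for review, so real conversations can be collected and
    ground-truthed into a better intent list over time.
    """
    top_label, top_score = labels[0], scores[0]
    if top_score >= threshold:
        return top_label, False
    return FALLBACK, True

# High confidence: handled directly.
print(route_intent(["Savings", "Close"], [0.56, 0.21]))   # ('Savings', False)
# Low confidence: routed to fallback and flagged for review.
print(route_intent(["Mortgage", "Open"], [0.31, 0.29]))   # ('fallback', True)
```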


I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.
