Photo by Stefan Widua on Unsplash

Create A Telegram QnA Chatbot Using 🤗HuggingFace Inference API

A Fully Integrated Chatbot With Less Than 30 Lines Of Code

Cobus Greyling
6 min read · Jul 21, 2022


Introduction

Compared with other LLM providers such as Co:here, AI21labs and OpenAI, the 🤗HuggingFace website can seem less clear and crisp about use-cases and implementations, and it can be daunting to navigate all the options and identify a specific use-case.

Hence, here is an example implementation of 🤗HuggingFace in the simplest terms possible.

The elements for this demo are:

  1. A Colab Notebook
  2. The Telegram Messaging Platform
  3. 🤗HuggingFace Pipelines to access pre-trained models for inference.

The diagram below shows the sequence of events from a user’s perspective. Once the user sends a message, the bot guides the user on the next expected dialog entry, while the Colab notebook facilitates the communication between 🤗HuggingFace and Telegram.

Inference Models and API

The inference models and API allow for the immediate use of pre-trained transformers.
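For reference, the hosted Inference API can also be called over plain HTTP rather than through the transformers library. The sketch below is a minimal, illustrative example and not code from this article; it assumes the api-inference.huggingface.co endpoint, a personal 🤗HuggingFace access token, and the same question-answering model used later on.

import requests

# Hosted Inference API endpoint for a question-answering model
API_URL = "https://api-inference.huggingface.co/models/huggingface-course/bert-finetuned-squad"
headers = {"Authorization": "Bearer YOUR_HF_ACCESS_TOKEN"}  # personal access token

payload = {
    "inputs": {
        "question": "What is the shore like?",
        "context": "Cape Agulhas has a gradually curving coastline with rocky and sand beaches.",
    }
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # e.g. {'answer': ..., 'score': ..., 'start': ..., 'end': ...}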

Pipelines are a quick and easy way to get started with NLP using only a few lines of code.

Pipelines in the words of 🤗HuggingFace:

The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the task summary for examples of use.

The pipeline() function can be used for immediate inference by leveraging pre-trained models and tokenizers for text tasks like:

  • Sentiment analysis of text and user input (a minimal sketch follows this list).
  • Text generation from an input prompt.
  • Named entity recognition (NER).
  • Fill-mask: filling in the blanks in a text with masked words.
  • Summarisation.
  • Translation of text into another language.
  • And question answering: extracting the answer from the context, given some context and a question. This is the pipeline we will be using for the example below…
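As a quick illustration of how little code a pipeline needs, here is a minimal sentiment-analysis sketch. It relies on whichever default model transformers selects for the task, and the input sentence is simply an illustrative example.

from transformers import pipeline

# Create a sentiment-analysis pipeline; transformers downloads a default
# pre-trained model and tokenizer for the task.
classifier = pipeline("sentiment-analysis")

print(classifier("Getting started with pipelines is surprisingly easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]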

Question Answering Example

Below you see the 🤗HuggingFace Question Answering page from their website. You can access the models via a test pane on the right of the page.

Below is a snapshot of the notebook where the question-answering pipeline is implemented. This is the simplest implementation of the QnA functionality: the context is defined with a few sentences and assigned to the variable context.

This pipeline can be used for automatic question answering where company documents are used as the context.

You will also see that the question is assigned to the variable question.

Notice two things:

  1. The wording used in the question is not present in the contextual piece of text, so this example can be thought of as a type of semantic search. Where no exact match exists, the pipeline’s confidence score is used to determine the best matching answer span.
  2. If the wording of the question does have an exact match in the contextual piece, that matching text will be used.

In the red block the answer is printed out together with its confidence score… this is the Colab Notebook view…

The code:

!pip install -q transformers

from transformers import pipeline

# Load a question-answering pipeline with a BERT model fine-tuned on SQuAD
model_checkpoint = "huggingface-course/bert-finetuned-squad"
question_answerer = pipeline("question-answering", model=model_checkpoint)

# The context is the passage the model extracts the answer from
context = """
Cape Agulhas has a gradually curving coastline with rocky and sand beaches.
A survey marker and a new marker depicting the African continent are located at the most Southern tip of Africa.
"""

question = "What is the shore like?"

model_answer = question_answerer(question=question, context=context)
print("The Answer is " + model_answer["answer"] + ".")
print("With a score of", model_answer["score"])
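For comparison with the exact-match case described above, the same pipeline can be queried with a question whose wording does appear in the context. The question below is an illustrative addition and not part of the original notebook.

# An exact-match style question: the phrase "most Southern tip of Africa"
# appears verbatim in the context above.
exact_question = "What is located at the most Southern tip of Africa?"
exact_answer = question_answerer(question=exact_question, context=context)
print(exact_answer["answer"], exact_answer["score"])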

Telegram Question Answering Chatbot

Below is a short tutorial on creating a Telegram bot which can point to your 🤗HuggingFace pipeline. The most important element is getting the API token, which is your application’s reference to the bot.

You can start creating your Telegram Bot/API by accessing the link below…

There is a bot (BotFather) which guides you through the process of creating a chatbot and obtaining its API token.

Below you see the code to run the chatbot. The dialog state management is handled in a very simplistic manner, using a variable called dialog_check which is set and checked between turns. The API_TOKEN variable is used to reference our Telegram bot from Colab.
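As a rough sketch of what such a bot could look like, the snippet below assumes the pyTelegramBotAPI (telebot) library. The API_TOKEN and dialog_check names follow the article; the greeting text, handler logic and choice of library are illustrative assumptions rather than the original implementation.

!pip install -q transformers pyTelegramBotAPI

import telebot
from transformers import pipeline

API_TOKEN = "YOUR_TELEGRAM_BOT_TOKEN"  # token issued by BotFather
bot = telebot.TeleBot(API_TOKEN)

# Question-answering pipeline used to answer the user's questions
question_answerer = pipeline("question-answering",
                             model="huggingface-course/bert-finetuned-squad")

context = """
Cape Agulhas has a gradually curving coastline with rocky and sand beaches.
A survey marker and a new marker depicting the African continent are located at the most Southern tip of Africa.
"""

# Simplistic dialog state: 0 = greet the user, 1 = expect a question
dialog_check = 0

@bot.message_handler(func=lambda message: True)
def handle_message(message):
    global dialog_check
    if dialog_check == 0:
        bot.reply_to(message, "Hi, I am Cobot. Ask me a question about Cape Agulhas.")
        dialog_check = 1
    else:
        answer = question_answerer(question=message.text, context=context)
        bot.reply_to(message, answer["answer"] + " (score: " + str(round(answer["score"], 3)) + ")")

bot.infinity_polling()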

The Telegram chatbot is called Cobot, which in turn leverages the 🤗HuggingFace pipeline. Below is an extract of the bot conversation.

The first example is one of semantic similarity, where there is no exact match… notice the yellow blocks of the bot response.

And here is an example of an exact word match… notice the extraction is based on an exact match, yet the extracted answer is still retrieved in context from the contextual paragraph.

In Conclusion

Getting started with 🤗HuggingFace is easier than most people realise, and the inference API allows pre-trained models to be accessed directly. As usage increases, cost will become a factor; by that stage, however, the proof of concept and user adoption should already have been established.


Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com