How To Create A Chatbot Using Cohere & Telegram
Learn how to integrate Telegram with a Large Language Model. This is an effortless, step-by-step guide on integrating the Cohere generate API with Telegram, and on creating a chatbot which generates and responds with a hashtag based on incoming text messages.
Introduction
Large Language Models (LLMs) have the ability to generate responses and messages based on a few lines of training data (few-shot learning), as we will see below. The same generate API used for summarisation in a previous article is used again in the example below to create a hashtag based on the context of a sentence, all based on the training line:
Given a post, this program will generate relevant hashtags.
There is still the challenge of finding real-world, practical implementations for LLMs, as well as the challenge of fine-tuning and accessing LLMs via a no-code, studio environment.
In a previous post I discussed the POC integration between Cohere and HumanFirst. What I like about it is how the HumanFirst studio can be leveraged to utilise a Large Language Model (LLM) from a completely no-code perspective.
In this article I cover the following aspects:
- A very basic code example of how to use the Cohere generation API, with few-shot learning to enable the API to generate hashtags.
- Integrating the Cohere code with a Telegram messenger wrapper.
- Chatting with the Cohere LLM via the Telegram-based chatbot for an end-to-end test.
- You will make use of Colab to run the Cohere code and poll the Telegram API.
- You need to register to make use of the Telegram API.
- And register with Cohere to generate an API key.
Cohere Hashtag Generation Example & Code
Cohere has an extensive Playground where you can access all their APIs and functionality for easy no-code prototyping.
The playground is a friendly environment in which to become familiar with the Cohere APIs:
- The playground is completely no-code; use the small model for rapid experimentation.
- There is no need for vast amounts of pre-processed data or long training times.
- The three models (generate, embed & classify) all have pre-loaded data.
The notebook discussed here demonstrates a simple way of using the Cohere generation model to create one or more hashtags based on the context of a sentence.
In the diagram below, on the left you see the input text sent to the Cohere API for few-shot learning, with the few-shot training data marked in the red block. The Cohere generation API is extremely versatile, and subsequently generates the output text; in this case the output is a hashtag, as seen below.
print('Prediction: {}'.format(response.generations[0].text))
Prediction: #southafrica
On the right you see the notebook screenshot, which starts with pip install cohere. The next thing to notice is the Cohere API key, which you can get after registration.
The rest is a standard piece of code copied from the Cohere playground. Results can be tweaked by setting max_tokens, temperature and model size.
Below is a complete notebook which can be copied and pasted into Colab and run. Again, notice the API key required to run the code; you will need to register on the Cohere website, which is free.
And here is the code to copy and paste into a notebook…
import cohere

# Replace {apiKey} with your own Cohere API key
co = cohere.Client('{apiKey}')

response = co.generate(
    model='small',  # the small model is enough for rapid experimentation
    # Few-shot prompt: example posts with their hashtags, ending with the post to complete
    prompt='Given a post, this program will generate relevant hashtags.\n\nPost: Why are there no country songs about software engineering\nHashtag: #softwareengineering\n--\nPost: Your soulmate is in the WeWork you decided not to go to\nHashtag: #wework\n--\nPost: If shes talking to you once a day im sorry bro thats not flirting that standup\nHashtag: #standup\n--\nPost: Going to unmute at the end of the Zoom meeting to say bye and realizing you were actually unmuted the whole call\nHashtag:',
    max_tokens=10,  # a hashtag only needs a few tokens
    temperature=0.5,
    k=0,
    p=1,
    frequency_penalty=0,
    presence_penalty=0,
    stop_sequences=["--"],  # stop at the separator used between the examples
    return_likelihoods='NONE')

print('Prediction: {}'.format(response.generations[0].text))
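To reuse the same call for arbitrary input, the few-shot examples can stay fixed while only the final post is swapped out. Below is a minimal sketch of such a wrapper, assuming the co client created above; the function name hashtag_for and the sample post are illustrative, not part of the original notebook.

def hashtag_for(post):
    # Keep the few-shot examples fixed and append the new post at the end
    prompt = ('Given a post, this program will generate relevant hashtags.\n\n'
              'Post: Why are there no country songs about software engineering\nHashtag: #softwareengineering\n--\n'
              'Post: Your soulmate is in the WeWork you decided not to go to\nHashtag: #wework\n--\n'
              'Post: ' + post + '\nHashtag:')
    response = co.generate(model='small', prompt=prompt, max_tokens=10,
                           temperature=0.5, stop_sequences=["--"])
    return response.generations[0].text.strip()

print(hashtag_for('Spent the whole sprint renaming variables'))

This is the same pattern the Telegram integration below uses: the incoming message is simply inserted as the final post in the prompt.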
Integration To Telegram
The code for this Telegram hashtag creation chatbot is completely contained within a Colab notebook. As mentioned earlier in the article, two API keys are required: one for Cohere and one for your Telegram bot.
Both of these API keys will be defined within the Colab code. You can start creating your Telegram bot by using this link…
A few lines of code within the Colab notebook facilitate the dialog turns, of which there are only two. The Colab Python code also invokes the Cohere model and passes the results back to the Telegram messaging platform.
Installation…
# pyTelegramBotAPI provides the telebot module used to poll Telegram
!pip install pytelegrambotapi
!pip install -q transformers
!pip install cohere

import telebot
import cohere
# transformers is installed in the notebook, but not used by the hashtag snippets shown in this article
from transformers import pipeline
The few-shot learning training data, where longform holds the incoming Telegram message text…
prompt='Given a post, this program will generate relevant hashtags.\n\nPost: Why are there no country songs about software engineering\nHashtag: #softwareengineering\n--\nPost: Your soulmate is in the WeWork you decided not to go to\nHashtag: #wework\n--\nPost: ' + longform + '\nHashtag:',
And the Colab code…
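The full Colab code appears as a screenshot in the original post. Below is a minimal sketch of what that cell looks like, assuming the pyTelegramBotAPI wrapper installed above and the same Cohere generate call as earlier; the variable names (cohere_api_key, telegram_token, longform) and the choice to reply with only the hashtag are assumptions for illustration, not the exact original code.

import cohere
import telebot

cohere_api_key = 'YOUR_COHERE_API_KEY'      # from the Cohere dashboard
telegram_token = 'YOUR_TELEGRAM_BOT_TOKEN'  # from BotFather on Telegram

co = cohere.Client(cohere_api_key)
bot = telebot.TeleBot(telegram_token)

@bot.message_handler(content_types=['text'])
def generate_hashtag(message):
    longform = message.text  # the incoming Telegram message
    # Build the few-shot prompt around the incoming post
    prompt = ('Given a post, this program will generate relevant hashtags.\n\n'
              'Post: Why are there no country songs about software engineering\nHashtag: #softwareengineering\n--\n'
              'Post: Your soulmate is in the WeWork you decided not to go to\nHashtag: #wework\n--\n'
              'Post: ' + longform + '\nHashtag:')
    response = co.generate(
        model='small',
        prompt=prompt,
        max_tokens=10,
        temperature=0.5,
        stop_sequences=["--"])
    hashtag = response.generations[0].text.strip()
    # Send the generated hashtag back as the bot's reply
    bot.reply_to(message, hashtag)

bot.polling()

Running this cell keeps polling Telegram; any text message sent to the bot is inserted into the few-shot prompt as the final post, and the generated hashtag is returned as the bot's reply.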
Below is a screenshot from Telegram where you can see the conversation with the chatbot: the input is given, and the hashtag response is generated by Cohere.
In Conclusion
In the past I have considered various ways of using LLMs in a conversational interface, and how a chatbot can also be bootstrapped by making use of LLM search.
LLMs are very good at performing semantic search on a piece of text, documents, etc. This is search that is not based on keyword matching, but rather on matching the meaning and intent of the text.
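As a rough illustration, the sketch below uses the Cohere embed endpoint and cosine similarity to rank documents by meaning rather than keyword overlap; the documents, query, model name and cosine helper are assumptions made for this example, not taken from the article's notebook.

import cohere
import numpy as np

co = cohere.Client('{apiKey}')

documents = ['How do I reset my password?',
             'Office opening hours over the holidays',
             'Steps to recover a locked account']
query = 'I cannot log in to my account'

# Embed the documents and the query, then rank documents by cosine similarity
doc_embeddings = co.embed(texts=documents, model='small').embeddings
query_embedding = co.embed(texts=[query], model='small').embeddings[0]

def cosine(a, b):
    a, b = np.array(a), np.array(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_embedding, d) for d in doc_embeddings]
best = documents[int(np.argmax(scores))]
print(best)  # matches on meaning, not on shared keywords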
An alternative way of bootstrapping a chatbot using LLMs is to orchestrate a few LLM elements to constitute the bot. These elements can include clustering, search, generation, etc.