A Tour Of The co:here Playground
The Playground Gives Easy Access To Large Language Models
Introduction
What Is co:here?
Few things beat a good demo… and the co:here Playground empowers you to build your own.
From initial impressions of co:here, it is evident that much thought has gone into the product and its design. co:here is a well-packaged and cohesive product, with a clear focus on managing complexity under the hood and surfacing simplicity.
This really democratises access to large language models and natural language processing. Should someone want to engage with the complexity and get involved in the minutiae, the option is available to them, but that need not be the starting point.
One thing that is obvious is the large array of implementation options for a platform like co:here. I tend to think of chatbots and conversational AI as a more synchronous implementation, and a platform like co:here can be leveraged to perform an initial NLP high-pass on user input. But more about this later…
For starters…co:here is a natural language platform. It can be used for both natural language understanding and generation.
co:here is an easy avenue to access large language models, for use cases like:
- Classification,
- Semantic Search,
- Paraphrasing,
- Summarisation, and
- Content Generation.
Fine-tuning can also be performed. This is a key feature, allowing users not only to access and leverage the large language models, but also to add their own layer of customised training, which is essential for specific implementations.
Through fine-tuning, users can create massive models customised to their use case and trained on their data.
There are three avenues to access the models:
- The playground,
- SDKs, and the
- CLI tool.
What Is A Playground?
There are a number of ways to access platforms and software functionality remotely… the most common is probably notebooks. Via notebooks, code can be executed, libraries accessed, work shared, and more.
!pip install cohere

import cohere

co = cohere.Client('FhaC7lVDOCZdADKyplQwBIFoPlGRjGIs8zMASnzS')

# Placeholder example text; replace with your own input
embedd = co.embed(
    model='small',
    truncate='LEFT',
    texts=["I want to terminate my contract"]
).embeddings
print(embedd)
Some platforms, via Binder, provide interactive code and notebooks within their docs section.
The notion of a playground is available on various platforms. Playgrounds are typically web-based interfaces where anyone can rapidly experiment with the platform. Without any specialised knowledge, users can enter their own terms or reference data, execute with the click of a button, and see their results in a few seconds or less.
co:here is following the trend of allowing significant free access to prospective users without requiring credit card details. Notable playgrounds are OpenAI Language API, AI21labs, Rasa, and HuggingFace.
The co:here Playground
The co:here Playground is a no-code, graphical, web-based application giving you access to co:here.
The playground is a good place to test your use-cases with a limited amount of data.
Embed
Embeddings are useful for clustering large amounts of text data. They are also a useful tool for visualising data; creating a visual representation of the data can be seen as a form of structuring conversational data, which is otherwise unstructured.
As seen below, the lines are pasted into the Texts window and executed, and clusters of related data are created. Two clusters are marked at the bottom left: one for terminations and a closely related cluster for exits.
Code can be exported in various formats, and the API key can be generated within the GUI of the playground.
The clustering of utterances is to some extent reminiscent of the user utterance clustering available within HumanFirst. In the case of HumanFirst the clustering is not graphical but textual, and granularity and cluster size can be scaled on the fly.
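To illustrate what the Embed tab does behind the scenes, below is a minimal sketch using the Python SDK, with scikit-learn's KMeans standing in for the playground's visual clustering. The API key, the 'small' model name and the utterances are placeholders for illustration.

import cohere
from sklearn.cluster import KMeans

co = cohere.Client('YOUR_API_KEY')  # placeholder key

# Illustrative utterances around terminations and exits
texts = [
    "I want to terminate my contract",
    "Please cancel my agreement immediately",
    "How do I exit the programme?",
    "What is the process to leave the scheme?",
]

# One embedding vector per utterance
embeddings = co.embed(model='small', truncate='LEFT', texts=texts).embeddings

# Group the vectors into two clusters, mirroring the terminations/exits grouping
kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)
for utterance, label in zip(texts, kmeans.labels_):
    print(label, utterance)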
Generate
Text generation has various use-cases and options. The summarisation option I find particularly interesting for use-cases where different conversational agents require user messages in varying verbosity.
The return messages of a chatbot can be significantly longer than those of a voicebot; with a voicebot the return messages need to be much shorter due to the ephemeral nature of voice.
The longer text or passage can play a supporting role by being sent as supplementary data to the user on an accompanying chat medium.
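As a rough sketch of this idea, the generate endpoint of the Python SDK could be prompted to shorten a chatbot reply into a voicebot-friendly message. The model name, prompt and parameters below are assumptions for illustration, not the playground's exact settings.

import cohere

co = cohere.Client('YOUR_API_KEY')  # placeholder key

long_reply = (
    "Your parcel left our warehouse on Monday, was handed to the courier on "
    "Tuesday, and should arrive at your address within three to five working "
    "days. You will receive a tracking link by email."
)

# Ask the model for a short, voice-friendly version of the chatbot reply
response = co.generate(
    model='xlarge',
    prompt="Summarise the following message in one short sentence suitable "
           "for a voice assistant:\n" + long_reply + "\nSummary:",
    max_tokens=40,
    temperature=0.3,
)
print(response.generations[0].text.strip())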
Classify
The text classification section of the playground is really convenient in the sense that only a small amount of training and example data is required for classification; a short code sketch follows the list below.
There are a few classifier options which can be very useful in a chatbot, among these:
- FAQ router
- Topic Classifier
- Product Classifier
- Sentiment
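A minimal sketch of an FAQ-router style classifier via the Python SDK might look as follows. The import path for Example and the exact parameters differ between SDK versions, and the labels and utterances are purely illustrative.

import cohere
from cohere.classify import Example  # import path may differ by SDK version

co = cohere.Client('YOUR_API_KEY')  # placeholder key

# A handful of labelled examples stands in for training data
examples = [
    Example("How do I reset my password?", "FAQ"),
    Example("Where can I download my invoice?", "FAQ"),
    Example("I am very unhappy with the service", "Complaint"),
    Example("This is the third time my order is late", "Complaint"),
]

response = co.classify(
    model='medium',
    inputs=["My payment page will not load"],
    examples=examples,
)

for classification in response.classifications:
    print(classification.prediction)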
Conclusion
A few general, initial observations…
- The results which can be achieved with a small amount of training data are really impressive.
- The playground is a good initial stepping stone to move onto more advanced use-cases.
- The playground also facilitates the leap from no-code to pro-code.
- The Natural Language Generation is very specific and use-case driven, which speaks to a more responsible implementation.
- An area of particular interest to me is fine-tuning. This is of utmost importance for specific use-case implementations. The interesting thing is that fine-tuning can be performed for both NLG and classification.