The Emergence Of Large Language Model (LLM) API Build Frameworks

The real power of LLMs is unlocked by combining and orchestrating LLM functionality with other sources of computation or knowledge.

Cobus Greyling
5 min read · Feb 17, 2023


Before getting into API build tools for LLMs, let's first consider how traditional chatbot development frameworks work.

We are all familiar with developing chatbots in a web-based GUI where we develop, configure and orchestrate different components to create a chatbot.

These components include NLU, dialog flow management, integration to APIs and more.

Once the chatbot is developed, we expose the chatbot for use via an API.

Below is an example from Watson Assistant, showing step-by-step how the chatbot can be accessed via an API.

The chatbot API can be seen as an intelligent API: it takes user input and orchestrates different underlying services to produce a response for the user.

Below again is an example where a graphically built application is exposed via a REST endpoint.

So chatbot development frameworks have established the principle of building APIs that:

  • Are highly contextual and intelligent
  • Orchestrate underlying services and systems
  • Present a unified endpoint instead of a dispersed array of different APIs to call
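The unified-endpoint principle can be sketched in plain Python. This is a minimal, hypothetical illustration (all service names and logic are invented, not from any real framework): a single entry point runs an NLU step, calls the underlying system that owns the answer, and shapes the response.

```python
# Sketch of a "unified endpoint": one function that orchestrates
# several underlying services behind a single API surface.
# All service implementations here are stand-ins, not real integrations.

def detect_intent(utterance: str) -> str:
    """Stand-in NLU step: map the user utterance to an intent."""
    if "balance" in utterance.lower():
        return "check_balance"
    return "fallback"

def call_backend(intent: str) -> dict:
    """Stand-in integration step: call the system that owns the answer."""
    services = {
        "check_balance": lambda: {"balance": "R1,250.00"},
        "fallback": lambda: {"message": "Sorry, I didn't get that."},
    }
    return services[intent]()

def unified_endpoint(utterance: str) -> dict:
    """The single, contextual API the caller sees."""
    intent = detect_intent(utterance)
    payload = call_backend(intent)
    return {"intent": intent, "response": payload}

print(unified_endpoint("What is my balance?"))
```

The caller never talks to the NLU or backend services directly; that hiding of the dispersed APIs behind one contextual endpoint is the principle at work.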

⭐️ Please follow me for updates on Conversational AI ⭐️

Back To Large Language Models

With frameworks like LangChain, Dust and Steamship, the idea is to create an underlying orchestration engine for LLM interactions and surface it via an API.

Some products find themselves more on the side of managing prompts for prompt engineering, but I'm sure they will evolve into fuller orchestration engines as user needs increase.

Current chatbot development frameworks can be divided into the three main categories of:

  • Pro-Code
  • Low-Code
  • No-Code

The LLM Build Tools are also spread across this spectrum.

For instance, LangChain is a pro-code, more technical development environment. Fortunately, the documentation and example code are comprehensive and really accelerate the learning process.
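To give a feel for the pro-code style, here is a minimal prompt chain in plain Python. This is a conceptual sketch, not LangChain's actual API: a template is filled in, sent to a model, and the output feeds the next prompt. The `fake_llm` function is an invented stand-in for a real model call.

```python
# A minimal two-step prompt chain in plain Python, illustrating the
# pro-code pattern: fill a template, call the model, feed the output
# into the next prompt. `fake_llm` is a stand-in for a real LLM call.

def fake_llm(prompt: str) -> str:
    """Stand-in model: echoes a canned 'completion' for the demo."""
    return f"[completion for: {prompt}]"

def run_chain(topic: str) -> str:
    # Step 1: generate an outline for the topic.
    outline = fake_llm(f"Write an outline about {topic}.")
    # Step 2: expand the outline into a summary.
    summary = fake_llm(f"Expand this outline into a summary: {outline}")
    return summary

print(run_chain("chatbot APIs"))
```

Swapping `fake_llm` for a real model client is all it would take to make this a working chain; the chaining logic itself stays the same.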

Dust has a web-based GUI for configuring blocks and chaining these blocks in series or parallel.
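Conceptually, chaining blocks in series or parallel can be sketched as follows (plain Python with trivial invented blocks, not any framework's real API):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of block chaining: feed blocks one after another (series),
# or fan the same input out to several blocks at once (parallel).
# The blocks themselves are trivial stand-ins.

def block_upper(text: str) -> str:
    return text.upper()

def block_reverse(text: str) -> str:
    return text[::-1]

def chain_series(text, blocks):
    """Feed each block's output into the next block."""
    for block in blocks:
        text = block(text)
    return text

def chain_parallel(text, blocks):
    """Run every block on the same input concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda b: b(text), blocks))

print(chain_series("hello", [block_upper, block_reverse]))
print(chain_parallel("hello", [block_upper, block_reverse]))
```

In a GUI like Dust's, dragging blocks into a row or a column is essentially choosing between these two composition modes.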

Steamship takes prompts, pre-existing prompt chains and Python code as input, and combines the functionality into a managed API.

Below are some of the LLM, NLP and ASR models Steamship can integrate with.

Retune is simpler in its approach, with a focus on prompt and chatbot session management together with creating fine-tuned models.

Below you see the Retune model GUI, with the prompt describing the behaviour of the conversational interface. Users can share the model via a REST API.

And below is the LLM chat functionality, with the generated code for creating a new chat, creating a new thread, and continuing the conversation.
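As a generic illustration of that chat/thread/continue shape (this is an invented in-memory sketch, not Retune's actual generated code), such session management might look like:

```python
import itertools

# Generic in-memory sketch of chat-session management: create the
# service, start a thread, continue the conversation on that thread.
# The model reply is a stand-in echo, not a real LLM call.

class ChatService:
    def __init__(self):
        self._ids = itertools.count(1)
        self.threads = {}  # thread_id -> list of message dicts

    def new_thread(self) -> int:
        """Create a new conversation thread and return its id."""
        thread_id = next(self._ids)
        self.threads[thread_id] = []
        return thread_id

    def send(self, thread_id: int, text: str) -> str:
        """Append the user message and return a stand-in model reply."""
        self.threads[thread_id].append({"role": "user", "content": text})
        reply = f"echo: {text}"  # stand-in for the model's response
        self.threads[thread_id].append({"role": "assistant", "content": reply})
        return reply

service = ChatService()
thread = service.new_thread()
print(service.send(thread, "Hello"))
print(service.send(thread, "Tell me more"))
```

The thread id is what lets the conversation continue with context, which is exactly the session-management concern these tools take off the developer's hands.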

Functionality will certainly grow within the Retune UI. Retune is not as advanced and granular as Dust or LangChain, but the interface is much less technical and more intuitive for non-technical users.

Another product is not generally available yet, but seemingly it is premised on these principles:

  • Prompt sourcing and management
  • Fine-tuning custom models
  • Managing LLM cost

Final Thoughts

Existing and established chatbot development frameworks find themselves in a difficult predicament: they need to maintain a firm focus on their product. Continuous augmentation and feature additions are a given, due to the investments already made in the product.

The new LLM applications have the freedom to play to LLMs' strengths, without the inhibitors of an existing framework.


I explore and write about all things at the intersection of AI and language, including development frameworks and more.

https://www.linkedin.com/in/cobusgreyling
