What Constitutes A Large Language Model Application?

Large Language Model Applications are also referred to as LLM Apps, Generative Apps or “Gen Apps” for short.

Cobus Greyling
6 min read · Mar 30

Starting with the models themselves: the term Large Language Models (LLMs) is often used interchangeably with Foundation Models (FMs) and Multimodal Models.

Large Language Models (LLMs) are foundational machine learning models built on deep learning. They receive natural language as input and can process and understand it.

LLMs also output natural language, and can perform tasks like classification, summarisation, simplification, entity recognition, etc.

The zero-shot to few-shot learning capabilities of LLMs are a key differentiating factor.

Foundation Models can be defined as large models which are not bound to text tasks, or to text input and output. Consider Whisper, which introduces audio to the large-model landscape.

GPT-4 can be described as one of the first large models with multimodal capabilities; for instance, images can serve as an input medium alongside text.

Below is the HuggingFace 🤗 model landscape, detailing Computer Vision, NLP, Audio, Tabular data and more, together with Multimodal models which combine various modalities into one model.


LLM Functionality Landscape

Considering the image below detailing the LLM stack of software and services, the LLM stack can be divided into three main areas:

  1. Models & Hubs
  2. LLM Development Tools
  3. End User Applications

1. Models & Hubs

The first area covers model suppliers and hubs. The most well-known model supplier is probably OpenAI, followed by Google’s collection of models. Under hubs, HuggingFace and GitHub come to mind.

2. LLM Development Tools

The area of LLM development tools excites me the most, and it is arguably where the most innovation is taking place.

These tools can be divided into two main categories, the first being prompt engineering and management tools.

The second is tools which aim to solve the challenge of chaining a sequence of prompt calls, so that the output of one prompt feeds into the next to form a flow of events.

3. End User Applications

The last category is that of end-user applications which are based on Generative AI. These include writing assistants; content, idea and marketing generation tools; generative and search assistants; data extraction and conversational search; and coding assistants.

More On LLM Development Tools

LLM development tools can be divided between:

  • LLM application building tools (prompt chaining) and
  • Prompt engineering tools.

The simpler of these two implementations is prompt engineering tools.

Prompt Engineering Tools

Prompt engineering tools can be defined as a GUI which allows users to create, store, share and manage FM prompts. In most cases these prompts can be exposed via a managed API. It is then up to the API implementer to string these APIs together to build a user experience.
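As an illustration, stringing two such managed prompt APIs together might look like the sketch below. The endpoint paths, payload shape and function names are hypothetical, not taken from any specific product; the transport is injectable so the chain can be exercised without a live service.

```python
# Sketch of stringing two managed prompt APIs together.
# Endpoints and payloads are illustrative only.

def call_prompt(endpoint, payload, transport):
    """Invoke a managed prompt endpoint via the given transport."""
    return transport(endpoint, payload)

def summarise_then_classify(text, transport):
    # First managed prompt: summarise the raw text.
    summary = call_prompt("/prompts/summarise", {"input": text}, transport)
    # Second managed prompt: classify the summary produced above.
    label = call_prompt("/prompts/classify", {"input": summary}, transport)
    return {"summary": summary, "label": label}
```

In production the transport would be an HTTP call (for example a POST carrying an API key); here it is simply a callable, which keeps the stringing logic testable.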

One of the many prompt engineering tools is a product called Spellbook. As seen in the image below, when setting up a prompt, the tasks available are all generative in nature:

  • Classification, Text Extraction, Generation, Summarisation and Autocompletion.

Below is the first step in creating a prompt application. This application is premised on a single prompt.


The prompt engineering interface resembles an LLM playground to some degree. Notice the prompt editor, which makes use of templating. The advantages of templating prompts are:

  1. LLM prompts can be re-used, shared and programmed.
  2. Templating makes generative prompts programmable, and easy to store and re-use.
  3. A template acts as a text file into which placeholders for variables and expressions are inserted.
  4. The placeholders are replaced with values at run-time.
  5. Prompts can be used within context, which is a measured way of controlling the generated content.
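The run-time substitution described in the list above can be sketched with Python's standard `string.Template`; the actual templating syntax of tools like Spellbook will differ, and the prompt wording here is just an example.

```python
from string import Template

# A re-usable prompt template; $placeholders are filled at run-time.
summarise_prompt = Template(
    "Summarise the following $doc_type in $max_words words or fewer:\n\n$text"
)

# At run-time the placeholders are replaced with concrete values.
prompt = summarise_prompt.substitute(
    doc_type="support ticket",
    max_words=50,
    text="Customer reports the app crashes on login since the last update.",
)
```

The same template object can be stored, shared and re-used with different values, which is exactly the programmability the list above describes.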

As seen below, from the drop-down, an array of LLM suppliers is available, with the selection based on the task at hand.

Spellbook Prompt Engineering Interface

At the bottom, an input can be given and the prompt executed. In the case of Spellbook there is also a level of fine-tuning available.

Prompt Chaining Tools

The logical next step is to combine or chain prompts to create an application, or at least a basic sequence of events.

Such a chained-prompt application (a Generative App, or Gen App) will most probably present a conversational interface from the user's perspective. To create a Gen App, the development interface will have to offer an array of node types to the user.
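A minimal sketch of such a chain: each step's output becomes the next step's input. The `llm` callable is a stand-in for a real model call, and the template syntax is illustrative.

```python
# Run a sequence of prompt templates, feeding each step's output
# into the {input} placeholder of the next template.

def run_chain(steps, user_input, llm):
    value = user_input
    for prompt_template in steps:
        value = llm(prompt_template.format(input=value))
    return value
```

For example, `run_chain(["Summarise: {input}", "Translate to French: {input}"], text, llm)` would summarise the text first and translate the summary second.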

Below is a list of eight node types which should act as the bare minimum for a comprehensive development environment.

Communication nodes transfer data between the Gen App and the user, and between the Gen App and other systems via API calls.

LLM nodes interact with the LLM for input and output. Filters or classifiers can be added to filter data and branch based on LLM output.

Helper nodes are useful for evaluation based on business criteria, or any other applicable condition. Where complexity exceeds what a no-code approach can handle, scripting is used to accommodate conditions or data processing.

To ease the technical burden of writing scripts, templates and basic examples can be included for users to edit.
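To make the node types concrete, here is a toy wiring of communication, LLM and filter nodes in Python. All function names and the refund-keyword branching rule are illustrative, not taken from any product.

```python
# Toy Gen App wired from a few of the node types described above.

def communication_node(user_text):
    """Receives and normalises the user's input."""
    return user_text.strip()

def llm_node(prompt, llm):
    """Sends a prompt to the LLM and returns its output."""
    return llm(prompt)

def filter_node(llm_output):
    """Classifier node: branch based on the LLM output."""
    return "escalate" if "refund" in llm_output.lower() else "answer"

def run_app(user_text, llm):
    cleaned = communication_node(user_text)
    reply = llm_node(f"Answer this customer message: {cleaned}", llm)
    return filter_node(reply), reply
```

The filter node is where the flow branches: one path might hand off to a human agent, the other returns the LLM's reply directly.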

Stack AI is a good case in point of a Gen App builder. In the example below, from their website, user input is fed to the OpenAI LLM. The input leverages data which is made searchable via a Pinecone connector.

Stack AI

LLMs are highly versatile, open-ended and by nature unstructured. These characteristics make LLMs easily accessible, but they also create challenges for chain authoring and for building a structured flow on top of LLMs.

An important aspect of Gen Apps (prompt chaining) is task decomposition for larger and more complex tasks.

Further decomposition of tasks into more detailed nodes will make for more granular and scalable applications.
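One way to sketch task decomposition: split a complex task into sub-prompts, run each separately, then aggregate the partial results with a final prompt. The `llm` callable and the prompt wording are stand-ins, not a specific product's API.

```python
# Decomposition sketch: a complex task is split into sub-prompts,
# each run on its own, then a final prompt combines the results.

def run_decomposed(task, subtask_prompts, llm):
    # Run each sub-prompt against the overall task description.
    partials = [llm(p.format(task=task)) for p in subtask_prompts]
    # A final aggregation prompt merges the partial answers.
    return llm("Combine these findings:\n" + "\n".join(partials))
```

Unlike a linear chain, the sub-tasks here are independent of one another, so in principle they could even be run in parallel before aggregation.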

⭐️ Please follow me on LinkedIn for updates on Conversational AI ⭐️

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

https://www.linkedin.com/in/cobusgreyling

