The Anatomy Of Large Language Model (LLM) Powered Conversational Applications

True business value needs to be added on top of LLM API calls to make any LLM-based application successful.

Cobus Greyling
6 min read · Feb 14, 2023

--

Successfully scaling the adoption of LLM-powered applications lies with two aspects (there might be more you can think of).

1️⃣ The first aspect is a development framework. Traditional chatbot frameworks did well by creating an ecosystem for conversation designers and developers to collaborate and transition easily from design to development, without any loss in translating conversation designs into code or functionality.

2️⃣ The second aspect is the user experience (UX). Users do not care about the underlying technology; they are looking for exceptional experiences. Hence the investment made to access LLM functionality needs to be translated into a stellar UX.

Development Frameworks

Even though LLMs are deep-learning systems trained on internet-scale datasets, they have reached mainstream prominence, especially considering the no-code, natural-language input method of generative systems.

And with generative systems forming the bedrock of an ever-growing number of applications, it does seem like the predictive capabilities of LLMs are being neglected.

Reasons for this include:

  • In most cases the predictive approach requires more data and involves some degree of training-data preparation.
  • The predictive process involves a pro-code portion (see the sketch after this list).
  • For commercial NLU-related applications, traditional NLU systems’ predictive capability on specifically trained data is highly efficient and cost-effective.
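To make the pro-code, data-preparation nature of the predictive approach concrete, here is a minimal sketch of a traditional intent classifier built with scikit-learn; the intents and training utterances are hypothetical examples.

```python
# A minimal sketch of the "predictive" approach: a traditional intent
# classifier that needs labelled training data and a pro-code pipeline.
# The intents and utterances are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    "I want to check my balance",
    "What is my account balance?",
    "Send money to John",
    "Transfer funds to my savings account",
]
intents = ["check_balance", "check_balance", "transfer", "transfer"]

# Data preparation and training: the pro-code portion generative prompting skips.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_utterances, intents)

print(classifier.predict(["move money into savings"]))  # e.g. ['transfer']
```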

Considering LangChain and Dust, there is an emergence of Large Language Model (LLM) app frameworks for building applications on top of LLMs.

Functionality can be combined (chained together) to create augmented and “intelligent” API calls to LLMs.

A large language model app is a chain of one or multiple prompted calls to models or external services (such as APIs or data sources) in order to achieve a particular task.

~ Dust
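As a rough illustration of that definition, here is a minimal sketch of a two-step chain: the output of the first prompted call becomes the input of the second. The call_llm function is a hypothetical stand-in for any model provider's API.

```python
# A minimal sketch of an "LLM app" as defined above: a chain of two
# prompted calls, where the output of the first feeds the second.
# call_llm is a hypothetical stand-in for any LLM provider's API.

def call_llm(prompt: str) -> str:
    """Placeholder for a single prompted call to a model provider."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def summarise_and_translate(document: str) -> str:
    # Step 1: the first prompted call produces an intermediate result.
    summary = call_llm(f"Summarise the following text:\n\n{document}")
    # Step 2: the intermediate result is chained into a second call.
    return call_llm(f"Translate this summary into French:\n\n{summary}")
```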

Considering the image below, there are six components to creating an LLM application.

Some considerations:

⚫️ LLM Applications surfaced as APIs will become the norm, with a conversational interface utilising multiple applications/LLM-based APIs.

⚫️ Within the LLM Application multiple calls to the LLM can be chained together for a certain level of orchestration.

⚫️ However, the chaining of blocks within the LLM App will not have the ability to facilitate a multi-turn conversation. These LLM Apps are aimed at completing a certain task, or serving as a small chat utility.

⚫️ For larger implementations a more robust and comprehensive framework will be required.

⚫️ LLM Apps are however a good indicator of how LLM interactions can be automated and how complexity can be added to a generative task.

⚫️ I find it surprising that traditional Conversational AI frameworks have not adopted this methodology (yet).

Templating

The use of templates for prompt engineering was always inevitable. Templating generative prompts allows for the programmability, storage and re-use of prompts.

Dust makes use of Tera templates, which act as text files into which placeholders for variables and expressions are inserted.

The placeholders are replaced with values at run-time. Below is an example of a wedding thank you template from one of the Dust example applications:

Jack and Diane just had their wedding in Puerto Rico and it is time to write thank you cards. For each guest, write a thoughtful, sincere, and personalized thank you note using the information provided below.

Guest Information: ${EXAMPLES.guest}
First, let's think step by step: ${EXAMPLES.step}
Next, let's draft the letter: ${EXAMPLES.note}

Guest Information: Name: ${INPUT.name}, Relationship: ${INPUT.relationship}, Gift: ${INPUT.gift}, Hometown: ${INPUT.hometown}
First, let's think step by step:
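As a rough approximation of what happens at run-time, here is a minimal sketch using Python's string.Template rather than Dust's actual Tera engine, with hypothetical guest values; only the substitution principle is illustrated.

```python
# A minimal sketch of run-time placeholder substitution, approximating
# the Tera template above with Python's string.Template. Dust's real
# engine supports richer expressions; this only shows the principle.
from string import Template

template = Template(
    "Guest Information: Name: $name, Relationship: $relationship, "
    "Gift: $gift, Hometown: $hometown\n"
    "First, let's think step by step:"
)

prompt = template.substitute(
    name="Aunt May",             # hypothetical guest values
    relationship="Jack's aunt",
    gift="a crystal vase",
    hometown="San Juan",
)
print(prompt)
```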

Blocks

Within Dust, blocks can be executed in sequence or in parallel. In the image below you can see the basic functionality which is encapsulated within a block.

When adding a block within the Dust workspace, a list of eight block types is surfaced to choose from.

Below you can see the different model providers for an app in Dust, and the services available.
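To make the sequence-versus-parallel idea concrete, here is a minimal sketch of the orchestration pattern; the blocks are plain Python callables, not Dust's actual block types.

```python
# A minimal sketch of sequential versus parallel block execution.
# The blocks are plain Python callables standing in for Dust blocks;
# only the orchestration pattern is illustrated.
from concurrent.futures import ThreadPoolExecutor

def run_sequential(blocks, value):
    # Each block receives the previous block's output.
    for block in blocks:
        value = block(value)
    return value

def run_parallel(blocks, value):
    # Independent blocks all receive the same input concurrently.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(block, value) for block in blocks]
        return [future.result() for future in futures]
```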

End User Applications

The image below shows the typical technology stack for LLM-based end-user applications. The first key aspect is the user experience: the graphical interface, which shapes how the user feels about and experiences the application.

And the second is the proprietary software which constitutes the “secret sauce”, and encapsulates the company’s competitive advantage.

Examples of such End User Applications are:

Filechat allows you to upload a document and, via word embeddings, explore it in a conversational manner.
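The pattern behind this kind of tool can be sketched as follows, assuming a hypothetical embed() function standing in for any embedding model; real systems also chunk and index the document first.

```python
# A minimal sketch of embedding-based document exploration, the pattern
# behind tools like Filechat. embed() is a hypothetical stand-in for an
# embedding model; real systems chunk, index and cache the document.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector for the text from an embedding model."""
    raise NotImplementedError

def most_relevant_chunk(question: str, chunks: list[str]) -> str:
    q = embed(question)
    vectors = [embed(chunk) for chunk in chunks]
    # Cosine similarity between the question and each document chunk.
    scores = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
              for v in vectors]
    # The best-matching chunk is then passed to the LLM as context.
    return chunks[int(np.argmax(scores))]
```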

PromptLayer describes itself as the first platform built for prompt engineers. The application compiles a log of prompts and OpenAI API requests, so you can track, debug, and replay old completions.
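The core idea can be sketched as a logging wrapper around the model call; this is a hypothetical illustration, not PromptLayer's actual API.

```python
# A minimal sketch of prompt logging, the pattern a tool like
# PromptLayer builds on. This is a hypothetical wrapper, not
# PromptLayer's actual API.
import time

prompt_log = []  # in practice this would be persistent storage

def logged_call(llm_fn, prompt: str) -> str:
    completion = llm_fn(prompt)
    prompt_log.append({
        "timestamp": time.time(),
        "prompt": prompt,
        "completion": completion,  # stored so it can be replayed later
    })
    return completion
```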

Then there are other prompt companies like Humanloop, Promptable and many, many more.

In Closing

Prompt-engineering-based applications are growing by the day, but what interests me specifically are frameworks like LangChain & Dust (in an upcoming article I will dive into more detail on this), which are the first emergence of LLM-based conversational development frameworks.

The basic principles these frameworks implement fascinate me, and they will serve as the origins of the Conversational AI frameworks of the future.

⭐️ Please follow me on LinkedIn for updates on Conversational AI ⭐️

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

https://www.linkedin.com/in/cobusgreyling