Chaining Large Language Model Prompts

This article considers some of the advantages and challenges of Prompt Chaining in the context of LLMs.

Cobus Greyling
4 min read · May 2, 2023


What is prompt chaining?

Prompt Chaining, also referred to as Large Language Model (LLM) Chaining, is the notion of creating a chain consisting of a series of model calls. These calls follow on from each other, with the output of one call serving as the input to the next.

Each step in the chain is intended to target a small, well-scoped sub-task, so a single LLM is used to address multiple sequenced sub-components of a larger task.
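As a rough sketch of what this looks like in practice (the `call_llm` helper below is a placeholder rather than a real library call, and the prompts are purely illustrative):

```python
# Minimal two-step prompt chain: the output of the first model call
# becomes part of the input to the second.

def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM client you use; returns the model's text reply."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")


def answer_with_chain(question: str) -> str:
    # Sub-task 1: a small, well-scoped step that identifies who the question is about.
    person = call_llm(
        f"Question: {question}\n"
        "Identify the specific person this question refers to. "
        "Reply with only their name."
    )

    # Sub-task 2: the previous output is fed into the next prompt.
    answer = call_llm(
        f"The question refers to: {person}\n"
        f"Using that fact, answer the original question: {question}"
    )
    return answer
```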

In essence, prompt chaining leverages a key principle in prompt engineering known as chain-of-thought prompting.

The principle of chain-of-thought prompting is not only used in chaining, but also in Agents and Prompt Engineering.

Chain-of-thought prompting is the notion of decomposing a complex task into smaller, more refined tasks, building up to the final answer.

⭐️ Please follow me on LinkedIn for updates on LLMs ⭐️

Transparency, Controllability & Observability

There is a need for LLMs to address and solve ambitious and complex tasks. For instance, consider the following question:

List five people of notoriety who were born in the same year as the person regarded as the father of the iPhone.

When answering this question, we expect the LLM to decompose it and supply a chain of thought, or reasoning, showing how the answer to this more complex task was reached.
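As an illustration only (the decomposition and intermediate answers below are hypothetical, not the output of any particular model), the question might be broken into a chain of sub-prompts along these lines: first, “Who is regarded as the father of the iPhone?” (commonly answered as Steve Jobs); second, “In which year was that person born?” (1955); and finally, “List five notable people born in that year.” Each intermediate answer feeds the next prompt, and the final prompt produces the requested list.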

[Image: (A) a single direct prompt contrasted with (B) a chained, decomposed approach. Source]

The image above illustrates the principle of chaining: [A] shows a direct instruction via a single prompt, while [B] shows chaining principles in use. The task is decomposed into sub-tasks using a chain-of-thought process, with ideation culminating in an improved output.

Prompt Chaining not only improves the quality of task outcomes, but also introduces system transparency, controllability and a sense of collaboration.

Stephen Broadhurst recently presented a talk on supervision and observability from an LLM perspective. You can read more about the basic principles here.

Users also become more familiar with LLM behaviour by considering the output from sub-tasks and calibrating their chains in such a way as to meet desired expectations. LLM development becomes more observable, as alternative chains can be contrasted against each other and their downstream results compared.

⭐️ Please follow me on LinkedIn for updates on Conversational AI ⭐️

Prompt Drift

Considering that each chain’s input is dependent on the output of the preceding chain, there is a danger of prompt drift: errors or inaccuracies can cascade as the process flows from chain to chain. For example, if an early chain misidentifies the person a question refers to, every downstream chain inherits and compounds that error.

[Image: A small upstream deviation being exacerbated through each subsequent chain. Source]

Prompt drift can also be introduced when changes to prompt wording upstream cause unintended drift in downstream results. A small deviation introduced upstream grows downstream, with the deviation being exacerbated by each successive chain.

⭐️ Please follow me on LinkedIn for updates on LLMs ⭐️

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language, ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

Find me on Twitter: https://twitter.com/CobusGreylingZA

https://www.linkedin.com/in/cobusgreyling
