Prompt Drift
Workflows (chains) which leverage Large Language Models (LLMs) are necessary and increasingly common. But there are a few considerations to keep in mind, one of which is Prompt Drift.
I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.
Chaining, also referred to as Prompt Chaining, is the process of using a programming tool (in some cases visual) to sequence large language model prompts into an application; in most cases the result is a conversational UI.
A core feature of prompt chaining is cascading a task from one chain to the next. This cascading will typically continue for the duration of the user conversation.
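This cascading can be sketched as a loop in which each step's output becomes the next step's input. The `call_llm` function below is a hypothetical stand-in for a real LLM API call, used here only to make the shape of a chain concrete:

```python
# A minimal sketch of prompt chaining: each step's output feeds the next.
# `call_llm` is a placeholder, not a real LLM client.
def call_llm(prompt: str) -> str:
    # Stubbed response for illustration; a real chain would call an LLM here.
    return f"RESPONSE[{prompt}]"

def run_chain(user_input: str, templates: list[str]) -> str:
    """Cascade a task through a sequence of prompt templates."""
    result = user_input
    for template in templates:
        prompt = template.format(input=result)
        result = call_llm(prompt)
    return result

final = run_chain("book a flight", [
    "Extract the user's intent from: {input}",
    "Plan the steps needed to fulfil: {input}",
])
print(final)
```

Note how the output of the first node is embedded verbatim in the second node's prompt; this is exactly the path along which inaccuracies cascade.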
Prompt Drift is the process of cascading inaccuracies which can be caused by:
- Model-inspired tangents,
- Incorrect problem extraction,
- The inherent randomness and creative surprises of LLMs.
Chaining can act as a safeguard against model-inspired tangents, because each step of the Chain defines a clear goal. ~ Source
The image below shows how a single node or prompt, forming part of a larger chain, can be impacted to produce prompt drift.
- The user input can be unexpected or unplanned, producing an unforeseen output from the node.
- The previous node's output can be inaccurate or contain drift, which is then exacerbated in the current node.
- The LLM response itself can be unexpected, since LLMs are non-deterministic.
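One practical way to stop a node's unexpected output from propagating is to validate it before it cascades. The sketch below assumes each node is asked to return JSON with known keys; `call_llm` is again a hypothetical stub standing in for a real model call:

```python
import json

def call_llm(prompt: str) -> str:
    # Stub returning well-formed JSON; a real model's output may drift
    # from the requested format, which is what the guard catches.
    return '{"intent": "greeting"}'

def guarded_step(prompt: str, required_keys: set[str]) -> dict:
    """Validate a node's output before it cascades to the next node."""
    raw = call_llm(prompt)
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError(f"Node output is not valid JSON: {raw!r}")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"Node output is missing keys: {missing}")
    return data

result = guarded_step("Classify the intent of: hello", {"intent"})
print(result["intent"])
```

Failing fast at the node where drift originates is cheaper than letting an inaccurate output contaminate every downstream prompt.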
One way to counter prompt drift (error cascading) is to ensure that the prompt template is comprehensive and that enough contextual information is supplied to mitigate LLM hallucination.
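Such a template pins down the model's role, the context it may draw on, and the expected output, leaving less room to drift. The template below is an illustrative example, not taken from the article:

```python
# A sketch of a comprehensive prompt template: role, context, and output
# constraints are all stated explicitly to reduce drift and hallucination.
TEMPLATE = """You are a travel-booking assistant.

Context:
{context}

Answer ONLY using the context above. If the answer is not in the
context, reply exactly: "I don't know."

Question: {question}
Answer:"""

prompt = TEMPLATE.format(
    context="Flight BA123 departs London at 09:00 daily.",
    question="When does flight BA123 depart?",
)
print(prompt)
```

The explicit fallback instruction gives the model a sanctioned answer when the context is insufficient, instead of inviting it to improvise.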
In Closing
It is important not to see prompt chaining in isolation, but rather to consider Prompt Engineering as a discipline consisting of eight legs, as depicted below.
Prompt Engineering is the foundation of chaining, and as a discipline it is simple and accessible.
However, as the LLM landscape develops, prompts are becoming programmable and are being incorporated into more complex structures. These structures should combine the available affordances.
Hence chaining should be supported by elements like Agents, Pipelines, Chain-of-Thought reasoning, etc.