PromptBreeder Evolves & Adapts Prompts For A Given Domain

There has been immense innovation in Prompt Strategies, and I have covered these strategies in detail. However, these approaches are often hand-crafted and sub-optimal for highly scalable and flexible implementations.

Oct 6, 2023


Taking a step back… LLMs need to be programmed, and currently Prompt Engineering is the avenue we have for doing so.

In turn, Prompt Engineering can be delivered at three stages: training time, generation time, or via Augmentation Tools.

Gradient-Free implementations are instances where prompts are engineered using different wording techniques and different operational approaches to constitute and deliver the prompt.

These approaches are gradient-free as they do not change or fine-tune the base LLM in any way. The prompt engineering approaches listed under gradient-free are generally domain agnostic and hand designed.
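
To make the distinction concrete, a gradient-free technique such as zero-shot Chain-of-Thought is nothing more than a hand-designed text template wrapped around the task. A minimal sketch in Python, where `call_llm` is a hypothetical stand-in for whatever LLM API is in use:

```python
# Gradient-free prompting: the base LLM is never modified; only the text
# sent to it changes. `call_llm` is a hypothetical stand-in for an LLM API.
def zero_shot_cot(question: str) -> str:
    prompt = f"Q: {question}\nA: Let's think step by step."
    return call_llm(prompt)
```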

Gradient approaches are machine learning approaches and can be seen as more automated; at the same time they are opaque, lacking the transparency of a pure prompt engineering approach. Methods like soft prompt tuning, for instance, directly fine-tune continuous prompt representations.

PromptBreeder, by contrast, is an automated self-improvement process that adapts prompts to the domain at hand without any gradient updates: all mutation and selection happens in plain text, driven by the LLM itself.

It needs to be noted that any approach that updates all or a portion of LLM parameters will not scale as models get bigger and, moreover, will not work with the increasing number of LLMs hidden behind an API.

Back To PromptBreeder

PromptBreeder is best understood against the backdrop of Soft Prompts, which are created during the process of prompt tuning.

Unlike hard prompts, soft prompts cannot be viewed or edited as text: a soft prompt consists of an embedding, a string of numbers, that derives knowledge from the larger model.

A disadvantage is the lack of interpretability of soft prompts. The AI discovers prompts relevant for a specific task but can’t explain why it chose those embeddings. Like deep learning models themselves, soft prompts are opaque.

Soft prompts act as a substitute for additional training data. Researchers recently estimated that a good language classifier prompt is worth hundreds to thousands of extra data points.
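
As a sketch of what “a string of numbers” means in practice, here is a minimal soft-prompt module in PyTorch: a small matrix of learnable virtual-token embeddings is prepended to the input embeddings while the base model stays frozen. The token count and embedding size below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable 'virtual tokens' prepended to the input embeddings.

    The frozen base model is untouched; only these embeddings are trained,
    which is why the resulting prompt is numbers rather than readable text.
    """
    def __init__(self, n_virtual_tokens: int = 20, embed_dim: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)
```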

PromptBreeder is underpinned by an LLM and evolves a population of task-oriented prompts, evaluating each prompt against a training set.

The process iterates over numerous generations to evolve the task-prompts.

Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way.
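
In code, that self-referential step is just two LLM calls: one where the mutation-prompt rewrites a task-prompt, and one where a hyper-mutation-prompt rewrites the mutation-prompt itself. A minimal sketch; the template wording is illustrative rather than the paper’s verbatim prompts, and `call_llm` is again a hypothetical LLM API:

```python
def mutate_task_prompt(task_prompt: str, mutation_prompt: str) -> str:
    # The mutation-prompt steers how the LLM rewrites the task-prompt.
    return call_llm(f"{mutation_prompt}\nINSTRUCTION: {task_prompt}\nNEW INSTRUCTION:")

def mutate_mutation_prompt(mutation_prompt: str, hyper_mutation_prompt: str) -> str:
    # Hyper-mutation: the same mechanism applied to the mutation-prompt itself,
    # which is what makes the system self-referential.
    return call_llm(f"{hyper_mutation_prompt}\n{mutation_prompt}\nNEW MUTATION PROMPT:")
```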

According to DeepMind, PromptBreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks.

In overview: given a problem description and an initial set of general Thinking Styles and Mutation Prompts, PromptBreeder generates a population of units of evolution, each unit typically consisting of two task-prompts and a mutation-prompt.
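
A unit of evolution maps naturally onto a small record. This sketch assumes the typical shape just described, two task-prompts plus one mutation-prompt:

```python
from dataclasses import dataclass

@dataclass
class EvolutionUnit:
    # Typically two task-prompts (e.g., one that poses the problem and one
    # that refines the answer) plus the mutation-prompt used to vary them.
    task_prompts: tuple[str, str]
    mutation_prompt: str
    fitness: float = 0.0
```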

The fitness of a task-prompt is determined by evaluating its performance on a random batch of training data. Over multiple generations, PromptBreeder mutates task-prompts as well as mutation-prompts using five different classes of mutation operators.
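
Putting the pieces together, one evolutionary step might look like the sketch below: fitness is estimated as accuracy on a random training batch, and selection follows a simple binary tournament where the loser is overwritten by a mutated copy of the winner. The batch size is an illustrative assumption, and `solve` and `is_correct` are hypothetical helpers that run a unit’s task-prompts on an example and score the result:

```python
import random

def fitness(unit: EvolutionUnit, train_set: list, batch_size: int = 20) -> float:
    # Estimate fitness as accuracy on a random batch of training examples.
    batch = random.sample(train_set, min(batch_size, len(train_set)))
    correct = sum(is_correct(solve(unit.task_prompts, ex), ex) for ex in batch)
    return correct / len(batch)

def evolution_step(population: list, train_set: list) -> None:
    # Binary tournament: compare two random units; the loser is replaced
    # by a mutated copy of the winner's prompts.
    a, b = random.sample(population, 2)
    a.fitness, b.fitness = fitness(a, train_set), fitness(b, train_set)
    winner, loser = (a, b) if a.fitness >= b.fitness else (b, a)
    loser.task_prompts = tuple(
        mutate_task_prompt(p, winner.mutation_prompt) for p in winner.task_prompts
    )
    loser.mutation_prompt = winner.mutation_prompt
```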

The focus is on evolving domain-adaptive task-prompts and increasingly useful mutation-prompts in a self-referential way.

PromptBreeder is a general-purpose, self-referential, self-improvement mechanism that evolves & adapts prompts for a given domain.

There are four variants of self-referential prompt evolution, from the simplest to the full PromptBreeder system:

(a) Direct: The LLM is directly used to generate variations P’ of a prompt strategy P.

(b) Mutation-Prompt Guided: Using a mutation prompt M, an LLM can be explicitly prompted to produce variations.

(c) Hyper Mutation: By using a hyper mutation prompt H, we can also evolve the mutation prompt itself, turning the system into a self-referential one.

(d) PromptBreeder: Improves the diversity of evolved prompts and mutation-prompts by generating an initial population of prompt strategies from a set of seed thinking-styles T, mutation-prompts M, as well as a high-level description D of the problem domain.
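
Seen side by side, the four variants differ only in what text is fed to the LLM at mutation time. The sketch below, continuing the helpers from the earlier sketches, paraphrases the idea; the template wording is illustrative, not the paper’s exact prompts:

```python
import random

def direct_mutation(p: str) -> str:
    # (a) Direct: no mutation-prompt; simply ask the LLM for a variant of P.
    return call_llm(f"Generate a variation of the following instruction:\n{p}")

def guided_mutation(p: str, m: str) -> str:
    # (b) Mutation-prompt guided: the mutation-prompt M steers the variation.
    return call_llm(f"{m}\n{p}")

def hyper_mutation(m: str, h: str) -> str:
    # (c) Hyper-mutation: the hyper-mutation-prompt H rewrites M itself.
    return call_llm(f"{h}\n{m}")

def initial_population(thinking_styles, mutation_prompts, domain_description, size):
    # (d) PromptBreeder initialisation: a random thinking-style and
    # mutation-prompt are combined with the domain description D to seed
    # a diverse population of units.
    units = []
    for _ in range(size):
        t = random.choice(thinking_styles)
        m = random.choice(mutation_prompts)
        prompts = tuple(
            call_llm(f"{m} {t} INSTRUCTION: {domain_description} INSTRUCTION MUTANT:")
            for _ in range(2)
        )
        units.append(EvolutionUnit(task_prompts=prompts, mutation_prompt=m))
    return units
```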

⭐️ Follow me on LinkedIn for updates on Large Language Models ⭐️

I’m currently the Chief Evangelist @ Kore AI. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
