Least To Most Prompting
Least-to-most prompting enables Large Language Models (LLMs) to handle complex reasoning by solving simpler sub-problems first.
I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
Inference is the process of reaching a conclusion based on evidence and reasoning. In turn, reasoning can be elicited from an LLM by providing it with a few examples of how to reason and make use of evidence.
Let’s first take a step back.
In the case of Chain-of-Thought (CoT) reasoning, a few-shot prompt engineering approach teaches the LLM how to decompose a challenge or question into a reasoning pattern. In essence, the question is decomposed, and the LLM learns how to do this from a few examples.
More on Chain Of Thought Reasoning
The Self-Ask approach not only guides the LLM to follow a chain of thought while reasoning, but aims to have the LLM ask itself questions to reach the final conclusion.
The basic premise of Self-Ask is that even if the LLM does not have an explicit answer to a specific question, the LLM does have enough supporting information. This supporting information to sub-questions can be used to reach a final conclusion.
More on Self-Ask
But what if the problem the LLM needs to solve, is harder than the examples given?
Hence a novel prompting strategy was developed, named least-to-most prompting. The method rests on two steps:
- Decompose a complex problem into a series of simpler sub-problems.
- Then solve each of these sub-problems in sequence.
Solving each sub-problem is facilitated by the answers to previously solved sub-problems.
Hence least-to-most prompting uses a progressive sequence of prompts to reach a final conclusion.
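The progressive-prompt loop can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: `llm` stands in for any completion call (for example, a request to the OpenAI API), and the `Q:`/`A:` prompt format is an assumption for illustration, not the exact template used by the paper.

```python
def least_to_most(llm, question, subproblems):
    """Solve subproblems in order, feeding each solved Q/A pair
    back into the prompt so later steps can build on earlier answers.

    `llm` is any callable that takes a prompt string and returns a
    completion string (placeholder for a real API call)."""
    context = f"QUESTION: {question}\n"
    answer = ""
    for sub in subproblems:
        prompt = context + f"Q: {sub}\nA:"
        answer = llm(prompt)               # one LLM call per subproblem
        context = prompt + f" {answer}\n"  # grow the prompt progressively
    return answer  # the answer to the final subproblem is the conclusion
```

Note that each call sees all previously solved subproblems in its prompt, which is exactly the "least to most" progression.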
Least-to-most prompting can be combined with other prompting techniques like chain-of-thought & self-consistency. For some tasks, the two stages in least-to-most prompting can be merged to form a single-pass prompt. — Source
Here is a practical example prompt:
CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st.
I saw that it was on discount, so bought a shirt that was originally $30,
and got 40% off. I saw that you have a new discount for shirts at 50%.
I'm wondering if I can return the shirt and have enough store credit to
buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to
customer inquiries. Returns are allowed within 30 days.
Today's date is March 29th. There is currently a 50% discount on all
shirts. Shirt prices range from $18-$100 at your store. Do not make up any
information about discount policies.
Determine if the customer is within the 30-day return window.
When prompted directly, text-davinci-003 returns the wrong answer, as seen below:
Below, the least-to-most prompting approach is followed and illustrated within the OpenAI playground.
The sequence is to first ask the model (for consistency we again use text-davinci-003) the following question:
What subproblems must be solved before answering the inquiry?
Subsequently the LLM generates four subproblems which need to be solved in order for the instruction to be completed.
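This decomposition step can be automated by appending the question above to the inquiry and parsing the model's numbered reply into a list. A minimal sketch, assuming the model answers with a numbered list (the regex and the prompt suffix are assumptions for illustration, not part of the original article):

```python
import re

# Suffix appended to the inquiry to trigger the decomposition step.
DECOMPOSE_QUESTION = (
    "What subproblems must be solved before answering the inquiry?"
)

def parse_subproblems(reply):
    """Extract items from a numbered reply such as '1. ...\n2. ...'."""
    pattern = re.compile(r"^\s*\d+\.\s*(.+?)\s*$", re.MULTILINE)
    return [m.group(1) for m in pattern.finditer(reply)]
```

Each extracted item can then be posed to the model one at a time, as described next.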
Below, the least-to-most prompting approach is followed: the LLM is prompted with the four subproblems, or tasks, it identified.
However, the subproblems are posed one at a time, with each previous question and answer included in the prompt; hence the prompt is built out one Q/A pair, or subproblem, at a time.
The previous prompts are included as a reference and serve a few-shot purpose.
The scenario presented to the LLM can be seen as ambiguous, but the LLM does a stellar job in following the sequence and reaching the correct answer.
The correct answer to this prompt is:
Yes, the customer can purchase two shirts at the current 50% discount
with their store credit: the $18 of store credit covers two $18 shirts
(originally $36 in total), which cost $18 after the 50% discount.
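The arithmetic behind this answer can be checked directly. Working in cents to avoid floating-point issues, and assuming the customer buys the cheapest $18 shirts from the stated $18-$100 range:

```python
# Store credit: the shirt was $30 with 40% off, so the customer paid $18.
original_cents = 30_00
credit_cents = original_cents * (100 - 40) // 100   # 1800 cents = $18

# Two of the cheapest $18 shirts (originally $36 total) at 50% off.
two_shirts_cents = 2 * 18_00 * (100 - 50) // 100    # 1800 cents = $18

# The store credit exactly covers the purchase.
assert credit_cents >= two_shirts_cents
```

So the credit and the discounted price of two shirts are both exactly $18, which is why the answer hinges on the customer choosing the cheapest shirts.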
Both Chain-of-Thought prompting and Self-Ask prompting, via a playground environment, supply the LLM with a single example to follow in one prompt.
However, the principle behind Least-To-Most is to get a breakdown from the LLM, and then in a sequential fashion step through the questions and answers.
Three caveats to add to this are:
- These three prompting methods can be implemented via an autonomous agent. Here is an article on the Self-Ask approach making use of a LangChain Agent. Making use of an Agent introduces a high level of efficiency and autonomy.
- The aim of these approaches is the following: in cases where the LLM does not have a direct answer, it can use its powers of reasoning and deduction, applying existing knowledge to reach a correct conclusion.
- The Least-To-Most approach chains these subtasks together. However, chaining is susceptible to a prompt vulnerability called cascading, also referred to as Prompt Drift: an error or inaccuracy from the LLM is carried from prompt to prompt along the chain.