The LangChain Implementation Of DeepMind’s Step-Back Prompting

LangChain recently released an implementation of Step-Back Prompting. In this article you will find full working code to run DeepMind’s Step-Back Prompting in a notebook.

6 min read · Oct 26, 2023


Step-Back Prompting was introduced in research from Google DeepMind. It is a prompting approach that enables LLMs to perform abstractions and derive high-level concepts and first principles, from which accurate answers can then be derived.

As shown in the image below, step-back prompting is a more technical prompt engineering approach in which the original question is first distilled into a step-back question. The answer to that step-back question is then used to produce the final answer.
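To make the flow concrete, here is a minimal sketch of the pattern in plain Python. The llm and retrieve helpers are hypothetical placeholders for a model call and a search step, not part of any library:

def step_back_answer(llm, retrieve, question):
    # 1. Abstract the original question into a more generic step-back question.
    step_back_question = llm(
        "Paraphrase this into a more generic step-back question: " + question
    )
    # 2. Gather context for the step-back question.
    step_back_context = retrieve(step_back_question)
    # 3. Answer the original question using the high-level context.
    return llm("Context: " + step_back_context + "\n\nOriginal question: " + question)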

I have always maintained that this prompting technique is too complex for a static prompt engineering approach, or even a prompt template approach, and that it would work best in a chained or autonomous agent implementation.

I was very pleased to discover that LangChain has an implementation of Step-Back Prompting.

LangChain modified the prompts slightly to work better with chat models.

Below are the installations required; you will need an OpenAI API key to run this notebook.

Notice how the few-shot examples are defined with an input and an output.

Notice also the system prompt:

You are an expert at world knowledge. Your task is to step back and paraphrase a question to a more generic step-back question, which is easier to answer. Here are a few examples:

Consider the installations, imports, and example definitions below:

pip install langchain
pip install openai
pip install langchainhub
pip install duckduckgo-search

######################################

import os
import openai

os.environ["OPENAI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxxxxx"

######################################

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableLambda

######################################

# Few-shot examples
examples = [
    {
        "input": "Could the members of The Police perform lawful arrests?",
        "output": "what can the members of The Police do?",
    },
    {
        "input": "Jan Sindel’s was born in what country?",
        "output": "what is Jan Sindel’s personal history?",
    },
]
# We now transform these into example messages
example_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{input}"),
        ("ai", "{output}"),
    ]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)

######################################

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", """You are an expert at world knowledge. Your task is to step back and paraphrase a question to a more generic step-back question, which is easier to answer. Here are a few examples:"""),
        # Few-shot examples
        few_shot_prompt,
        # New question
        ("user", "{question}"),
    ]
)

######################################

question_gen = prompt | ChatOpenAI(temperature=0) | StrOutputParser()

######################################

question = "was chatgpt around while trump was president?"

######################################

question_gen.invoke({"question": question})

The question asked was: was chatgpt around while trump was president?

The step-back question generated is:

what is the timeline of ChatGPT’s existence?
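To see the abstraction step on another input, you can invoke the generator with any question. The example below is my own illustration; the generated step-back question will vary by model and run:

# Hypothetical second input, purely to illustrate the abstraction step;
# the generated step-back question will differ between runs.
question_gen.invoke(
    {"question": "Which year did Albert Einstein win the Nobel Prize in Physics?"}
)
# Typically yields something more generic, e.g.
# "what are the major achievements of Albert Einstein?"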

The DuckDuckGo Search API is used as the retriever:

from langchain.utilities import DuckDuckGoSearchAPIWrapper

search = DuckDuckGoSearchAPIWrapper(max_results=4)

def retriever(query):
    return search.run(query)

######################################

retriever(question)

And the response:

This includes content about former President Donald Trump. According to further tests, ChatGPT successfully wrote poems admiring all recent U.S. presidents, but failed when we entered a query for ... On Wednesday, a Twitter user posted screenshots of him asking OpenAI's chatbot, ChatGPT, to write a positive poem about former President Donald Trump, to which the chatbot declined, citing it ... While impressive in many respects, ChatGPT also has some major flaws. ... [President's Name]," refused to write a poem about ex-President Trump, but wrote one about President Biden ... After ChatGPT wrote a poem praising President Biden, but refused to write one praising former president Donald Trump, the creative director for Sen. Ted Cruz (R-Tex.), Leigh Wolf, lashed out.

And retrieval using the step-back question:

retriever(question_gen.invoke({"question": question}))

And the response:

ChatGPT is a fine-tuned version of GPT-3.5, a family of large language models that OpenAI released months before the chatbot. GPT-3.5 is itself an updated version of GPT-3 , which appeared in 2020. ChatGPT, which stands for Chat Generative Pre-trained Transformer, is a large language model-based chatbot developed by OpenAI and launched on November 30, 2022, which enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive prompts and replies, known as prompt engineering, are considered at each conversation stage as a ... OpenAI released an early demo of ChatGPT on November 30, 2022, and the chatbot quickly went viral on social media as users shared examples of what it could do. Stories and samples included ... March 14, 2023 - Anthropic launched Claude, its ChatGPT alternative. March 20, 2023 - A major ChatGPT outage affects all users for several hours. March 21, 2023 - Google launched Bard, its ...

The code below builds and runs the chain that combines the normal context and the step-back context:

from langchain import hub

response_prompt = hub.pull("langchain-ai/stepback-answer")
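If you want to see exactly what was pulled from the hub, you can print the prompt object; printing a ChatPromptTemplate shows its input variables and message templates:

# Quick inspection of the hub prompt (optional)
print(response_prompt)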

######################################

chain = {
    # Retrieve context using the normal question
    "normal_context": RunnableLambda(lambda x: x["question"]) | retriever,
    # Retrieve context using the step-back question
    "step_back_context": question_gen | retriever,
    # Pass on the question
    "question": lambda x: x["question"],
} | response_prompt | ChatOpenAI(temperature=0) | StrOutputParser()
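Note that the dict at the head of this chain is LCEL shorthand: LangChain coerces it into a RunnableParallel, so all three branches are computed from the same input. A sketch of the explicit equivalent, assuming the same objects defined above:

from langchain.schema.runnable import RunnableParallel

# Explicit equivalent of the dict shorthand above: each branch receives
# the same {"question": ...} input and runs independently.
context_map = RunnableParallel(
    normal_context=RunnableLambda(lambda x: x["question"]) | retriever,
    step_back_context=question_gen | retriever,
    question=lambda x: x["question"],
)
# context_map | response_prompt | ChatOpenAI(temperature=0) | StrOutputParser()
# behaves the same as the chain defined above.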

######################################

chain.invoke({"question": question})

With the response below, which reaches the correct conclusion, although the launch date it cites contradicts the retrieved context (which gives ChatGPT’s actual launch date as November 30, 2022):

No, ChatGPT was not around while Donald Trump was president. ChatGPT was developed by OpenAI and launched in June 2020, after Donald Trump had already left office. Therefore, it did not have the opportunity to generate content or respond to queries specifically about Donald Trump during his presidency.

And below, the code using only the normal context:

response_prompt_template = """You are an expert of world knowledge. I am going to ask you a question. Your response should be comprehensive and not contradicted with the following context if they are relevant. Otherwise, ignore them if they are not relevant.

{normal_context}

Original Question: {question}
Answer:"""
response_prompt = ChatPromptTemplate.from_template(response_prompt_template)

######################################

chain = {
    # Retrieve context using the normal question
    "normal_context": RunnableLambda(lambda x: x["question"]) | retriever,
    # Pass on the question
    "question": lambda x: x["question"],
} | response_prompt | ChatOpenAI(temperature=0) | StrOutputParser()

######################################

chain.invoke({"question": question})

And the subsequent answer, which is incorrect:

Yes, ChatGPT was around while Donald Trump was president. However, based on the provided context, it appears that ChatGPT refused to write a positive poem about former President Donald Trump but wrote one about President Biden.
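To compare the two behaviours directly, you could keep the two chains under distinct names instead of reusing chain for both. A sketch, using the hypothetical names step_back_chain and normal_chain for the two chains defined above:

# Assumes the two chains above were assigned to distinct names
# (step_back_chain and normal_chain) instead of both being `chain`.
for name, c in [("step-back", step_back_chain), ("normal only", normal_chain)]:
    print(f"--- {name} ---")
    print(c.invoke({"question": question}))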

What I love about LangChain in general is how quick they are to implement new prompting techniques and strategies, together with frameworks like autonomous agents and more.

⭐️ Follow me on LinkedIn for updates on Large Language Models ⭐️

I’m currently the Chief Evangelist @ Kore AI. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.

