Self-Ask Prompting

Self-Ask Prompting is a progression from Chain-of-Thought Prompting. Below are a few practical examples and an implementation of Self-Ask using both manual prompting and the LangChain framework.

Cobus Greyling
4 min read · Jul 26, 2023


I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.

Considering the image below, it is evident that Self-Ask Prompting is a progression from Direct and Chain-Of-Thought prompting.

The interesting thing about self-ask prompting is that the LLM's reasoning is shown explicitly, and the question is decomposed into smaller follow-up questions.

The LLM knows when the final answer has been reached and can move from the intermediate follow-up answers to a final answer.

Image source: Measuring and Narrowing the Compositionality Gap in Language Models

Below is a practical example from the OpenAI Playground, making use of completion mode and the text-davinci-003 model.
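As a minimal sketch of what such a Playground prompt looks like in code, the snippet below uses the pre-1.0 openai Python package with the self-ask prompt format; the one-shot demonstration and the placeholder API key are illustrative, not part of the original article.

import openai

openai.api_key = "xxxxxxxxxxxxxxxxxxxx"

# One-shot self-ask demonstration, followed by the new question.
prompt = """Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins

Question: What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here:"""

# Completion mode with text-davinci-003; temperature 0 for deterministic output.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0,
    max_tokens=256,
)
print(response["choices"][0]["text"])

In this manual setting the model answers its own follow-up questions from its training data; the LangChain agent further down replaces those intermediate answers with live search results.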

To some extent, self-ask makes the output of the LLM more conversational, while surfacing the decomposed, explicit reasoning also makes the response more informative.

The Self-Ask approach allows an LLM to answer questions it was not explicitly trained on. The model might not hold the direct answer to a question, but answers to the sub-questions will exist in its training data.

Hence the model has the individual, yet disparate, facts needed to answer the question, but lacks the ability to compose them into a final answer. Self-Ask Prompting guides the LLM in this direction.

Compositional reasoning lets models go beyond memorisation of directly observed facts to deduce previously unseen knowledge. ~ Source.

The search engine results, which form part of the LangChain agent below, act as a contextual reference from which the LLM can extract and compose a response.
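Conceptually, the agent alternates between LLM generation and search, splicing each search result back into the prompt as an intermediate answer. The sketch below is hypothetical pseudocode, not LangChain's actual implementation; ask_llm() and web_search() are stand-in functions.

# Hypothetical self-ask control loop; ask_llm() and web_search() are stand-ins.
def self_ask(question: str) -> str:
    transcript = f"Question: {question}\nAre follow up questions needed here:"
    while True:
        # The LLM continues the transcript until it either asks a follow-up
        # question or states the final answer.
        step = ask_llm(transcript, stop=["Intermediate answer:"])
        transcript += step
        if "So the final answer is:" in step:
            return step.split("So the final answer is:")[-1].strip()
        # Answer the follow-up with an external search and splice the result
        # back into the transcript as the intermediate answer.
        follow_up = step.split("Follow up:")[-1].strip()
        transcript += f"\nIntermediate answer: {web_search(follow_up)}\n"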

Below is the complete Python code to run a self-ask agent within the LangChain framework. The agent makes use of OpenAI and SerpAPI for web search.

pip install langchain
pip install openai
pip install google-search-results

import os

# Set the API keys for OpenAI and SerpAPI (replace the placeholders with your own keys).
os.environ["OPENAI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxx"
os.environ["SERPAPI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxx"

from langchain import OpenAI, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType

# LLM with temperature 0 for deterministic output.
llm = OpenAI(temperature=0)

# SerpAPI performs the web searches that supply the intermediate answers.
search = SerpAPIWrapper()

# The self-ask agent expects a single tool named "Intermediate Answer".
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]

# Create the self-ask-with-search agent; verbose=True prints the reasoning steps.
self_ask_with_search = initialize_agent(
    tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)

self_ask_with_search.run(
    "What is the hometown of the reigning men's U.S. Open champion?"
)

Below is the output from the agent, showing the follow-up questions, the intermediate answers, and then the final answer once the chain is completed.

> Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain

> Finished chain.
El Palmar, Spain

This is how the output from the Colab notebook looks.

⭐️ Follow me on LinkedIn for updates on Conversational AI ⭐️



Reference: Ofir Press et al., Measuring and Narrowing the Compositionality Gap in Language Models (2022).
