Agents, LLMs & Multihop Question Answering

Within a development framework, Agents have access to a suite of tools and, based on the user input and demands, decide which tool to use. Where predetermined chains do not exist, Agents can still service the user request.

Cobus Greyling
5 min read · Apr 21, 2023


Setting The Scene

Accessing and developing on Large Language Models (LLMs) is an ever-expanding field…

However, there are a few principles emerging which are being widely implemented.

This image shows the current development stack for LLMs and the development affordances available.

Considering the image above, the four layers just above the LLM are all directly related to different approaches and implementations of prompts.

There has been an evolution in prompt implementation methods.

Prompt chaining is the process of chaining or sequencing a number of prompts to create a larger application. The prompt sequences can be arranged in series or parallel.
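As an illustration, here is a minimal sketch of two prompts chained in series, where call_llm() is a stand-in for any LLM completion call and not a specific library:

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM completion call (OpenAI, Hugging Face, etc.)."""
    ...

def summarise_then_extract(text: str) -> str:
    # Prompt 1: summarise the input text.
    summary = call_llm(f"Summarise this text in two sentences:\n{text}")
    # Prompt 2: the output of prompt 1 feeds the next prompt (in series).
    return call_llm(f"List the people mentioned in this summary:\n{summary}")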

One of the impediments of prompt chaining, however, is that a chain is a predetermined flow or sequence of calls to LLMs and other APIs.

What about potential unknown scenarios or user behaviour for which no chain has been developed? Can there be a level of autonomy, where the LLM based application can decide which route will service the user request best? Enter Agents.

Consider an Agent which has access to an extractive model (a prompt pipeline) trained on a corpus of data containing information on US presidents…

The extractive model, based on the document store, does well at the following straightforward question:

Who was the 1st president of the USA?
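For context, below is a minimal sketch of how such an extractive pipeline can be built in Haystack 1.x; the sample document and reader model are illustrative assumptions, not the author's exact setup.

from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# A toy document store holding facts about US presidents.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents(
    [{"content": "George Washington was the 1st president of the USA."}]
)

# The retriever finds candidate passages; the reader extracts an answer span.
retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
presidents_qa = ExtractiveQAPipeline(reader=reader, retriever=retriever)

result = presidents_qa.run(query="Who was the 1st president of the USA?")
print(result["answers"][0].answer)  # "George Washington"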

But an Agent based solely on an extractive model does not do well at finding answers to a question which does not clearly match a phrase in the document store.

In the example below, an ambiguous question is posed to the Agent:

What year was the 1st president of the USA born?

This can be considered a multihop question, which demands a level of deduction and reasoning.

If the Agent has access to the document store (extractive model) and an LLM, the Agent can decide which tool from its arsenal will best service the user request.

Leveraging the LLM, the Agent can also take a multihop approach, using chain-of-thought reasoning to answer the question.
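A rough sketch of how such an Agent is assembled in Haystack 1.x follows; the model name and API-key handling here are assumptions:

from haystack.agents import Agent
from haystack.nodes import PromptNode

# The PromptNode wraps the LLM that drives the Agent's reasoning loop.
prompt_node = PromptNode(
    model_name_or_path="text-davinci-003",
    api_key="YOUR_OPENAI_API_KEY",
    stop_words=["Observation:"],  # the Agent pauses here to invoke a tool
)
agent = Agent(prompt_node=prompt_node)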

In the response of the Haystack Agent to this question, notice how the question is decomposed and how a chain-of-thought prompting approach is followed.


Finally

In the case of the Haystack Agent, the exact wording of the description is really important. The Agent leverages this description to understand which tool to use.

If the Agent fails to pick the right tool, you can adjust the description.

I changed the description from:

useful for when you need to answer questions related to the presidents of the USA.

to:

useful for when you need to answer questions related to the presidents of the USA when there is no answer in the document store.

Below is the code snippet:

from haystack.agents import Tool

# Register the extractive pipeline as a tool the Agent can choose;
# the Agent relies on the description to decide when to use it.
search_tool = Tool(
    name="USA_Presidents_QA",
    pipeline_or_node=presidents_qa,
    description="useful for when you need to answer questions related to "
    "the presidents of the USA when there is no answer in the document store.",
    output_variable="answers",
)
agent.add_tool(search_tool)
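With the tool registered, the Agent can then be run against the multihop question; a minimal usage sketch follows (the exact shape of the result dictionary may vary by Haystack version):

result = agent.run(query="What year was the 1st president of the USA born?")
print(result["answers"][0].answer)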

When considering LLM-based applications, it is evident that orchestration will play a large role, together with an appropriate balance between prompt pipelines, prompt chaining and agents.

⭐️ Please follow me on LinkedIn for updates on Conversational AI ⭐️

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

https://www.linkedin.com/in/cobusgreyling
