Controllable Agents For RAG With Human In The Loop Chat
This demo from LlamaIndex is a good example of how the capabilities of agents and RAG are merging, and how Human-In-The-Loop (HITL) chat can be used to manage long-running tasks.
Introduction
As I have written before, RAG implementations face significant challenges whenever they need to address more complex inquiries, hence the need for agent-like implementations.
One major hurdle for agent implementations is the issue of observability and steerability.
Agents frequently employ strategies such as chain-of-thought or planning to handle user inquiries, relying on multiple interactions with a Large Language Model (LLM).
Yet, within this iterative approach, monitoring the agent’s inner mechanisms or intervening to correct its trajectory midway through execution proves challenging.
To address this issue, LlamaIndex has introduced a lower-level agent specifically engineered to provide controllable, step-by-step execution on a RAG (Retrieval-Augmented Generation) pipeline.
This demonstration showcases the heightened control and transparency the new API brings to managing intricate queries and navigating extensive datasets.
Added to this, introducing agentic capabilities on top of a RAG pipeline can allow you to reason over much more complex questions.
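To make this concrete, below is a minimal sketch of what step-wise execution looks like with LlamaIndex's lower-level agent API (AgentRunner and OpenAIAgentWorker). The exact import paths vary between LlamaIndex versions, and the data file and sample question are placeholders rather than the demo's own; treat this as an illustration, not the notebook code.

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.agent import AgentRunner
from llama_index.core.tools import QueryEngineTool
from llama_index.agent.openai import OpenAIAgentWorker
from llama_index.llms.openai import OpenAI

# Build a simple RAG pipeline and expose it to the agent as a tool
# (./data/essay.txt is a placeholder for your own documents)
docs = SimpleDirectoryReader(input_files=["./data/essay.txt"]).load_data()
index = VectorStoreIndex.from_documents(docs)
query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="essay_query_tool",
    description="Answers questions about the essay.",
)

# The OpenAI LLM is the backbone of the agent; the lower-level API
# separates task creation from step execution
llm = OpenAI(model="gpt-4")
worker = OpenAIAgentWorker.from_tools([query_tool], llm=llm, verbose=True)
agent = AgentRunner(worker)

# Create a task, then execute and inspect it one step at a time
task = agent.create_task("What did the author do growing up?")
step_output = agent.run_step(task.task_id)

print(agent.get_completed_steps(task.task_id))   # what the agent has done so far
print(agent.get_upcoming_steps(task.task_id))    # what it plans to do next

# Keep stepping until the agent signals it is done, then finalise the answer
while not step_output.is_last:
    step_output = agent.run_step(task.task_id)
print(str(agent.finalize_response(task.task_id)))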
Human-In-The-Loop
The Human-In-The-Loop chat capability allows a human to follow a step-wise approach via a chat interface. While it is possible to ask agents complex questions that demand multiple reasoning steps, these queries can be long-running and can, in some instances, produce wrong answers.
The HITL approach, where questions are asked by the user, allows for a more granular process with human feedback after each step. These iterations are not long-running and can be steered by the user, while the user still leverages the agentic RAG capabilities.
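Continuing from the sketch above (and reusing the agent object), a HITL chat loop could look something like the following. The input() prompt is purely illustrative, and passing the user's feedback to run_step via its optional input argument is an assumption that should be checked against your LlamaIndex version.

# Hypothetical HITL loop: the human reviews each step and can steer the agent
task = agent.create_task(
    "Summarise the author's career and compare the companies he started."
)

step_output = agent.run_step(task.task_id)
while not step_output.is_last:
    # Show the human what the agent produced in this step
    print("Step output:\n", str(step_output.output))

    # Pause for human feedback before executing the next step
    feedback = input("Press Enter to continue, or type guidance for the agent: ").strip()
    if feedback:
        # Assumed: run_step accepts fresh user input to steer the next step
        step_output = agent.run_step(task.task_id, input=feedback)
    else:
        step_output = agent.run_step(task.task_id)

print("Final answer:", str(agent.finalize_response(task.task_id)))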
LlamaIndex has a complete Colab notebook on this functionality; as seen below, the only change you will need to make is to add your OpenAI API key:
import os
import openai
os.environ["OPENAI_API_KEY"] = "<Your OpenAI API Key Goes Here>"
The OpenAI LLM still acts as the backbone for the Agent.
There are also other instances where a Human-In-The-Loop approach is followed…
This approach only momentarily breaks out to a human for assistance and then reverts to the automated agent. Hence the human is used as a HITL tool amongst the agent’s other tools.
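One way to implement this pattern is to wrap the human in a tool the agent can call when it gets stuck. The sketch below uses LlamaIndex's FunctionTool around a simple input() prompt and reuses the query_tool from the earlier sketch; the tool name, description and the OpenAIAgent setup are assumptions, not code from the notebook.

from llama_index.core.tools import FunctionTool
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI

def ask_human(question: str) -> str:
    """Relay a question from the agent to a human and return the answer."""
    return input(f"Agent asks: {question}\nYour answer: ")

# Hypothetical human tool the agent can call alongside its other tools
human_tool = FunctionTool.from_defaults(
    fn=ask_human,
    name="ask_human",
    description="Ask a human for clarification or missing information.",
)

# The agent only breaks out to the human when it needs assistance,
# then carries on with its automated reasoning over the RAG tool.
agent = OpenAIAgent.from_tools(
    [query_tool, human_tool], llm=OpenAI(model="gpt-4"), verbose=True
)
print(agent.chat("Answer the question, asking the human if anything is ambiguous."))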
⭐️ Follow me on LinkedIn for updates on Large Language Models ⭐️
I’m currently the Chief Evangelist @ Kore AI. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.