LangChain Agents & LlamaIndex Tools

Agents are autonomous: they use a Large Language Model (LLM) to decide which sequence of actions to pursue and which tools to use.

Cobus Greyling
4 min read · Jun 15, 2023


I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

Looking at the diagram below, when receiving a request, Agents make use of an LLM to decide which Action to take.

After an Action is completed, the Agent enters the Observation step.

From the Observation step, the Agent shares a Thought; if a final answer has not been reached, the Agent cycles back to another Action in order to move closer to a Final Answer.
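The Action → Observation → Thought cycle described above can be sketched in a few lines of plain Python. This is a hypothetical toy loop, not LangChain's implementation: `llm_decide` is a stub standing in for a real LLM call, and the tool names and stopping rule are illustrative assumptions.

```python
# Toy sketch of the agent loop: Thought -> Action -> Observation,
# repeating until the "LLM" produces a Final Answer.

def llm_decide(question, observations):
    """Stub 'LLM': picks the next action, or finishes once it has data."""
    if observations:
        return {"final_answer": f"Answer based on: {observations[-1]}"}
    return {"action": "search", "input": question}

# Each tool is just a callable the agent can invoke as an Action.
TOOLS = {
    "search": lambda q: f"search results for '{q}'",
}

def run_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        thought = llm_decide(question, observations)   # Thought
        if "final_answer" in thought:                  # Final Answer reached
            return thought["final_answer"]
        tool = TOOLS[thought["action"]]                # Action
        observations.append(tool(thought["input"]))    # Observation
    return "No final answer within max_steps"

print(run_agent("What is the arts scene in Berlin?"))
```

In this sketch the loop runs twice: the first pass produces an Action and an Observation, and the second pass turns that Observation into a Final Answer.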

There is a whole array of Action options available to the LangChain Agent.

Actions are taken by the Agent via various tools. The more tools an Agent has available, the more actions it can take.

Below is a list of some of the tools available to LangChain agents.

LlamaIndex forms part of this list of tools, with LlamaIndex acting as a framework to access and search different types of data.
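Conceptually, an agent's tool set is a registry of named callables, each with a description the LLM uses to pick the right one. Here is a minimal, hypothetical sketch of that idea; the tool names, descriptions, and `take_action` dispatcher are illustrative assumptions, not LangChain's API.

```python
# Toy tool registry: name -> (description for the LLM, callable to run).
tools = {
    "wikipedia": {
        "description": "Load and query Wikipedia articles",
        "func": lambda q: f"wikipedia lookup: {q}",
    },
    "calculator": {
        "description": "Evaluate simple arithmetic expressions",
        "func": lambda expr: str(eval(expr)),  # toy only; eval is unsafe
    },
}

def take_action(tool_name, tool_input):
    """Dispatch the LLM's chosen action to the matching tool."""
    return tools[tool_name]["func"](tool_input)

print(take_action("calculator", "2 + 3"))
```

Adding a tool to the registry is all it takes to give the agent a new possible action, which is why the breadth of available tools directly determines what an agent can do.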


Below is an example of creating an agent tool via LlamaIndex.

The LlamaIndex OnDemandLoaderTool is a powerful general agent tool that allows for ad hoc data querying from any data source.

This tool takes in a BaseReader data loader, and when called will 1) load data, 2) index data, and 3) query the data.
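Roughly, those three steps might look like the toy sketch below. This is a hypothetical stand-in to show the load → index → query pattern, not LlamaIndex's actual implementation; the class, the keyword index, and the fake loader are all illustrative assumptions.

```python
# Toy on-demand loader tool: on every call it 1) loads data,
# 2) builds a (very crude) index, and 3) queries that index.

class ToyOnDemandTool:
    def __init__(self, loader):
        self.loader = loader

    def __call__(self, load_args, query_str):
        docs = self.loader(load_args)                         # 1) load data
        index = {w.lower() for d in docs for w in d.split()}  # 2) index data
        hits = [w for w in query_str.lower().split()
                if w in index]                                # 3) query data
        return f"matched terms: {hits}"

# Fake loader standing in for a BaseReader data loader.
fake_loader = lambda pages: [f"{p} is a city with museums" for p in pages]

demo_tool = ToyOnDemandTool(fake_loader)
print(demo_tool(["Berlin"], "What museums are in Berlin?"))
```

The point of the on-demand design is that nothing is loaded or indexed until the tool is called, so the agent only pays that cost for queries that actually need the data.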

The example below shows how you can use the OnDemandLoaderTool to convert the Wikipedia data loader into an accessible search tool for a LangChain agent.

pip install llama_index
pip install langchain
pip install wikipedia

import os
import openai
openai.api_key = 'xxxxxxxxxxxxxxxxxxxxxxx'

from llama_index.tools.ondemand_loader_tool import OnDemandLoaderTool
from llama_index.readers.wikipedia import WikipediaReader
from typing import List

from pydantic import BaseModel

reader = WikipediaReader()

tool = OnDemandLoaderTool.from_defaults(
    reader,
    name="Wikipedia Tool",
    description="A tool for loading and querying articles from Wikipedia",
)

# run the LlamaIndex tool by itself
tool(["Berlin"], query_str="What's the arts and culture scene in Berlin?")

# run the tool from a LangChain Agent
lc_tool = tool.to_langchain_structured_tool(verbose=True)
lc_tool.run(
    tool_input={
        "pages": ["Berlin"],
        "query_str": "What's the arts and culture scene in Berlin?",
    }
)

And the Colab notebook result from running only the LlamaIndex tool:

Initialise LangChain Agent

Initialising and running a LangChain agent with access to the LlamaIndex tool:

from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", streaming=True)

agent = initialize_agent(
    [lc_tool],
    llm=llm,
    agent="structured-chat-zero-shot-react-description",
    verbose=True,
)

agent.run("Tell me about the arts and culture of Berlin")

And the agent run results from the notebook:
