LangChain-Based Plan & Execute AI Agent With GPT-4o-mini
A few months ago I wrote a piece on the Plan-and-Solve study, which consists of two components: first creating a plan to break the task into smaller subtasks, and then executing those subtasks according to the plan.
Introduction
I also wrote about the LangChain implementation of that study, which took the form of an agentic application.
Considering that a Large Language Model usually forms the backbone of an agentic application or AI agent, I wanted to make use of the recently launched small model from OpenAI, called gpt-4o-mini.
A good place to track and trace the performance of the model from a token usage, inference latency and accuracy perspective is LangSmith, as seen in the image below.
Plan & Solve Prompting
As has been widely established by now, Chain-of-Thought (CoT) prompting is a highly effective method for querying LLMs in a zero-shot or few-shot fashion.
It excels at tasks requiring multi-step reasoning, where the model is guided through step-by-step demonstrations before addressing the problem with the instruction "Let us think step by step."
However, recent studies have identified three main limitations of CoT prompting:
- Calculation errors: a 7% failure rate in test examples.
- Missing steps: a 12% failure rate in sequential events.
- Semantic misunderstanding: a 27% failure rate in test examples.
To address these issues, Plan-and-Solve (PS) prompting and its enhanced version, Plan-and-Solve with Detailed Instructions (PS+), have been introduced.
PS involves two key steps (a rough prompt sketch follows below):
- Creating a plan to break the task into smaller subtasks, and then
- Executing these subtasks according to the plan.
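As a rough illustration, a zero-shot PS-style prompt wraps the question in a planning trigger along these lines (the wording is paraphrased from the paper, not an exact quote):
ps_prompt = (
    "Q: {question}\n"
    "A: Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)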
LangChain Code Implementation
This simple architecture represents the planning agent framework. It has two main components:
- Planner: Prompts an LLM to create a multi-step plan for a large task.
- Executors: Receive the user query and a step in the plan, then invoke one or more tools to complete that task.
After execution, the agent is prompted to re-plan, deciding whether to provide a final response or generate a follow-up plan if the initial plan was insufficient.
This design minimises the need to call the large planner LLM for every tool invocation.
However, it remains limited by serial tool calling and requires an LLM call for each task, as it doesn’t support variable assignment.
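As a conceptual sketch of that control flow (plan_step, execute_step and replan_step are hypothetical placeholders here, not LangChain functions):
def plan_and_execute(query, plan_step, execute_step, replan_step):
    plan = plan_step(query)  # planner LLM drafts the multi-step plan up front
    past_steps = []
    while plan:
        step = plan.pop(0)
        # the executor handles a single step, invoking tools as needed
        past_steps.append((step, execute_step(query, step)))
        # the re-planner either returns a final answer or revises the remaining plan
        response, plan = replan_step(query, past_steps, plan)
        if response is not None:
            return response
    return past_steps[-1][1]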
The LLM assignment is done in the following way:
llm = ChatOpenAI(model_name='gpt-4o-mini', temperature=0)
Below is the complete Python code for the AI agent. The only changes you will need to make are adding your OpenAI API key and your LangSmith project variables.
### Install Required Packages:
pip install -qU langchain langchain-openai langchain_community langchain_experimental
pip install -qU duckduckgo-search
### Import Required Modules and Set Environment Variables:
import os
from uuid import uuid4
### Setup the LangSmith environment variables
unique_id = uuid4().hex[0:8]
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = f"OpenAI_SM_1"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<LangSmith API Key Goes Here>"
### Import LangChain Components and OpenAI API Key
from langchain.chains import LLMMathChain
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from langchain_core.tools import Tool
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain_openai import ChatOpenAI, OpenAI
### Set the OpenAI API Key and Initialize the LLM
os.environ['OPENAI_API_KEY'] = "<OpenAI API Key>"
llm = ChatOpenAI(model_name='gpt-4o-mini', temperature=0)
### Set Up Search and Math Chain Tools
search = DuckDuckGoSearchAPIWrapper()
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
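# LLMMathChain prompts the LLM to translate the question into a numeric expression
# and then evaluates that expression, rather than relying on free-form arithmetic.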
tools = [
Tool(
name="Search",
func=search.run,
description="useful for when you need to answer questions about current events",
),
Tool(
name="Calculator",
func=llm_math_chain.run,
description="useful for when you need to answer questions about math",
),
]
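# Optional sanity check: call each tool directly once to confirm the wrappers
# respond before wiring them into the agent.
# print(search.run("founder of SpaceX"))
# print(llm_math_chain.run("What is the square root of 1971?"))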
### Initialize Planner and Executor
model = ChatOpenAI(model_name='gpt-4o-mini', temperature=0)
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor)
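# The planner drafts a step-by-step plan from the query; the executor agent then
# carries out each step with the tools, feeding earlier results into later steps.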
### Invoke the Agent
agent.invoke(
"Who is the founder of SpaceX an what is the square root of his year of birth?"
)
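To work with the result rather than just trigger the run, a small variation captures the response (assuming the chain's default "output" key, which can differ between LangChain versions):
result = agent.invoke(
    "Who is the founder of SpaceX and what is the square root of his year of birth?"
)
print(result["output"])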
✨✨ Follow me on LinkedIn for updates on Large Language Models
I’m currently the Chief Evangelist @ Kore AI. I explore & write about all things at the intersection of AI & language, ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.