ReAct: Synergy Between Reasoning & Acting In LLMs

An element of human intelligence is the ability to seamlessly combine task-oriented actions with verbal or inner speech. This inner speech plays an important role in human cognition and enables self-regulation and strategising.

Cobus Greyling
5 min read · Jun 28, 2023


I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

In humans, the tight synergy between reasoning and acting allows us to learn new tasks quickly and to perform robust reasoning and decision making, even when faced with unforeseen circumstances, new information or uncertainty.

LLMs are able to perform a similar inner monologue based on environmental feedback, which allows them to plan and process robotic tasks more richly.


Figure from the ReAct paper (Yao et al., 2022). (1) Comparison of four prompting methods, (a) Standard, (b) Chain-of-thought (CoT, Reason Only), (c) Act-only, and (d) ReAct (Reason + Act), solving a HotpotQA (Yang et al., 2018) question. (2) Comparison of (a) Act-only and (b) ReAct prompting to solve an AlfWorld (Shridhar et al., 2020b) game. In both domains, in-context examples are omitted from the prompt; only the task-solving trajectories generated by the model (Act, Thought) and the environment (Obs) are shown.

LLMs have demonstrated impressive results in chain-of-thought (CoT) reasoning and prompting, and in acting (the generation of action plans).
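
As a quick illustration, a chain-of-thought prompt simply appends an instruction like "Let's think step by step" so the model produces its reasoning before the final answer. Below is a minimal sketch using the pre-1.0 OpenAI completion API; the question is illustrative, and the model matches the one used in the agent code later in this post:

import openai

# A standard prompt asks for the answer directly; a chain-of-thought
# prompt elicits the intermediate reasoning steps before the answer.
standard_prompt = "Q: In what U.S. state is the Grand Canyon?\nA:"
cot_prompt = "Q: In what U.S. state is the Grand Canyon?\nA: Let's think step by step."

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=cot_prompt,
    temperature=0,
    max_tokens=128,
)
print(response["choices"][0]["text"])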

The idea of ReAct is to combine reasoning and taking action.

Reasoning enables the model to induce, track and update action plans, while actions allow for gathering additional information from external sources.

The combination of these two ideas is named ReAct. It was applied to a diverse set of language and decision-making tasks to demonstrate its effectiveness over state-of-the-art baselines, in addition to improved human interpretability and trustworthiness.
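
To make this concrete, below is a minimal hand-rolled sketch of the ReAct loop, assuming a simplified prompt format with only two actions, Search[term] and Finish[answer], and using the wikipedia package as the search tool. The LangChain agent later in this post implements a production version of this same loop:

import re
import openai
import wikipedia

# The model alternates Thought and Action lines; we execute each action
# and feed the result back into the prompt as an Observation line.
def llm(prompt: str) -> str:
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=0,
        max_tokens=256,
        stop=["Observation:"],  # pause so we can inject the real observation
    )
    return response["choices"][0]["text"]

def search(term: str) -> str:
    # Illustrative tool: the first two sentences of the Wikipedia page
    return wikipedia.summary(term, sentences=2)

prompt = (
    "Answer the question by interleaving Thought, Action and Observation steps.\n"
    "Available actions: Search[term] and Finish[answer].\n\n"
    "Question: Author David Chanoff has collaborated with a U.S. Navy admiral "
    "who served as the ambassador to the United Kingdom under which President?\n"
)

for _ in range(5):  # cap the number of reason/act iterations
    step = llm(prompt)
    prompt += step
    done = re.search(r"Finish\[(.+?)\]", step)
    if done:
        print("Answer:", done.group(1))
        break
    act = re.search(r"Search\[(.+?)\]", step)
    if act:
        prompt += "\nObservation: " + search(act.group(1)) + "\n"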


According to the study, ReAct overcomes the hallucination and error-cascading issues of CoT reasoning by interacting with an external knowledge source like Wikipedia. Human-like task-solving trajectories are generated.

As seen below in the sequence of events of a ReAct-based agent, reasoning traces make the final result of the LLM more interpretable, with various references along the thought process.

For ReAct, successfully retrieving informative knowledge via search is critical.

Non-informative search, which accounts for 23% of the error cases, derails the model's reasoning and makes it hard for the model to recover and reformulate its thoughts.
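
One pragmatic mitigation, a sketch of my own rather than something from the paper, is to wrap the search tool so that it retries with Wikipedia's suggested titles instead of handing the agent an empty observation. The robust_search helper below is hypothetical:

import wikipedia

def robust_search(query: str, max_retries: int = 2) -> str:
    # Hypothetical helper: instead of returning an empty observation
    # (which can derail the agent's reasoning), retry the search using
    # Wikipedia's own disambiguation options and title suggestions.
    for _ in range(max_retries + 1):
        try:
            return wikipedia.summary(query, sentences=2)
        except wikipedia.exceptions.DisambiguationError as e:
            query = e.options[0]   # take the first disambiguation candidate
        except wikipedia.exceptions.PageError:
            candidates = wikipedia.search(query)
            if not candidates:
                break
            query = candidates[0]  # retry with the closest matching title
    return "No informative result found; try a more specific query."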

Below is an example of a ReAct agent implemented via LangChain. Consider the following complex question:

Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?

The first thought of the LLM agent is to search for David Chanoff and determine the U.S. Navy admiral he collaborated with, followed by determining the U.S. President:

Here is the complete code to run the ReAct agent based on OpenAI, Wikipedia and LangChain:

pip install openai
pip install wikipedia
pip install langchain

import os, openai
from langchain import OpenAI, Wikipedia
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.agents.react.base import DocstoreExplorer

os.environ['OPENAI_API_KEY'] = "xxxxxxxxxxxxxxxxxxxxxxx"

# Wrap Wikipedia as a document store the agent can explore
docstore = DocstoreExplorer(Wikipedia())

# The ReAct docstore agent expects exactly two tools, named Search and Lookup
tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description="useful for when you need to ask with search"
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description="useful for when you need to ask with lookup"
    )
]

# Temperature 0 gives deterministic, reproducible reasoning traces
llm = OpenAI(temperature=0, model_name="text-davinci-002")

# REACT_DOCSTORE wires the two tools into the ReAct prompt format
react = initialize_agent(tools, llm, agent=AgentType.REACT_DOCSTORE, verbose=True)

question = "Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?"
react.run(question)

And the output from the agent:

> Entering new  chain...

Thought: I need to search David Chanoff and find the U.S. Navy admiral he collaborated with. Then I need to find which President the admiral served under.

Action: Search[David Chanoff]

Observation: David Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign policy for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.
Thought: The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe. I need to find which President he served under.

Action: Search[William J. Crowe]

Observation: William James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.
Thought: William J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton, so the answer is Bill Clinton.

Action: Finish[Bill Clinton]

> Finished chain.
Bill Clinton

In Conclusion

A lot has been said about chain-of-thought reasoning and prompting.

The fact that many LLMs have a fixed training-data cutoff in terms of general knowledge also impacts them negatively.

Having an external data source like Wikipedia plays a big role in the LLM agent's ability to take informed action.
