Symbolic Reasoning & PAL: Program-Aided Large Language Models

LLMs should be able to perform not only mathematical reasoning but also symbolic reasoning, which involves reasoning about objects and their properties, such as colours and types.

Cobus Greyling
4 min read · May 24, 2023


I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language: NLU design, evaluation and optimisation, data-centric prompt tuning, and LLM observability, evaluation and fine-tuning.

Recently I wrote about the Program-Aided Language Model (PAL) method which uses LLMs to read natural language problems and generate programs as reasoning steps. The code is executed by an interpreter to produce the answer.
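Conceptually, the PAL loop is simple. Below is a minimal sketch of the idea; the prompt handling and the llm callable are illustrative assumptions on my part, not the paper’s actual code:

# Minimal PAL-style loop (illustrative; `llm` is any prompt-to-text callable)
def pal_answer(llm, few_shot_prompt, question):
    # 1. ask the LLM to write a Python program for the question
    code = llm(few_shot_prompt + "\n\nQ: " + question + "\n\n# solution in Python:\n")
    # 2. run the generated program with the Python interpreter
    scope = {}
    exec(code, scope)
    # 3. by convention, the generated program stores its result in `answer`
    return scope["answer"]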

The previous article focussed on mathematical reasoning via PAL. In this post I would like to look at symbolic reasoning and PAL. Symbolic reasoning involves reasoning about objects and concepts.

For instance, an LLM might be asked questions about the colours of objects on a surface: a task that requires keeping track of each object’s relative position, absolute position, and colour.
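To make this concrete, here is an illustrative sketch (my own example, not taken from the paper) of the kind of program PAL might generate for an object-colour question:

# Question (illustrative): "On the desk, arranged in a row, you see a red pen,
# a blue book, and a green mug. What colour is the item directly to the left
# of the mug?"
objects = [("pen", "red"), ("book", "blue"), ("mug", "green")]

# absolute position of the mug in the row
mug_index = [name for name, colour in objects].index("mug")

# the item directly to its left
answer = objects[mug_index - 1][1]   # "blue"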


Consider the following question:

I have a chair, two potatoes, a cauliflower, a lettuce head, two tables, a cabbage, two onions, and three fridges. How many vegetables do I have?

The LLM should convert the input into a dictionary that maps each entity to its quantity, while filtering out the non-vegetable entities.

Finally, the answer is the sum of the dictionary values. Below is the PAL output from the LLM:

# note: I'm not counting the chair, tables, or fridges
vegetables_to_count = {
    'potato': 2,
    'cauliflower': 1,
    'lettuce head': 1,
    'cabbage': 1,
    'onion': 2
}
answer = sum(vegetables_to_count.values())
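Executed by the Python interpreter, the program sums the dictionary values and produces the final answer:

print(answer)   # 7, i.e. 2 + 1 + 1 + 1 + 2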

The table below compares the accuracy of direct prompting, Chain-of-Thought (CoT) prompting and PAL. Both CoT and PAL improve significantly on direct prompting, with PAL scoring highest.

(Table source: PAL: Program-aided Language Models, Gao et al., 2022)

The study also found that CoT was more sensitive to increased complexity than PAL.


🦜🔗 LangChain implementation of PAL

Here is the Python code for the PAL implementation, in its simplest form:

pip install langchain
pip install openai

import os
from langchain.chains import PALChain
from langchain import OpenAI

# set the OpenAI API key for the session
os.environ['OPENAI_API_KEY'] = "xxxxxxxxxxxxxxxxxxxxx"

# temperature=0 keeps the generated programs deterministic
llm = OpenAI(temperature=0, max_tokens=512, model_name='gpt-4-0314')

# build a PAL chain from LangChain's built-in math prompt
pal_chain = PALChain.from_math_prompt(llm, verbose=True)

question = "I have a chair, two potatoes, a cauliflower, a lettuce head, two tables, a cabbage, two onions, and three fridges. How many vegetables do I have?"

pal_chain.run(question)

And the output from the PAL Chain:

> Entering new PALChain chain...
"""I have a chair, two potatoes, a cauliflower, a lettuce head, two tables, a cabbage, two onions, and three fridges. How many vegetables do I have?"""
potatoes = 2
cauliflower = 1
lettuce_head = 1
cabbage = 1
onions = 2
total_vegetables = potatoes + cauliflower + lettuce_head + cabbage + onions
result = total_vegetables
return result

> Finished chain.


Notice how a chain is created on the fly, much like the autonomous chains created by Agents: there is a definite start to the chain and a definite completion.

Read more about agents here.
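For symbolic colour questions specifically, PALChain also offers a colored-object prompt. A minimal sketch, assuming the same llm setup as above (the question is my own illustrative example):

# build a PAL chain from LangChain's built-in colored-object prompt
pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True)

question = "On the desk, you see two red pens, a blue book, and three green mugs. How many objects are neither red nor green?"

pal_chain.run(question)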


I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

https://www.linkedin.com/in/cobusgreyling

PAL: Program-aided Language Models (Gao et al., 2022): https://arxiv.org/abs/2211.10435
