Chain-Of-Thought Prompting In LLMs

In principle, chain-of-thought prompting allows a multi-step request to be decomposed into intermediate reasoning steps.

Cobus Greyling
3 min read · Apr 20, 2023


Chain-of-thought prompting enables large language models (LLMs) to address complex tasks like common sense reasoning and arithmetic.

Establishing chain-of-thought reasoning via prompt engineering, by instructing the LLM accordingly, is quite straightforward to implement.

Multi-step inference can thus be elicited at run time purely through the prompt, without any changes to the model itself.
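As a rough sketch of what this looks like in practice, the snippet below builds a standard prompt and a chain-of-thought prompt for the same question. The worked example and the wording are illustrative placeholders, not taken from the article; the resulting string would simply be sent to whichever LLM API you use.

```python
# A minimal sketch of standard vs. chain-of-thought prompting.
# The worked example and prompt wording are illustrative placeholders.

# Standard prompting: the demonstration only shows question/answer pairs.
standard_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)

# Chain-of-thought prompting: the demonstration also shows the intermediate
# reasoning steps that lead to the answer.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)

print(standard_prompt)
print(cot_prompt)  # send either string to your LLM API of choice
```

The only difference between the two is that the chain-of-thought version demonstrates the intermediate reasoning, which the model then tends to imitate when answering the new question.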

Below is a very good illustration of standard LLM prompting on the left, and chain-of-thought prompting on the right.

[Figure: example prompts and model outputs for standard prompting (left) versus chain-of-thought prompting (right). Source]

What is particularly helpful about chain-of-thought prompting is that, by decomposing both the LLM input and the LLM output into intermediate steps, it creates a window for insight and interpretation.

This window of decomposition provides manageable granularity on both the input and the output side, which makes the system easier to inspect and tweak.
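To illustrate the point, the sketch below assumes the model's response ends with a sentence of the form "The answer is ..." (purely an assumption about the output format) and splits the response into its intermediate steps and its final answer, so each step can be inspected on its own.

```python
# A minimal sketch of inspecting a decomposed chain-of-thought response.
# The response text and its format are assumptions for illustration only.

import re

response = (
    "The cafeteria started with 23 apples. "
    "They used 20, so 23 - 20 = 3 apples were left. "
    "They bought 6 more, so 3 + 6 = 9. The answer is 9."
)

# Pull out the final answer, assuming it follows "The answer is ...".
match = re.search(r"The answer is (.+?)\.?$", response)
final_answer = match.group(1) if match else None

# Treat the remaining sentences as the intermediate reasoning steps.
reasoning_steps = [s.strip() for s in response.split(". ") if "The answer is" not in s]

for i, step in enumerate(reasoning_steps, start=1):
    print(f"Step {i}: {step}")   # each step can be checked or logged individually
print("Final answer:", final_answer)
```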

Chain-of-thought prompting is ideal for tasks that require contextual reasoning, such as common-sense reasoning and math word problems, and it is broadly applicable to any task that we as humans can solve via language.

The image below compares the percentage solve rate for standard prompting versus chain-of-thought prompting.

Chain-of-thought prompting achieves this by supplying the model with reasoning demonstrations, in other words worked examples, via prompt engineering.

[Figure: percentage solve rate, standard prompting versus chain-of-thought prompting. Source]
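As a rough sketch of how such reasoning demonstrations might be assembled programmatically, the snippet below formats a list of hypothetical worked examples into a single few-shot prompt. The exemplars and the build_cot_prompt helper are illustrative assumptions, not code from the article or the referenced paper.

```python
# A minimal sketch of assembling reasoning demonstrations into a prompt.
# The exemplars and the prompt template are illustrative assumptions.

demonstrations = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 more cans of 3 tennis "
                    "balls each. How many tennis balls does he have now?",
        "reasoning": "Roger started with 5 balls. 2 cans of 3 balls each is 6 "
                     "balls. 5 + 6 = 11.",
        "answer": "11",
    },
    # ... more worked examples can be appended here
]

def build_cot_prompt(demos: list[dict], new_question: str) -> str:
    """Format each demonstration as question + reasoning + answer, then
    append the new question so the model continues in the same pattern."""
    parts = []
    for d in demos:
        parts.append(f"Q: {d['question']}\nA: {d['reasoning']} The answer is {d['answer']}.")
    parts.append(f"Q: {new_question}\nA:")
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    demonstrations,
    "The cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?",
)
print(prompt)
```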

In Conclusion

In essence, chain-of-thought reasoning can be achieved by creating intermediate reasoning steps and incorporating them into the prompt.

With these intermediate steps in place, the ability of LLMs to perform complex reasoning improves significantly.

⭐️ Please follow me on LinkedIn for updates on Conversational AI ⭐️

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.

https://www.linkedin.com/in/cobusgreyling
