The Anatomy Of Chain-Of-Thought Prompting (CoT)

The research on Chain-Of-Thought Prompting (CoT) was published on 10 Jan 2023, and over the last year there have been a significant number of prompting techniques premised on CoT, launching the Chain-Of-X phenomenon.


With CoT being such a pivotal moment in LLM prompting, what are the underlying principles and structure that constitute it? In other words, what is the anatomy of CoT prompting?

Introduction to Chain-of-Thought (CoT)

CoT explicitly encourages the LLM to generate intermediate rationales for solving a problem. This is achieved by providing a series of reasoning steps in the demonstrations for the LLM to emulate.
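As a sketch, a few-shot CoT prompt is assembled by prepending one or more worked demonstrations, each containing intermediate reasoning steps, to the target question. The demonstration and question below are hypothetical illustrations, not prompts from the original paper:

```python
# Minimal sketch of a few-shot Chain-of-Thought prompt.
# Demonstration and question are hypothetical examples.

demonstration = (
    "Q: Leah had 32 chocolates and her sister had 42. "
    "If they ate 35, how many pieces do they have left in total?\n"
    "A: Leah had 32 chocolates and her sister had 42, "
    "so originally there were 32 + 42 = 74 chocolates. "
    "35 were eaten, so 74 - 35 = 39 chocolates remain. The answer is 39.\n"
)

question = (
    "Q: A parking lot has 15 cars. 8 more cars arrive and 3 leave. "
    "How many cars are in the lot?\n"
    "A:"
)

# The LLM is expected to emulate the demonstrated reasoning steps
# when completing the final "A:".
cot_prompt = demonstration + "\n" + question
print(cot_prompt)
```

The demonstration carries the intermediate rationale ("32 + 42 = 74", "74 - 35 = 39"), which is exactly the structure the model is prompted to emulate.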

Despite the success of CoT, until recently there has been little understanding of what makes CoT prompting effective and which aspects of the demonstrated reasoning steps contribute to its success.


Of late, there have been a number of “Chain-Of-X” implementations, illustrating how Large Language Models (LLMs) are capable of decomposing complex problems into a series of intermediate steps.

This led to a phenomenon which some call Chain-Of-X.

This basic principle was first introduced by Chain-Of-Thought (CoT) prompting.

The basic premise of CoT prompting is to mirror human problem-solving methods, where we as humans decompose larger problems into smaller steps.

The LLM then addresses each sub-problem with focused attention, reducing the likelihood of overlooking crucial details or making wrong assumptions.

This decomposition principle underlies the success of the broader Chain-Of-X family of approaches to interacting with LLMs.

Components of Chain-Of-Thought

The study found that the validity of the reasoning contributes surprisingly little to performance. Even when the demonstrations contain completely invalid reasoning steps, the LLM can still achieve 80 to 90% of the performance of standard CoT.
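To make the ablation concrete, here is a hedged sketch contrasting a valid demonstration with one whose arithmetic is deliberately invalid while remaining relevant to the question and coherent in its step ordering. The strings are illustrative, not the paper's actual prompts:

```python
# Sketch of the validity ablation: same question, valid vs.
# deliberately invalid reasoning steps. Both rationales reference
# the question's entities and keep a coherent step ordering; only
# the computed intermediate values are wrong in the invalid one.

question = ("Q: Leah had 32 chocolates and her sister had 42. "
            "If they ate 35, how many do they have left?")

valid_rationale = ("A: Originally there were 32 + 42 = 74 chocolates. "
                   "After eating 35, 74 - 35 = 39 remain. The answer is 39.")

# Invalid reasoning: wrong intermediate equations, yet still
# relevant (same entities and quantities) and coherent (same order).
invalid_rationale = ("A: Originally there were 32 + 42 = 65 chocolates. "
                     "After eating 35, 65 - 35 = 12 remain. The answer is 12.")

for name, rationale in [("valid", valid_rationale),
                        ("invalid", invalid_rationale)]:
    print(name, "demonstration:", question, rationale)
```

The study's finding is that demonstrations like the second one still recover most of CoT's benefit, which is what motivates looking for the aspects that do matter.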

Are ground-truth bridging objects and language templates important? If not, which aspects are key to the LLM reasoning properly?

After further examination, the study identifies and formulates other aspects of a CoT rationale, and finds that relevance to the query and the correct ordering of reasoning steps are key to the effectiveness of CoT prompting.


Bridging Objects

Bridging objects are the symbolic items the model traverses to reach a final conclusion. They can be numbers and equations in arithmetic tasks, or the names of entities in factual tasks.

Language Templates

Language templates are the textual hints that guide the language model to derive and contextualise the correct bridging objects during the reasoning process.


Above is a practical example of bridging objects (blue) and language templates (red) used to create a Chain-of-Thought rationale.
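The split between the two components can also be sketched in code: in the hypothetical arithmetic rationale below, the bridging objects are the numbers and equations, while the language template is the textual scaffold left behind once they are masked out. The regex is a rough heuristic for illustration only:

```python
import re

# A hypothetical arithmetic rationale, split into its two components.
rationale = ("Originally there were 32 + 42 = 74 chocolates. "
             "After eating 35, 74 - 35 = 39 remain.")

# Pattern for numbers and simple equations (a rough heuristic).
NUM_OR_EQ = r"\d+(?:\s*[-+*/=]\s*\d+)*"

# Bridging objects: the symbolic items (numbers and equations here)
# the model traverses on its way to the answer.
bridging_objects = re.findall(NUM_OR_EQ, rationale)

# Language template: the textual scaffold that remains when the
# bridging objects are masked out.
language_template = re.sub(NUM_OR_EQ, "[NUM]", rationale)

print("bridging objects:", bridging_objects)
print("language template:", language_template)
```

Running this separates equations such as `32 + 42 = 74` from the guiding text "Originally there were [NUM] chocolates", which mirrors the blue/red split in the figure above.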


Coherence

Coherence refers to the correct ordering of steps in a rationale, and is necessary for a successful chain of thought. Specifically, as chain of thought is a sequential reasoning process, it is not possible for later steps to be preconditions of earlier steps.
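One way to picture coherence is as a dependency constraint: each step may only consume values stated in the question or produced by an earlier step. The step representation below is a hypothetical illustration, not the paper's formalism:

```python
# Naive coherence check: every value a step consumes must already
# be known, either stated in the question or produced by an
# earlier step. Step tuples are a hypothetical representation.

def is_coherent(given, steps):
    """steps: list of (inputs, output) tuples in rationale order."""
    known = set(given)
    for inputs, output in steps:
        if not set(inputs) <= known:  # consumes a not-yet-derived value
            return False
        known.add(output)
    return True

given = {"32", "42", "35"}           # quantities stated in the question
ordered = [({"32", "42"}, "74"),     # 32 + 42 = 74
           ({"74", "35"}, "39")]     # 74 - 35 = 39

print(is_coherent(given, ordered))        # True: each input already known
print(is_coherent(given, ordered[::-1]))  # False: 74 used before derived
```

Reversing the steps makes the rationale incoherent because the subtraction step would consume `74` before the addition step has produced it, which is exactly the ordering violation described above.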


Relevance

Relevance refers to whether the rationale contains corresponding information from the question. For instance, if the question mentions a person named Leah eating chocolates, it would be irrelevant to discuss a different person cutting their hair.
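A crude way to approximate relevance is lexical overlap between the rationale and the question. The study's analysis is more careful than this, but a sketch with illustrative strings conveys the idea:

```python
# Naive relevance heuristic: fraction of rationale words that also
# appear in the question. Illustrative only; the underlying study
# uses a more careful notion of relevance.

def overlap(question, rationale):
    q_words = set(question.lower().split())
    r_words = rationale.lower().split()
    return sum(w in q_words for w in r_words) / len(r_words)

question = "Leah had 32 chocolates and her sister had 42. They ate 35."
relevant = "Leah and her sister had 32 + 42 = 74 chocolates and ate 35."
irrelevant = "Patrick decided to cut his hair before the party."

print("relevant rationale overlap:  ", overlap(question, relevant))
print("irrelevant rationale overlap:", overlap(question, irrelevant))
```

The relevant rationale shares most of its vocabulary with the question, while the haircut rationale shares none of it, matching the Leah example above.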

In Conclusion

The allure of CoT prompting is that it is simple and easily inspectable, rather than opaque like gradient-based approaches.

However, as the subsequent Chain-Of-X approaches have shown:

  1. In-Context Learning requires highly contextual information to be injected into the prompt at inference.
  2. A data centric approach is becoming increasingly important, with human-annotated data. Using the right data demands data discovery, data design, data development and data delivery.
  3. As flexibility is introduced, complexity necessarily follows.
  4. Human observation and inspection will become increasingly important to ensure system integrity.
  5. More complex frameworks in managing prompt injection and multi-inference architecture will have to be introduced.

⭐️ Follow me on LinkedIn for updates on Large Language Models ⭐️

I’m currently the Chief Evangelist @ Kore AI. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.



