Chain of Empathy Prompting (CoE)
CoE prompting enhances the empathy of Large Language Model (LLM) responses using reasoning steps drawn from psychotherapy models.
Emergent Abilities
Emergent abilities of LLMs refer to the exceptional performance that can be achieved across diverse tasks via novel zero- and few-shot prompting. This performance is often unexpected, was not predicted, and was never explicitly trained for.
Some of these emergent abilities involve tasks that require complex reasoning.
One can argue that CoE prompting falls within the ambit of these emergent abilities.
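As a toy illustration of the few-shot prompting mentioned above (the task and examples are my own, not from the study), a prompt can embed a handful of labelled examples ahead of the new input, steering the model toward the task without any fine-tuning:

```python
# A toy few-shot prompt for sentiment classification.
# The labelled examples condition the model on the task format;
# the model then completes the final "Sentiment:" line.

FEW_SHOT_EXAMPLES = [
    ("The service was fantastic!", "positive"),
    ("I waited an hour and nobody helped me.", "negative"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble a few-shot prompt string for a new review."""
    lines = ["Classify the sentiment of each review."]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The unanswered final example is what the LLM completes.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_few_shot_prompt("The food was cold and bland."))
```

Zero-shot prompting is simply the same idea with the example list left empty: only the instruction and the new input are sent.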
CoE Prompting
Large Language Models (LLMs) have shown dramatic improvements in text generation, with performance closely approaching human levels.
To date, however, most prompting research has focused on logical or arithmetic tasks.
Chain-of-Empathy (CoE) prompting involves cognitive reasoning about human emotions, based on psychotherapy models.
With CoE, context is very important, especially the emotional context of the user's utterance. The study states that empathetic understanding on the LLM's side requires cognitive reasoning about the user's mental state.
Hence CoE prompting integrates a reasoning process into text generation. It focuses on the user's emotions and the specific factors leading to those emotions, such as cognitive errors, before generating the output.
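The reason-then-respond structure can be sketched as a prompt builder. Note this is a minimal illustration of the idea: the step ordering (emotion, cause, cognitive error, response) mirrors the CBT-style reasoning the study describes, but the exact wording below is my own, not the paper's verbatim template.

```python
# A minimal sketch of a Chain-of-Empathy (CoE) prompt.
# The scaffold instructs the model to reason about the user's
# emotional state BEFORE generating the empathetic response.

def build_coe_prompt(user_utterance: str) -> str:
    """Wrap a user utterance in a CoE reasoning scaffold (illustrative)."""
    steps = [
        "1. Identify the primary emotion the client is expressing.",
        "2. Identify the specific situation or cause behind that emotion.",
        "3. Note any cognitive error (e.g. catastrophising, overgeneralisation).",
        "4. Only then, write a short empathetic response.",
    ]
    return (
        "You are an empathetic counsellor. Before responding, reason step by step:\n"
        + "\n".join(steps)
        + f"\n\nClient: {user_utterance}\nCounsellor:"
    )

prompt = build_coe_prompt(
    "I failed one exam, so I'm clearly going to fail everything."
)
print(prompt)  # this string would be sent to the LLM
```

The key design choice is that the reasoning steps precede the response instruction, so the generated answer is conditioned on the inferred emotional context rather than only on the surface text.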
In Closing
One could argue that CoE is the establishment of emotional context.
One of the most significant findings from this study is the importance of understanding the user’s emotional context and how it affects human-AI communication.
This study also turns attention to context. With the continuous discovery of emergent abilities, one principle underpins most of these methods: the principle of establishing context.
Context is important for any conversation, and one of the first steps in any human-to-human conversation is the establishment of a mutual understanding of the context.
Consider intents: they are merely a list of predefined classes capturing the initial context of the user input. The challenge with intents is that they need to be pre-defined based on existing customer conversations.
Traditional intents also fail on out-of-domain queries, and a variety of fall-back remedies have been implemented in the past.
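The limitation described above can be sketched in a few lines. This toy matcher uses simple keyword overlap (real NLU engines use trained classifiers; the intent names and keywords here are invented for illustration), but it shows why pre-defined classes force a fallback for anything out of domain:

```python
# A toy intent matcher: pre-defined classes plus a fallback.
# Keyword overlap stands in for a trained classifier.

INTENTS = {
    "check_balance": {"balance", "account", "funds"},
    "reset_password": {"password", "reset", "login"},
}

def classify_intent(utterance: str, threshold: int = 1) -> str:
    """Return the best-matching intent, or 'fallback' if nothing matches."""
    tokens = set(utterance.lower().split())
    best_intent, best_overlap = "fallback", 0
    for intent, keywords in INTENTS.items():
        overlap = len(tokens & keywords)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent if best_overlap >= threshold else "fallback"

print(classify_intent("I forgot my password"))  # reset_password
print(classify_intent("Tell me a joke"))        # fallback (out-of-domain)
```

An out-of-domain query matches no pre-defined class and lands in the fallback, which is exactly the brittleness that context-establishing approaches such as CoE aim to move beyond.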
I’m currently the Chief Evangelist @ Kore AI. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.