
Comparing LLM Performance Against Prompt Techniques & Domain Specific Datasets

This study, published in August 2023, compares 10 different prompt techniques across six LLMs and six QA datasets.

--

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.

This study compared 10 different zero-shot prompt reasoning strategies across six LLMs (davinci-002, davinci-003, GPT-3.5-turbo, GPT-4, Flan-T5-xxl & Cohere command-xlarge) on six QA datasets ranging from scientific to medical domains.
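
In essence the evaluation grid is every combination of model, prompt strategy and dataset, with multiple-choice accuracy averaged per cell. Below is a minimal sketch of that loop; the query_model() helper, the item format and the abbreviated lists are my own assumptions, not the authors' actual evaluation harness.

from itertools import product

# Sketch of the evaluation grid: every combination of
# model x prompt strategy x dataset is scored on multiple-choice accuracy.
# Helper functions and item format are hypothetical stand-ins.

models = ["davinci-002", "davinci-003", "gpt-3.5-turbo",
          "gpt-4", "flan-t5-xxl", "command-xlarge"]
strategies = ["baseline", "kojima", "zhou"]                # three of the ten strategies, for brevity
datasets = {"MedQA": [], "MedMCQA": [], "OpenBookQA": []}  # items would be loaded here

def query_model(model: str, prompt: str) -> str:
    """Stand-in for an API call; should return the model's chosen option letter."""
    return "A"  # placeholder so the sketch runs end to end

def accuracy(model: str, strategy: str, items: list) -> float:
    """Fraction of items for which the model picks the gold answer."""
    if not items:
        return 0.0
    correct = sum(
        query_model(model, f"[{strategy}] {item['question']}") == item["answer"]
        for item in items
    )
    return correct / len(items)

# One accuracy number per (model, strategy, dataset) cell, as in the paper's tables.
results = {
    (model, strategy, name): accuracy(model, strategy, items)
    for model, strategy, (name, items) in product(models, strategies, datasets.items())
}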

Some notable findings were:

  1. As is visible in the graphed data below, some models are optimised for specific prompting strategies and data domains.
  2. Chain-of-Thought (CoT) reasoning strategies yield gains across domains and LLMs.
  3. GPT-4 has the best performance across data domains and prompt techniques.

The header image depicts the overall performance of each of the six LLMs used in the study.

The image below shows the 10 prompt techniques used in the study, with an example of each prompt and the score achieved by each technique. The scores shown here relate specifically to the GPT-4 model.


The prompt template structure used in the study is as follows…

The {instruction} is placed before the question and answer choices.

The {question} is the multiple-choice question the model is expected to answer.

The {answer_choices} are the options provided for the multiple-choice question.

The {cot_trigger} is placed after the question and answer choices.

{instruction}

{question}

{answer_choices}

{cot_trigger}
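
As a rough illustration of how these placeholders compose (a sketch, not the paper's code), the template can be filled in with plain Python string formatting. The instruction, question, answer choices and the Kojima-style trigger phrase below are illustrative assumptions; the paper defines its own wording per strategy.

# Minimal sketch of filling the prompt template above.
# The concrete instruction, question, answer choices and trigger are
# illustrative only.

template = "{instruction}\n\n{question}\n{answer_choices}\n\n{cot_trigger}"

prompt = template.format(
    instruction="Answer the following multiple-choice question.",
    question="Which organ is primarily responsible for filtering blood?",
    answer_choices="A) Heart\nB) Kidney\nC) Lung\nD) Liver",
    cot_trigger="Answer: Let's think step by step.",  # Kojima-style zero-shot CoT trigger
)

print(prompt)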

The image below depicts the performance of the various prompting techniques (vertical) across the different LLMs (horizontal).

Something I found interesting is that Google’s Flan-T5-xxl model does not follow the trend of improved performance with the Zhou prompting technique.

The Cohere command-xlarge model also shows a significant degradation in performance with the Kojima prompting technique.

Source (Table 14: Accuracy of prompts per model averaged over datasets.)
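
For context, the Zhou and Kojima techniques mentioned above differ essentially in the trigger phrase placed in the {cot_trigger} slot. The phrasings below are the commonly cited versions and are given as an assumption; check the paper for the exact wording used in the study.

# Commonly cited zero-shot CoT trigger phrases (assumed wording, not quoted
# from the paper) that would occupy the {cot_trigger} slot of the template.
cot_triggers = {
    "kojima": "Let's think step by step.",
    "zhou": "Let's work this out in a step by step way to be sure we have the right answer.",
}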

The table below, taken from the paper, lists the six datasets with a short description of each, along with the performance of each LLM on each dataset. The toughest datasets for the LLMs to navigate were MedQA, MedMCQA and arguably OpenBookQA.

Throughout the study it is evident that GPT-4’s performance is stellar. Also noticeable is the strong performance of Google’s Flan-T5-xxl on OpenBookQA.

Source (Table 10: Accuracy of models per dataset averaged over prompts.)

--


Written by Cobus Greyling

I’m passionate about exploring the intersection of AI & language. www.cobusgreyling.com
