Prompt Engineering, Text Generation & Large Language Models

Text generation is a meta capability of large language models, and prompt engineering is key to unlocking it. You cannot talk to a generative model the way you would to a chatbot, and you cannot explicitly request that it do something. Rather, you need a vision of what you want to achieve, and you mimic the initiation of that vision. This process of mimicking is referred to as prompt design, prompt engineering or casting.

Cobus Greyling
5 min read · Sep 5, 2022


The TL;DR

  • Generation is one of the key functionalities of LLMs which can be leveraged in a myriad of ways.
  • Prompt engineering is the way in which the data is presented to the model, and hence dictates how the LLM executes on that data.

Considering Generation as one aspect of LLMs…

  • LLMs have a number of functions which can be used to perform numerous language tasks.
  • Generation is a function shared amongst virtually all LLMs.
  • Not only can generation be leveraged extensively by supplying a small sample of data for few-shot learning; through prompt engineering, that data can also be cast in a certain way, determining how it will be used.

One could argue that few-shot learning was popularised by OpenAI, with GPT-3 being optimised for this approach. Few-shot learning uses a small set of example data to teach the model how to execute tasks.
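As a sketch of the mechanics, a few-shot prompt places a handful of worked examples ahead of the new input, so the model infers the task from the pattern. The task, function name and example data below are invented purely for illustration:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: worked examples first, then the new input."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Text: {text}\nSentiment: {label}")
    # The final block is left incomplete; the model completes it.
    blocks.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The service was wonderful.", "Positive"),
    ("I waited an hour and left.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "The food arrived cold.")
print(prompt)
```

The model, seeing two completed Text/Sentiment pairs, continues the third in the same pattern.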

Please subscribe to get an email when I publish a new article. 🙂

Large Language Models (LLMs)

There are numerous definitions of large language models on the web. GPT-3 from OpenAI is probably the best-known model, but other commercial models, such as those from Cohere, GooseAI and AI21 Labs, are also easily accessible.

Generation

Generation is not limited to generating bot messages and responses; it can also maintain bot dialog state, contextual awareness and session context. Open-source options in this area include BLOOM and the EleutherAI models (served commercially via GooseAI).

The Anatomy Of A Good Prompt

A well-engineered prompt has three components…

The context needs to be set first; it describes the objectives to the generation model.

The data is the material the model learns from.

And the continuation description instructs the generative model on how to continue; it tells the LLM how to use the context and data. It can be used to summarise, extract key words, or hold a conversation over a few dialog turns.

Below are the prompt engineering elements:

DESCRIPTION:

  • Context: The context is a description of the data or the function.
  • Data: The data is the few-shot learning example the generative model will learn from.
  • Continuation Description: The next steps the bot should execute; this step also helps with iterating on an initial query.

EXAMPLE:

Sentence: In Cape Town a few landmarks stand out, like Table Mountain, the harbour and Lion's head. Traveling towards Cape Point is also beautiful.
Extract Key words from the sentence:
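The three elements can also be assembled programmatically. This is a minimal sketch (the function and variable names are my own, not from any vendor's SDK), using the article's key-word extraction example:

```python
def build_prompt(context: str, data: str, continuation: str) -> str:
    """Combine the three prompt elements: context label, data, continuation."""
    return f"{context}: {data}\n{continuation}"

context = "Sentence"
data = ("In Cape Town a few landmarks stand out, like Table Mountain, "
        "the harbour and Lion's head. Traveling towards Cape Point is also beautiful.")
continuation = "Extract Key words from the sentence:"

prompt = build_prompt(context, data, continuation)
print(prompt)
```

Keeping the three elements separate makes it easy to iterate: swap only the continuation to turn the same data into a summarisation or classification task.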

Here is a practical example:

Below, the LLM BLOOM is accessed via the 🤗Hugging Face Inference API. You can see the few-shot training data carries the context label Sentence.

This is followed by an instruction on how to continue, in this case Extract Key words from the sentence.

The results are impressive: the key words are extracted (shown in blue), with the added bonus of a label stating that, via unsupervised classification, the keywords can be classified as Location.
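A call of this kind can be sketched as follows. The endpoint URL and `{"inputs": ...}` payload shape follow the public Hugging Face Inference API; the token is a placeholder, and the request itself is left commented out since it requires a valid token and network access:

```python
import json

# Hugging Face Inference API endpoint for the public BLOOM model.
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

def build_request(prompt: str, api_token: str):
    """Prepare headers and payload for the Inference API; send with any HTTP client."""
    headers = {"Authorization": f"Bearer {api_token}"}
    payload = {"inputs": prompt}
    return headers, payload

prompt = (
    "Sentence: In Cape Town a few landmarks stand out, like Table Mountain, "
    "the harbour and Lion's head. Traveling towards Cape Point is also beautiful.\n"
    "Extract Key words from the sentence:"
)
headers, payload = build_request(prompt, "hf_xxx")  # substitute a real token

# To actually send it, e.g. with the requests library:
# response = requests.post(API_URL, headers=headers, json=payload)
# print(response.json())
print(json.dumps(payload))
```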


A Conversational Example

The example below is a question-and-answer chatbot. The context is defined as “Facts:” and the data is grouped into a list of facts related to the continent of Africa. The continuation example is described with a question-and-answer sequence.

The selected text indicated below was generated.
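A prompt of that shape can be sketched like this; the facts and questions below are illustrative stand-ins, not the article's exact data:

```python
FACTS = [
    "Africa is the second-largest continent.",
    "The Sahara is the largest hot desert in the world.",
    "Kilimanjaro is the highest mountain in Africa.",
]

def build_qa_prompt(facts, dialog, new_question):
    """Context ('Facts:'), data (the fact list), and a Q&A continuation pattern."""
    lines = ["Facts:"]
    lines += [f"- {fact}" for fact in facts]
    # Earlier dialog turns form the few-shot Q&A pattern.
    for question, answer in dialog:
        lines.append(f"Question: {question}")
        lines.append(f"Answer: {answer}")
    # The final answer is left open for the model to generate.
    lines.append(f"Question: {new_question}")
    lines.append("Answer:")
    return "\n".join(lines)

dialog = [("What is the highest mountain in Africa?", "Kilimanjaro.")]
prompt = build_qa_prompt(FACTS, dialog, "What is the largest desert?")
print(prompt)
```

Because each completed turn is appended back into the prompt, the same structure also maintains session context across several dialog turns.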

Presets

OpenAI has quite a number of presets available to get you off to a flying start in terms of prompt engineering.

Finally

The various LLMs have very similar prompt engineering requirements when it comes to generation. Two additional elements to keep in mind: most LLMs offer a selection of model sizes, and each model has numerous settings.
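As a sketch, the settings exposed by a generation request typically look like the following. The parameter names follow common completion-style APIs, but exact names and value ranges differ per vendor, and the model name here is a generic placeholder:

```python
# Common generation settings; names vary slightly between vendors,
# but the underlying concepts are broadly shared.
settings = {
    "model": "large",         # most vendors offer several model sizes
    "max_tokens": 64,         # cap on the generated length
    "temperature": 0.7,       # randomness: 0 = near-deterministic, higher = more varied
    "top_p": 0.9,             # nucleus-sampling cut-off
    "stop": ["\nQuestion:"],  # stop sequence to end a dialog turn cleanly
}

def with_overrides(base: dict, **overrides) -> dict:
    """Return a copy of the base settings with per-call overrides applied."""
    return {**base, **overrides}

# E.g. key-word extraction benefits from low-temperature, repeatable output.
deterministic = with_overrides(settings, temperature=0.0)
print(deterministic["temperature"])
```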

*The image for this article was generated on OpenAI’s DALL-E with the prompt: “a Van Gogh painting of a computer that understands human speech on a yellow backdrop”


Written by Cobus Greyling

I’m passionate about exploring the intersection of AI & language. www.cobusgreyling.com
