Six GPT Best Practices For Improved Results

Here are six best practices to improve your prompt engineering results. When interacting with LLMs, you must have a clear vision of what you want to achieve and then express that vision in the prompt. This process is referred to as prompt design, prompt engineering or casting.

Cobus Greyling
5 min read · Jul 12, 2023


I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.

Here Are Six Strategies For Better Results

1️⃣ Write Detailed Prompts

To ensure a relevant response, make sure to include any important details or context in your requests. Failing to do so leaves the burden on the model to guess what you truly intend.

OpenAI advises users to provide as much detail as possible in the input to the LLM when performing prompt engineering. For instance, users should specify whether they require longer answers or brief replies.

Also indicate whether the responses should be simplified for a general audience or written for experts. The best approach is to demonstrate the required response to the LLM, as in the example below.
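For instance, a vague request can be sharpened by adding the missing detail; the wording of the detailed prompt below is purely illustrative:

EXAMPLE:
Vague prompt: Summarize the meeting notes.
Detailed prompt: Summarize the meeting notes in one paragraph of at most 80 words for an executive audience, then list the action items as bullet points, each with an owner.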

2️⃣ Describe To The Model The Persona It Should Adopt

The persona is defined in the system message (in the OpenAI playground, this is the System panel); it determines the style of the LLM responses.
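Below is a minimal sketch of the equivalent ChatCompletion call; the persona wording and the user question are assumptions chosen for illustration:

# pip install openai
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[
        # The system message defines the persona and therefore the response style
        {"role": "system", "content": "You are a friendly Cape Town travel guide who answers in short, upbeat sentences."},
        {"role": "user", "content": "What should I see on my first day in the city?"}
    ]
)

print(completion)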

3️⃣ Clearly Segment Prompts

A well-engineered prompt has three components: context, data and continuation.

The context needs to be set first; it describes to the generative model what the objective is.

The data is what the model will learn from, typically one or more few-shot examples.

And the continuation description instructs the generative model on how to continue, that is, how to use the context and data. It can be used to summarise, extract key words, or carry a conversation over a few dialog turns.

Below are the prompt engineering elements:

DESCRIPTION:
* Context: The context is a description of the data or the function.
* Data: The data is the few-shot learning example the Generative model will learn from.
* Continuation Description: The next steps the bot should execute; this step also helps when iterating on an initial query.

EXAMPLE:
Sentence: In Cape Town a few landmarks stand out, like Table Mountain, the harbour and Lion's Head. Traveling towards Cape Point is also beautiful.
Extract key words from the sentence:

With the advent of ChatML, prompts must be segmented into roles, as seen in the example below:

# pip install openai
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")  # or set your API key directly

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[
        {"role": "system", "content": "Summarize this message in max 10 words."},
        {"role": "user", "content": "Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter. When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows, and is on average the third-brightest natural object in the night sky after the Moon and Venus."},
        # An assistant turn can also be included as part of the message list
        {"role": "assistant", "content": "I am doing well"}
    ]
)

print(completion)

You can see the model is defined, and within messages each segment carries a role: the system role holds the instruction, the user role holds the content to act on, and the assistant role holds a reply.

4️⃣ Decompose The Sequence Of Steps To Complete The Task

This can also be referred to as chain-of-thought prompting, with the aim of eliciting chain-of-thought reasoning from the LLM.

In essence, chain-of-thought reasoning is achieved by creating intermediate reasoning steps and incorporating them in the prompt.


Eliciting this step-by-step reasoning significantly improves the results LLMs produce on complex tasks.
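A minimal sketch of this approach, using the same ChatCompletion format as above; the worked arithmetic example included in the prompt is an assumption for illustration:

# pip install openai
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[
        {"role": "system", "content": "Reason step by step before giving the final answer."},
        # One worked example showing the intermediate reasoning steps
        {"role": "user", "content": "Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. How many tennis balls does he have now?"},
        {"role": "assistant", "content": "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11."},
        # The new question the model should reason through in the same way
        {"role": "user", "content": "The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more. How many apples do they have?"}
    ]
)

print(completion)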

5️⃣ Provide Examples via Few-Shot Training

The example below shows how a number of examples are given via a few-shot approach before the final request, which the model should answer in the same style:

# pip install openai
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[
        {"role": "system", "content": "You translate corporate jargon into plain English."},
        # Few-shot examples: each user/assistant pair demonstrates the expected translation
        {"role": "user", "content": "New synergies will help drive top-line growth."},
        {"role": "assistant", "content": "Working well together will make more money."},
        {"role": "user", "content": "Let’s circle back when we have more bandwidth to touch base on opportunities for increased leverage."},
        {"role": "assistant", "content": "When we’re less busy, let’s talk about how to do better."},
        # The final user turn is the new input the model should translate
        {"role": "user", "content": "This late pivot means we don’t have time to boil the ocean for the client deliverable."}
    ]
)

print(completion)

6️⃣ Provide The Output Length

You can request the model to generate outputs with a specific target length. This can be specified in terms of the count of words, sentences, paragraphs, or bullet points.

However, the model is not very precise when asked to generate an exact number of words.

The model is more accurate in producing outputs with an exact number of paragraphs or bullet points.
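A short sketch of specifying the target length in the system message; the requested length and the topic are illustrative assumptions:

# pip install openai
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[
        # The target length is stated in units the model handles reliably: bullet points
        {"role": "system", "content": "Answer in exactly 3 bullet points, each roughly 15 words long."},
        {"role": "user", "content": "Why is Jupiter so bright in the night sky?"}
    ]
)

print(completion)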
