OpenAI Has Three New Use Modes, Each With a Mode-Specific Model
OpenAI has introduced the concept of modes to their playground and development interface. Each mode has a dedicated LLM assigned to it.
An area of prioritisation for OpenAI is the concept of modes…
OpenAI is gradually introducing structure into their LLM environment.
This introduction of structure is taking place in two areas. The first is Prompt Engineering; I have written extensively about this aspect, please read more about it here.
The second area is that of Model Segmentation…
Considering Model Segmentation, OpenAI has trained models for specific tasks, be it completing, chatting, inserting or editing text.
Previously, the complete mode was the best known, most used and most widely available mode.
Below is a typical complete prompt, shown in the Goose AI playground.
The complete function is, in effect, being overloaded and leveraged for a wide array of tasks; Prompt Engineering is used to manipulate the LLM input to accommodate specific modes like insert, complete and chat.
Prompt engineers have to, through a process of trial and error, discover which prompt templates work best for certain use-cases.
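For example, before the dedicated chat mode existed, a chat-style exchange had to be hand-crafted as a single completion prompt. Below is a minimal sketch of that trial-and-error template work; the "Human:"/"AI:" speaker labels and the stop sequence are illustrative choices by the prompt engineer, not an OpenAI standard:

```python
def build_chat_prompt(history, user_turn):
    # Flatten the conversation into one completion-style prompt,
    # using speaker labels chosen by the prompt engineer.
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"Human: {user_turn}")
    lines.append("AI:")  # cue the model to answer as the assistant
    return "\n".join(lines)

prompt = build_chat_prompt(
    [("Human", "Hello"), ("AI", "Hi, how can I help?")],
    "What is an LLM?",
)
print(prompt)

# The engineered prompt would then be sent to the complete endpoint,
# e.g. with the pre-1.0 openai library:
# openai.Completion.create(model="text-davinci-003", prompt=prompt,
#                          stop=["Human:"], max_tokens=100)
```

With the new chat mode, this hand-rolled formatting is replaced by the structured list of role messages shown later in this article.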
Hence OpenAI opted to introduce different modes for the most common LLM use-cases. A mode can be seen as a pairing of a specific LLM, an endpoint, and custom Python formatting.
So, apart from managing different endpoints for different tasks as listed below, the most appropriate mode will have to be selected for each implementation, and the implementation will have to point to a mode-specific model.
The image below shows the model endpoints, along with the models each endpoint is compatible with.
Prompt Engineering is also affected, as an input template is defined for each mode and will have to be accommodated. This is especially the case with chat, which is probably the most widely used mode.
Further development and evolution of modes, input templates and model segmentation will surely continue.
Hence applications focused on prompt creation and management, together with prompt-chaining applications, will have to continuously update their products to accommodate OpenAI updates.
Are users obliged to make use of modes? Of course not. But the best results will be achieved with a tight alignment between the user task type and the most appropriate mode.
Below you can see the various endpoints, with the model noted where an endpoint is tied to one:
/v1/chat/completions
/v1/completions
/v1/edits
/v1/audio/transcriptions (whisper-1)
/v1/audio/translations (whisper-1)
/v1/fine-tunes
/v1/embeddings
/v1/moderations
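In the pre-1.0 openai Python library, each of these endpoints is reached through its own class. The pairing below is my own orientation summary, not an official table from OpenAI:

```python
# Endpoint path -> corresponding call in the legacy openai SDK (openai<1.0)
ENDPOINT_TO_SDK = {
    "/v1/chat/completions": "openai.ChatCompletion.create",
    "/v1/completions": "openai.Completion.create",
    "/v1/edits": "openai.Edit.create",
    "/v1/audio/transcriptions": "openai.Audio.transcribe",
    "/v1/audio/translations": "openai.Audio.translate",
    "/v1/fine-tunes": "openai.FineTune.create",
    "/v1/embeddings": "openai.Embedding.create",
    "/v1/moderations": "openai.Moderation.create",
}

for endpoint, call in ENDPOINT_TO_SDK.items():
    print(f"{endpoint:28} -> {call}")
```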
As seen in the screenshot below, from the OpenAI Playground, the four modes can be accessed, with Chat, Insert and Edit currently in Beta.
And below is a table listing the four modes, with the three new modes marked as new.
The mode specific models are also listed on the right. I added the models available for fine-tuning, as there is often the assumption that all of the OpenAI models are fine-tuneable.
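As an illustration of the Edit mode, the edits endpoint takes an input text plus an instruction describing the change. The sketch below assumes the edit model text-davinci-edit-001 and the pre-1.0 openai library; the actual API call is commented out as it requires a valid API key:

```python
# Sketch of an Edit-mode request; the input/instruction text is invented.
request = {
    "model": "text-davinci-edit-001",
    "input": "She no went to the market.",
    "instruction": "Fix the grammar.",
}
print(request)

# With an API key configured, the call would be:
# import openai
# edited = openai.Edit.create(**request)
# print(edited["choices"][0]["text"])
```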
Considering the Chat mode, the models gpt-3.5-turbo and gpt-3.5-turbo-0301 are generally available, but the gpt-4 models are not, yet.
In the Insert row, OpenAI advises that the two insert models, text-davinci-insert-001 and text-davinci-insert-002, be used for insert-specific tasks.
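Insert mode works through the completions endpoint: the text before the gap is passed as the prompt and the text after it as the suffix, and the model fills the gap. A hedged sketch, assuming text-davinci-insert-002 and the pre-1.0 openai library (the API call itself is commented out, as it needs a key):

```python
# The model is asked to fill the gap between `prompt` and `suffix`.
request = {
    "model": "text-davinci-insert-002",
    "prompt": "The meeting covered ",
    "suffix": " and closed with a summary of action items.",
    "max_tokens": 32,
}
print(request)

# With an API key configured, the call would be:
# import openai
# result = openai.Completion.create(**request)
# print(request["prompt"] + result["choices"][0]["text"] + request["suffix"])
```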
Below is a screenshot of the complete OpenAI playground, with the modes visible at the top right.
Taking the chat mode as an example, let’s compare the playground view to the Python code view:
And the same view in the code; notice how the system, user and assistant roles are defined:
pip install openai

import os
import openai

# Read the API key from the environment rather than hard-coding it
openai.api_key = os.getenv("OPENAI_API_KEY")

chat_mode = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible.\nKnowledge cutoff: 2021-09-01\nCurrent date: 2023-03-02"},
        {"role": "user", "content": "How are you?"},
        {"role": "assistant", "content": "I am doing well"},
        {"role": "user", "content": "What is the mission of the company OpenAI?"}
    ]
)
print(chat_mode)
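The create call returns a response object containing a choices list, and the assistant's reply sits at choices[0].message.content. The sketch below extracts it from a hand-built sample dictionary in the same shape as the API's JSON; the sample content and token counts are invented for illustration:

```python
# Sample response in the shape returned by /v1/chat/completions
sample_response = {
    "id": "chatcmpl-xxxx",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity.",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 73, "completion_tokens": 17, "total_tokens": 90},
}

# The assistant's text lives at choices[0].message.content
reply = sample_response["choices"][0]["message"]["content"]
print(reply)
```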
⭐️ Please follow me on LinkedIn for updates on Conversational AI ⭐️
I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.