OpenAI Introduced Chat Markup Language (ChatML) Based Input For Non-Chat Modes

This is one of the most significant announcements from OpenAI, and it is not receiving the attention it deserves.

Cobus Greyling
6 min read · Jul 10, 2023


I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.

OpenAI wants all users to migrate from Text Completions to Chat Completions. This means ChatML is being introduced not only for the chat mode, but also for completion modes such as text summarisation, code completion and general text completion tasks.

Within four months of its introduction, the Chat Completions API accounts for 97% of API GPT usage, as opposed to Text Completions. ~ OpenAI

This implies that systems making use of the OpenAI APIs will have to structure their input in the ChatML format of system, user and assistant messages. Hence, adaptation will be required from autonomous agent frameworks and prompt chaining IDEs.
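To illustrate what that adaptation might look like, here is a minimal sketch of how a framework that previously sent a single prompt string to Text Completions could wrap that prompt in ChatML messages. The helper name `to_chat_messages` and the default system instruction are my own illustration, not part of the OpenAI SDK:

```python
def to_chat_messages(prompt: str, system: str = "You are a helpful assistant.") -> list:
    """Wrap a legacy completion-style prompt string in ChatML messages."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]

# A legacy call along the lines of openai.Completion.create(prompt="...")
# would then become openai.ChatCompletion.create(model=..., messages=to_chat_messages("...")).
```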

The addition of function calling was well received, and imposing ChatML could have similar benefits.

messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
]

This move is also evident in the modes section of the OpenAI Playground, where modes for Edit, Insert and Chat were introduced in March 2022. Now Chat is standard, with Complete and Edit marked as legacy.

In the past, GPT models consumed unstructured text, like most other LLMs.

The ChatGPT-based models were geared towards structured input, based on Chat Markup Language (ChatML for short).

As seen in the practical, working code examples below, ChatML documents consist of a sequence of messages.

Each message contains a header identifying who said the phrase:

  • system,
  • user, or
  • assistant.

OpenAI will add metadata options to these fields in future.

The content is the text payload. In future, other data types will be included to facilitate a multi-modal approach.
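As a sketch of this structure, each message can be treated as a dictionary with a role header and a text payload. The `validate_message` helper below is my own illustration, not an OpenAI API:

```python
VALID_ROLES = {"system", "user", "assistant"}

def validate_message(message: dict) -> bool:
    """Check that a ChatML message has a valid role header and a text payload."""
    return (
        message.get("role") in VALID_ROLES
        and isinstance(message.get("content"), str)
    )

msg = {"role": "user", "content": "Where was it played?"}
print(validate_message(msg))  # True
```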

Text Summarisation Code:

# Install the SDK first: pip install openai
import openai

openai.api_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "system", "content": "Summarize this message in max 10 words."},
        {"role": "user", "content": "Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter. When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows, and is on average the third-brightest natural object in the night sky after the Moon and Venus."}
    ]
)

print(completion)

Text Summarisation Result:

{
  "id": "chatcmpl-7Zjb73qmWIRhTZ1EdoVoF3htTx6wy",
  "object": "chat.completion",
  "created": 1688751113,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Jupiter is the largest planet and very bright in the sky."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 160,
    "completion_tokens": 13,
    "total_tokens": 173
  }
}
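To use the result programmatically, the summary text can be read out of the response structure. A minimal sketch, using the response shown above as a plain dictionary:

```python
# The response shown above, reproduced as a plain dictionary (abbreviated to
# the fields used here).
completion = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Jupiter is the largest planet and very bright in the sky.",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 160, "completion_tokens": 13, "total_tokens": 173},
}

# The generated text lives under choices[0].message.content.
summary = completion["choices"][0]["message"]["content"]
print(summary)  # Jupiter is the largest planet and very bright in the sky.
```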

Code Completion Code:

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "system", "content": "Complete the following code."},
        {"role": "user", "content": "def fibonacci(num):"}
    ]
)

print(completion)

Code Completion Result:

{
  "id": "chatcmpl-7ZjeMfmfC7SntvD4EuKZtTmotf6z4",
  "object": "chat.completion",
  "created": 1688751314,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "if num <= 0:\n return []\nelif num == 1:\n return [0]\nelif num == 2:\n return [0, 1]\nelse:\n fib_list = [0, 1]\n for i in range(2, num):\n fib_list.append(fib_list[i - 1] + fib_list[i - 2])\n return fib_list"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 80,
    "total_tokens": 100
  }
}
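Assembled with the function header from the prompt, the model's suggested body yields a runnable function. This is a sketch: the nested indentation is reconstructed here, since the JSON string above flattens it:

```python
def fibonacci(num):
    # Header from the prompt; body as suggested by the model, re-indented.
    if num <= 0:
        return []
    elif num == 1:
        return [0]
    elif num == 2:
        return [0, 1]
    else:
        fib_list = [0, 1]
        for i in range(2, num):
            fib_list.append(fib_list[i - 1] + fib_list[i - 2])
        return fib_list

print(fibonacci(7))  # [0, 1, 1, 2, 3, 5, 8]
```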

Few-Shot Learning Code:

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "system", "content": "You translate corporate jargon into plain English."},
        {"role": "user", "content": "New synergies will help drive top-line growth."},
        {"role": "assistant", "content": "Working well together will make more money."},
        {"role": "user", "content": "Let’s circle back when we have more bandwidth to touch base on opportunities for increased leverage."},
        {"role": "assistant", "content": "When we’re less busy, let’s talk about how to do better."},
        {"role": "user", "content": "This late pivot means we don’t have time to boil the ocean for the client deliverable."}
    ]
)

print(completion)

Few-Shot Learning Result:

{
  "id": "chatcmpl-7ZjixqMS7XUlKLGPQbvX0RgIlNayU",
  "object": "chat.completion",
  "created": 1688751599,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "This sudden change in direction means we don't have enough time to do everything for the client's request."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 107,
    "completion_tokens": 21,
    "total_tokens": 128
  }
}
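The few-shot pattern extends naturally across turns: to continue the exchange, append the assistant's reply and the next user message to the list, preserving ChatML order. A sketch handling only the message list (no API call); the final user turn here is a hypothetical example of my own:

```python
messages = [
    {"role": "system", "content": "You translate corporate jargon into plain English."},
    {"role": "user", "content": "New synergies will help drive top-line growth."},
    {"role": "assistant", "content": "Working well together will make more money."},
]

def continue_conversation(messages, assistant_reply, next_user_turn):
    """Append the model's reply and the next user message, preserving ChatML order."""
    messages.append({"role": "assistant", "content": assistant_reply})
    messages.append({"role": "user", "content": next_user_turn})
    return messages

messages = continue_conversation(
    messages,
    "This sudden change in direction means we don't have enough time "
    "to do everything for the client's request.",
    "We need to leverage our core competencies going forward.",  # hypothetical next turn
)
# The extended list is then passed back to openai.ChatCompletion.create(...)
# for the next round of the conversation.
```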

In Closing…

It is evident that OpenAI intends to bring structure to the format of user input data.

The ChatML-based conversational format has been extended to encompass generative AI tasks such as summarisation, code completion and few-shot contextual prompts.

This structure, implemented at the model level, will need to be adopted by downstream applications such as autonomous agents and prompt chaining frameworks.

With LLM-based generative AI applications able to make use of a variety of LLMs, the integration of Chat Completions is sure to bring advantages; however, it will also add complexity to the field as a whole.

⭐️ Please follow me on LinkedIn for updates on Conversational AI ⭐️
