OpenAI o1 Reasoning Models

Reasoning models can “think” before responding, generating a detailed internal chain of thought that simulates human-like deliberation. This process allows them to break down complex problems step-by-step, improving both accuracy and depth in their responses.

Cobus Greyling
6 min read · Oct 4, 2024


AI Agents capable of reasoning and breaking down tasks into subtasks represent a major step forward in automation and intelligence.

OpenAI’s o1 model demonstrates this shift with enhanced reasoning abilities, contributing to more complex task handling, but it’s not a fully autonomous agent on its own — it supports reasoning within a broader system.

While these models are becoming more agent-like, offloading too much reasoning to language models poses challenges. The risks around inspectability and observability are real, as understanding how decisions are made is crucial, especially for high-stakes tasks.

o1 Models in the OpenAI Playground

AI Agents with Reasoning Abilities

AI Agents with reasoning and task decomposition are indeed an important advance, reflecting improvements in planning, problem-solving and multi-step reasoning.

OpenAI’s o1 Models

Although OpenAI’s o1 models are designed for reasoning, they cannot technically be classified as AI Agents; they are better described as reasoning-optimised language models.

LLMs like o1 can assist agents, but they do not operate with the full decision-making autonomy of a traditional agent.

Agentic Nature

Models like o1 are becoming more agentic, able to reason and perform more complex tasks autonomously, but they require orchestration with external systems for executing actions.
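As a minimal sketch of that orchestration (my own illustration, not an OpenAI pattern), the snippet below lets the model propose a numbered plan while plain Python decides what actually gets executed; plan_task, execute_step and orchestrate are hypothetical names.

# A minimal planner/executor loop: the reasoning model only proposes steps,
# while plain Python code (the "orchestrator") decides what actually runs.
# plan_task, execute_step and orchestrate are illustrative names, not an SDK API.
# Assumes OPENAI_API_KEY is set in the environment (or openai.api_key is set).

import openai

def plan_task(goal: str) -> list[str]:
    """Ask the model to decompose a goal into short, numbered steps."""
    response = openai.chat.completions.create(
        model="o1-mini",
        messages=[{"role": "user",
                   "content": f"Break the following goal into short, numbered steps:\n{goal}"}]
    )
    text = response.choices[0].message.content
    # Keep only the lines that look like numbered steps
    return [line.strip() for line in text.splitlines() if line.strip()[:1].isdigit()]

def execute_step(step: str) -> None:
    """Placeholder for real tooling (API calls, database writes, and so on)."""
    print(f"Executing (simulated): {step}")

def orchestrate(goal: str) -> None:
    for step in plan_task(goal):
        execute_step(step)  # execution stays outside the language model

orchestrate("Compile a short weekly summary of new AI Agent frameworks.")

The important point is that the language model never executes anything itself; it only returns text that the surrounding system interprets and acts on.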

Risk of Offloading Reasoning

The risk of offloading reasoning to LLMs is valid. As reasoning becomes more opaque in larger models, inspectability (the ability to understand decisions) and observability (monitoring outcomes) become challenging.

In critical systems, this can indeed affect trust and accountability, especially if the system produces unexpected or incorrect reasoning chains.
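As a small, hedged illustration of what observability can look like in practice, the sketch below logs the prompt, the visible answer, latency and token usage for every call; the logging format is my own and not an OpenAI feature.

# Basic observability around each model call: log the prompt, the visible
# answer, latency and token usage so unexpected outcomes can be traced later.
# The logging format is illustrative, not an OpenAI feature.
# Assumes OPENAI_API_KEY is set in the environment.

import json
import logging
import time
import openai

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("o1_observability")

def logged_completion(question: str):
    start = time.time()
    response = openai.chat.completions.create(
        model="o1-mini",
        messages=[{"role": "user", "content": question}]
    )
    logger.info(json.dumps({
        "model": response.model,
        "latency_seconds": round(time.time() - start, 2),
        "prompt": question,
        "answer": response.choices[0].message.content,
        "total_tokens": response.usage.total_tokens
    }))
    return response

logged_completion("Which weighs more, a kilogram of feathers or a kilogram of steel?")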

Inspectability, Observability & Discoverability

The o1 model documentation notes that reasoning tokens, although invisible via the API, still take up space in the context window and are billed as output tokens.
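A quick way to see this is to inspect the usage block of a chat completion response; the short sketch below assumes a recent openai SDK in which usage.completion_tokens_details.reasoning_tokens is populated for o1 models, as per the reasoning guide.

# Inspecting how many hidden reasoning tokens a single call consumed.
# Assumes a recent openai SDK where usage.completion_tokens_details.reasoning_tokens
# is populated for o1 models, and that OPENAI_API_KEY is set in the environment.

import openai

response = openai.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "How many prime numbers are there below 50?"}]
)

usage = response.usage
print("Completion tokens (billed):", usage.completion_tokens)
print("  of which reasoning tokens:", usage.completion_tokens_details.reasoning_tokens)
print("Visible answer:", response.choices[0].message.content)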

Making these tokens visible would improve inspectability and observability, greatly enhancing the ability to track the model’s decision-making process.

This would help developers understand how the model reaches conclusions, identify biases, and correct errors, especially in critical applications.

Furthermore, exposing these intermediate reasoning steps or chains would boost discoverability, enabling better validation, debugging, transparency and trust in AI-generated outcomes, particularly in high-stakes scenarios.

This visibility could allow for fine-tuning and identifying errors or biases in the decision-making process, making the model more reliable and easier to validate.
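Until that visibility exists, one pragmatic, prompt-level workaround is to ask the model to include a short numbered summary of its reasoning in the visible answer, as in the sketch below; this surfaces a model-written summary, not the hidden chain of thought itself.

# A prompt-level workaround: ask the model to include a short numbered summary
# of its reasoning in the visible answer. This does not expose the hidden
# reasoning tokens themselves, only a model-written summary of them.
# Assumes OPENAI_API_KEY is set in the environment.

import openai

question = "If John is taller than Tom, and Tom is taller than Alex, who is the tallest?"

response = openai.chat.completions.create(
    model="o1-mini",
    messages=[{
        "role": "user",
        "content": question + "\n\nAfter your answer, list the main reasoning steps as a short numbered list."
    }]
)

print(response.choices[0].message.content)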

Python Example

Below is a working Python notebook you can run to experiment with the o1 model’s reasoning capabilities.

You can copy and paste the code below as-is…

# Install the OpenAI Python SDK (run this in its own notebook cell or shell)
pip install openai

import openai

# Prompt the user for their OpenAI API key
api_key = input("Enter your OpenAI API key: ")

# Set up the OpenAI API key
openai.api_key = api_key

# Define a simple reasoning task
def run_reasoning_task():
    print("Welcome to the o1 Preview Model Reasoning Task!")

    # Prompt the user for a reasoning question
    question = input("Please enter a reasoning question or task (e.g., 'If John is taller than Tom, and Tom is taller than Alex, who is the tallest?'): ")

    # Query the o1 model using the chat completions endpoint
    response = openai.chat.completions.create(
        model="o1-mini",  # Replace with 'o1-preview' if available
        messages=[
            # The o1-mini model does not support the system role, so no system message is sent.
            # {"role": "system", "content": "You are a reasoning assistant."},
            {"role": "user", "content": question}
        ]
    )

    # Print the model's response
    print("\nModel Response:")
    print(response.choices[0].message.content)  # Access content via object attributes

# Run the reasoning task
run_reasoning_task()

Below is a question I pose to the o1 model. The question is deliberately ambiguous and complex, and the o1 model does a good job of decomposing the problem and following a logical path of reasoning to solve it.

What is the square root of the year of birth of the man commonly regarded as the father of the iPhone?

Below is the question as entered at the notebook prompt:

What is the square root of the year of birth of the man 
commonly regarded as the father of the iPhone?

And below is the answer, with a fairly good breakdown of how the response was reached…

Welcome to the o1 Preview Model Reasoning Task!
Please enter a reasoning question or task (e.g., 'If John is taller than Tom, and Tom is taller than Alex, who is the tallest?'): What is the square root of the year of birth of the man commonly regarded as the father of the iPhone?

Model Response:
The man commonly regarded as the father of the iPhone is **Steve Jobs**, the co-founder and former CEO of Apple Inc. Steve Jobs was born in the year **1955**.

To calculate the square root of 1955:

\[
\sqrt{1955} \approx 44.21
\]

**Therefore, the square root of the year of birth of Steve Jobs is approximately 44.21.**
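The final arithmetic step is easy to verify independently:

import math

# Quick check of the model's arithmetic: the square root of 1955
print(math.sqrt(1955))  # ~44.2154, in line with the model's ~44.21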


In Conclusion

Reasoning models are evolving to become more agentic, with the ability to decompose complex tasks or questions into a series of logical steps.

These models create a dynamic sequence of steps, or “chains,” which can be followed and adapted in real time to the specific context.

This capacity to break down complex problems and follow multi-step processes allows reasoning models to handle challenges that traditional approaches often fail to address.

These models will enhance not only automation of complex tasks but also human-AI collaboration in complex problem-solving scenarios.

By enabling more structured and interpretable reasoning, they push AI systems closer to true agentic autonomy.

✨✨ Follow me on LinkedIn ✨✨

Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.

Additional Resource:

https://platform.openai.com/docs/guides/reasoning

https://openai.com/index/introducing-openai-o1-preview/
