Transforming Documentation Into Actionable Flows with OpenAI Reasoning Models

There are a number of advantages to a process where documents and knowledge are converted into routines…

6 min read · Jan 23, 2025


In Short

What I love about this implementation of the OpenAI reasoning model is that it takes knowledge articles and converts them into a sequence of events with conditions.

The o1 model, with its advanced reasoning capabilities, is seemingly well suited for creating routines that convert knowledge articles into process flows.

Its ability to handle complex, structured information without extensive prior training allows it to deconstruct intricate knowledge articles — such as those containing multi-step instructions, described decision trees, or diagrams — into actionable routines.

By leveraging its zero-shot capabilities, o1 can efficiently interpret and break down tasks into clear, manageable steps without requiring extensive prompting or fine-tuning.

By leveraging their ability to identify patterns and relationships, LLMs bridge the gap between textual information and actionable, structured representations, making them ideal for tasks like creating process flows from knowledge articles.

Articles to Routines

Manually converting knowledge base articles into actionable routines or process flows is a complex and time-intensive process, particularly for companies aiming to build an automated pipeline.

Each routine must address diverse user scenarios, with clearly defined actions.

o1 has shown the ability to deconstruct articles and convert them into routines with zero-shot efficiency, meaning the LLM can interpret and follow instructions without needing extensive examples or prior task-specific training.

This significantly reduces the effort required for prompting, as the structure of the routine itself provides the necessary guidance for the LLM to execute each step.

By breaking tasks into distinct actions and incorporating function calls where appropriate, o1’s methodology enables the LLM to handle even complex workflows seamlessly.

This approach results in more effective, scalable solutions, particularly for enhancing customer service operations.
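To make the function-call idea concrete, here is a minimal sketch of how an orchestrating application might detect the backticked function names a generated routine refers to and dispatch them. The registry and function names below are illustrative assumptions, not part of the original article.

```python
import re

# Hypothetical registry mapping function names, as they appear backticked in a
# generated routine, to real implementations. The names are illustrative only.
FUNCTIONS = {
    "check_delivery_date": lambda order_id: f"Delivery date looked up for {order_id}",
    "case_resolution": lambda: "Case closed",
}

def extract_function_calls(step_text):
    """Return the backticked function names in a routine step that we can execute."""
    return [name for name in re.findall(r"`(\w+)`", step_text) if name in FUNCTIONS]

step = "3a. If the order exists, `check_delivery_date` and inform the customer."
calls = extract_function_calls(step)
print(calls)  # ['check_delivery_date']
```

In a full agentic loop, the LLM would emit the step, the application would run the matching function, and the result would be fed back into the conversation.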

Symbolic Reasoning

The symbolic reasoning capabilities of LLMs enable them to interpret unstructured text and transform it into structured, logical flows.

These models can break down complex instructions or descriptions into step-by-step processes, allowing for seamless conversion into workflows, decision trees, or process diagrams.

Internal knowledge base articles are often complex and designed for human interpretation. Transforming these documents into routines simplifies and structures each instruction, guiding the LLM through a sequence of small, manageable tasks.

This granular approach minimises ambiguity, enabling the LLM to process information systematically while reducing the likelihood of hallucination or straying from the intended workflow.

By breaking down complexity, routines help ensure more accurate and reliable performance from LLMs in processing these documents.
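One way to picture the output of such a conversion is as a list of small, explicit steps with if/then/else branches. The routine below is a hand-written illustration of this shape, not actual model output.

```python
# Illustrative shape of a converted routine: each step is small and explicit,
# with conditions expressed as if/then/else branches. Content is invented.
routine = [
    {"step": "1", "action": "Ask the customer for their order number."},
    {"step": "2", "action": "`check_delivery_date`",
     "condition": {"if": "the order number is valid",
                   "then": "proceed to step 3",
                   "else": "politely ask the customer to re-check the number"}},
    {"step": "3", "action": "Ask if there is anything more you can assist with."},
    {"step": "4", "action": "`case_resolution`"},
]

for s in routine:
    branch = s.get("condition")
    print(s["step"], s["action"], f"(if {branch['if']})" if branch else "")
```

Representing routines this granularly is what lets the LLM execute one small decision at a time instead of re-interpreting the whole article.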

Data Source

The data source for the practical demonstration is a small set of example help-center policies from OpenAI, provided in CSV format.
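The CSV has one row per policy, with the article body in a `content` column. The sample rows below are invented to show the shape of the file; the real contents come from the OpenAI cookbook repository.

```python
import io
import pandas as pd

# Invented sample matching the shape of the cookbook CSV: a `policy` column
# naming the article and a `content` column holding the article body.
sample_csv = """policy,content
Flight Cancellations,"To cancel a flight, sign in, open your trips, ..."
Prepaid Billing,"To set up prepaid billing, open billing settings, ..."
"""

articles = pd.read_csv(io.StringIO(sample_csv))
print(articles.columns.tolist())  # ['policy', 'content']
print(len(articles))              # 2
```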

Working Notebook

Below is working Python code which you can copy as-is into a notebook. The code will prompt you for your OpenAI API key; you can see below that the o1-preview model is defined.

The prompt is also defined, after which the CSV file is fetched from the GitHub repository.

From here we define the routine generation function, and the results can be displayed in a number of ways.

# Install required libraries
!pip install openai==0.28 pandas requests

# Import necessary libraries
import openai
import pandas as pd
import requests
import io

# Prompt the user for the OpenAI API key
api_key = input("Please enter your OpenAI API key: ").strip()
openai.api_key = api_key

# Define the model and prompt
MODEL = 'o1-preview'

CONVERSION_PROMPT = """
You are a helpful assistant tasked with converting an external-facing help center article into an internal-facing, programmatically executable routine optimized for an LLM. Please follow these instructions:

1. **Review the customer service policy carefully** to ensure every step is accounted for.
2. **Organize the instructions into a logical, step-by-step order**, using the specified format.
3. **Use the following format**:
- **Main actions are numbered** (e.g., 1, 2, 3).
- **Sub-actions are lettered** under their relevant main actions (e.g., 1a, 1b).
- **Specify conditions using clear 'if...then...else' statements**.
- **For instructions requiring more information from the customer**, provide polite and professional prompts.
- **For actions requiring data from external systems**, write a step to call a function using backticks for the function name (e.g., `call the check_delivery_date function`).
- **Define any new functions** by providing a brief description of their purpose and required parameters.
- **The step prior to case resolution should always be to ask if there is anything more you can assist with**.
- **End with a final action for case resolution**: calling the `case_resolution` function should always be the final step.
4. **Ensure compliance** by making sure all steps adhere to company policies, privacy regulations, and legal requirements.
5. **Handle exceptions or escalations** by specifying steps for scenarios that fall outside the standard policy.

Please convert the customer service policy into the formatted routine, ensuring it is easy to follow and execute programmatically.
"""

# Fetch the CSV file from the GitHub repository
url = "https://raw.githubusercontent.com/openai/openai-cookbook/main/examples/data/helpcenter_articles.csv"
response = requests.get(url)

if response.status_code == 200:
    csv_data = io.StringIO(response.text)
    articles = pd.read_csv(csv_data)
    print("CSV file loaded successfully.")
else:
    raise Exception("Failed to fetch the CSV file. Please check the URL.")

# Define the routine generation function
def generate_routine(policy_content):
    try:
        response = openai.ChatCompletion.create(
            model=MODEL,
            messages=[
                {"role": "user", "content": f"{CONVERSION_PROMPT}\n\nPOLICY:\n{policy_content}"}
            ]
        )
        return response.choices[0].message['content']
    except Exception as e:
        print(f"An error occurred: {e}")
        return None

# Process articles and generate routines
def process_article(article):
    routine = generate_routine(article['content'])
    return {"policy": article['policy'], "routine": routine}

# Convert articles to a dictionary format for processing
articles_dict = articles.to_dict(orient="records")

# Generate routines
results = [process_article(article) for article in articles_dict]

# Store and display the results
df = pd.DataFrame(results)
display(df)

The resulting data frame can then be used to graph or further process the routines.
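As a sketch of how a routine could be graphed, the numbered main steps can be parsed out of the generated text and linked into an edge list, which a library such as networkx or graphviz could then render. The routine fragment below is invented for illustration.

```python
import re

# A fragment of a generated routine (invented, but following the conversion
# prompt's format: numbered main steps, lettered sub-steps).
routine_text = """1. Greet the customer.
2. Ask for the order number.
2a. If the number is invalid, then ask again, else proceed.
3. `check_delivery_date`
4. `case_resolution`"""

# Extract main steps and link consecutive ones into a simple edge list.
steps = re.findall(r"^(\d+)\. (.+)$", routine_text, flags=re.MULTILINE)
edges = [(a[0], b[0]) for a, b in zip(steps, steps[1:])]
print(edges)  # [('1', '2'), ('2', '3'), ('3', '4')]
```

Sub-steps and if/else branches could be added as labeled edges in the same way.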

Below, the routine is shown in a spreadsheet; notice the conditional logic, expressed with an if-else approach.

Finally

Routines like this can be integrated into agentic systems to handle specific customer issues.

For example, if a customer needs help setting up prepaid billing, a classifier can identify the right routine and provide it to the LLM to assist the customer.

The system can either guide the user through the setup or complete the task for them.
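The routing step can be sketched very simply. The keyword lookup below stands in for what would, in practice, be an LLM or embedding-based classifier; the routine names and keywords are assumptions for illustration.

```python
# Minimal, keyword-based sketch of routing a customer message to a routine.
# A production system would use an LLM or embedding classifier instead; the
# routines and keywords here are illustrative only.
ROUTINES = {
    "prepaid billing": "routine: guide the customer through prepaid billing setup",
    "delivery date": "routine: look up and communicate the delivery date",
}

def select_routine(message):
    """Return the first routine whose keyword appears in the message, else None."""
    text = message.lower()
    for keywords, routine in ROUTINES.items():
        if keywords in text:
            return routine
    return None

matched = select_routine("Hi, I need help setting up prepaid billing")
print(matched is not None)  # True
```

Once selected, the routine is injected into the LLM's context so it can guide the user step by step or execute the task directly.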

Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.


Written by Cobus Greyling

I’m passionate about exploring the intersection of AI & language. www.cobusgreyling.com
