Using MCP with OpenAI & MCP Servers
How MCP Servers Supercharge Vertical AI Agent Integration
Personally, I liked the whole idea of function calling, allowing large language models (LLMs) to interact with external tools, APIs, and data sources to perform tasks beyond their training data.
Function calling bridged a gap by letting AI trigger predefined actions. But as I have said before, function calling only formats the arguments for a specific task; the structured output stays inside the LLM's response, and the model itself never calls any outside service or API. That execution step is left entirely to the application.
But what if we could take this a step further?
Enter the Model Context Protocol (MCP), a standardised framework that amplifies function calling by providing a unified interface for LLMs to access a vast ecosystem of services.
With MCP, LLMs dynamically decide which service to call — or even sequence multiple calls — based on the task, making AI more autonomous and versatile.
Websites like mcpservers.org are at the forefront of this revolution, offering a curated collection of MCP servers to supercharge AI workflows.
What is MCP and Why It Matters
MCP is an open standard designed to streamline how LLMs interact with external resources.
Unlike traditional function calling, which often requires custom integrations for each tool, MCP provides a consistent protocol for accessing diverse services — think web scraping, database queries, or cloud storage management — through a single interface.
Catalogued at https://mcpservers.org, a community-driven platform, MCP servers range from production-ready to experimental tools that empower AI Agents to perform specialised tasks securely and efficiently.
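To make the unified interface concrete, here is a minimal sketch of how two very different MCP servers can be declared identically when attached to a model via OpenAI's Responses API. Only the label and URL change between services; the URLs below are placeholders, not real endpoints.

```python
# Two MCP servers, one interface: only the label and URL differ.
# The URLs below are placeholders; substitute real MCP endpoints.
fetch_server = {
    "type": "mcp",                     # remote MCP server tool type
    "server_label": "fetch",           # name the model uses to refer to it
    "server_url": "https://example.com/fetch/mcp",
}

database_server = {
    "type": "mcp",
    "server_label": "database",
    "server_url": "https://example.com/db/mcp",
}

# Both entries share the same schema, so swapping or adding a service
# never changes the integration code, only the configuration.
assert fetch_server.keys() == database_server.keys()
```

This is what "a single interface" buys you in practice: adding a new capability is a configuration change, not a new bespoke integration.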
The Power of mcpservers.org
The mcpservers.org platform is a treasure trove of over 300 MCP servers, each tailored for specific use cases.
For this demo, I made use of a remote Web Content Retrieval MCP server called Fetch.
The Fetch MCP Server grabs HTML, JSON, or Markdown from websites, perfect for real-time research or content analysis.
These servers are 100% open-source, encouraging developers to contribute new tools via GitHub.
The video below shows how easily the remote MCP server can be implemented via the OpenAI console.
When a remote MCP server is referenced, the tools available within it are listed, along with a flag indicating whether permission should be requested before each call to the MCP server.
As seen below, Fetch is an MCP server that provides web content fetching capabilities. This server enables LLMs to retrieve and process content from web pages, converting HTML to markdown for easier consumption.
And a user query which invokes the Fetch MCP remote server.
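The same setup shown in the console can be sketched in code. The Responses API accepts a remote MCP tool entry with a `require_approval` flag matching the permission toggle described above. The Fetch server URL below is a placeholder, and `gpt-4.1` is only an example model name.

```python
# Placeholder URL for the remote Fetch MCP server; substitute the real endpoint.
FETCH_MCP_URL = "https://example.com/fetch/mcp"

fetch_tool = {
    "type": "mcp",
    "server_label": "fetch",
    "server_url": FETCH_MCP_URL,
    "require_approval": "always",   # ask permission before each tool call
}

def ask_with_fetch(question: str):
    """Ask the model a question, letting it call the Fetch MCP server."""
    from openai import OpenAI       # requires `pip install openai` + OPENAI_API_KEY
    client = OpenAI()
    return client.responses.create(
        model="gpt-4.1",            # example model name
        tools=[fetch_tool],
        input=question,
    )

# Example usage (needs a valid API key):
# response = ask_with_fetch("Summarise https://mcpservers.org as markdown.")
# print(response.output_text)
```

With `require_approval` set to `"always"`, the model pauses for confirmation before each Fetch invocation, mirroring the permission flag surfaced in the console.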
Orchestration of Multiple MCP Clients
When building sophisticated AI applications, an LLM SDK must intelligently orchestrate multiple Model Context Protocol (MCP) clients to ensure the most appropriate service handles each request.
An LLM SDK chooses between multiple MCP clients using a decision engine based on capabilities, cost, performance, and user-defined logic. Queuing is supported as a way to manage rate limits, availability, and latency, often backed by retry and failover strategies.
The SDK can implement a decision engine, usually configurable or plugin-based, that routes requests.
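As an illustration of such a decision engine (the class names and scoring below are hypothetical, not part of any particular SDK), a minimal router might match required capabilities first and break ties on latency:

```python
from dataclasses import dataclass

@dataclass
class MCPClientInfo:
    """Hypothetical registry entry for one connected MCP client."""
    label: str
    capabilities: frozenset   # e.g. {"web_fetch"} or {"sql_query"}
    avg_latency_ms: float

def route(required: set, clients: list) -> MCPClientInfo:
    """Pick the client covering all required capabilities, fastest first."""
    candidates = [c for c in clients if required <= c.capabilities]
    if not candidates:
        raise LookupError(f"no MCP client offers {required}")
    return min(candidates, key=lambda c: c.avg_latency_ms)

registry = [
    MCPClientInfo("fetch", frozenset({"web_fetch"}), 120.0),
    MCPClientInfo("docs", frozenset({"web_fetch", "search"}), 300.0),
]

print(route({"web_fetch"}, registry).label)   # fastest matching client wins
```

Real SDKs layer cost models, user-defined priorities, and plugin hooks on top of this core idea, but the routing decision reduces to the same capability-then-score comparison.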
Considering the example below, I asked a compound question which touches both MCP servers:
What is the difference between LangChain and LangGraph? Also, give me the basic information on Kore.ai’s website.
From the image below it is clear how the tools are invoked and queued.
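Under the hood, a compound question like this becomes an ordered queue of tool calls, with retries for transient failures. The sketch below is a simplified stand-in for the SDK's internal scheduling, with illustrative labels, not the actual implementation:

```python
from collections import deque

def run_queued(calls, max_retries=2):
    """Drain a FIFO queue of (label, thunk) tool calls, retrying on failure."""
    queue = deque(calls)
    results = {}
    while queue:
        label, thunk = queue.popleft()
        for attempt in range(max_retries + 1):
            try:
                results[label] = thunk()
                break
            except RuntimeError:
                if attempt == max_retries:
                    results[label] = None   # give up after exhausting retries
    return results

# Stand-ins for the two MCP calls triggered by the compound question.
calls = [
    ("docs_server", lambda: "LangChain vs LangGraph comparison"),
    ("fetch", lambda: "Kore.ai homepage content"),
]
results = run_queued(calls)
print(results)
```

Failover strategies extend the same loop: instead of recording `None` after the last retry, the scheduler would re-enqueue the call against an alternative MCP client that advertises the same capability.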
In Conclusion
Integrating MCP with OpenAI and dedicated MCP servers offers a powerful approach to streamline multi-AI Agent workflows.
By leveraging MCP’s modular architecture and OpenAI’s advanced language models, developers can create scalable, efficient systems tailored to complex tasks.
As AI continues to evolve, tools like MCP will play a pivotal role in simplifying deployment and enhancing collaboration between AI Agents, paving the way for more innovative and accessible AI solutions.
Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.